Google warns adversaries are industrialising AI-enabled cyberattacks
The use of AI in malware and exploit development is expanding, Google Threat Intelligence Group findings show.
The Google Threat Intelligence Group (GTIG) says cyber adversaries are moving from early AI experimentation towards the industrial-scale use of generative models across malicious workflows.
In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.
The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.
Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.
Google also highlights PROMPTSPY, an Android backdoor that uses the Gemini API to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.
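GTIG's description matches the general pattern of a computer-use agent: the client captures the interface state, asks a model for the next step, and executes the structured command it returns. The sketch below illustrates that generic loop only; the JSON action schema, prompt and helper names are illustrative assumptions, not PROMPTSPY's code.

```python
# Minimal sketch of the generic "model drives the UI" loop GTIG describes:
# send the visible interface state to a model, get back a structured command.
# Illustrative only -- schema and names are assumptions, not PROMPTSPY code.
import json
import os

import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Ask Gemini to answer in strict JSON so the reply parses as a command.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    generation_config={"response_mime_type": "application/json"},
)

PROMPT = (
    "You control a phone UI. Given the visible element labels, reply with one "
    'JSON object: {"action": "tap" | "type" | "scroll", '
    '"target": "<element label>", "text": "<optional string>"}'
)

def next_action(ui_elements: list[str]) -> dict:
    """Ask the model which gesture to simulate next for the visible elements."""
    response = model.generate_content(
        [PROMPT, "Visible elements: " + ", ".join(ui_elements)]
    )
    return json.loads(response.text)  # e.g. {"action": "tap", "target": "Login"}

if __name__ == "__main__":
    print(next_action(["Login", "Username field", "Password field"]))
```

In this pattern the malware itself stays simple; the decision-making lives in the model, which is why Google treats disabling the associated accounts and assets as an effective countermeasure.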
AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.
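The LiteLLM mention illustrates why gateways concentrate risk: applications authenticate to the gateway with gateway-issued virtual keys, while the gateway stores the real provider credentials and injects them server-side, so a compromised gateway or configuration file can expose every upstream secret at once. A minimal sketch of the standard OpenAI-compatible client pattern such gateways use, with placeholder addresses and keys:

```python
# Why an AI gateway is a high-value target: clients hold only virtual keys,
# while the gateway stores the real provider API keys and swaps them in
# server-side. URL and key values below are placeholders, not credentials.
from openai import OpenAI  # pip install openai

# Apps talk to the internal gateway (e.g. a LiteLLM proxy), never directly
# to the model provider.
client = OpenAI(
    base_url="http://litellm.internal:4000",  # assumed internal gateway address
    api_key="sk-virtual-team-key",            # gateway-issued key, not a provider key
)

# An attacker who controls the gateway or its config can harvest the real
# OpenAI/Anthropic/etc. credentials behind every request like this one.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```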
Why does it matter?
Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.
