AI misuse shifts from productivity boost to active operations, Google warns

Researchers at Google warn that attackers are shifting from using AI as a productivity aid to embedding models directly in their tooling, with malware querying AI services for on-demand obfuscation.

An experimental VBScript dropper dubbed PROMPTFLUX asks Gemini for ‘just-in-time’ evasion, enabling self-modifying code that dodges static detection.

Google says criminals are now embedding AI directly into their operations, not merely using it to speed up existing work. One example is ‘PROMPTFLUX’, a VBScript dropper that queries Gemini at runtime for fresh obfuscation and evasion routines, rewriting its own code to bypass static detection.

PROMPTFLUX’s ‘Thinking Robot’ component uses a hard-coded API key to request new VBScript code from Gemini, then saves the regenerated variants to disk for persistence; samples that have surfaced on VirusTotal suggest the family is still in testing.
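That hard-coded key is itself a useful static indicator. As a minimal sketch of one detection angle (not Google’s own tooling), a defender might sweep script directories for the Gemini REST endpoint or Google-style API-key prefixes; the sweep root and regex patterns below are illustrative assumptions:

```python
import re
from pathlib import Path

# Illustrative indicators: the Gemini REST endpoint, and the "AIza..." prefix
# that Google API keys commonly use. Patterns are examples, not exhaustive.
GEMINI_ENDPOINT = re.compile(r"generativelanguage\.googleapis\.com", re.I)
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_scripts(root):
    """Yield VBScript files that embed an LLM endpoint or API-key-like string."""
    for path in Path(root).rglob("*.vbs"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        if GEMINI_ENDPOINT.search(text) or GOOGLE_API_KEY.search(text):
            yield path

if __name__ == "__main__":
    for hit in scan_scripts(r"C:\Users"):  # example sweep root, adjust as needed
        print(f"[!] possible LLM-querying script: {hit}")
```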

Google also flags other proofs-of-concept: ‘FruitShell’, a PowerShell reverse shell with prompts tuned to dodge AI-powered analysis, and ‘PromptSteal’, which hits the Hugging Face API to generate one-line Windows data-collection commands.
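Both PROMPTFLUX and PromptSteal depend on live calls to public inference APIs, which makes outbound traffic from script interpreters to those endpoints a natural hunting pivot. A rough sketch, assuming a hypothetical JSON-lines export of per-process network events (the field names are placeholders, not any specific EDR schema):

```python
import json

# Hosts associated with the inference APIs named in the reporting.
LLM_API_HOSTS = ("generativelanguage.googleapis.com", "api-inference.huggingface.co")
# Script interpreters that rarely need to reach LLM APIs in most fleets.
SCRIPT_PROCS = ("wscript.exe", "cscript.exe", "powershell.exe")

def hunt(log_path):
    """Flag script interpreters making outbound connections to LLM APIs."""
    with open(log_path) as fh:
        for line in fh:
            event = json.loads(line)  # assumed fields: process, dest_host, host
            proc = event.get("process", "").lower()
            dest = event.get("dest_host", "")
            if proc in SCRIPT_PROCS and any(h in dest for h in LLM_API_HOSTS):
                print(f"[!] {proc} -> {dest} on {event.get('host', '?')}")

if __name__ == "__main__":
    hunt("network_events.jsonl")  # hypothetical log export
```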

Underground markets are maturing, with purpose-built AI tools lowering the barrier to entry for less-skilled actors, while attackers use social-engineering-style prompts to coax restricted outputs from chatbots. Google’s wider reporting notes experimentation tied to state-sponsored groups alongside criminal use.

Defenders should monitor for live LLM API calls, VBScript-related network patterns, and frequent code regeneration; Google has disabled accounts linked to observed tests but warns that operational misuse will likely grow.
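‘Frequent code regeneration’ can be approximated by watching for content churn: a script in a persistence location whose hash keeps changing between sweeps behaves unlike ordinary software. A minimal sketch, assuming the Windows Startup folder as the watched path (the location, interval, and threshold are illustrative assumptions):

```python
import hashlib
import time
from pathlib import Path

# Assumed persistence location and tuning values; adjust for your environment.
WATCH_DIR = Path.home() / "AppData/Roaming/Microsoft/Windows/Start Menu/Programs/Startup"
INTERVAL = 300          # seconds between sweeps (illustrative)
CHURN_THRESHOLD = 3     # distinct hashes before alerting (illustrative)

seen = {}  # path -> set of content hashes observed so far

def sweep():
    for path in WATCH_DIR.glob("*.vbs"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        hashes = seen.setdefault(path, set())
        hashes.add(digest)
        # A self-rewriting dropper produces a new hash on each regeneration.
        if len(hashes) >= CHURN_THRESHOLD:
            print(f"[!] {path.name}: {len(hashes)} distinct versions observed")

if __name__ == "__main__":
    while True:
        sweep()
        time.sleep(INTERVAL)
```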
