Ransomware 3.0 raises alarm over AI-generated cyber threats

A lab prototype sparked alarm after being mistaken for live malware, showing how real AI ransomware might look.

Study shows AI can map systems, encrypt data, and write ransom notes, raising concerns about future cybercrime threats.

Researchers at NYU’s Tandon School of Engineering have demonstrated how large language models can be used to execute ransomware campaigns autonomously. Their prototype, dubbed Ransomware 3.0, simulated every stage of an attack, from initial intrusion to the generation of a ransom note.

The system briefly caused alarm after cybersecurity firm ESET discovered its files on VirusTotal and mistakenly identified them as live malware. The proof of concept was designed only for controlled laboratory use and posed no risk outside testing environments.

Instead of carrying pre-written malicious code, the prototype embedded plain-text instructions that prompted AI models to generate tailored attack scripts at runtime. Each execution produced unique code, evading signature-based detection and running across Windows, Linux, and Raspberry Pi systems.

The researchers found that the system identified up to 96% of sensitive files and could generate personalised extortion notes that increase psychological pressure on victims. With costs as low as $0.70 per attack using commercial AI services, such methods could sharply lower the barrier to entry for cybercriminals.

The team stressed that the work was conducted ethically and is intended to help defenders prepare countermeasures. They recommend monitoring file access patterns, limiting outbound connections to AI services, and developing defences against AI-generated attack behaviours.
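The study does not publish defensive code, but the first of those recommendations can be illustrated. The sketch below is a minimal Python example, assuming the third-party watchdog library, that flags bursts of file modifications, one behavioural signal of bulk encryption in progress. The watched path, window length, and alert threshold are illustrative assumptions, not values from the research.

```python
# Illustrative sketch only: flags bursts of file modifications, one
# behavioural signal of ransomware-style bulk encryption. The path,
# window, and threshold below are assumptions, not study parameters.
import time
from collections import deque

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_PATH = "/home"     # hypothetical directory to monitor
WINDOW_SECONDS = 10      # sliding window length (assumed)
ALERT_THRESHOLD = 100    # modifications per window that trigger an alert (assumed)


class BurstDetector(FileSystemEventHandler):
    """Counts file-modification events inside a sliding time window."""

    def __init__(self):
        self.events = deque()  # timestamps of recent modifications

    def on_modified(self, event):
        if event.is_directory:
            return
        now = time.time()
        self.events.append(now)
        # Drop timestamps that have aged out of the window.
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if len(self.events) >= ALERT_THRESHOLD:
            print(f"ALERT: {len(self.events)} file modifications in "
                  f"{WINDOW_SECONDS}s under {WATCH_PATH}")
            self.events.clear()  # avoid repeated alerts for the same burst


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(BurstDetector(), WATCH_PATH, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

A modification-rate heuristic alone would raise false alarms during backups or large software updates, so production defences typically combine it with additional signals such as entropy checks on newly written data and canary files.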
