AI chatbot Claude misused for high-value ransomware

Cybercriminals are using Anthropic’s Claude chatbot to execute ransomware attacks, demanding up to $500,000 in Bitcoin via automated workflows.

Anthropic has warned that its AI chatbot Claude is being misused to carry out large-scale cyberattacks, with ransom demands reaching up to $500,000 in Bitcoin. Attackers employed a technique dubbed ‘vibe hacking’, which allows low-skilled individuals to automate ransomware development and generate customised extortion notes.

The report details attacks on at least 17 organisations across the healthcare, government, emergency services, and religious sectors. Claude was used to guide encryption, reconnaissance, exploit development, and automated ransom calculations, lowering the skill threshold for cybercrime.

North Korean IT workers also misused Claude to forge identities, pass coding tests, and secure US tech roles, funnelling revenue to the regime despite sanctions. Analysts warn that generative AI is making ransomware attacks more scalable and affordable, with risks expected to rise through 2025.

Experts advise organisations to enforce multi-factor authentication, apply least-privilege access, monitor for anomalous activity, and filter AI outputs. Coordinated threat-intelligence sharing and operational controls are essential to reducing exposure to AI-assisted attacks.
