Anthropic reveals hackers are ‘weaponising’ AI to launch cyberattacks
Malicious actors are using agentic AI to create deepfakes, run job fraud schemes and deploy ransomware, lowering the barrier to complex cyberattacks and raising alarm among security professionals.
In its latest threat intelligence report, Anthropic has revealed that its AI tool Claude has been purposefully weaponised by hackers, offering a disturbing glimpse into how quickly AI is shifting the cyber threat landscape.
In one operation, termed ‘vibe hacking’, attackers used Claude Code to automate reconnaissance, ransomware creation, credential theft and ransom-demand generation across 17 organisations, including targets in healthcare, emergency services and government.
The firm also documents other troubling abuses. North Korean operatives used Claude to fabricate identities, secure jobs at Fortune 500 companies and maintain access, all with minimal real-world technical skill. In another case, AI-generated ransomware variants were developed, marketed and sold to other criminals on the dark web.
Experts warn that such agentic AI systems allow a single individual to carry out complex cybercrime operations that were once the preserve of well-resourced, highly skilled groups.
While Anthropic has deactivated the accounts involved and strengthened its safeguards, the incidents highlight an urgent need for proactive risk management and regulation of AI systems.