A shift is emerging in cybersecurity as frontier AI systems become more capable and harder to control.
Anthropic’s decision to restrict access to the Claude Mythos Preview reflects growing concern about how such models can be used in real-world cybersecurity operations, as highlighted in an article published by the World Economic Forum.
Reported capabilities include identifying previously unknown vulnerabilities and generating working exploits. Tasks that once required specialised teams working over long periods can now be completed far more quickly.
Defensive benefits exist, particularly in faster vulnerability detection, but the same capabilities can also lower barriers for attackers.
The main challenge is no longer finding weaknesses but managing them. AI can surface large volumes of vulnerabilities in a short time, while many organisations still rely on slower response cycles.
That gap increases exposure, especially for critical systems and infrastructure.
Cybersecurity is therefore moving away from static protection toward continuous monitoring and rapid response. At the same time, the lack of clear global rules on access to advanced AI systems raises broader concerns about governance and long-term stability.
Such an evolving imbalance between capability and control is likely to define the next phase of cyber risk.
The World Economic Forum report also stresses that AI-driven cyber risk is becoming a strategic issue, requiring board-level attention, stronger public–private coordination, and faster response timelines, as vulnerability discovery and exploitation compress from weeks to hours.
