AI model poses security risks, prompting release concerns, reports say
Reports of unauthorised access to Mythos AI intensify fears over the rapid development of powerful systems.
Anthropic has reportedly declined to release its latest AI model, Mythos, citing potential risks to global cybersecurity. The system is said to be capable of identifying vulnerabilities across major operating systems and web browsers, raising concerns about possible misuse.
Reports indicate that the company is investigating claims that unauthorised actors may have accessed the model. The reported breach has intensified debate about whether technology firms can maintain control over increasingly powerful AI systems as development accelerates.
The Mythos model is described as part of a new class of AI tools capable of analysing complex digital environments and identifying weaknesses at scale. Such capabilities could strengthen cybersecurity defences, but they could also present serious risks if exploited by malicious actors.
The case has contributed to discussions within the technology sector about balancing innovation with efforts to manage potential risks to digital infrastructure.
