Frontier AI changes cyber risk calculations, New Zealand warns
New guidance says frontier AI creates new risks for organisations, but can also support defensive security work.
New Zealand’s National Cyber Security Centre has warned that frontier AI models are likely to change the cyber threat landscape by increasing malicious actors’ ability to discover and exploit software vulnerabilities at greater speed and scale.
The guidance states that frontier AI models have already demonstrated the ability to identify vulnerabilities in software products. At the same time, it notes that defenders should consider where AI can support their own work, including checking in-house code for vulnerabilities and strengthening software before it is deployed into production.
The guidance also cites a recent Anthropic report on Mythos Preview, described as an agentic model capable of autonomously completing a series of tasks. According to the NCSC, Anthropic reports that the model can identify zero-day vulnerabilities in code and turn them into working exploits.
Nevertheless, the NCSC stresses that effective security controls remain the best line of defence as new vulnerabilities continue to be discovered. It recommends that organisations review their security posture to ensure it remains fit for purpose, and that appropriate methods to detect and contain malicious activity are in place across networks.
Senior leaders are urged to review how vulnerabilities are identified and managed, including patching, disclosure, supplier assurance, incident response, and protections for critical systems. For developers, the guidance recommends using frontier AI models cautiously in code reviews, patching frequently, reducing attack surfaces, applying defence-in-depth, and monitoring closely for signs of compromise.
