Frontier AI changes cyber risk calculations, New Zealand warns

New Zealand’s National Cyber Security Centre has warned that frontier AI models are likely to change the cyber threat landscape by increasing malicious actors’ ability to discover and exploit software vulnerabilities at greater speed and scale.

The guidance states that frontier AI models have already demonstrated the ability to identify vulnerabilities in software products. At the same time, it notes that defenders should consider where AI can support their own work, including checking in-house code for vulnerabilities and strengthening software before it is deployed into production.
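As a minimal sketch of that defensive use, the snippet below asks a model to review source code for common vulnerability classes. It assumes a Node 18+ runtime and an OpenAI-compatible chat-completions endpoint; the base URL, model name, and API_KEY variable are placeholders rather than tools named in the guidance.

```typescript
// Minimal sketch: asking a frontier model to review in-house code for
// common vulnerability classes before deployment. Assumes Node 18+ and
// an OpenAI-compatible chat-completions endpoint; BASE_URL, MODEL and
// API_KEY are placeholders, not tools named in the NCSC guidance.
const BASE_URL = "https://api.example.com/v1"; // hypothetical endpoint
const MODEL = "security-review-model";         // hypothetical model name

async function reviewForVulnerabilities(source: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_KEY}`,
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [
        {
          role: "system",
          content:
            "You are a security reviewer. Flag injection, deserialisation, " +
            "path traversal and authentication flaws, with line references.",
        },
        { role: "user", content: source },
      ],
    }),
  });
  if (!res.ok) throw new Error(`review request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible APIs return the reply under choices[0].message.
  return data.choices[0].message.content;
}
```

Output from such a review is advisory: in the framing of the guidance, a supplement to patching, testing, and existing security controls rather than a replacement for them.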

The guidance also refers to a recent Anthropic report on Mythos Preview, described as an agentic model capable of autonomously completing a series of tasks. According to the NCSC, Anthropic says the model can identify zero-day vulnerabilities in code and turn them into working exploits.

At the same time, the NCSC stresses that effective security controls remain the best line of defence as new vulnerabilities continue to be discovered. It recommends that organisations review their security posture to ensure it remains fit for purpose, and that appropriate methods to detect and contain malicious activity are in place across networks.

Senior leaders are urged to review how vulnerabilities are identified and managed, including patching, disclosure, supplier assurance, incident response, and protections for critical systems. For developers, the guidance recommends using frontier AI models cautiously in code reviews, patching frequently, reducing attack surfaces, applying defence-in-depth, and monitoring closely for signs of compromise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s National Cyber Security Centre launches device to protect display connections from cyber threats

The National Cyber Security Centre (NCSC) has developed SilentGlass, a device designed to protect display connections from malicious or unexpected activity. It is the first commercially available product licensed to use NCSC branding and was launched at CYBERUK.

SilentGlass blocks unauthorised interactions over HDMI and DisplayPort connections between devices and screens. The NCSC stated that threat actors can target monitors as they may process sensitive or personal data.

The intellectual property has been licensed to Goldilock Labs, which is manufacturing the device in partnership with Sony UK Technology Centre. The product has already been deployed in government environments and approved for use in high-threat settings.

The NCSC noted that increasing numbers of connected devices raise exposure to risks linked to physical interfaces. SilentGlass has been developed to address this risk by preventing malicious connections at the hardware level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK National Cyber Security Centre recommends passkeys over passwords

The National Cyber Security Centre (NCSC) recommends the use of passkeys as a more secure alternative to passwords for accessing online services. The guidance supports wider adoption of passwordless authentication across digital platforms.

Passkeys are created and managed on user devices and do not need to be remembered. The NCSC noted that they are resistant to phishing, as they cannot be intercepted, reused or stolen in the same way as passwords.

The NCSC also stated that passkeys can be faster and more convenient to use. Authentication relies on existing device security methods, such as fingerprint, facial recognition or PIN, rather than separate login credentials.
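For illustration, here is a minimal browser-side sketch of how a passkey is created through the WebAuthn API that underlies this flow. The relying-party details, user handle, and client-generated challenge are placeholders; in a real deployment the server issues the challenge and user details.

```typescript
// Minimal browser-side sketch of passkey creation via the WebAuthn API.
// The values below are placeholders for illustration; in practice the
// challenge and user details come from the server.
async function createPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // Random challenge; normally server-issued and echoed back for
      // verification during registration.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { id: "example.com", name: "Example Service" },
      user: {
        id: new TextEncoder().encode("user-1234"), // stable user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      // ES256 (alg -7): the most widely supported signing algorithm.
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // fingerprint, face or PIN
      },
    },
  });
}
```

The private key never leaves the device's credential manager; the service stores only the matching public key, which is why a passkey cannot be phished, intercepted, or replayed the way a password can.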

Passkeys are stored and managed through credential managers, which can synchronise access across trusted devices and provide backups. The NCSC advised that where passkeys are not available, users should continue using strong passwords and enable two-step verification.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft commits A$25 billion to expand AI and cloud in Australia

Microsoft has announced its largest-ever investment in Australia, committing A$25 billion by the end of 2029 to expand AI and cloud infrastructure, strengthen cyber defence collaboration, and train three million Australians in AI skills by 2028.

The announcement was made alongside Australian Prime Minister Anthony Albanese during Microsoft chief executive Satya Nadella’s visit to Sydney. The company said the investment will expand Azure AI supercomputing and cloud capacity in Australia and increase its local cloud and AI infrastructure footprint by more than 140% by the end of 2029.

The announcement also includes collaboration with the Australian AI Safety Institute, an extension of the Microsoft-Australian Signals Directorate Cyber Shield to additional government agencies, and deeper work on national resilience with the Department of Home Affairs.

Albanese said:

‘We want to make sure all Australians benefit from AI. Our National AI Plan is all about capturing the economic opportunities of this transformative technology while protecting Australians from the risks.’ He added: ‘Microsoft’s long-term investment in our national capability will help deliver on that plan – strengthening our cyber defences and creating opportunity for Australian workers and businesses.’

Nadella added:

‘Australia has an enormous opportunity to translate AI into real economic growth and societal benefit.’ He added: ‘That is why we are making our largest investment in Australia to date, committing A$25 billion to expand AI and cloud capacity, strengthen cybersecurity, and expand access to digital skills across the country.’

Microsoft said the investment is underpinned by a memorandum of understanding with the Australian Government, tied to national expectations for data centre and AI infrastructure developers. It also said it will work with the Australian AI Safety Institute to monitor, test, and evaluate advanced AI systems, including human-AI interaction risks in companion chatbots and conversational AI systems.

Why does it matter?

The scale of the investment links infrastructure, skills, safety, and cyber resilience in a single package aligned with Australia’s AI Action Plan. It also signals that competition over AI capacity is increasingly tied not only to datacentres and compute, but to workforce readiness, regulatory cooperation, and national capability in areas such as cybersecurity and resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government seeks industry cooperation to strengthen AI-driven cyber resilience

The UK government has called on leading AI companies to collaborate on building advanced cyber defence capabilities, as threats grow in scale and sophistication.

Speaking ahead of CYBERUK, Security Minister Dan Jarvis emphasised that AI-driven security will become a defining challenge, requiring innovation at unprecedented speed and scale.

Government officials warn that AI is already reshaping the threat landscape, with hostile states and criminal groups increasingly deploying automated systems to identify vulnerabilities.

The number of nationally significant cyber incidents handled by authorities more than doubled in 2025, highlighting the urgency of strengthening national resilience.

To address these risks, businesses are being encouraged to sign a voluntary Cyber Resilience Pledge, committing to stronger governance, early warning systems, and supply chain security standards.

Alongside this initiative, the UK government will invest £90 million over the next three years to support cyber defences, particularly for small and medium-sized enterprises.

The strategy forms part of a broader National Cyber Action Plan, reflecting a shift towards integrating AI into national security infrastructure.

Officials argue that effective cooperation between government and industry will be essential to protect critical systems and maintain economic stability in an increasingly automated threat environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online safety agreement signed by eSafety and OAIC in Australia

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a memorandum of understanding to strengthen cooperation on issues where online safety and privacy intersect.

The agreement formalises communication pathways between the two regulators and builds on existing collaboration. It covers matters including age-assurance requirements under Australia’s online industry codes and standards, as well as compliance by age-restricted platforms with Social Media Minimum Age obligations.

eSafety Commissioner Julie Inman Grant stated: ‘Both regulators have always recognised that combatting certain harms requires privacy and safety to go hand in hand. For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognise important rights, including the right to privacy.’

She added: ‘Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.’

Inman Grant also linked the agreement to emerging risks associated with new technologies and wider regulatory requirements around age assurance: ‘It comes at an important time, when the proliferation of new technologies like artificial intelligence is amplifying risks and we are increasingly requiring industry to deploy age-assurance technologies that meet their regulatory obligations and respect privacy in the Australian context.’

Australian Information Commissioner Elizabeth Tydd said the memorandum would support the OAIC’s work in monitoring and responding to emerging online privacy risks and help both agencies deliver their statutory functions under the Online Safety Act.

Tydd added: ‘With this memorandum, we’re not only formalising cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.’

Why does it matter?

A growing number of online safety measures now depend on systems that also raise privacy questions, especially age-assurance tools and other platform controls involving personal data. The agreement gives both regulators a clearer basis for coordinating oversight as Australia expands enforcement around child safety, platform obligations, and emerging technologies such as AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF data shows 63% of global child abuse content hosted in the EU

New data from the Internet Watch Foundation (IWF) points to a stark imbalance in global online child protection, with EU member states hosting the majority of confirmed child sexual abuse material URLs identified by the organisation. In 2025, IWF analysts actioned 310,437 URLs, 63% of them traced to hosting services in EU member states.

A small cluster of countries, including Bulgaria and the Netherlands, accounted for a large share of that hosting concentration, highlighting structural vulnerabilities in hosting infrastructure and uneven enforcement across jurisdictions. The IWF notes that such concentrations often reflect a combination of high-volume sites, migration between hosting locations, and inconsistent takedown speeds.

These findings come shortly after the EU failed to preserve legal continuity for the temporary framework that had allowed companies to carry out certain voluntary detection measures while negotiations on a permanent child sexual abuse law continued. That lapse has intensified concerns about a widening gap between the scale of online abuse and the legal tools available to detect and disrupt it.

The IWF argues that fragmented regulation and uneven infrastructure responses make it easier for criminal content to persist online. Where abuse material remains concentrated on a few high-volume sites in jurisdictions with slower or less consistent takedown practices, it stays accessible for longer and is more likely to be copied, redistributed, or reposted elsewhere.

Takedown performance, by contrast, varies sharply across jurisdictions. The UK accounted for just 951 actioned URLs in 2025, or 0.30% of the total, a figure the IWF links to a much stronger domestic removal framework and closer operational cooperation.

The broader message of the data is that child sexual abuse material cannot be tackled effectively through fragmented national responses alone. The IWF is using the figures to press for a more coherent international framework for detection, reporting, and removal, warning that without aligned rules and stronger accountability, systemic weaknesses in digital governance will continue to leave serious gaps in child protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NCSC publishes new cross-domain architecture guidance

The UK’s National Cyber Security Centre has published new guidance on cross-domain architecture, outlining an updated framework for moving data safely between environments with differing security levels.

The guidance is intended to make cross-domain technology adoption simpler and more secure. In an accompanying blog post, the NCSC notes that such technologies have long been used in defence and intelligence settings, where organisations need to move data securely between systems operating at different security levels.

The NCSC links the revised guidance to a changing threat environment, including more capable and persistent attackers, greater exposure of critical national infrastructure, and risks associated with unknown vulnerabilities, supply chains, and AI-enabled discovery of weaknesses. It says the guidance should be used by organisations whose threat model assumes a targeted attack and where the consequences of compromise would be significant.

The new approach focuses on end-to-end architecture rather than fixed boundaries or specific technologies. It is intended to support business functions spanning systems with different levels of trust, including document import, video communications, and interactions with services hosted in other environments via APIs.

A central part of the guidance is a clear understanding of required data flows, system connections, and relevant threats. The NCSC describes cross-domain as a sequence of functions, often referred to as a pipeline, that builds confidence in data as it moves between trust zones.
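As a rough sketch of that pipeline idea, each stage below either raises confidence in the data or rejects it outright. The specific stages (format verification, transformation, inspection) are illustrative examples of the description above, not stages specified in the guidance.

```typescript
// Illustrative sketch of a cross-domain pipeline: a sequence of functions
// that each build confidence in data moving between trust zones, and
// reject anything that fails a check. Stages are examples only, not an
// NCSC specification.
type Stage = (data: Uint8Array) => Uint8Array;

const verifyFormat: Stage = (data) => {
  // Confirm the payload matches the single file format the flow allows.
  if (data.length === 0) throw new Error("rejected: empty payload");
  return data;
};

const transform: Stage = (data) => {
  // Re-render the content into a simpler representation, discarding
  // anything (such as active content) the destination never needs.
  return data;
};

const inspect: Stage = (data) => {
  // Check the transformed content against an allow-list of structures.
  return data;
};

// Data only crosses the boundary if every stage in the chain passes.
function crossDomainImport(data: Uint8Array, stages: Stage[]): Uint8Array {
  return stages.reduce((d, stage) => stage(d), data);
}

const payload = new Uint8Array([0x25, 0x50, 0x44, 0x46]); // bytes from the lower-trust zone
const imported = crossDomainImport(payload, [verifyFormat, transform, inspect]);
```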

The guidance largely replaces the organisation’s older security principles for new end-to-end architectures. However, those principles will remain part of its Principles-Based Assurance approach in the medium term. The blog also says the original import and export data design patterns are being deprecated and will, over time, be replaced by new cross-domain patterns.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine highlights AI strategic shifts

The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series.

According to the Council, growing investment and expansion of data centres to fuel AI demands are increasing pressure on energy resources. This is creating new competition not only for computing power but also for energy stability.

The analysis also points to intensifying competition between the US, China and the European Union, extending beyond AI models to supply chains, semiconductors and infrastructure. At the same time, AI is becoming more integrated into defence, cyberspace and information operations.

The Council highlights rising risks linked to disinformation, synthetic content and legal challenges, alongside growing demand for clearer regulation and content labelling as AI adoption expands in Ukraine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s National Cyber Security Centre chief warns of ‘perfect storm’ for UK cybersecurity

Dr Richard Horne, chief executive of the UK’s National Cyber Security Centre, has described the country as facing a ‘perfect storm’ for cybersecurity.

Speaking at the CYBERUK conference in Glasgow, Horne described developments in AI and wider international tensions as creating a period of ‘tumultuous uncertainty’. He added that the definition of cybersecurity is expanding as technology becomes more deeply embedded in robotics, autonomous systems, and human-integrated technologies.

Horne called for what he described as a ‘cultural shift’ across organisations, adding: ‘cybersecurity is the responsibility of everyone, whether they sit on the Board or the IT help desk… cybersecurity is part of their mission.’

He also argued: ‘organisations that do not focus on their technology base… as core to their prosperity… are no longer just naïve but are failing to grasp the reality of today’s world.’

On the threat landscape, Horne noted that incident numbers remain ‘fairly steady’, but that the source of attacks has shifted: ‘the majority of the nationally significant incidents that the NCSC is handling now originate directly or indirectly from nation states.’

He also described cyberspace as part of the contested space ‘between peace and war’ and warned that the UK is seeing Russia apply lessons learned during its invasion of Ukraine beyond the battlefield. In that context, he argued that recent conflicts show ‘cyber operations are now integral to conflict’ and that ‘cybersecurity is the home front’.

Addressing frontier AI, Horne said: ‘Frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cybersecurity are still to be addressed.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!