UK’s National Cyber Security Centre chief warns of ‘perfect storm’ for UK cybersecurity

Dr Richard Horne, chief executive of the UK’s National Cyber Security Centre, has described the country as facing a ‘perfect storm’ for cybersecurity.

Speaking at the CYBERUK conference in Glasgow, Horne described developments in AI and wider international tensions as creating a period of ‘tumultuous uncertainty’. He added that the definition of cybersecurity is expanding as technology becomes more deeply embedded in robotics, autonomous systems, and human-integrated technologies.

Horne called for what he described as a ‘cultural shift’ across organisations, adding: ‘cybersecurity is the responsibility of everyone, whether they sit on the Board or the IT help desk… cybersecurity is part of their mission.’

He also argued: ‘organisations that do not focus on their technology base…as core to their prosperity … are no longer just naïve but are failing to grasp the reality of today’s world.’

On the threat landscape, Horne noted that incident numbers remain ‘fairly steady’, but that the source of attacks has shifted: ‘the majority of the nationally significant incidents that the NCSC is handling now originate directly or indirectly from nation states.’

He also described cyberspace as part of the contested space ‘between peace and war’ and warned that the UK is seeing Russia apply lessons learned during its invasion of Ukraine beyond the battlefield. In that context, he argued that recent conflicts show ‘cyber operations are now integral to conflict’ and that ‘cybersecurity is the home front’.

Addressing frontier AI, Horne said: ‘Frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cybersecurity are still to be addressed.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Frontier AI cybersecurity risks highlighted by the World Economic Forum

A shift is emerging in cybersecurity as frontier AI systems become more capable and harder to control.

Anthropic’s decision to restrict access to the Claude Mythos Preview reflects growing concern about how such models can be used in real-world cybersecurity operations, as highlighted in an article published by the World Economic Forum.

Reported capabilities include identifying unknown vulnerabilities and generating working exploits. Tasks that once required specialised teams over long periods can now be accelerated significantly.

Defensive benefits exist, particularly in faster vulnerability detection, but the same capabilities can also lower barriers for attackers.

The main challenge is no longer finding weaknesses but managing them. AI can generate large volumes of vulnerabilities in a short time, while many organisations still rely on slower response cycles.

That gap increases exposure, especially for critical systems and infrastructure.

Cybersecurity is therefore moving away from static protection toward continuous monitoring and rapid response. At the same time, the lack of clear global rules on access to advanced AI systems raises broader concerns about governance and long-term stability.

Such an evolving imbalance between capability and control is likely to define the next phase of cyber risk.

The World Economic Forum report also stresses that AI-driven cyber risk is becoming a strategic issue, requiring board-level attention, stronger public–private coordination, and faster response timelines, as vulnerability discovery and exploitation compress from weeks to hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK NCSC calls for stronger cyber readiness

The UK National Cyber Security Centre has warned that organisations must urgently prepare for severe cyber threats, describing them as a growing risk to operations and national resilience. The guidance calls for immediate action from leadership.

Cyber attacks are becoming more capable and disruptive, with new technologies such as AI increasing their speed and scale. These threats can lead to major operational, financial and security impacts.

The agency emphasises that resilience, rather than prevention alone, is critical. Organisations must be able to continue operating and recover during cyber attacks, with preparation and planning carried out in advance.

The Centre states that responsibility lies with organisational leaders, urging investment, coordination and early planning to ensure essential services can continue under pressure in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan prioritises cyber resilience

The Government of the Republic of Kazakhstan has reviewed cybersecurity measures for state bodies during an interagency meeting chaired by the Deputy Prime Minister and Minister of AI and Digital Development.

According to the Government, reports highlighted progress in cybersecurity policies alongside ongoing vulnerabilities. Audits of local executive bodies identified systemic weaknesses requiring stronger safeguards.

The meeting also introduced new measures, including mandatory biometric identification for operators managing large databases. Officials stressed the importance of integrating systems into a unified monitoring framework.

The Government stated that cybersecurity is essential for digital transformation and instructed agencies to improve oversight, public awareness and data protection efforts in Kazakhstan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s cyber resilience plan targets AI-driven threats to critical infrastructure

The Canadian Centre for Cyber Security has launched a new initiative to strengthen national resilience against escalating cyber threats targeting critical infrastructure.

The programme, titled CIREN (Critical Infrastructure Resilience and Escalated Threat Navigation), aims to prepare organisations for severe disruptions by improving readiness, response capacity, and long-term recovery planning.

The initiative reflects growing concern within Communications Security Establishment Canada over increasingly sophisticated cyber risks, including those amplified by AI.

Authorities highlight that both state-sponsored and criminal actors are exploiting automation and AI to accelerate attacks, raising the stakes for sectors such as energy, telecommunications, transport, and water systems.

CIREN outlines a structured approach centred on operational continuity during extreme scenarios.

Organisations are encouraged to prepare for prolonged isolation of critical systems, develop independent operating capabilities, and establish recovery frameworks capable of rebuilding infrastructure after major incidents. The focus remains on maintaining essential services under worst-case conditions.

The programme forms part of a broader national strategy in Canada to enhance cyber readiness through collaboration, threat intelligence, and practical guidance.

Officials stress that proactive planning and simplified defensive measures can significantly reduce real-world impact, particularly as cyber incidents grow in frequency, scale, and complexity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europol shuts down illegal booter services across 21 countries

A major international crackdown led by Europol has targeted more than 75,000 users involved in distributed denial-of-service (DDoS)-for-hire activity. The coordinated Operation PowerOFF brought together 21 countries in a global effort to dismantle cyberattack infrastructure.

Authorities issued tens of thousands of warning messages, carried out arrests, executed search warrants, and seized dozens of domains linked to illegal booter platforms.

The operation also disrupted technical systems used to facilitate attacks, including servers and databases that enabled users to target online services and websites.

Analysis of seized data provided access to millions of user accounts, strengthening ongoing investigations across participating jurisdictions. Europol supported the operation through intelligence analysis, forensic work, and coordination between national agencies, helping identify and track those involved.

Alongside enforcement, the initiative has shifted towards prevention, including awareness campaigns, search engine interventions, and blockchain-based warnings.

Officials stress that DDoS-for-hire services remain widely accessible but are illegal, with users ranging from inexperienced actors to more organised cybercriminals driven by financial or ideological motives.

By targeting both infrastructure and users, authorities reduce the accessibility of tools that enable low-skill attackers to cause significant disruption to online services. Such actions strengthen cyber resilience and reflect a shift towards more proactive, internationally coordinated responses to digital threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands cyber defence programme with trusted access and industry partnerships

The US AI research and deployment company, OpenAI, has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation and accountability.

The initiative also includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC chief warns frontier AI will speed up cybersecurity threats

Dr Richard Horne, chief executive of the United Kingdom’s National Cyber Security Centre (NCSC), said advances in frontier AI models will make it easier, faster, and cheaper to find and exploit software vulnerabilities, increasing pressure on organisations to strengthen their security baseline.

In a piece published on the NCSC website, Horne said the longer-term effect of AI-assisted vulnerability discovery could be positive if technology suppliers use such tools to identify and fix weaknesses across the lifecycle of products and services. He also warned that the path to that outcome brings immediate risks and requires urgent action.

Horne said organisations that have not taken appropriate steps to safeguard their systems will increasingly be exposed as AI lowers the time, skill, and resources needed to identify exploitable weaknesses. He added that pressure to apply security patches quickly will become more acute as these capabilities develop.

Horne added that organisations should follow established NCSC guidance, including reducing unnecessary exposure to attack, applying security updates rapidly, and monitoring for and responding quickly to malicious activity.

Horne also said these measures must be championed by leaders and boards, describing cyber risk as business risk. He added that government-backed schemes such as Cyber Essentials can help organisations and their customers gain confidence that core security practices are being followed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Russia advances draft AI regulation framework

Russia has moved forward with a draft law outlining the fundamentals of state regulation of AI technologies, with the public consultation closing on 15 April 2026. The proposal sets out a structured compliance framework to tighten oversight of AI system development and deployment nationwide.

Under the draft, AI system operators would be required to test their systems to identify potential uses that could violate Russian legislation.

The framework also introduces a classification of trusted AI models, which would be subject to formal security verification by authorised federal bodies responsible for technical intelligence countermeasures and information security.

The proposed rules also establish a certification process for quality compliance, to be carried out in accordance with procedures defined by the Russian government. These measures aim to create a multi-layered oversight system for AI security and performance in regulated environments.

The proposed framework signals a shift towards tighter state control over how AI is tested, classified, and deployed, particularly in sensitive or high-risk environments. By introducing mandatory testing, security certification and government-defined quality standards, it increases regulatory scrutiny across the AI lifecycle. 

The broader implication is a move towards more centralised governance of AI systems, where compliance and risk management become embedded requirements rather than optional best practices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

AI reshapes cybersecurity access as defenders gain new tools

OpenAI has expanded its Trusted Access for Cyber programme and introduced a more permissive AI model designed specifically for cybersecurity work. The initiative reflects a broader shift in digital security, in which advanced AI tools are increasingly integrated into both defensive and offensive cyber operations.

The development highlights a structural change in cybersecurity, where defenders are no longer relying solely on traditional tools but are instead incorporating AI systems capable of analysing code, identifying vulnerabilities and accelerating incident response.

At the same time, the same technological capabilities are becoming accessible to malicious actors, intensifying the need for controlled and verified access.

New automated vulnerability tools are being deployed to detect and fix security flaws at scale, moving towards continuous AI-assisted protection. Rather than periodic security reviews, development environments are gradually shifting towards real-time monitoring and automated remediation.

The broader implication is a tightening link between AI capability growth and cyber risk management. Access frameworks based on identity verification and trust signals aim to balance the wider availability of defensive tools with safeguards against misuse.

The expansion of AI-driven cybersecurity tools reflects a structural shift in how digital infrastructure is protected at scale. As software systems become more complex and interconnected, traditional periodic security checks are increasingly insufficient to manage fast-evolving threats. 

Cybersecurity is moving towards an always-on, automated model where the balance between openness and restriction will directly shape global digital resilience. The outcome of this approach will influence how resilient digital infrastructure becomes as AI-driven threats and defences evolve in parallel.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!