A new World Economic Forum (WEF) analysis argues that coordination failures across global technology supply chains could slow the transition towards quantum-safe cybersecurity, despite growing pressure from governments, regulators, and major technology companies to accelerate adoption of post-quantum cryptography (PQC).
The article highlights how the migration towards quantum-safe security has shifted from long-term planning into active deployment after the National Institute of Standards and Technology finalised its first PQC standards in 2024. The UK’s National Cyber Security Centre has already set phased migration targets extending to 2035, while Google has set 2029 as the target timeline for parts of its own transition roadmap.
Furthermore, WEF argues that post-quantum migration cannot be treated as a routine software update because quantum-safe security depends on every layer of the digital ecosystem. Semiconductors, firmware, operating systems, applications, cloud services, telecoms infrastructure, and critical national infrastructure all need coordinated upgrades. Delays at one stage of the supply chain could affect every downstream deployment.
Critical infrastructure operators face particular pressure because many systems rely on long operational cycles, globally sourced equipment, and tightly regulated procurement frameworks. Energy networks, telecoms systems, transport infrastructure, and financial institutions are already making procurement decisions that may shape cybersecurity resilience for decades.
According to the report, deploying infrastructure without a clear PQC migration pathway could create substantial future remediation costs and operational risks.
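Mapping where classical public-key cryptography is still in use is typically the first practical step in building such a migration pathway. The sketch below is a minimal illustration of that idea, assuming the third-party Python cryptography package; the certificate directory and the labels it prints are hypothetical and are not drawn from the WEF analysis.

```python
# Minimal sketch: flag quantum-vulnerable public keys in PEM certificates.
# Assumes the third-party 'cryptography' package; paths and labels are illustrative.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)

def classify_certificate(pem_path: Path) -> str:
    """Return a rough migration label for a single PEM certificate."""
    cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
    key = cert.public_key()
    if isinstance(key, QUANTUM_VULNERABLE):
        return "classical public key (plan PQC or hybrid replacement)"
    return "not RSA/EC (review separately)"

if __name__ == "__main__":
    # Hypothetical inventory directory; replace with your own certificate store.
    for pem in sorted(Path("./certs").glob("*.pem")):
        print(f"{pem.name}: {classify_certificate(pem)}")
```

An inventory like this only identifies exposure; deciding when and how to replace each dependency still requires the supply-chain coordination the report describes.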
The piece also links the post-quantum transition to broader cyber resilience concerns tied to AI. Frontier AI systems are increasingly being used to identify vulnerabilities at scale, accelerating both defensive security testing and potential offensive cyber capabilities.
The article references Anthropic and its Claude Mythos model, along with examples of Mozilla Firefox vulnerability discovery, as evidence that AI is rapidly changing software assurance and implementation testing.
Organisations treating PQC migration as a coordinated resilience programme instead of a narrow compliance exercise will be better positioned to protect critical services, economic stability, and trust in digital systems over the coming decade.
Why does it matter?
Quantum computing is steadily moving from theoretical risk to practical cybersecurity challenge, forcing governments and industries to rethink the foundations of digital security. The WEF analysis shows that the greatest obstacle may not be the cryptographic technology itself, but the coordination required across suppliers, infrastructure operators, regulators, cloud providers, and hardware manufacturers.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Office of National Security of South Korea held a cybersecurity meeting to review how government agencies are responding to AI-driven cyber threats. The session focused on the growing risks posed by the misuse of advanced AI technologies.
Officials from multiple ministries attended, including science, defence and intelligence bodies, to coordinate responses. The government warned that AI-enabled hacking capabilities are becoming increasingly realistic as global technology companies release more advanced models.
Authorities have instructed relevant agencies to strengthen cooperation with businesses and institutions and distributed guidance on responding to AI-based security risks. Discussions also covered practical measures to support rapid responses to cybersecurity vulnerabilities across public and private sectors.
The government plans to establish a joint technical response team to improve information sharing and enable immediate action. Officials emphasised that while AI increases cyber risks, it also offers opportunities to strengthen security capabilities in South Korea.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Council of the European Union has extended restrictive measures against individuals and entities involved in cyber-attacks threatening the EU and its member states until 18 May 2027. The legal framework behind the sanctions regime had already been extended until 18 May 2028.
The framework allows the EU to impose targeted sanctions on persons or entities involved in significant cyber-attacks that constitute an external threat to the Union or its member states. Measures can also be imposed in response to cyber-attacks against third countries or international organisations, where they support Common Foreign and Security Policy objectives.
Current listings under the regime apply to 19 individuals and seven entities. Sanctioned actors face asset freezes, while EU citizens and companies are prohibited from making funds or economic resources available to them. Listed individuals are also subject to travel bans preventing them from entering or transiting through EU territory.
The Council said the individual listings will continue to be reviewed every 12 months. It also said the measures are intended to deter malicious cyber activity and uphold the international rules-based order by ensuring accountability for those responsible.
The sanctions mechanism forms part of the EU’s broader cyber diplomacy toolbox, established in 2017 to strengthen coordinated diplomatic responses to malicious cyber activity. The Council said the EU and its member states would continue working with international partners to promote an open, free, stable and secure cyberspace.
Why does it matter?
The decision shows how cybersecurity has become part of the EU’s foreign policy and sanctions toolkit, not only a matter of technical defence. By extending cyber sanctions listings, the EU is reinforcing its use of diplomatic and economic measures to deter malicious cyber activity, attribute responsibility and signal that significant cyber-attacks can carry geopolitical consequences.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK has brought into force regulations requiring the Information Commissioner to prepare a code of practice on the processing of personal data in relation to AI and automated decision-making.
The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 were made on 16 April, laid before Parliament on 21 April, and came into force on 12 May. The regulations apply across England and Wales, Scotland and Northern Ireland.
Under the regulations, the Information Commissioner must prepare a code giving guidance on good practice in the processing of personal data under the UK GDPR and the Data Protection Act 2018 when developing and using AI and automated decision-making systems.
The code must also include guidance on good practice in the processing of children’s personal data. Automated decision-making is defined by reference to provisions in the UK GDPR and the Data Protection Act 2018 inserted through the Data (Use and Access) Act 2025.
The instrument also modifies the panel requirements for preparing or amending the code. Any panel established to consider the code must not consider or report on aspects relating to national security.
The explanatory note states that no full impact assessment was prepared for the instrument because the regulations themselves are not expected to have a significant impact on the private, voluntary or public sectors. The Information Commissioner must produce an impact assessment when preparing the code.
Why does it matter?
The regulations move UK guidance on AI, automated decision-making and personal data onto a statutory track. The eventual code could become an important reference point for organisations using AI systems that process personal data, particularly where automated decisions or children’s data are involved. For now, the main development is procedural: the Information Commissioner is required to prepare the code, while the practical compliance details will follow through that process.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Australia’s New South Wales state has clarified that creating, sharing, or threatening to share sexually explicit images, videos, or audio of a person without consent is a criminal offence, including where the material has been digitally altered or generated using AI.
The state government strengthened protections in 2025 by amending the Crimes Act 1900 to cover digitally generated deepfakes. The law already applied to sexually explicit image material, but now also covers content created or altered by AI to place someone in a sexual situation they were never in.
The reforms mean that non-consensual sexual images or audio are covered regardless of how they were made. Threatening to create or share such material is also a criminal offence in New South Wales, with penalties of up to three years in prison, a fine of up to A$11,000, or both.
Courts can also order offenders to remove or delete the material. Failure to comply with such an order can result in up to two years’ imprisonment, a fine of up to A$5,500, or both.
The law operates alongside existing child abuse material offences. Under criminal law, any material depicting a person under 18 in a sexually explicit way can be treated as child abuse material, including AI-generated content.
Criminal proceedings against people under 16 can begin only with the approval of the Director of Public Prosecutions, which is intended to ensure that only the most serious matters involving young people enter the criminal justice system.
Limited exemptions apply for proper purposes, including genuine medical, scientific, law enforcement, or legal proceedings-related purposes. A review of the law will take place 12 months after it comes into effect to assess how it is working and whether changes are needed.
The changes are intended to address the misuse of AI and deepfake technology to harass, shame, or exploit people through fake digital content. New South Wales says its criminal law works alongside national online safety frameworks, including the work of Australia’s eSafety Commissioner, as it seeks to keep privacy and consent protections aligned with emerging technologies.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
With the rapid expansion of AI technologies, agentic AI is moving from experimentation to deployment on a scale larger than ever before. As a result, these systems are being given far greater autonomy to perform tasks with limited human input, much to the delight of enterprise magnates.
Companies such as Microsoft, Google, Anthropic, and OpenAI are increasingly developing agentic AI systems capable of automating vulnerability detection, incident response, code analysis, and other security tasks traditionally handled by human teams.
The appeal of using agentic AI as a first line of defence is clear, as cybersecurity teams face mounting pressure from the growing volume of attacks. According to the Microsoft Digital Defense Report 2025, the company now detects more than 600 million cyberattacks daily, ranging from ransomware and phishing campaigns to identity attacks. The International Monetary Fund has also warned that cyber incidents have more than doubled since the COVID-19 pandemic, with the potential to trigger institutional failures and cause enormous financial losses.
To add insult to injury, ransomware groups such as Conti and LockBit, along with state-linked actors such as Salt Typhoon, have shown increased activity from 2024 through early 2026, targeting critical infrastructure and global communications and exploiting the window before stronger defences are in place.
In such circumstances, fully embracing agentic AI may seem like an ideal answer to the cybersecurity challenges looming on the horizon. Systems capable of autonomously detecting threats, analysing vulnerabilities, and accelerating response times could significantly strengthen cyber resilience.
Yet the same autonomy that makes these systems attractive to defenders could also be exploited by malicious actors. If agentic AI becomes a defining feature of cyber defence, policymakers and companies may soon face a more difficult question: how can they maximise its benefits without creating an entirely new layer of cyber risk?
Why cybersecurity is turning to agentic AI
The growing interest in agentic AI is not simply driven by the rise in cyber threats. It is also a response to the operational limitations of modern security teams, which are often overwhelmed by repetitive tasks that consume time and resources.
Security analysts routinely handle phishing alerts, identity verification requests, vulnerability assessments, patch management, and incident prioritisation — processes that can become difficult to manage at scale. Many of these tasks require speed rather than strategic decision-making, creating a natural opening for AI systems to operate with greater autonomy.
Microsoft has aggressively moved into this space. In March 2025, the company introduced Security Copilot agents designed to autonomously handle phishing triage, data security investigations, and identity management. Rather than replacing human analysts, Microsoft positioned the tools to reduce repetitive workloads and enable security teams to focus on more complex threats.
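The division of labour described here, with routine alerts handled automatically and ambiguous cases escalated to people, can be illustrated with a toy triage routine. The sketch below is not Microsoft's Security Copilot logic; the alert fields, keywords and thresholds are all assumptions made for illustration.

```python
# Toy sketch of automated phishing triage with a human escalation path.
# This is NOT Security Copilot's logic; fields, keywords and thresholds
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PhishingAlert:
    sender_domain: str
    has_credential_form: bool
    url_reputation: float  # 0.0 (clean) to 1.0 (known bad), hypothetical feed

SUSPICIOUS_KEYWORDS = {"verify your account", "password expired", "urgent invoice"}

def triage(alert: PhishingAlert, body: str) -> str:
    """Return 'auto-close', 'auto-block' or 'escalate to analyst'."""
    score = alert.url_reputation
    if alert.has_credential_form:
        score += 0.4
    if any(kw in body.lower() for kw in SUSPICIOUS_KEYWORDS):
        score += 0.2
    if score >= 0.9:
        return "auto-block"          # high confidence: contain automatically
    if score <= 0.2:
        return "auto-close"          # routine noise: free up analyst time
    return "escalate to analyst"     # ambiguous: keep a human in the loop

# Example: an ambiguous alert is routed to a human rather than acted on.
alert = PhishingAlert("examp1e-login.com", has_credential_form=False, url_reputation=0.5)
print(triage(alert, "Please verify your account before Friday."))
```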
Google has approached the issue through vulnerability research. Through Project Naptime, the company demonstrated how AI systems could replicate parts of the workflow traditionally handled by human security researchers by identifying vulnerabilities, testing hypotheses, and reproducing findings.
Anthropic introduced another layer of complexity through Claude Mythos, a model built for high-risk cybersecurity tasks. While the company presented the model as a controlled release for defensive purposes, the announcement also highlighted how advanced cyber capabilities are becoming increasingly embedded in frontier AI systems.
Meanwhile, OpenAI has expanded partnerships with cybersecurity organisations and broadened access to specialised tools for defenders, signalling that major AI firms increasingly view cybersecurity as one of the most commercially viable applications for autonomous systems.
Together, these developments show that agentic AI is gradually becoming embedded in the cybersecurity infrastructure. For many companies, the question is no longer whether autonomous systems can support cyber defence, but how much responsibility they should be given.
When agentic AI tools become offensive weapons
The same capabilities that make agentic AI valuable to defenders also make it attractive to malicious actors. Systems designed to identify vulnerabilities, analyse code, automate workflows, and accelerate decision-making can be repurposed for offensive cyber operations.
Anthropic offered one of the clearest examples of that risk when it disclosed that malicious actors had used Claude in cyber campaigns. The company said attackers were not simply using the model for basic assistance, but were integrating it into broader operational workflows. The incident showed how agentic AI can move cyber misuse beyond advice and into execution.
The risk extends beyond large-scale cyber operations. Agentic AI systems could make phishing campaigns more scalable, automate reconnaissance, accelerate vulnerability discovery, and reduce the technical expertise needed to launch certain attacks. Tasks that once required specialist teams could become easier to coordinate through autonomous systems.
Security researchers have repeatedly warned that generative AI is already making social engineering more convincing through realistic phishing emails, cloned voices, and synthetic identities. More autonomous systems could further push those risks by combining content generation with independent action.
The concern is not that agentic AI will replace human hackers, but that cybercrime could become faster, cheaper, and more scalable, mirroring the same efficiencies that organisations hope to achieve through AI-powered defence.
The agentic AI governance gap
The governance challenge surrounding agentic AI is no longer theoretical. As autonomous systems gain access to internal networks, cloud infrastructure, code repositories, and sensitive datasets, companies and regulators are being forced to confront risks that existing cybersecurity frameworks were not designed to manage.
Policymakers are starting to respond. In February 2026, the US National Institute of Standards and Technology (NIST) launched its AI Agent Standards Initiative, focused on identity verification and authentication frameworks for AI agents operating across digital environments. The aim is simple but important: organisations need to know which agents can be trusted, what they are allowed to do, and how their actions can be traced.
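One simple way to picture the identity and authentication problem being tackled is an expiring, signed credential that names an agent and the scopes it may act within. The sketch below uses only the Python standard library and is an illustrative pattern, not a format specified by the AI Agent Standards Initiative.

```python
# Illustrative sketch of agent identity tokens: an HMAC-signed, expiring
# credential naming the agent and its permitted scopes. One simple pattern,
# not a NIST-specified format.
import hashlib
import hmac
import json
import time

SECRET = b"replace-with-a-managed-secret"  # assumption: organisation-managed key

def issue_token(agent_id: str, scopes: list[str], ttl_seconds: int = 900) -> str:
    claims = {"agent": agent_id, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, required_scope: str) -> bool:
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                  # tampered with, or unknown issuer
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return False                  # expired credential
    return required_scope in claims["scopes"]

token = issue_token("patch-triage-agent", ["read:alerts", "propose:patch"])
print(verify_token(token, "propose:patch"))    # True
print(verify_token(token, "modify:firewall"))  # False: scope never granted
```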
Governments are also becoming more cautious about deployment risks. In May 2026, the Cybersecurity and Infrastructure Security Agency (CISA) joined cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom in issuing guidance on the secure adoption of agentic AI services. The warning was clear: autonomous systems become more dangerous when they are connected to sensitive infrastructure, external tools, and internal permissions.
The private sector is adjusting as well. Companies are increasingly discussing safeguards such as restricted permissions, audit logs, human approval checkpoints, and sandboxed environments to limit the degree of autonomy granted to AI agents.
The questions facing businesses are becoming practical. Should an AI agent be allowed to patch vulnerabilities without approval? Can it disable accounts, quarantine systems, or modify infrastructure independently? Who is held accountable when an autonomous system makes the wrong decision?
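Those questions translate fairly directly into code-level guardrails. The sketch below shows one hedged interpretation: an allow-list of permitted actions, an approval queue for high-impact ones, and an audit log of every request. The action names and policy are illustrative assumptions rather than any vendor's implementation.

```python
# Sketch of a guardrail wrapper for agent actions: allow-listed permissions,
# an audit log, and a human approval checkpoint for high-impact actions.
# Action names and policy choices are illustrative assumptions.
import datetime

ALLOWED_ACTIONS = {"quarantine_host", "disable_account", "apply_patch"}
REQUIRES_HUMAN_APPROVAL = {"disable_account", "quarantine_host"}
AUDIT_LOG: list[dict] = []

def request_action(agent_id: str, action: str, target: str) -> str:
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "target": target,
    }
    if action not in ALLOWED_ACTIONS:
        entry["outcome"] = "denied (not permitted)"
    elif action in REQUIRES_HUMAN_APPROVAL:
        entry["outcome"] = "queued for analyst approval"
    else:
        entry["outcome"] = "executed automatically"
    AUDIT_LOG.append(entry)           # every request is recorded, approved or not
    return entry["outcome"]

print(request_action("ir-agent-01", "apply_patch", "web-server-12"))
print(request_action("ir-agent-01", "disable_account", "j.doe"))
print(request_action("ir-agent-01", "delete_backups", "vault-3"))
```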
Agentic AI may become one of cybersecurity’s most effective defensive tools. Its success, however, will depend on whether governance frameworks evolve quickly enough to keep pace with the technology itself.
How companies are building guardrails around agentic AI
As concerns around autonomous cyber systems grow, companies are increasingly experimenting with safeguards designed to prevent agentic AI from becoming an uncontrolled risk. Rather than granting unrestricted access, many organisations are limiting what AI agents can see, what systems they can interact with, and what actions they can execute without human approval.
Anthropic has restricted access to Claude Mythos over concerns about offensive misuse, while OpenAI has recently expanded its Trusted Access for Cyber programme to provide vetted defenders with broader access to advanced cyber tools. Both approaches reflect a growing consensus that powerful cyber capabilities may require tiered access rather than unrestricted deployment.
The broader industry is moving in a similar direction. CrowdStrike has increasingly integrated AI-driven automation into threat intelligence and incident response workflows while maintaining human oversight for critical decisions. Palo Alto Networks has also expanded its AI-powered security automation tools designed to reduce response times without fully removing human analysts from the decision-making process.
Cloud providers are also becoming more cautious about autonomous access. Amazon Web Services, Google Cloud, and Microsoft Azure have increasingly emphasised zero-trust security models, role-based permissions, and segmented access controls as enterprises deploy more automated tools across sensitive infrastructure.
Meanwhile, sectors such as finance, healthcare, and critical infrastructure remain particularly cautious about fully autonomous deployment due to the potential consequences of false positives, accidental shutdowns, or disruptions to essential services.
As a result, security teams are increasingly discussing safeguards such as audit logs, sandboxed environments, role-based permissions, staged deployments, and human approval checkpoints to balance speed with accountability. For now, many companies seem ready to embrace agentic AI, but not without keeping one hand on the emergency brake.
The future of cybersecurity may be agentic
Agentic AI is unlikely to remain a niche experiment for long. The scale of modern cyber threats, combined with the mounting pressure on security teams, means organisations will continue to look for faster and more scalable defensive tools.
That shift could significantly improve cybersecurity resilience. Autonomous systems may help organisations detect threats earlier, reduce response times, address workforce shortages, and manage the growing volume of attacks that human teams increasingly struggle to handle alone.
At the same time, the technology’s long-term success will depend as much on restraint as on innovation. Without clear governance frameworks, operational safeguards, and human oversight, the same tools designed to strengthen cyber defence could introduce entirely new vulnerabilities.
The future of cybersecurity may increasingly belong to agentic AI. Whether that future becomes safer or more volatile may depend on how responsibly governments, companies, and security teams manage the transition.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Immigration, Refugees and Citizenship Canada has released its first AI Strategy, outlining how the department plans to use AI across immigration, citizenship, refugee, passport and settlement services while maintaining human oversight, privacy protection and accountability.
The strategy aligns with Canada’s AI Strategy for the Federal Public Service 2025-2027 and frames AI as a tool to improve service delivery, reduce administrative burdens, strengthen programme integrity and respond to fraud and cybersecurity threats. IRCC says its approach is based on responsible adoption, governance, workforce readiness, transparency and public engagement.
The department says it has used advanced analytics and machine learning since 2018 to support application triage, workload distribution and risk detection. It says machine learning can help identify straightforward, low-risk files for expedited officer review, while outcomes remain subject to officer verification.
IRCC states that it does not use autonomous AI agents or intelligent automation systems that can refuse client applications. It says systems that learn and adapt independently are generally unsuitable for administrative decision-making because their logic can be difficult to explain or reproduce.
The strategy identifies several areas of interest, including client service, fraud detection, document anomaly detection, settlement support, data analysis, accessibility and internal knowledge management. IRCC is also experimenting with AI tools for tasks such as document fraud detection, anomaly detection and support for administrative processes.
Privacy is presented as a central guardrail. IRCC says AI systems must use only the minimum personal information necessary for specific, justified purposes, and must include privacy assessments, mitigation measures, testing, auditing and Canadian-controlled environments for sensitive information. The department also says it will avoid black-box AI models for application decisions and keep AI systems explainable, supervised, secure and regularly tested.
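A minimal illustration of the data-minimisation principle is a filter that strips every field not on an approved allow-list before a record reaches an AI or analytics component. The field names and allow-list below are hypothetical and do not reflect IRCC's actual systems or schemas.

```python
# Toy data-minimisation filter: only fields on an approved allow-list are
# passed to a downstream AI/analytics component. Field names and the
# allow-list are illustrative assumptions, not IRCC's actual schema.
APPROVED_FIELDS = {"application_type", "submission_date", "country_of_residence"}

def minimise(record: dict) -> dict:
    """Return a copy of the record containing only approved fields."""
    dropped = set(record) - APPROVED_FIELDS
    if dropped:
        print(f"Dropped before processing: {sorted(dropped)}")
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

application = {
    "application_type": "study permit",
    "submission_date": "2026-03-02",
    "country_of_residence": "Brazil",
    "passport_number": "X1234567",      # sensitive, not needed for triage
    "date_of_birth": "1999-07-14",      # sensitive, not needed for triage
}
print(minimise(application))
```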
The strategy sets five implementation priorities: establishing an AI Centre of Expertise, strengthening governance, building an AI-ready workforce, accelerating experimentation and developing an engagement strategy with employees, clients, vulnerable groups and partner organisations. IRCC describes the strategy as a living document that will evolve with domestic and international AI policy developments.
Why does it matter?
Immigration decisions can have life-changing consequences, making AI use in this field especially sensitive. IRCC’s strategy shows how governments are trying to use AI to improve efficiency and detect risks while drawing limits around autonomous decision-making, black-box models and the handling of personal information. The real test will be whether safeguards around human oversight, explainability, privacy and bias are strong enough as AI becomes more embedded in public administration.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UK National Cyber Security Centre has warned organisations not to rush into using AI models to find software vulnerabilities without first considering security, legal, operational, and resourcing risks.
In guidance signed by Ruth C, Head of Vulnerability Management Group at the NCSC, the agency says organisations may feel pressure to use new AI models for vulnerability discovery, but should first ask what they are trying to achieve and whether AI is the best way to improve security.
The NCSC stresses that finding vulnerabilities does not automatically improve an organisation’s security and could make it worse if teams lack a process to manage, prioritise, and fix the issues that AI tools identify. It says basic cyber hygiene, including patching known vulnerabilities and controlling unauthorised access, is still more important for most organisations than focusing on zero-days.
The guidance also urges organisations to prioritise exploitable vulnerabilities rather than simply counting how many issues have been found. It notes that more than 40,000 vulnerabilities were assigned CVEs in 2025, while CISA’s Known Exploited Vulnerabilities catalogue tracked about 400 newly exploited vulnerabilities and around 40 that were zero-days when first exploited.
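In practice, prioritising exploitable vulnerabilities often means cross-referencing scanner output against CISA's Known Exploited Vulnerabilities catalogue. The sketch below assumes the catalogue's public JSON feed and its field names, which should be verified against CISA's documentation; the scanner findings shown are hypothetical.

```python
# Sketch: prioritise findings that appear in CISA's Known Exploited
# Vulnerabilities (KEV) catalogue rather than triaging by raw count.
# The feed URL and JSON field names are assumptions about the public feed;
# check CISA's documentation before relying on them.
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_ids(url: str = KEV_URL) -> set[str]:
    with urllib.request.urlopen(url) as resp:
        catalogue = json.load(resp)
    return {item["cveID"] for item in catalogue.get("vulnerabilities", [])}

def prioritise(scanner_findings: list[str]) -> list[str]:
    """Return the subset of findings known to be exploited in the wild."""
    kev = load_kev_ids()
    return [cve for cve in scanner_findings if cve in kev]

if __name__ == "__main__":
    # Hypothetical output from an AI-assisted scanner.
    findings = ["CVE-2024-3400", "CVE-2023-99999", "CVE-2021-44228"]
    print("Fix first:", prioritise(findings))
```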
The NCSC highlights several risks associated with using AI for vulnerability discovery, including information leakage, infrastructure security, sandboxing, production-environment access, permissions granted to large language models, data retention policies, and legal compliance. It also advises organisations using hosted models to consider the physical location and legal jurisdictions that apply to them.
The guidance recommends starting with the external attack surface and verifying results through both AI and human review. It says keeping pace with frontier AI cyber developments will almost certainly be critical to cyber resilience over the next decade, but adds that organisations should invest in people as well as tools, stating that AI models accelerate the skills of cybersecurity staff rather than replacing them.
The NCSC also says organisations should understand how everything they develop or use is patched, with good asset management and dependency management described as crucial foundations for cyber resilience.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UK cybersecurity sector generated £14.7 billion in annual revenue and £9.1 billion in gross value added, according to the government’s Cyber Security Sectoral Analysis 2026.
The report, commissioned by the Department for Science, Innovation and Technology and produced by Ipsos and Perspective Economics, identifies 2,603 firms active in the UK cybersecurity market. That marks a 20% increase from the previous report, which identified 2,165 firms.
Employment in the sector reached about 69,600 full-time equivalent roles, an increase of around 2,300 jobs, or 3%, over the past year. The report says this is the lowest recorded employment growth rate since the series began in 2018, suggesting a softening in workforce growth.
Revenue rose by around 11% from last year’s estimate of £13.2 billion, while gross value added increased by 17%. The report also estimates GVA per employee at £131,200, up from £116,200, suggesting higher productivity within the cybersecurity ecosystem.
The analysis also points to growth in AI security and software security. It estimates that 111 firms active and registered in the UK now clearly offer cybersecurity for AI systems as an explicit product or service, up 68% from the previous baseline. Of those, 32 are specialist providers focused mainly or exclusively on AI security, while 79 offer AI security as part of a broader portfolio.
Software security is also expanding across the market. The report estimates that 1,141 firms provide software security services, an increase of 181 firms, or 19%, from the previous baseline. Nearly half of all UK cybersecurity providers appear to be involved in software security provision, with application security, cloud and container security, secure development, supply chain security, and DevSecOps highlighted as key areas.
Investment remains more subdued. Dedicated cybersecurity firms raised £184 million across 47 deals in 2025, down 11% from £206 million across 59 deals in 2024. The report says investors highlighted AI security and post-quantum cryptography as key themes, while also noting procurement barriers and limited UK growth-stage capital as ongoing concerns.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In a keynote address at the European Summit on Artificial Intelligence and Children in Copenhagen, European Commission President Ursula von der Leyen said the EU must consider whether young people should be given more time before using social media. She said the question was not whether young people should have access to social media, but ‘whether social media should have access to young people’.
Von der Leyen said almost all EU member states had called for an assessment of whether a minimum age is needed, while Denmark and nine other member states want to introduce one. She added that the Commission’s expert panel on child safety online is advising on the issue, and that a legal proposal could follow this summer, depending on its findings.
Von der Leyen linked the debate to wider concerns about platform business models. She argued that children’s attention was being treated as a commodity through addictive design, advertising, algorithmic recommendation systems and content that can harm mental health. She also pointed to risks linked to AI-generated sexualised images and child sexual abuse material.
The Commission President cited enforcement under the Digital Services Act, including actions involving TikTok, Meta and X, as well as investigations into platforms over whether children are being drawn into harmful content. She said the EU had created strong tools through the Digital Services Act and the Digital Markets Act, and that platforms breaking the rules would be held accountable.
Von der Leyen said that any age restriction model would depend on reliable age verification. She said the EU had developed an open-source age verification app that would soon be available, including a rollout in Denmark by summer, and that the Union was working with member states to integrate it into digital wallets.
The speech also framed child online safety as a matter of platform responsibility, not just parental control. Von der Leyen said social media companies should be responsible for product safety in the same way other industries are, adding that ‘safety by design’ protections should be strengthened and expanded. She also pointed to the forthcoming Digital Fairness Act, which is expected to address addictive and harmful design practices.
Why does it matter?
The speech suggests that the EU’s child online safety policy may be moving from platform accountability after harm occurs towards more structural controls over access, design and age verification. A possible social media delay would mark a major shift in how the EU approaches children’s participation online, raising questions about privacy-preserving age checks, children’s rights, parental responsibility, platform duties and the balance between protection and digital inclusion.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!