ENISA to host 2026 telecom and digital infrastructure security forum

The European Union Agency for Cybersecurity (ENISA) has announced its Telecom and Digital Infrastructure Security Forum 2026, bringing together telecom experts, policymakers and national authorities to address emerging cybersecurity risks.

The forum will focus on challenges, including cyberattacks on telecom networks, resilience issues such as power dependencies, and the security implications of new technologies. It aims to support strategic and technical dialogue across the sector.

Organised with the Cyprus Presidency of the Council of the EU, the event provides a private setting for collaboration among industry specialists, regulators and the wider cybersecurity community, without public broadcasting.

Discussions will contribute to ongoing efforts to strengthen coordinated telecom security measures and policy development across the EU, with the event taking place in Nicosia, Cyprus.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Peacebuilding and AI in focus at UNSSC webinar series

The United Nations System Staff College (UNSSC) has highlighted growing interest across the UN and the wider peacebuilding community in how artificial intelligence is shaping conflict prevention, arguing that the technology can support peace efforts but cannot replace human judgement, diplomacy, and oversight.

The reflection draws on a three-part webinar series launched by UNSSC to examine AI governance, field use, and ethical risks in peacebuilding. According to the reflection, one message ran across all three discussions: AI may offer real value for conflict prevention, but its role should remain supportive rather than substitutive.

The piece argues that AI is already being used across the UN peace and security pillar and should be introduced only where it improves effectiveness, such as by handling repetitive tasks and allowing staff to focus on analysis, leadership, and political judgement. It also stresses that principles long associated with peacebuilding, including trust and ‘do no harm’, should apply across the full AI stack, from data and infrastructure to model design and deployment.

Examples cited from the webinar series include the use of augmented intelligence in early warning systems, where machine learning is combined with human contextual knowledge, and an AI-enabled WhatsApp chatbot used in Yemen to broaden participation in mediation, particularly among women and young people. The reflection presents these cases as evidence that AI can extend the reach of peacebuilding tools without replacing practitioners.

The final part of the reflection focuses on governance and ethics. It argues that while ethical AI principles are widely discussed, they need to be translated into practical, context-specific safeguards, especially in conflict settings. It also notes that risks differ across use cases such as early warning, social media monitoring, and mediation support, and says meaningful governance requires input from diplomats, researchers, mediators, and the private sector.

UNSSC says the webinar series drew between 300 and 500 registrants per session, which it presents as evidence of strong demand for more targeted learning on AI and peacebuilding. The college argues that its role should extend beyond convening discussion to turning those debates into practical knowledge for UN practitioners working at the intersection of AI and conflict prevention.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US military expands AI deployment across classified networks

The US Department of Defence has announced agreements with leading technology firms to deploy advanced AI capabilities across classified military networks. The initiative forms part of a broader effort to position the United States as a more AI-enabled military power.

Companies including OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, and SpaceX are reported to be involved in supporting deployment within high-security Impact Level 6 and 7 environments. The integration is intended to improve data synthesis, situational awareness, and operational decision-making across defence systems.

The department’s internal platform, GenAI.mil, is also being presented as a central part of this push, with senior officials describing it as a way to put advanced AI tools into the hands of personnel across the department and across different classification levels.

Officials have emphasised that maintaining access to a range of AI providers is important to avoid vendor lock-in and preserve long-term flexibility. In that sense, the move reflects a wider attempt to strengthen national security through advanced technology while keeping the military AI stack diversified rather than dependent on a single company or model family. That reading, however, is an inference drawn from the Pentagon's reported framing of the agreements.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Swisscom says AI and geopolitics are reshaping the cyber threat landscape

Swisscom has published its 2026 Cybersecurity Threat Radar, warning that cyber threats have grown more complex over the past year as geopolitical tensions and disruptive technologies put added pressure on digital systems. The report presents AI, supply chain exposure, digital sovereignty, and operational technology security as four strategic risk areas for organisations.

The report highlights state-linked cyber activity, hybrid influence operations such as disinformation, and supply chain attacks as key drivers of the current threat environment. It argues that digital transformation has increased dependence on cloud services, third-party software, AI systems, and networked industrial infrastructure, making organisations more exposed to cascading failures and external dependencies.

On AI, Swisscom describes insecure AI use as a risk multiplier. While AI can improve productivity, the report warns that poor governance, weak visibility into models, and uncontrolled use of AI tools in operational environments can expand attack surfaces, affect data quality, and create new compliance challenges.

Software supply chains are also identified as a persistent vulnerability. Swisscom says a single compromised component or manipulated update process can have far-reaching consequences across interconnected systems, making software integrity, origin verification, and traceability increasingly important as mitigation measures.

The convergence of information technology and operational technology is presented as another growing area of concern. In sectors such as energy, healthcare, manufacturing, and building automation, incidents can have consequences that go well beyond financial loss, affecting critical infrastructure, production, and even human safety.

The report also places greater emphasis on digital sovereignty, arguing that organisations need clearer visibility over where data is processed, which legal regimes apply, and how dependent they are on cloud and technology providers. In that sense, Swisscom frames cybersecurity less as a narrow IT function and more as a strategic governance issue tied to resilience, control, and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU and Republic of Korea launch aviation partnership on technical cooperation and cyber resilience

European and South Korean aviation authorities are conducting a three-week series of technical exchanges in Seoul, covering safety oversight, airspace management, and cybersecurity.

The European Union Aviation Safety Agency (EASA) and South Korea’s Ministry of Land, Infrastructure and Transport are participating under the EU–Republic of Korea Aviation Partnership Project, an EU-funded initiative announced by the European External Action Service (EEAS).

The programme began with a three-day session on the International Civil Aviation Organisation’s Universal Safety Oversight Audit Programme (USOAP), which assesses national aviation safety oversight systems. EASA presented findings from its most recent ICAO audit, with discussions covering oversight frameworks, organisational structures, and lessons identified.

A workshop on performance-based navigation and airspace management followed, addressing procedures to improve the predictability and efficiency of aircraft arrivals, including at airports with parallel runways.

A third workshop on aviation cybersecurity is scheduled for the coming week. It will cover security considerations across aviation systems, including aircraft certification processes and air traffic management infrastructure.

The activities are designed to facilitate technical exchange between Korean and European stakeholders across the aviation sector, according to EASA.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government seeks industry cooperation to strengthen AI-driven cyber resilience

The UK government has called on leading AI companies to collaborate on building advanced cyber defence capabilities, as threats grow in scale and sophistication.

Speaking ahead of CYBERUK, Security Minister Dan Jarvis emphasised that AI-driven security will become a defining challenge, requiring innovation at unprecedented speed and scale.

Government officials warn that AI is already reshaping the threat landscape, with hostile states and criminal groups increasingly deploying automated systems to identify vulnerabilities.

The number of nationally significant cyber incidents handled by authorities more than doubled in 2025, highlighting the urgency of strengthening national resilience.

To address these risks, businesses are being encouraged to sign a voluntary Cyber Resilience Pledge, committing to stronger governance, early warning systems, and supply chain security standards.

Alongside this initiative, the UK government will invest £90 million over the next three years to support cyber defences, particularly for small and medium-sized enterprises.

The strategy forms part of a broader National Cyber Action Plan, reflecting a shift towards integrating AI into national security infrastructure.

Officials argue that effective cooperation between government and industry will be essential to protect critical systems and maintain economic stability in an increasingly automated threat environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s National Cyber Security Centre chief warns of ‘perfect storm’ for UK cybersecurity

Dr Richard Horne, chief executive of the UK’s National Cyber Security Centre, has described the country as facing a ‘perfect storm’ for cybersecurity.

Speaking at the CYBERUK conference in Glasgow, Horne described developments in AI and wider international tensions as creating a period of ‘tumultuous uncertainty’. He added that the definition of cybersecurity is expanding as technology becomes more deeply embedded in robotics, autonomous systems, and human-integrated technologies.

Horne called for what he described as a ‘cultural shift’ across organisations, adding: ‘cybersecurity is the responsibility of everyone, whether they sit on the Board or the IT help desk… cybersecurity is part of their mission.’

He also argued: ‘organisations that do not focus on their technology base…as core to their prosperity … are no longer just naïve but are failing to grasp the reality of today’s world.’

On the threat landscape, Horne noted that incident numbers remain ‘fairly steady’, but that the source of attacks has shifted: ‘the majority of the nationally significant incidents that the NCSC is handling now originate directly or indirectly from nation states.’

He also described cyberspace as part of the contested space ‘between peace and war’ and warned that the UK is seeing Russia apply lessons learned during its invasion of Ukraine beyond the battlefield. In that context, he argued that recent conflicts show ‘cyber operations are now integral to conflict’ and that ‘cybersecurity is the home front’.

Addressing frontier AI, Horne said: ‘Frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cybersecurity are still to be addressed.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europol-backed operation shuts down thousands of dark web fraud sites

A global law enforcement operation supported by Europol has led to the shutdown of more than 373,000 dark web websites linked to fraudulent activity and the advertisement of child sexual abuse material.

The operation, known as ‘Operation Alice’, was launched on 9 March 2026 under the leadership of German authorities, with participation from 23 countries. The investigation, which began in 2021, initially targeted a dark web platform referred to as ‘Alice with Violence CP’.

According to Europol, investigators identified a single operator responsible for managing a network of hundreds of thousands of onion domains. These websites advertised child sexual abuse material and cybercrime-as-a-service offerings, including access to stolen financial data and systems.

Authorities state that the services were fraudulent, designed to extract payments without delivering the advertised material.

The operation has so far resulted in the identification of 440 customers worldwide, with further investigations ongoing against more than 100 individuals. Law enforcement agencies also seized 105 servers and multiple electronic devices during the coordinated action.

Europol provided analytical support, facilitated information exchange, and assisted in tracing cryptocurrency transactions linked to the network.

Authorities also reported that measures were taken throughout the investigation to identify and protect children at risk. An international arrest warrant has been issued for the suspected operator, who is reported to have generated significant profits through the scheme.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic dispute pushes Pentagon toward new AI providers

The Pentagon is accelerating efforts to replace Anthropic after the company was designated a supply-chain risk, marking a sharp shift in US defence AI strategy. The move follows a breakdown in talks over safeguards governing military use of AI, particularly around surveillance and autonomous weapons.

Cameron Stanley, the Pentagon’s chief digital and AI officer, said engineering work is underway to deploy alternative large language models in government-controlled environments. He indicated that while transitioning from Anthropic’s tools could take more than a month, new systems are expected to be operational soon.

The decision threatens a $200 million contract and could exclude Anthropic from future defence partnerships. The US administration has set a six-month timeline for federal agencies to shift away from the company, signalling a broader push to diversify AI suppliers and reduce dependency risks.

Rival providers are already stepping in. OpenAI and xAI have been approved for classified work, while Google is introducing Gemini AI tools across the Pentagon workforce, initially on unclassified networks before expanding into sensitive environments.

Anthropic has challenged the designation in court, arguing it violates constitutional protections and could harm its business. Despite the legal dispute, defence officials have made clear they are moving forward with an ‘AI-first’ strategy to accelerate the adoption of advanced models across military operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!