UN Global Mechanism on ICT security discusses procedures, debates co-facilitator appointments

The United Nations Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible state behaviour in the use of ICTs held its third organisational meeting, focusing on operational arrangements for the newly established permanent forum.

The session, chaired by Ambassador Egriselda López of El Salvador, addressed decision-making procedures, meeting schedules for 2026, and the structure of two dedicated thematic groups (DTGs), which will complement plenary sessions.

Delegations discussed the mechanism’s working methods, with López noting that decisions would be taken by consensus in line with UN General Assembly rules of procedure.

A central point of discussion was the appointment of co-facilitators for the two DTGs, one focusing on ICT security challenges and the other on capacity development. López indicated that she intended to appoint co-facilitators, taking into account geographic balance.

Several delegations, including the Russian Federation, the Islamic Republic of Iran, China, and Belarus, said that such appointments should be agreed upon by consensus among member states. Other delegations, including the European Union, the United States, and Australia, expressed support for the Chair’s approach and emphasised the need to proceed with preparations for substantive work.

Delegations also addressed stakeholder participation, noting that non-governmental organisations, the private sector, and academia would contribute in a consultative manner, while decision-making would remain intergovernmental.

The provisional agenda for future substantive plenary sessions was discussed, with some delegations, including Iran and the Russian Federation, requesting adjustments to ensure alignment with the agreed mandate. Other delegations supported the structure proposed by the Chair, which is organised around the five pillars of the framework for responsible state behaviour in cyberspace.

The meeting concluded without agreement on the provisional agenda or the appointment of co-facilitators. The Chair said she would conduct informal consultations with member states to address outstanding issues ahead of the first substantive plenary session scheduled for July 2026.

The Global Mechanism is mandated to advance discussions on threats, norms and principles, the application of international law, confidence-building measures, and capacity development, as part of its role as a permanent UN forum on ICT security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN Global Mechanism on ICT security advances work, shifts focus to implementation

The United Nations Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible state behaviour in the use of ICTs held its second meeting, during which member states conducted a general exchange of views on the work of the newly established permanent forum.

The session, chaired by Ambassador Egriselda López of El Salvador, focused on agenda item four, during which 61 member states and three intergovernmental organisations delivered statements on priorities for the mechanism.

Delegations emphasised the transition from the previous Open-Ended Working Group (OEWG) to the new permanent mechanism, highlighting the need to build on existing agreements and move towards practical implementation. Several speakers stressed that the mechanism should focus on translating the agreed framework for responsible state behaviour in cyberspace into concrete outcomes, rather than negotiating new commitments.

Across statements, member states reaffirmed the five-pillar framework covering threats, norms and principles, the application of international law, confidence-building measures, and capacity development.

Capacity development was highlighted as a cross-cutting priority, particularly by developing countries and Small Island Developing States, which pointed to the need for demand-driven and sustainable approaches to strengthen cybersecurity capabilities. Delegations also noted challenges, including ransomware, threats to critical infrastructure, and the impact of emerging technologies such as AI.

Member states welcomed the establishment of two dedicated thematic groups, one addressing substantive ICT security challenges and another focused on capacity development, as a means to support more detailed discussions and implementation.

Several delegations reaffirmed that international law, including the UN Charter, applies to cyberspace and called for further work on its practical implementation. Many also emphasised the importance of maintaining a consensus-based, intergovernmental process, while enabling contributions from stakeholders, including the private sector, academia, and civil society, in line with agreed modalities.

The meeting forms part of the initial phase of the Global Mechanism’s work, following its establishment as a permanent UN forum on ICT security. The mechanism is expected to convene its first substantive plenary session in July 2026, with dedicated thematic group meetings scheduled for December 2026.


New quantum threat could weaken cryptocurrency encryption systems

Google has warned that advances in quantum computing could weaken the widely used cryptographic systems protecting cryptocurrencies and digital infrastructure. A new whitepaper suggests future quantum machines may need fewer resources than previously estimated to break elliptic curve cryptography.

The research focuses on the elliptic curve discrete logarithm problem, which underpins much of today’s blockchain security. Findings suggest quantum algorithms like Shor’s could run with fewer qubits and gates, increasing concerns about cryptographic resilience.

To address the risk, the paper recommends a transition to post-quantum cryptography, which is designed to resist quantum attacks. It also outlines short-term blockchain measures, including avoiding reuse of vulnerable wallet addresses and preparing digital asset migration strategies.

Google also introduced a responsible disclosure approach using zero-knowledge proofs to communicate vulnerabilities without exposing exploitable details.

The company says this balances transparency and security, supporting coordinated efforts across crypto and research communities to prepare for quantum threats.


UN launches Global Mechanism on ICT security, elects chair for 2026–2027

The United Nations has convened the organisational session of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible state behaviour in the use of ICTs, a new permanent forum established by UN General Assembly resolution 80/16.

The session was opened by Izumi Nakamitsu, Under-Secretary-General and High Representative for Disarmament Affairs, who facilitated the election of Ambassador Egriselda López of El Salvador as chair for the 2026–2027 biennium.

During the meeting, the Russian Federation said it would not block the consensus-based appointment of López to ensure the swift launch of the mechanism. However, it expressed ‘deep disquiet’ regarding the pre-election process, stating that the UN Office for Disarmament Affairs (UNODA) had initiated an informal silence procedure on 13 March regarding López’s candidacy without prior discussion with member states. The delegation described the step as ‘unauthorised’ under UN General Assembly resolutions 79/237 and 80/16.

In her remarks following the election, López emphasised that the mechanism should focus on implementation of existing commitments, stating the need to move from agreements to ‘concrete results.’ She underlined that the process remains intergovernmental and should be guided by consensus among member states.

The session adopted its provisional agenda and proceeded with a general exchange of views among delegations.

Several regional groups outlined priorities for the mechanism. Nigeria, speaking on behalf of the African Group, highlighted capacity development as a cross-cutting priority and pointed to cybersecurity threats affecting developing countries, including ransomware and attacks on critical infrastructure.

The Pacific Islands Forum, represented by the Solomon Islands, emphasised the vulnerabilities of Small Island Developing States and called for practical implementation of agreed measures.

The Arab Group and the European Union also stressed the importance of translating existing frameworks into action, with the EU highlighting the need to enhance implementation of the agreed framework for responsible state behaviour in cyberspace.

Across statements, delegations highlighted several common priorities, including:

  • strengthening capacity development efforts;
  • addressing ransomware and threats to critical infrastructure;
  • advancing the application of international law in cyberspace;
  • ensuring that the mechanism builds on the outcomes of the previous Open-Ended Working Group.

Member states also welcomed the establishment of two dedicated thematic groups, one focusing on substantive issues and another on capacity development, and called for clear mandates and coordination between them.

The Global Mechanism is mandated to advance discussions across five pillars:

  • threats
  • norms and principles
  • the application of international law
  • confidence-building measures
  • capacity development.

It will convene annual plenary sessions, thematic group meetings, and a review conference every five years, leading up to the 2030 review.

The organisational session marks the start of the mechanism’s substantive work as a permanent UN forum on ICT security.


OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions, which may be reassigned between teams depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.


EU privacy bodies back cybersecurity overhaul

The European Data Protection Board and the European Data Protection Supervisor have backed proposals to strengthen the EU cybersecurity law while safeguarding personal data. Their joint opinion addresses reforms to the Cybersecurity Act and updates to the NIS2 Directive.

Regulators support plans to reinforce the mandate of the European Union Agency for Cybersecurity and expand cybersecurity certification across digital supply chains. Clearer coordination between ENISA and privacy authorities is seen as essential for consistent oversight.

The opinion also calls for limits on the processing of personal data and for prior consultation on technical rules affecting privacy. Certification schemes should align with the GDPR and help organisations demonstrate compliance.

Additional recommendations include broader cybersecurity skills training and a single EU entry point for personal data breach notifications. Proposed changes would also classify digital identity wallet providers as essential entities under EU security rules.


AI added to St Helens council strategic risk register

In the UK, the St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Council officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.


Deepfake abuse crisis escalates worldwide

AI-generated deepfake abuse is emerging as a serious global threat, with women and girls disproportionately affected by non-consensual and harmful digital content. Advances in AI make it easy to create manipulated content that can spread across platforms within minutes and reach millions.

Data highlights the scale of the issue. The vast majority of deepfake content online consists of explicit material, overwhelmingly targeting women.

Accessible and often free tools have lowered the barrier to entry, enabling widespread misuse. At the same time, the ability to endlessly replicate and share such content makes removal nearly impossible once it is published.

Legal responses remain fragmented, with many pre-existing laws leaving gaps in addressing AI-generated deepfake abuse. Enforcement issues, such as cross-border challenges and limited digital forensics capabilities, make it unlikely that perpetrators will face consequences.

Pressure is mounting on governments and technology platforms to act. Calls for reform include clearer legislation, faster obligations to remove content, improved law enforcement capabilities, and stronger support systems for victims.

Without coordinated global action, deepfake abuse is set to expand alongside the technologies enabling it.


AI agent causes internal data leak at Meta

Meta recently confirmed that an AI agent inadvertently exposed sensitive company and user data to some employees. The leak occurred when an engineer followed a suggestion made by the AI agent in a forum, leaving the data exposed for about two hours.

Meta stated that no user data was mishandled and emphasised that human errors could cause similar issues.

The incident reflects broader challenges in deploying agentic AI tools within major tech companies. Amazon faced similar issues, with internal AI tools causing outages and operational errors, showing risks of quickly integrating AI into critical workflows.

Experts describe these deployments as experimental, with companies testing AI at scale without fully assessing potential risks.

Security specialists note that AI agents lack the contextual awareness that human engineers accumulate over years of experience. Lacking long-term operational knowledge, AI can make decisions that compromise security, a factor in the Meta breach.

Analysts warn that such errors are likely to recur as AI adoption accelerates.

The episode comes amid growing attention on agentic AI’s potential to disrupt workflows, affect productivity, and introduce new vulnerabilities. Industry observers caution that AI tools must be carefully monitored and accompanied by robust safeguards to prevent future incidents.


AI fuels rise in cyber scams

Cybercrime incidents have surged in Estonia as AI tools enable more convincing scams, leading to sharply rising losses. Authorities reported thousands of phishing and fraud cases affecting individuals and businesses.

Criminals are using AI to generate fluent messages in Estonian, removing a key warning sign that once helped people detect scams. Experts say language accuracy has made fraudulent calls and messages harder to identify.

Growing awareness of scams is also fuelling public anxiety, with some users considering abandoning digital services. Officials warn that loss of trust could undermine confidence in digital systems.

Authorities are urging stronger safeguards and public education to counter the cybersecurity threats. Banks, telecom firms and digital identity providers are introducing new protections while campaigns aim to improve digital awareness.
