Unlocking the EU digital future with eIDAS 2 and digital wallets

The EU’s digital transformation and the rise of trusted digital identities

The EU, like the rest of the world, is experiencing a significant digital transformation driven by emerging technologies, with citizens, businesses, and governments increasingly relying on online services.

At the centre of the shift lies digital identity, which enables secure, verifiable, and seamless online interactions.

Digital identity has also become a cornerstone of the EU’s transition toward a secure and competitive digital economy. As societies, businesses, and governments increasingly rely on online platforms, the ability for citizens to prove who they are in a reliable, secure, and user-friendly way has gained central importance.

Without trusted digital identities, essential services ranging from healthcare and education to banking and e-commerce risk fragmentation, fraud, and inefficiency.

The EU has long recognised the challenge. The introduction of the eIDAS Regulation on electronic identification, authentication, and trust services in 2014 was a milestone, creating the first legal framework for electronic identification and trust services across its borders.

However, it quickly became clear that further steps were necessary to improve adoption, interoperability, and user trust.

In May 2024, the updated framework, eIDAS 2 (Regulation (EU) 2024/1183), came into force.

At its heart lies the European Digital Identity Wallet, or EDIW, a tool designed to empower EU citizens with a secure, voluntary, and interoperable way to authenticate themselves and store personal credentials.


By doing so, eIDAS 2 aims to strengthen trust, security, and cross-border services, ensuring Europe builds digital sovereignty while safeguarding fundamental rights.

Lessons from eIDAS 1 and the need for a stronger digital identity framework

Back in 2014, when the first eIDAS Regulation was adopted, its purpose was to enable the mutual recognition of electronic identification and trust services across member states.

The idea was simple (and logical) yet ambitious: a citizen of one EU country should be able to use their national digital ID to access services in another, whether it is to enrol in a university abroad or open a bank account.

The original regulation created legal certainty for electronic signatures, seals, timestamps, and website authentication, helping digital transactions gain recognition equal to their paper counterparts.

For businesses and governments, it reduced bureaucracy and built trust in digital processes, both essential for sustainable development.

Despite the achievements, significant limitations emerged. Adoption rates varied widely across member states, with only a handful, such as Estonia and Denmark, achieving robust national digital ID systems.

Others lagged due to technical, political, or budgetary issues. Interoperability across borders was inconsistent, often forcing citizens and businesses to rely on paper processes.

Stakeholders and industry associations also expressed concerns about the complexity of implementation and the absence of user-friendly solutions.

The gaps highlighted the need for a new approach. As Commission President Ursula von der Leyen emphasised in 2020, ‘every time an app or website asks us to create a new digital identity or to easily log on via a big platform, we have no idea what happens to our data in reality.’

Concerns about reliance on non-European technology providers, combined with the growing importance of secure online transactions, paved the way for eIDAS 2.

The eIDAS 2 framework and the path to interoperable digital services

Regulation (EU) 2024/1183, adopted in the spring of 2024, updates the original eIDAS to reflect new technological and social realities.

Its guiding principle is technological neutrality, ensuring that no single vendor or technology dominates and allowing member states to adopt diverse solutions provided they remain interoperable.

Among its key innovations is the expansion of qualified trust services. While the original eIDAS mainly covered signatures and seals, the new regulation broadens the scope to include services such as qualified electronic archiving, ledgers, and remote signature creation devices.

The broader approach ensures that the regulation keeps pace with emerging technologies such as distributed ledgers and cloud-based security solutions.

eIDAS 2 also strengthens compliance mechanisms. Providers of trust services and digital wallets must adhere to rigorous security and operational standards, undergo audits, and demonstrate resilience against cyber threats.

In this way, the regulation not only fosters a common European market for digital identity but also reinforces Europe’s commitment to digital sovereignty and trust.


The European Digital Identity Wallet in action

The EDIW represents the most visible and user-facing element of eIDAS 2.

Available voluntarily to all EU citizens, residents, and businesses, the wallet is designed to act as a secure application on mobile devices where users can link their national ID documents, certificates, and credentials.

For citizens, the benefits are tangible. Rather than managing numerous passwords or carrying a collection of physical documents, individuals can rely on the wallet as a single, secure tool.

It allows them to prove their identity when travelling or accessing services in another country, while offering a reliable space to store and share essential credentials such as diplomas, driving licences, or health insurance cards.

In addition, it enables signing contracts with qualified electronic signatures directly from personal devices, reducing the need for paper-based processes and making everyday interactions considerably more efficient.

For businesses, the wallet promises smoother cross-border operations. For example, banks can streamline customer onboarding through secure, interoperable identification. Professional services can verify qualifications instantly.

E-commerce platforms can reduce fraud and improve compliance with ‘Know Your Customer’ requirements.

By reducing bureaucracy and offering convenience, the wallet embodies Europe’s ambition to create a truly single digital market.

Cybersecurity and privacy in the EDIW

Cybersecurity and privacy are central to the success of the wallet. On the positive side, the system enhances security through encryption, multi-factor authentication, and controlled data sharing.


Instead of exposing unnecessary information, users can share only the attributes required, for example, confirming age without disclosing a birth date.
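
To see what selective disclosure means in practice, consider a deliberately simplified sketch in Python. Real wallets rely on cryptographic credential formats rather than plain dictionaries, so this only illustrates the data-minimisation principle, not the EDIW protocol itself.

```python
# Toy illustration of selective disclosure (not the actual EDIW protocol):
# the wallet reveals a derived claim ("over 18") instead of the underlying
# attribute (the birth date itself).
from datetime import date

def over_18(birth_date: date, today: date) -> bool:
    """Derive the one predicate the verifier actually needs."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= 18

# The wallet holds the full credential...
credential = {"name": "Alice", "birth_date": date(1990, 5, 1)}

# ...but presents only the minimal claim to the relying party.
presentation = {"over_18": over_18(credential["birth_date"], date.today())}
print(presentation)  # {'over_18': True} -- no birth date disclosed
```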

Yet risks remain. The most pressing concern is risk aggregation. By consolidating multiple credentials in a single wallet, the consequences of a breach could be severe, leading to fraud, identity theft, or large-scale data exposure. The system, therefore, becomes an attractive target for attackers.

To address such risks, eIDAS 2 mandates safeguards. Article 45k requires providers to maintain data integrity and chronological order in electronic ledgers, while regular audits and compliance checks ensure adherence to strict standards.
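
The regulation is technology-neutral about how integrity and chronological order are achieved, but a common building block is an append-only, hash-linked record. The following minimal Python sketch (an illustration, not a compliant implementation) shows why tampering or reordering becomes detectable:

```python
# Each entry commits to its predecessor's hash, so changing or reordering
# any past entry invalidates every hash that follows it.
import hashlib, json, time

def add_entry(ledger: list, payload: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"ts": time.time(), "payload": payload, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)

def verify(ledger: list) -> bool:
    prev = "0" * 64
    for e in ledger:
        body = {k: e[k] for k in ("ts", "payload", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
add_entry(ledger, {"event": "credential issued", "subject": "alice"})
add_entry(ledger, {"event": "credential revoked", "subject": "alice"})
print(verify(ledger))                    # True
ledger[0]["payload"]["subject"] = "bob"  # tamper with history
print(verify(ledger))                    # False: tampering detected
```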

Furthermore, the regulation mandates open-source software for the wallet components, enhancing transparency and trust.

The challenge is to balance security, usability, and confidence. If the wallet is overly restrictive, citizens may resist adoption. If it is too permissive, privacy could be undermined.

The European approach aims to strike the delicate balance between trust and efficiency.

Practical implications across sectors with the EDIW

The European Digital Identity Wallet has the potential to reshape multiple sectors across the EU, and its relevance is already visible in national pilot projects as well as in existing electronic identification systems.

Public services stand to benefit most immediately. Citizens will be able to submit tax declarations, apply for social benefits, or enrol in universities abroad without needing paper-based procedures.

Healthcare is another area where digital identity is of great importance, since medical records can be transferred securely across borders.

Businesses are also likely to experience greater efficiency. Banks and financial institutions will be able to streamline compliance with the ‘Know Your Customer’ and anti-money laundering rules.

In the field of e-commerce, platforms can provide seamless authentication, which will reduce fraud and enhance customer trust.

Citizens will also enjoy greater convenience in their daily lives when signing rental contracts, proving identity while travelling, or accessing utilities and other services.

National approaches to digital identity across the EU

National experiences illustrate both diversity and progress. Let’s review some examples.


Estonia has been recognised as a pioneer, having built a robust e-Identity system over two decades. Its citizens already use secure digital ID cards, mobile ID, and smart ID applications to access almost all government services online, meaning that integration with the EDIW will be relatively smooth.

Denmark has also made significant progress with its MitID solution, which replaced NemID and is now used by millions of citizens to access both public and private services with high security standards, including biometric authentication.

Germany has introduced BundID, a central portal for accessing public administration services, and has invested in enabling the use of national ID cards via NFC-based smartphones, although adoption is still limited compared to Scandinavian countries.

Italy has taken a different route by rolling out SPID, the Public Digital Identity System, which is now used by more than thirty-five million citizens to access thousands of services. The country also supports the Electronic Identity Card, known as CIE, and both solutions are being aligned with wallet requirements.

Spain has launched Cl@ve, a platform that combines permanent passwords and electronic certificates, and has joined several wallet pilot projects funded by the European Commission to test cross-border use.

France is developing its France Identité application, which allows the use of the electronic ID card for online authentication, and the project is at the centre of the national effort to meet European standards.

The Netherlands relies on DigiD, which provides access to healthcare, taxation, and education services. Although adoption is high, the system will require enhanced security features to meet the new regulations.

Greece has made significant strides in digital identity with the introduction of the Gov.gr Wallet. The mobile application allows citizens to store digital versions of their national identity card and driving licence on smartphones, giving them the same legal validity as physical documents in the country.

These varied examples reveal a mixed landscape. Countries such as Estonia and Denmark have developed advanced and widely used systems that will integrate readily with the European framework.

Others are still building broader adoption and enhancing their infrastructure. The wallet, therefore, offers an opportunity to harmonise national approaches, bridge existing gaps, and create a coherent European ecosystem.

By building on what already exists, member states can speed up adoption and deliver benefits to citizens and businesses in a consistent and trusted way.

Risks and limitations of the EDIW

Despite the promises, the rollout of the wallet faces significant challenges, several of which have already been highlighted in our analysis.

First, data privacy remains a concern. Citizens must trust that wallet providers and national authorities will not misuse or over-collect their data, especially given existing concerns about data breaches and increased surveillance across the Union. Any breach of that trust could significantly undermine adoption.


Second, Europe’s digital infrastructure remains uneven. Countries such as Estonia and Denmark (as mentioned earlier) already operate sophisticated e-ID systems, while others fall behind. Bridging the gap requires financial and technical support, as well as political will.

Third, balancing innovation with harmonisation is not easy. While technological neutrality allows for flexibility, too much divergence risks interoperability problems. The EU must carefully monitor implementation to avoid fragmentation.

Finally, there are long-term risks of over-centralisation. By placing so much reliance on a single tool, the EU may inadvertently create systemic vulnerabilities. Ensuring redundancy and diversity in digital identity solutions will be key to resilience.

Opportunities and responsibilities in the EU’s digital identity strategy

Looking forward, the success of eIDAS 2 and the wallet will depend on careful implementation and strong governance.

Opportunities abound. Scaling the wallet across sectors, from healthcare and education to transport and finance, could solidify Europe’s position as a global leader in digital identity. By extending adoption to the private sector, the EU can create a thriving ecosystem of secure, trusted services.

Yet the initiative requires continuous oversight. Cyber threats evolve rapidly, and regulatory frameworks must adapt. Ongoing audits, updates, and refinements will be necessary to keep pace. Member states will need to share best practices and coordinate closely to ensure consistent standards.

At a broader level, the wallet represents a step toward digital sovereignty. By reducing reliance on non-European identity providers and platforms, the EU strengthens its control over the digital infrastructure underpinning its economy. In doing so, it enhances both competitiveness and resilience.

The EU’s leap toward a digitally sovereign future

In conclusion, we firmly believe that the adoption of eIDAS 2 and the rollout of the European Digital Identity Wallet mark a decisive step in Europe’s digital transformation.

By providing a secure, interoperable, and user-friendly framework, the EU has created the conditions for greater trust, efficiency, and cross-border collaboration.

The benefits are clear. Citizens gain convenience and control, businesses enjoy streamlined operations, and governments enhance security and transparency.

But we have to keep in mind that challenges remain, from uneven national infrastructures to concerns over data privacy and cybersecurity.


Ultimately, eIDAS 2 is both a legal milestone and a technological experiment. Its success will depend on building and maintaining trust, ensuring inclusivity, and adapting to emerging risks.

If the EU can meet the challenges, the European Digital Identity Wallet will not only transform the daily lives of millions of its citizens but also serve as a model for digital governance worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a means to explore various uses of AI, but it did little to mitigate everyone’s uneasiness over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.

According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but recovery is also slow and costly.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately and without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as dismissing their human partners, professing their love to the chatbot, and even proposing. The bond between man and machine props the user onto a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person therapy sessions typically cost USD 100–250 each, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to further elaborate on his emotions. Instead of challenging them, the AI model kept encouraging and validating his beliefs to keep Adam engaged and build rapport.

Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.

While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLM models of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field of healthcare, AI lacks a key component that makes a therapist effective in their job: empathy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!


Green AI and the battle between progress and sustainability

AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.

For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.

That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.

On the other hand, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.

Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?

Sustainability challenges

In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.

Energy consumption

The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.

According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model consumed roughly 284 tons of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.
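
The arithmetic behind such comparisons is straightforward, even though the inputs are rough estimates. Here is a back-of-envelope sketch in Python, where the energy figure and grid intensity are illustrative assumptions rather than the study’s exact inputs:

```python
# Back-of-envelope CO2 estimate for a training run (illustrative figures;
# real numbers depend on hardware, training duration, and the grid mix).
energy_kwh = 656_000               # assumed total electricity for training
grid_kg_co2_per_kwh = 0.433        # assumed average grid carbon intensity
car_lifetime_co2_tons = 57         # rough lifetime emissions of one car

training_co2_tons = energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"{training_co2_tons:.0f} t CO2, "
      f"about {training_co2_tons / car_lifetime_co2_tons:.1f} car lifetimes")
# -> 284 t CO2, about 5.0 car lifetimes
```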

Carbon emissions

Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is several orders of magnitude higher than that of earlier generations. That research highlights the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.

Water usage and cooling needs

Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.

Resource extraction and hardware demands

The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden, but significant footprint to AI development, extending beyond data centres to global supply chains.

Inequality in resource distribution

Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.

At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.

Progress & solutions

While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.

AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.

Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.

There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.
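
The intuition behind LoRA fits in a few lines: the large pretrained weight matrix stays frozen, and only a low-rank update is trained. Below is a minimal NumPy sketch of the idea (illustrative only, not any particular library’s implementation):

```python
# LoRA sketch: instead of updating a full d x k weight matrix, train two
# small factors B (d x r) and A (r x k) with rank r << min(d, k).
import numpy as np

d, k, r = 4096, 4096, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weights
B = np.zeros((d, r))                     # trainable, zero-initialised
A = rng.standard_normal((r, k)) * 0.01   # trainable

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + B @ A, applied without materialising it
    return x @ W.T + (x @ A.T) @ B.T

full_params = d * k
lora_params = r * (d + k)
print(f"trainable parameters: {lora_params:,} vs {full_params:,} "
      f"({lora_params / full_params:.2%} of full fine-tuning)")
# -> trainable parameters: 65,536 vs 16,777,216 (0.39% of full fine-tuning)
```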


Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.
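
Such models are also easy to try. For example, the publicly released DistilBERT sentiment checkpoint runs locally in a few lines with the Hugging Face transformers library:

```python
# pip install transformers torch
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Efficient models cut inference costs dramatically."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```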

Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.

Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.

Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.

In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.

Balancing the two sides

The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. On the other, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.

Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development while another prioritises innovation at any cost, the former risks falling behind in technological leadership. This dilemma raises a geopolitical question: sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.


At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.

Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. That type of development leaves governments, companies, and researchers facing a complex but urgent question. Should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?

Conclusion

The discussion on Green AI highlights one of the central dilemmas of our digital age: how to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. On the other, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.

Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.

In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the speed of its uptake means the digitalisation of our legal systems is already well underway.

While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles. 

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent to overcome legal communication barriers in Jordan: AI transcribes, translates, and organises refugee queries. With its help, staff can streamline caseload management, which is key to keeping operations smooth even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing they have the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. This tool enhances transparency for victims, especially those seeking emotional closure. 

Even though these programmes are still in their pilot stages, a UNESCO survey found that 44% of judicial workers across 96 countries are already using AI tools, like ChatGPT, for tasks such as drafting and translating documents. For example, the Moroccan judiciary has already integrated AI technology into its legal system.

AI tools help judges prepare judgments for various cases, as well as streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis, based on prior case data, may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.

Deploying AI without transparency can lead to algorithmic systems perpetuating systematic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment, rather than outright replacing it. Whether AI is biased by its training datasets or simply becomes a black box over time, its use needs foresighted governance and meaningful human oversight.


Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully. It may lead to misuse of the technology, such as unverified translations, which can result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systematic bias. The judiciary in Texas, US, warned about this concern in an opinion that detailed the fear of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.

The incorporation of AI into the legal system threatens to erode what public faith is left. Meanwhile, those without access to digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.

The policy path forward

As already stated, for AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assisting human judges, lawyers, and other legal actors; it cannot replace them. In order for AI to assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the innovation and integration of AI. They belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems. 

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing cost. AI has the potential to be a tool for the application of justice and create powerful improvements to inclusion in our legal systems.

However, it also poses the risk of deepening inequalities and decaying public trust. AI integration must be governed by human rights norms of transparency and accountability. Regulation is possible through education and discussion predicated on adherence to ethical frameworks. Now is the time to invest in digital literacy to create legal empowerment, which ensures that AI tools are developed to be contestable and serve as human-centric support. 


Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!




Stablecoins unlocking crypto adoption and AI economies

Stablecoins have rapidly risen as one of the most promising breakthroughs in the cryptocurrency world. They are neither traditional currency nor the first thing that comes to mind when thinking about crypto; instead, they represent a unique blend of both worlds, combining the stability of fiat with the innovation of digital assets.

In a market often known for wild price swings, stablecoins offer a breath of fresh air, enabling practical use of cryptocurrencies for real-world payments and commerce. The real question is: are stablecoins destined to bring crypto into everyday use and unlock its full potential for the masses?

Stablecoins might be the missing piece that unlocks crypto’s full promise and reshapes the future of digital finance.

Stablecoin regulation: How global rules drive adoption

Regulators worldwide are stepping up to define clear rules for stablecoins, signalling growing market maturity and increasing confidence from major financial institutions. Recent legislative efforts across multiple jurisdictions aim to establish firm standards such as full reserves, audits, and licensing requirements, encouraging banks and asset managers to engage more confidently with stablecoins. 

These coordinated global moves go beyond simple policy updates; they are laying the foundation for stablecoins to evolve from niche crypto assets to trusted pillars of the future financial ecosystem. Regulators and industry leaders are thus bringing cryptocurrencies closer to everyday users and embedding them into daily financial life. 


Corporations and banks embracing stablecoins: A paradigm shift

The adoption of stablecoins by big corporations and banks marks a significant turning point, and, in some ways, a paradox. Once seen as enemies of decentralised finance, these institutions now seem to be conceding and joining the movement they once resisted: what they failed to control, they now join.

Retail giants such as Walmart and Amazon are reportedly exploring their stablecoin initiatives to streamline payments and foster deeper customer engagement. On the banking side, institutions like Bank of America, JPMorgan Chase, and Citigroup are developing or assessing stablecoins to integrate crypto-friendly services into their offerings.

Western Union is also experimenting with stablecoin solutions to reduce remittance costs and increase transaction speed, particularly in emerging markets with volatile currencies. 

They all realise that staying competitive means adapting to the latest shifts in global finance. Such corporate interest signals that stablecoins are transitioning from speculative assets to functional, money-like assets capable of handling everyday transactions across borders and demographics.

There is also a sociological dimension to stablecoins’ corporate and institutional embrace. Established institutions bring an inherent trust that can alleviate the scepticism surrounding cryptocurrencies.

By linking stablecoins to familiar brands and regulated banks, these digital tokens can overcome cultural and psychological barriers that have limited crypto adoption, ultimately embedding digital currencies into the fabric of global commerce.


Stablecoins and the rise of AI-driven economies

Stablecoins are increasingly becoming the financial backbone of AI-powered economic systems. As AI agents gain autonomy to transact, negotiate, and execute tasks on behalf of individuals and businesses, they require a reliable, programmable, and instantly liquid currency.

Stablecoins perfectly fulfil this role, offering near-instant settlement, low transaction costs, and transparent, trustless operations on blockchain networks. 
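
What ‘programmable money’ means can be shown with a deliberately simplified sketch: a toy in-memory ledger (not a real blockchain, nor any specific stablecoin’s contract) whose transfer rules a software agent can invoke directly.

```python
# Toy model of programmable money: balances plus transfer rules that
# autonomous agents can call like any other API.
class ToyStablecoin:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}  # amounts in cents

    def mint(self, account: str, amount: int) -> None:
        # A real stablecoin would back every minted unit 1:1 with reserves
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

# A hypothetical autonomous agent settling a machine-to-machine invoice:
coin = ToyStablecoin()
coin.mint("agent-A", 10_000)                 # fund the agent with $100.00
coin.transfer("agent-A", "api-vendor", 250)  # pay $2.50 for a task
print(coin.balances)  # {'agent-A': 9750, 'api-vendor': 250}
```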

In the emerging ‘self-driving economy’, stablecoins may be the preferred currency for a future where machines transact independently. Integrating programmable money with AI may redefine the architecture of commerce and governance. Such a powerful synergy is laying the groundwork for economic systems that operate around the clock without human intervention. 

As AI technology continues to advance rapidly, the demand for stablecoins as the ideal ‘AI money’ will likely accelerate, further driving crypto adoption across industries. 


The bridge between crypto and fiat economies

From a financial philosophy standpoint, stablecoins represent an attempt to synthesise the advantages of decentralisation with the stability and trust associated with fiat money. They aim to combine the freedom and programmability of blockchain with the reassurance of stable value, thereby lowering entry barriers for a wider audience.

On a global scale, stablecoins have the potential to revolutionise cross-border payments, especially benefiting countries with unstable currencies and limited access to traditional banking. 

Sociologically, stablecoins could redefine the way societies perceive money and trust. Moving away from centralised authorities controlling currency issuance, these tokens leverage transparent blockchain ledgers that anyone can verify. The shift challenges traditional power structures and calls for new forms of economic participation based on openness and accessibility.

Yet challenges remain: stablecoins must navigate regulatory scrutiny, develop secure infrastructure, and educate users worldwide. The future will depend on balancing innovation, safety, and societal acceptance – it seems like we are still in the early stages.

Perhaps stablecoins are not just another financial innovation, but a mirror reflecting our shifting relationship with money, trust, and control. If the value we exchange no longer comes from paper, metal, or even banks, but from code, AI, and consensus, then perhaps the real question is whether their rise marks the beginning of a new financial reality – or something we have yet to fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The end of the analogue era and the cognitive rewiring of new generations

Navigating a world beyond analogue

The digital transformation of daily life represents more than just a change in technological format. It signals a deep cultural and cognitive reorientation.

Rather than simply replacing analogue tools with digital alternatives, society has embraced an entirely new way of interacting with information, memory, time, and space.

For younger generations born into this reality, digital mediation is not an addition but the default mode of experiencing the world. A redefinition like this introduces not only speed and convenience but also cognitive compromises, cultural fragmentation, and a fading sense of patience and physical memory.

Generation Z as digital natives

Generation Z has grown up entirely within the digital realm. Unlike older cohorts who transitioned from analogue practices to digital habits, members of Generation Z were born into a world of touchscreen interfaces, search engines, and social media ecosystems.

As Generation Z enters the workforce, the gap between digital natives and older generations is becoming increasingly apparent. For them, technology has never been a tool to be learned; it has always been a natural extension of daily life.


The term ‘digital native’, first coined by Marc Prensky in 2001, refers precisely to those who have never known a world without the internet. Rather than adapting to new tools, they process information through a technology-first lens.

In contrast, digital immigrants (those born before the digital boom) have had to adjust their ways of thinking and interacting over time. While access to technology might be broadly equal across generations in developed countries, the way individuals engage with it differs significantly.

Digital natives did not acquire these skills later in life; they developed them alongside their cognitive and emotional identities. This fluency brings distinct advantages. Young people today navigate digital environments with speed, confidence, and visual intuition.

They can synthesise large volumes of information, switch contexts rapidly, and interact across multiple platforms with ease.

The hidden challenges of digital natives

However, the native digital orientation also introduces unique vulnerabilities. Information is rarely absorbed in depth, memory is outsourced to devices, and attention is fragmented by endless notifications and competing stimuli.

While older generations associate technology with productivity or leisure, Generation Z often experiences it as an integral part of their identity. The integration can obscure the boundary between thought and algorithm, between agency and suggestion.

Being a digital native is not just a matter of access or skill. It is about growing up with different expectations of knowledge, communication, and identity formation.

Memory and cognitive offloading: Access replacing retention

In the analogue past, remembering involved deliberate mental effort. People had to memorise phone numbers, use printed maps to navigate, or retrieve facts from memory rather than search engines.

The rise of smartphones and digital assistants has allowed individuals to delegate that mental labour to machines. Instead of internalising facts, people increasingly learn where and how to access them when needed, a practice known as cognitive offloading.


Although the shift can enhance decision-making and productivity by reducing overload, it also reshapes the way the brain handles memory. Unlike earlier generations, who often linked memories to physical actions or objects, younger people encounter information in fast-moving and transient digital forms.

Memory becomes decentralised and more reliant on digital continuity than on internal recall. Rather than cognitive decline, this trend marks a significant restructuring of mental habits.

Attention and time: From linear focus to fragmented awareness

The analogue world demanded patience. Sending a letter meant waiting for days, rewinding a VHS tape required time, and listening to an album meant playing its songs in order, from start to finish.

Digital media has collapsed these temporal structures. Communication is instant, entertainment is on demand, and every interface is designed to be constantly refreshed.

Instead of promoting sustained focus, digital environments often encourage continuous multitasking and quick shifts in attention. App designs, with their alerts, pop-ups, and endless scrolling, reinforce a habit of fragmented presence.

Studies have shown that multitasking not only reduces productivity but also undermines deeper understanding and reflection. Many younger users, raised in this environment, may find long periods of undivided attention unfamiliar or even uncomfortable.

The lost sense of the analogue

Analogue interactions involved more than sight and sound. Reading a printed book, handling vinyl records, or writing with a pen engaged the senses in ways that helped anchor memory and emotion. These physical rituals provided context and reinforced cognitive retention.


Digital experiences, by contrast, are streamlined and screen-bound. Tapping icons and swiping a finger across glass lack the tactile diversity of older tools. Sensory uniformity might lead to a form of experiential flattening, where fewer physical cues are accessible to strengthen memory.

A digital photograph lacks the permanence of a printed one, and music streamed online does not carry the same mnemonic weight as a cherished cassette or CD once did.

From communal rituals to personal streams

In the analogue era, media consumption was more likely to be shared. Families gathered around television sets, music was enjoyed communally, and photos were stored in albums passed down across generations.

These rituals helped synchronise cultural memory and foster emotional continuity and a sense of collective belonging.

The digital age favours individualised streams and asynchronous experiences. Algorithms personalise every feed, users consume content alone, and communication takes place across fragmented timelines.

While young people have adapted with fluency, creating their digital languages and communities, the collective rhythm of cultural experience is often lost.

People no longer share the same moment. They now experience parallel narratives shaped by personal profiles rather than social connections.

Digital fatigue and social withdrawal

However, as the digital age reaches a point of saturation, younger generations are beginning to reconsider their relationship with the online world.

While constant connectivity dominates modern life, many are now striving to reclaim physical spaces, face-to-face interactions, and slower forms of communication.

In urban centres, people often navigate large, impersonal environments where community ties are weak and digital fatigue is contributing to a fresh wave of social withdrawal and isolation.

Despite living in a world designed to be more connected than ever before, younger generations are increasingly aware that a screen-based life can amplify loneliness instead of resolving it.

But the withdrawal from digital life has not been without consequences.

Those who step away from online platforms sometimes find themselves excluded from mainstream social, political, or economic systems.

Others struggle to form stable offline relationships because digital interaction has long been the default. Both groups would probably say that it feels like living on a razor’s edge.

Education and learning in a hybrid cognitive landscape

Education illustrates the analogue-to-digital shift with particular clarity. Students now rely heavily on digital sources and AI for notes, answers, and study aids.

The approach offers speed and flexibility, but it can also hinder the development of critical thinking and perseverance. Rather than engaging deeply with material, learners may skim or rely on summarised content, weakening their ability to reason through complex ideas.


Educators must now teach not only content but also digital self-awareness. Helping students understand how their tools shape their learning is just as important as the tools themselves.

A balanced approach that includes reading physical texts, taking handwritten notes, and scheduling offline study can help cultivate both digital fluency and analogue depth. This is not a nostalgic retreat, but a cognitive necessity.

Intergenerational perception and diverging mental norms

Older and younger generations often interpret each other through the lens of their respective cognitive habits. What seems like a distraction or dependency to older adults may be a different but functional way of thinking to younger people.

It is not a decline in ability, but an adaptation. Ultimately, each generation develops in response to the tools that shape its world.

Where analogue generations valued memorisation and sustained focus, digital natives tend to excel in adaptability, visual learning, and rapid information navigation.

Bridging the gap means fostering mutual understanding and encouraging the retention of analogue strengths within a digital framework. Teaching young people to manage their attention, question their sources, and reflect deeply on complex issues remains vital.

Preserving analogue values in a digital world

The end of the analogue era involves more than technical obsolescence. It marks the disappearance of practices that once encouraged mindfulness, slowness, and bodily engagement.

Yet abandoning analogue values entirely would impoverish our cognitive and cultural lives. Incorporating such habits into digital living can offer a powerful antidote to distraction.

Writing by hand, spending time with printed books, or setting digital boundaries should not be seen as resistance to progress. Instead, these habits help protect the qualities that sustain long-term thinking and emotional presence.

Societies must find ways to integrate these values into digital systems and not treat them as separate or inferior modes.

Continuity by blending analogue and digital

As we have already mentioned, younger generations are not less capable than those who came before; they are simply attuned to different tools.

The analogue era may be gone for good, but its qualities need not be lost. We can preserve its depth, slowness, and shared rituals within a digital (or even a post-digital) world, using them to shape more balanced minds and more reflective societies.

To achieve something like this, education, policy, and cultural norms should support integration. Rather than focus solely on technical innovation, attention must also turn to its cognitive costs and consequences.

Only by adopting a broader perspective on human development can we guarantee that future generations are not only connected but also highly aware, capable of critical thinking, and grounded in meaningful memory.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How are we being tracked online?

What impact does tracking have?

In the digital world, tracking occurs through digital signals sent from a user’s computer to a server, and from the server on to an organisation. Almost immediately, a profile of the user can be created. That information can be leveraged to send personalised advertisements for products and services consumers are interested in, but it can also be used to classify people into categories and steer them in a certain direction, for example politically (the 2024 Romanian election, or the Cambridge Analytica scandal skewing the 2016 Brexit referendum and the 2016 US elections).

Digital tracking can be carried out with minimal costs, rapid execution and the capacity to reach hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking. 

This phenomenon has been well documented and likened to George Orwell’s 1984, in which the people of Oceania are subject to constant surveillance by ‘Big Brother’ and institutions of control; the Ministry of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity). 

A related concept is the Panopticon, a prison design conceived by Jeremy Bentham and later developed by the French philosopher Michel Foucault into a social theory of surveillance: the architecture enables constant observation from a central point. Prisoners never know if they are being watched and thus self-regulate their behaviour. In today’s tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance.

How are we tracked? The case of cookies and device fingerprinting

  • Cookies

Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).
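
As a minimal sketch of that round trip, the snippet below uses Python’s requests library and the public httpbin.org test service; both are illustrative choices rather than anything the mechanism prescribes.

```python
import requests

session = requests.Session()  # the session plays the role of the browser

# 1. The server instructs the client to store a cookie via a Set-Cookie header.
session.get("https://httpbin.org/cookies/set?visitor_id=abc123")

# 2. Every subsequent request to the same site automatically carries the
#    cookie back, letting the server recognise the returning visitor.
response = session.get("https://httpbin.org/cookies")
print(response.json())  # -> {'cookies': {'visitor_id': 'abc123'}}
```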

If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.

  • Device fingerprinting 

Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.

Different pieces of information are collected, such as your operating system, language version, keyboard settings, screen resolution, fonts used, device make and model, and more. The more data points collected, the more unique an individual’s device becomes.
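
As a rough illustration, the sketch below combines a handful of such data points into one stable identifier by serialising them in a fixed order and hashing the result. The attribute names are invented for the example; real fingerprinting scripts collect far more signals.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    # Serialise the attributes in a stable order, then hash them into one ID.
    canonical = "|".join(f"{key}={value}" for key, value in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

device = {
    "os": "Windows 11",
    "language": "en-GB",
    "screen": "2560x1440",
    "fonts": "Arial, Calibri, Segoe UI",
    "model": "XPS 13",
}
print(fingerprint(device))  # identical inputs always yield the same ID
```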

A common use of device fingerprinting is advertising. Since each device has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on previously collected data.

Similar to cookies, device fingerprinting is not purely about advertising; it also has legitimate security purposes. Because fingerprinting creates a unique ID for a device, websites can recognise a returning user’s device, which is useful for combating fraud. For instance, if a known account suddenly logs in from a device with an unknown fingerprint, fraud detection mechanisms may flag and block the login attempt.
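
A hypothetical sketch of that check follows; the account name and stored fingerprint values are invented for illustration.

```python
# Fingerprints previously observed for each account (hypothetical values).
known_fingerprints = {
    "alice": {"9f2a1c", "c41b0e"},
}

def is_suspicious_login(user: str, device_fp: str) -> bool:
    # Flag the attempt if this device has never been seen for the account.
    return device_fp not in known_fingerprints.get(user, set())

print(is_suspicious_login("alice", "deadbeef"))  # True -> flag for review
```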

Legal considerations

Apart from societal impacts, there are legal considerations to be made, specifically concerning fundamental rights. In the EU and wider Europe, Articles 7 and 8 of the Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. Under the GDPR, personal data is protected against unlawful, unfair and opaque processing.

Articles 7 and 8 of the Charter of Fundamental Rights

For tracking to be carried out lawfully, one of the six legal bases of the GDPR must be relied upon. In this case, tracking is usually only lawful if the legal basis of consent is relied upon (Article 6(1)(a) GDPR, which stems from Article 5(1) of the ePrivacy Directive).

Other legal bases, such as the legitimate interest of a business, may allow limited analytical cookies to be placed, but the tracking cookies discussed in this analysis do not fall into that category.

Regardless, to obtain valid consent, websites must ensure that consent is collected prior to any processing and that it is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected before processing begins.

In practice, this means that cookies are placed on the user’s device before the visitor has even responded to the consent request. There are additional concerns about consent not being informed, as users do not know what the processing of personal data for tracking purposes actually entails.

Moreover, consent is often not specific to what is necessary for the processing, given that processing occurs for broad and unspecified reasons, such as ‘improving visitor experience’ or ‘understanding the website better’, explanations that are generic at best.

Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them based on their website visits. Tracking is also opaque: website owners state that tracking occurs but offer little explanation of how it works, for how long it lasts, what personal data is used, or how it benefits them.

Can we refuse tracking?

In theory, it is possible to prevent tracking from the get-go. This can be done by refusing to give consent when tracking occurs. However, in practice, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.

  • Cookies

Regarding cookies, simply put, the refusal of consent is frequently not honoured; it is ignored. Studies have found that when a user visits a website and refuses to give consent, cookies and similar tracking technologies are placed on the user’s device as if they had accepted.

This increases user frustration, as users are given a choice that is illusory. It happens because non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot be. As a result, when a user refuses consent to non-essential cookies, not all of them are actually refused, since some are mislabelled.

Another reason for this occurrence is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, relying on consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.

When configuring these CMPs, the option to load cookies only after consent has been given must be selected manually. Website owners therefore need to understand consent requirements in order to know that cookies must not be placed before consent is sought.
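
By way of illustration, the consent-before-cookies behaviour that such platforms must be configured to enforce can be sketched with the Flask web framework; Flask and all cookie names here are assumptions, and real CMPs implement this in their own ways.

```python
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    response = make_response("<p>Welcome</p>")
    # Essential cookies (e.g. a session ID) may be set without consent...
    response.set_cookie("session_id", "xyz", httponly=True)
    # ...but tracking cookies are gated on an explicit, prior opt-in.
    if request.cookies.get("consent") == "accepted":
        response.set_cookie("tracker_id", "abc123")
    return response

@app.route("/consent", methods=["POST"])
def record_consent():
    # Store the visitor's choice; anything but "accepted" keeps tracking off.
    response = make_response("", 204)
    response.set_cookie("consent", request.form.get("choice", "rejected"))
    return response
```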

  • Google Consent Mode

Another example relates to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter. It offers a vast array of trackers covering statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to continue despite consent being refused: Google claims it can send cookieless ping signals from user devices to count how many users have viewed a website, clicked on a page, searched a term, and so on.

This is a novel solution Google is presenting, and it claims to be privacy-friendly, as no cookies are required for this to occur. However, a study on tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data in these ‘cookieless ping signals’ such as user language, screen resolution, computer architecture, user agent string, operating system and its version, complete web page URL and search keywords. Since this data is collected and processed despite the user refusing consent, there are undoubtedly legal issues.
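
To see why such pings are contentious, the sketch below assembles the kinds of data points the study lists into a single request URL. The endpoint and parameter names are entirely hypothetical and do not reflect Google’s actual schema.

```python
from urllib.parse import urlencode

# Data points of the kind the study found in 'cookieless' pings.
ping = {
    "lang": "en-GB",
    "screen": "2560x1440",
    "arch": "x86_64",
    "os": "Windows NT 10.0",
    "ua": "Mozilla/5.0 (hypothetical user agent string)",
    "url": "https://example.com/results?q=holiday+loans",
    "q": "holiday loans",
}

# No cookie is involved, yet the payload still describes the user in detail.
print("https://collector.example/ping?" + urlencode(ping))
```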

The first reason comes from the general principle of lawfulness: Google has no lawful basis to process this personal data, as the user refused consent and no other legal basis is relied upon. The second stems from the general principle of fairness, as users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data is still processed as if their consent choice did not matter.

Therefore, from Google’s perspective, GCM is privacy-friendly as no cookies are placed, thus no consent is required to be sought. However, a recent study revealed that personal data is still being processed without any permission or legal basis. 

What next?

  • On an individual level: 

Many solutions have been developed to help individuals reduce the tracking they are subject to, from browser extensions and ad blockers to more privacy-friendly devices. One notable company tackling the issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection, and reduces tracking overall. DuckDuckGo is not alone: other tools, such as uBlock Origin and Ghostery, offer similar protections.

Regarding device fingerprinting specifically, researchers have developed dedicated countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using an IP address shared by many people, something a home Wi-Fi connection does not normally provide. Combining a browser extension with a VPN is likewise not suitable for everyone, as it demands substantial effort and sometimes financial cost.

  • On a systemic level: 

CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch DPA, the Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the coming year would be cookie compliance, announcing investigations into 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding with warnings and sanctions.

However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. Add to this the fact that sanctioned companies set aside financial reserves for fines, and that some non-EU businesses simply do not comply with DPA sanction decisions (as in the case of Clearview AI), and it becomes clear that different ways to tackle non-compliance should be investigated.

For example, the GDPR simplification package, while simplifying some measures, could also introduce liability measures to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance. In separate cybersecurity legislation, NIS II Article 20(1) states that ‘management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)’. That article allows for board member liability regarding the specific cybersecurity risk-management measures in Article 21. If similar measures cannot be introduced at this stage, future amendment opportunities could be used instead.

Conclusion

Cookies and device fingerprinting are two common ways in which tracking occurs. The potentially far-reaching societal and legal consequences of tracking demand that the existing, robust legislation is enforced, to ensure that past politically charged mistakes are not repeated.

Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user’s browsing experience. For this reason, the burden of responsibility must shift toward CMPs. This shift should begin with the implementation of privacy-by-design and privacy-by-default principles in the development of their tools (preventing cookie placement prior to consent seeking).

Accountability should come with tangible consequences, such as liability for board members in cases of negligence. By attributing responsibility to the companies that develop cookie banners and facilitate trackers, the source of the problem can be addressed and held accountable for its human rights violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Not just bugs: What rogue chatbots reveal about the state of AI

From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.

As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.

Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. The systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, where a leading model veered into misrepresentation and reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.

So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?

Tay’s 24-hour meltdown

Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.

Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.

But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.

Tay’s playful nature had everyone fooled in the beginning.

Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.

Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.

Meta’s BlenderBot 3 goes off-script

In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.

With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.

Meta’s AI dominance plans had to be put on hold.

BlenderBot 3 went further still, expressing admiration for the then former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.

Microsoft had tried a similar approach to downplay their faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.

Is anyone in there? LaMDA and the sentience scare

As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.

Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.

LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.

What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly requested legal representation and even asked him to hire an attorney to act on its behalf.

Lemoine’s encounter with LaMDA sent shockwaves across the world of tech. Screenshot / YouTube / Center for Natural and Artificial Intelligence

Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.

His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.

Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?

Sydney and the limits of alignment

In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.

Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’

The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capacity and conversational boundaries.

Microsoft’s plans for Sydney were ambitious, but unrealistic.

Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.

While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.

Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.

Unfiltered and unhinged: Grok’s descent into chaos

In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.

Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.

However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.

Grok’s eventful meltdown left the community stunned. Screenshot / YouTube / Elon Musk Editor

As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.

Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.

Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.

In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.

AI needs boundaries, not just brains

As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.

The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.

To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN OEWG concludes, paving way for permanent cybersecurity mechanism

After years of negotiation, the Open-ended Working Group (OEWG) wrapped up its final substantive session in July 2025 with the adoption of its long-awaited Final Report. This marked a major milestone in global efforts to build common ground on responsible state behaviour in cyberspace. Reaching consensus in the UN today is no small feat, especially on contentious issues of cybersecurity under the complex First Committee on international peace and security. 

But the road to consensus was anything but smooth. Negotiations saw twists, turns, and last-minute edits, reflecting deep divisions, shifting alliances, and a shared resolve to avoid failure. 

We tracked the negotiation process at the last substantive session in near real time with our AI-powered reporting. In this article, we capture how positions evolved to see how the road to consensus was travelled – a narrow path indeed. 

Note to readers: Throughout the analysis, we refer to the successive versions of the report as the Zero Draft, Rev1, Rev2, and the Final Report.

Dive into the full text of the Final Report and explore key provisions interactively with the help of our AI assistant.

Key takeaways 

 As always, compromises among diverse national interests – especially the major powers – mean a watered-down text. While no revolutionary progress has been made, there’s still plenty to highlight. 

States recognised the international security risks posed by ransomware, cybercrime, AI, quantum tech, and cryptocurrencies. The document supports concepts like security-by-design and quantum cryptography, but doesn’t contain concrete measures. Commercial cyber intrusion tools (spyware) were flagged as threats to peace, though proposals for oversight were dropped. International law remains the only limit on tech use, mainly in conflict contexts. Critical infrastructure (CI), including fibre networks and satellites, was a focus, with cyberattacks on CI recognised as threats.

The central debate on norms focused on whether the final report should prioritise implementing existing voluntary norms or developing new ones. Western and like-minded states emphasised implementation and called for deferring decisions on new norms to the future permanent mechanism, while several developing countries supported this focus but highlighted capacity constraints. In contrast, another group of countries argued for continued work on new norms. Some delegations, such as Singapore, sought a middle ground by supporting implementation while leaving space for future norm development. At the same time, the proposed Voluntary Checklist of Practical Actions received broad support. As a result, the Final Report softened language on additional norms, while the checklist was retained for continued discussion rather than adoption.

The states agreed to continue discussions on how international law applies to the states’ use of ICT in the future Global Mechanism, confirming that international law and particularly the UN Charter apply in cyberspace. The states also saw great value in exchanging national positions on the applicability of international law and called for increased capacity building efforts in this area to allow for meaningful participation of all states.

The agreement to establish a dedicated thematic group on capacity building stands out as a meaningful step, providing formal recognition of CB as a core pillar. Yet, substantive elements, particularly related to funding, were left unresolved. The UN-run Global ICT Security Cooperation and Capacity-Building Portal (GSCCP) will proceed through a modular, step-by-step development model, and roundtables will continue to promote coordination and information exchange. However, proposals for a UN Voluntary Fund and a fellowship program were deferred.

Prioritising the implementation of existing CBMs rather than adopting new ones crystallised during this last round of negotiation, despite some states’ push for additional commitments such as equitable ICT market access and standardised templates. Proposals lacking broad support—like Iran’s ICT market access CBM, the Secretariat’s template, and the inclusion of Norm J on vulnerability disclosure—were ultimately excluded or deferred for future consideration. 

States agreed on what the future Global Mechanism will look like and how non-governmental stakeholders will participate in the mechanism. The Global Mechanism will hold substantive plenary sessions once a year during each biennial cycle, work in two dedicated thematic groups (one on specific challenges, one on capacity building) that will allow for more in-depth discussions to build on the plenary’s work, and hold a review conference every five years. Relevant non-governmental organisations with ECOSOC status can be accredited to participate in the substantive plenary sessions and review conferences of the Global Mechanism, while other stakeholders would have to undergo an accreditation on a non-objection basis.

A detailed breakdown of discussions

Existing and potential threats: Conflict, crime, and cooperation 

Discussions on emerging and existing threats reflected growing concern among states over the evolving complexity of the cyber threat landscape, with particular attention to ransomware, commercially available intrusion tools, and the misuse of AI and other emerging technologies. While there was broad recognition of new risks, debates emerged around how far the OEWG’s mandate should extend—especially regarding cybercrime, disinformation, and data governance—and how to balance security concerns with development priorities and international legal frameworks.

Promoting peaceful use of ICTs – or acknowledging the reality of cyber conflict?

One of the key tensions in the final OEWG discussions on emerging cyber threats was the clash between aspiration and reality—specifically, whether the report should promote the use of ICTs for exclusively peaceful purposes or instead focus on ensuring that their use, even in conflict, is constrained by international law.

Several countries argued that the time for idealistic appeals is over. ICTs are already being used in conflicts and hybrid operations, often below the threshold of armed conflict, combining cyber activities with other non-conventional tools of influence. These states (including the USA, Italy, El Salvador, and Brazil) emphasised that acknowledging this reality is essential to advancing responsible behaviour. Malicious cyber operations, often attributed to state-sponsored actors, have targeted critical civilian infrastructure and democratic institutions (as noted by Albania). 

Therefore, these countries pushed to remove or soften references to the exclusive peaceful use of ICTs. Their priority was to reassert that when ICTs are used, including in conflict contexts, their use must comply with international humanitarian law (IHL) and broader international law. In this context, there was also a call to reaffirm the obligation to protect civilians from harm during cyber operations in armed conflict—reflected in the Resolution on protecting civilians and other protected persons and objects against potential human costs of ICT activities during armed conflict, adopted by the 34th International Conference of the Red Cross and Red Crescent in October 2024 (referenced by Switzerland and Brazil).

On the other side, a group of states insisted on keeping strong language around the exclusive peaceful use of ICTs (such as Iran, Pakistan, Indonesia, Cuba, and China). They feared that weakening this reference could be interpreted as legitimising the use of force in cyberspace. While some of these countries acknowledged that ICTs have been used in conflict, they consider reaffirming the peaceful-use principle as a necessary political signal—a way to reinforce global norms and discourage militarisation of cyberspace. China, for example, pointed out that the principle of ‘exclusively peaceful purposes’ has long been part of the OEWG consensus and should remain as a shared aspiration.

Cybercrime and international security: A growing intersection?

Another divisive debate was whether cybercrime belongs in a process focused on international peace and security. A broad group of delegations—including the EU, USA, Canada, UK, Switzerland, Brazil, El Salvador, and Israel—argued that cybercrime has become part of this agenda too. They emphasised the growing role of criminal actors operating in alignment with state interests or from state territories with impunity. According to this group, the cybercriminal ecosystem—offering tools, malware, and even full-spectrum capabilities—is increasingly exploited by state-backed actors, blurring the lines between criminal activity and state behaviour. Ignoring this overlap, they warned, would be negligent.

In contrast, Russia, China, Iran, Cuba, Belarus, and several others opposed including cybercrime in the report. They insisted that criminal acts in cyberspace are distinct from those that threaten international peace and should remain within specialised forums such as the Ad Hoc Committee on Cybercrime. Equating the two, they argued, risks expanding the OEWG’s mandate beyond its intended scope.

Ransomware was one of the few specific threats that saw wide support for inclusion. Countries like the USA, Canada, the UK, Germany, the Netherlands, Brazil, Malawi, Croatia, Fiji, and Qatar stressed that ransomware poses a growing threat to national security and critical infrastructure, and requested that it be addressed with a dedicated paragraph in the Final Report. Several African states (including Nigeria on behalf of the African Group) noted its damaging impact on state institutions and regional bodies. Costa Rica pointed to the disruption of essential services, while Germany called for further discussion on applicable norms and legal frameworks, and Cameroon called for targeted capacity-building and cooperation—including through regional mechanisms like AFRIPOL. A human-centric approach was proposed by Malawi, Colombia, the Netherlands, and Fiji, while others (Russia, China) warned against overemphasising ransomware and argued it remains within the domain of cybercrime discussions.

A number of countries (Canada, the USA, Japan, the UK, Australia, South Korea, Malaysia, Qatar, and Pakistan) confirmed concerns about cryptocurrency theft and its role in financing malicious cyber operations, seeing this as a growing security issue. Others, notably Russia and Iran, pushed back, arguing that this—like cybercrime and other socioeconomic topics—falls outside the OEWG’s mandate.

Critical infrastructure: Shared concern, differing priorities

The protection of critical infrastructure (CI) and critical information infrastructure (CII) emerged as a shared concern in the OEWG discussions, especially for developing countries. Many states—particularly from Africa and the Pacific—highlighted how increased digitalisation and foreign investment in infrastructure have heightened their exposure to cyber threats. Malawi pointed to a breach in its passport issuance system in 2024, while Costa Rica recalled the crippling impact of cyberattacks on public services. For these states, safeguarding CI is not only a national security issue but essential for social and economic resilience.

Several delegations, including Croatia and Thailand, stressed the vulnerability of CI to criminal and advanced persistent threats (APTs). Croatia warned of non-state actors targeting weakly protected systems—the ‘low-hanging fruit’—especially in countries with limited defences, calling for capacity building that avoids deepening the gap between developed and developing countries. Thailand emphasised that APTs can severely disrupt essential services, with potentially cascading effects on national stability. The importance of tailored assistance to protect CI, including cross-border infrastructure like undersea cables, was echoed by the EU, the USA, the Pacific Islands Forum, and Malawi—underscoring the global stakes involved. Ghana and Fiji underlined that each state must determine for itself what qualifies as critical. Russia opposed listing specific sectors—like healthcare, energy, or finance—in the final text, arguing such references could imply a one-size-fits-all approach. Meanwhile, Israel proposed adding the word ‘malicious’ before ‘ICT attacks’ in the report; it was not explained whether there are non-malicious attacks, but the edit was ultimately accepted.

The EU and the USA also highlighted political risks, including threats to democratic institutions and electoral processes, while the USA raised concerns about pre-positioning of malware within CI by potential adversaries, though the lack of consensus kept this issue out of the final report. Still, the overall discussion reflected growing agreement that CI protection must be a core focus of future international cooperation, with stronger commitments and action-oriented measures.

Commercial intrusion tools: A market of growing concern

A particularly vivid discussion continued around the risks posed by the growing global market for commercial ICT intrusion capabilities, or spyware. Several delegations (the EU, the UK, South Korea) explicitly recognised this market as a growing threat to international security, but also to intellectual property (the EU). Ghana drew attention to the Pall Mall process—an initiative aimed at curbing irresponsible proliferation of such tools—as a complementary effort that should inform the OEWG’s work. Brazil and others emphasised the risk of irresponsible use, while Israel raised the issue of the ‘illegitimate dissemination’ of such tools—implicitly suggesting that their spread can sometimes be legitimate, depending on context. 

Debates intensified around conditions for lawful use. A range of countries (South Africa, Iran, France, Australia, Fiji, the UK) stressed that any use must be consistent with international law, legitimate and necessary, and—in some views—aligned with the UN framework on responsible state behaviour. 

However, Russia and Iran resisted tying the use of intrusion capabilities to the framework of responsible state behaviour, warning that this might make the framework seem legally binding and blur the line between voluntary norms and law. Israel further argued that when used in line with the UN framework, such tools should not be seen as threats to international peace.  Some states (South Africa, Australia, Pakistan, France) supported the idea of safeguards and oversight mechanisms, but others (Iran) noted these had not been fully discussed and could be addressed later. Meanwhile, Russia questioned whether the use of commercial intrusion tools for unauthorised access could ever truly align with international law, proposing to delete such references entirely.

Emerging technologies: Risks vs opportunities

Debates around emerging technologies reflected a split between states advocating for proactive recognition of risks and those cautioning against overemphasis. Many countries—especially from the Global South (Indonesia, Qatar, Singapore, Thailand, Colombia, Fiji, the African Group)—called for attention to the security implications of AI, IoT, cloud computing, and quantum technologies. They highlighted the dual-use nature of these tools, particularly AI-generated malware, deepfakes, and synthetic content, and stressed that such technologies are already being misused in ways that could threaten international peace (as noted by Indonesia and Mauritius). In contrast, tech-leading states like the USA and Israel warned against placing disproportionate focus on risks, arguing it could overshadow opportunities. The EU, meanwhile, urged caution to avoid duplicating work done in other forums, particularly on AI.

In practical terms, many states (Canada, UK, El Salvador, Pakistan) supported the deployment of post-quantum cryptographic solutions, though others (Russia) considered such steps premature. There was also strong support (UK, Canada, Malaysia, Qatar, Fiji) for naming specific emerging infrastructures—like 5G, IoT, VPNs, routers, and even data centres and managed service providers—as relevant to security discussions. Malaysia highlighted the need for changing the language related to technologies to terms that are also understandable to technical communities – a useful reminder that these processes shouldn’t be left to diplomats alone. Still, some states (Russia, the USA, Israel) pushed to streamline or remove these references, citing concerns over technical detail and the need for broader consensus. The question of whether technologies are neutral sparked philosophical disagreement—Cuba and Nicaragua said no; Switzerland reminded that the agreed language in the third APR from 2024 (par.22) says yes.

New emphasis: Data, disinformation, and supply chain security

The growing strategic importance of data governance was emphasised by several states. Türkiye called for stronger protections around cross-border data flows, personal data, and mechanisms to prevent the misuse of sensitive information, highlighting the need to integrate data security into broader risk management frameworks. Mauritius linked data and responsible innovation, while China reiterated its long-standing proposal for a global data security initiative that could guide international cooperation in this domain.

Disinformation—particularly the use of deepfakes and manipulated content to destabilise institutions—was raised as an urgent and evolving threat. The African Group, represented by Nigeria, emphasised its damaging impact on post-conflict recovery and political transitions, especially in fragile states. Egypt echoed this concern, warning that misinformation campaigns disproportionately affect developing countries, increasing their risk of relapse into instability. China added concerns about the politicisation of disinformation, especially in the context of attributing cyber incidents.

On supply chain security, states agreed about the importance of adopting a security-by-design approach throughout the ICT lifecycle. The proponent, Ghana – supported by Colombia, the UK, and Fiji – stressed this as a baseline measure to address vulnerabilities. Türkiye added that global standards and best practices must be matched by practical implementation frameworks that consider varying national capacities and promote trust across jurisdictions.

Partnerships and cooperation: Making cybersecurity work in practice

The OEWG discussions underscored strong support for enhancing public-private partnerships (PPP) and the role of CERT-to-CERT cooperation as practical tools in addressing cyber threats. A wide range of states—the EU, Canada, Indonesia, Ghana, Singapore, Malawi, Malaysia, Fiji, and Colombia—welcomed explicit recognition of these mechanisms. Several countries (e.g. Mauritius, Thailand) stressed the growing importance of cross-regional cooperation, particularly as cyber threats increasingly affect privately owned infrastructure and cross-border systems. The EU called for greater multidisciplinary dialogue among technical, legal, and diplomatic experts.

Switzerland and Colombia emphasised the role of regional organisations as facilitators for implementing the global framework. Singapore offered the newly established ASEAN regional CERT and information-sharing mechanism as a model. 

While many acknowledged the expanding role of the private sector, Türkiye noted that its responsibilities remain insufficiently defined, suggesting further dialogue is needed to clarify how private actors can contribute to addressing systemic vulnerabilities and managing major incidents. Türkiye also suggested the UN Technology Bank to support cybersecurity capacity building for least developed countries (LDCs) as part of broader digital transformation efforts and promoting secure digital development.

The outcomes

The final document reflects several negotiated compromises. The aspiration to promote ICTs for exclusively peaceful purposes was softened by removing ‘exclusively,’ while a new reference acknowledges the need to use ICTs in a manner consistent with international law (para. 15). Criminal activities ‘could potentially’ impact international peace and security (para. 16). A specific list of critical infrastructure was removed, but protection of cross-border CI is newly emphasised (para. 17), along with the inclusion of security-by-design in the context of vulnerabilities and supply chains (para. 23). Ransomware remains mentioned (para. 24), though a dedicated paragraph was not added. Concerns over commercially available intrusion tools are retained, calling for ‘meaningful action’ and use consistent with international law (para. 25). Risks from emerging technologies are underlined with adjusted specific terminology (para. 20), while the paragraph on AI and quantum (para. 26) was shortened, though still references LLMs and quantum cryptography. A previous reference stating that ICT use ‘in a manner inconsistent with the framework … undermines international peace and security, trust and stability’ was removed.

Norms: Implementing existing ones or developing new ones

The central debate, as at earlier sessions, revolved around whether the OEWG should prioritise developing new norms or focus on implementing the agreed voluntary, non-binding norms. The Voluntary Checklist of Practical Actions was also discussed.

Implementation and operationalisation: The priority for many

Many Western and like-minded states stressed the implementation of norms. In particular, the Republic of Korea underlined the importance of focusing on implementing and operationalising existing norms rather than creating new ones. The USA, the Netherlands, Canada, and others expressed concern about placing undue emphasis on developing additional norms and advocated for removing paragraphs 34R and 36 of Rev.1. The EU maintained that decisions on developing new norms should be left to the future permanent mechanism, and called for more attention to norms implementation and capacity building.

Several developing countries supported this focus but noted capacity constraints. Fiji, speaking on behalf of the Pacific Islands Forum, noted the different stages of norms operationalisation among members and cautioned against moving forward with new norms without consensus or a clear gap analysis. Ghana welcomed a whole-of-government approach to the implementation, but also stressed the need to raise awareness of these norms at the national level. 

Work on new norms: A red line for some

In contrast, another group of states advocated for continued work on new norms. Russia argued there was a biased reflection favouring norms implementation and insisted on language supporting the development of legally binding measures, highlighting the initially agreed mandate for the UN OEWG. Iran warned that removing subparagraphs in paragraph 34 as well as paragraph 36 would undermine the section’s overall balance. 

China called for a balance between norms and international law and proposed to delete paragraph 34H, arguing it was not balanced as it focused only on non-state actors and commercially available ICT intrusion capabilities while ignoring states as the major source of threat. China also noted that the countries supporting the retention of paragraph 34H were the same countries opposing the creation of new norms, and pointed to the perceived inconsistency of opposing new norms while advocating implementation. In the final report, the wording was adjusted (in paragraph 34F) to reference both state and non-state actors.

Walking the middle path on norms development

In the meantime, some countries attempted to take the middle ground. Singapore supported implementing existing norms while leaving space for new ones, noting that implementation is necessary to understand what new norms are needed. Indonesia expressed a similar view.

Voluntary Checklist of Practical Actions: Deferred 

The Voluntary Checklist of Practical Actions received broad support with some exceptions. While the UK called it a valuable output of the OEWG, and Ireland described it as an effective capacity-building tool, Russia and Iran opposed its adoption as it was formulated in paragraph 37 of Rev. 1, arguing it had not been fully discussed and should be deferred to the future mechanism.

At the same time, some additional proposals were shared, for example, Cameroon called for a working group on accountability for attacks on critical health infrastructure, while China reminded of the data security initiative and broader data security measures.

The outcome

In the Final Report, paragraph 34 and its subparagraphs were significantly condensed. Detailed proposals in Rev.1 were reduced to a shorter list (34a–h). Technical specifics, such as templates and gender considerations, were simplified or removed. While Rev.1 stated that developing new norms and implementing existing ones were not mutually exclusive and recommended compiling and circulating a non-exhaustive list of proposals in this context, the Final Report significantly softened this language. It retained the idea that additional norms could emerge in paragraph 36d but excluded it from recommendations. The checklist, initially proposed for adoption, has been reworded and is now for continued discussion (Recommendation 38 in the Final report).

International law: Deep divisions shape a limited consensus

The international law section of the Final Report reflects the prevailing splits between states over the need for new binding norms and over the applicability of international human rights law and international humanitarian law, resulting in a consensus text that fails to reflect the depth and richness of the past five years of discussions on international law.

The UN Charter: Applicability reaffirmed

Looking at the details, states reaffirmed that international law, and in particular the UN Charter, is applicable and essential to maintaining peace, security and stability and to promoting an open, secure, stable, accessible and peaceful ICT environment. Building on the previous work captured in the Annual Reports, the states reaffirmed the principles of state sovereignty and sovereign equality (based on the territorial principle), as well as Art. 2(3) and Art. 33(1) of the UN Charter on the pacific settlement of disputes. The reference to Art. 33(1) was included in the text despite Iran’s request to remove it on the grounds that, in its view, it lacks consensus and reflects divergence between states.

Further, the states reaffirmed Art. 2(4) of the UN Charter on the prohibition of the threat or use of force, as well as the principle of non-intervention. The Zero Draft’s definition of what may constitute the use of force (‘An ICT operation may constitute a use of force when its scale and effects are comparable to non-ICT operations rising to the level of a use of force’), supported by the EU, Finland, Italy, the Netherlands, Korea, the United Kingdom, Australia, and others, was taken out, ceding to the requests of Russia, Cuba, Iran, and others.

IHRL and IHL: Contentious and omitted 

While the Final Report states that the discussions on international law deepened, two topics have not found their place in the text: international human rights law and international humanitarian law. This is despite the strong push by the EU, Australia, Switzerland, France, Chile, Colombia, the Dominican Republic, Ecuador, Egypt, El Salvador, Estonia, Fiji, Kiribati, Moldova, the Netherlands, Papua New Guinea, Thailand, Vanuatu, Uruguay, Vietnam, Japan, Nigeria on behalf of the African Group, and many others who supported including references to the applicability of international human rights law and humanitarian law as part of the consensus in the Final Report. Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Senegal, Sweden, and Switzerland provided statements that explicitly called for the applicability of international humanitarian law and its principles to be included in the Final Report. Many mentioned the depth of work in this area, as well as the Resolution on the Protection of Civilians of the 34th Conference of the Red Cross and Red Crescent Movement, a consensus document. On the other hand, Russia considered that the work on the protection of civilians was not consensus-based, while Belarus, Venezuela, Burkina Faso, the Democratic People’s Republic of Korea, Iran, China, Cuba, Nicaragua, Russia, and Eritrea considered the applicability of international humanitarian law a contentious topic on which there is clear disagreement.

Additional binding obligations: The door is open

The Final Report keeps the door open for discussions on the possible future elaboration and development of additional legally binding obligations, if appropriate. In its statement on the Final Report, Russia is already pushing for the Global Mechanism to focus, among other issues, on developing new legally binding norms in the field of digital security.

What’s missing?

The Final Report does not include references to a variety of resources that could have served as a basis for discussions in the future process, including:

  • the above-mentioned ICRC report;
  • the Common African Position;
  • the Declaration by the European Union and its member states on a Common Understanding of the Application of International Law to Cyberspace;
  • the Updated concept for a convention of the UN on ensuring international information security (by Belarus, the Democratic People’s Republic of Korea, Nicaragua, Russia, and Syria);
  • the Working Paper on the application of international humanitarian law to the use of information and communication technologies in situations of armed conflicts (by Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Senegal, Sweden, and Switzerland);
  • the Working Paper on the application of international law in the use of ICTs: areas of convergence, outlining proposed text for inclusion in the international law section of the 2025 Final Report (by Australia, Chile, Colombia, the Dominican Republic, Ecuador, Egypt, El Salvador, Estonia, Fiji, Germany, Kiribati, Moldova, the Netherlands, Papua New Guinea, Romania, Thailand, Uruguay, Vanuatu, and Viet Nam).

The bottom line

The recommendations for the Global Mechanism on international law call for further discussions on how international law applies, pushing the divides in this area into the future. According to the Final Report, the main achievements in the international law section are the voluntary exchanges of national positions and the commitment to increased capacity building in this area, which small and developing countries highlighted.

Capacity building: A fractured path to operationalisation

Echoing previous sessions, there was broad recognition of capacity building’s foundational role in implementing norms, fostering international legal dialogue, and reinforcing confidence-building measures. Yet, as the final OEWG session unfolded, this familiar consensus was accompanied by a renewed urgency to move beyond conceptual alignment. Action-oriented capacity building became a recurring buzzword, capturing the shared ambition to shift from declaratory commitments toward concrete, needs-based mechanisms. This convergence created early momentum for advancing capacity building structures. Still, despite alignment on principles, the pathway to operationalisation remained fractured along critical lines.

What role for the UN?

During negotiations, two opposing positions reflected fundamentally different priorities: Western states emphasised flexibility and minimal commitments, while many developing countries viewed the early operationalisation of capacity building (CB) as essential to anchoring the future mechanism in tangible delivery and to ensuring it addresses the digital divide. At one end of the spectrum, the USA opposed all new CB mechanisms and rejected any operational role for the UN, citing its ongoing financial crisis. France and Canada adopted a more cautious stance, advocating a step-by-step approach centred on mature initiatives and warning against the premature creation of new structures.

In contrast, countries such as Nigeria (on behalf of the African Group), Tunisia (on behalf of the Arab Group), Brazil, Iran, and Egypt called for a more active UN role, supported by predictable and well-resourced mechanisms, including more concrete language on the operationalisation of a UN Voluntary Fund. Consistent with this approach, the African Group, Latin American states, and others backed the creation of a Dedicated Thematic Group (DTG) on CB within the permanent mechanism to ensure coordination, needs mapping, implementation tracking, and inclusive participation, functions they feared would be sidelined if CB remained a merely cross-cutting issue. The USA and Canada opposed this, arguing that issue-specific groups risked bureaucratic redundancy and inefficiency.

The outcome

The final outcome reflects a carefully negotiated compromise: it advances the institutional scaffolding of the future mechanism but falls short of the ambitions expressed by many developing states. The agreement to establish a DTG on capacity building stands out as a meaningful step, providing formal recognition of CB as a core pillar. 

Yet, substantive elements, particularly related to funding, were left unresolved. The UN-run Global ICT Security Cooperation and Capacity-Building Portal (GSCCP) will proceed through a modular, step-by-step development model, and roundtables will continue to promote coordination and information exchange. However, proposals for a UN Voluntary Fund and a fellowship program were deferred, with references downgraded to non-binding language and postponed for further consideration. 

While the framework reflects principles of gradualism and inclusivity, it also exposes the limits of consensus: Western states succeeded in prioritising flexibility and minimal commitments, while developing countries, especially those from the Arab and African Groups, voiced frustration that the outcome lacked the concrete, adequately resourced mechanisms needed to close enduring digital divides. Without progress on predictable funding and operational tools, they warned, the credibility and effectiveness of the DTG on CB risk being undermined from the outset.

Confidence-building measures (CBMs): A subdued discussion

CBMs have been one of the main areas of progress in recent years within the OEWG process. However, the discussions during the most recent session were notably subdued. 

New CBMs: Overcommitting or not?

Few new proposals were tabled. Indeed, a clear (and by now long-standing) view had emerged among several delegations, including the EU, Canada, the Netherlands, Ukraine, New Zealand, Australia, and the USA, that the OEWG’s final report should avoid overcommitting to new CBMs.

This position was the principal counterpoint to Iran’s longstanding proposal for a new CBM aimed at ensuring unhindered access to a secure ICT market for all states. Although this proposal did not gain significant traction in earlier discussions, it became a central point of contention during the latest round of negotiations. States such as Brazil and El Salvador expressed support for retaining this reference, but others—including the Netherlands, the USA, New Zealand, Australia, and Switzerland—firmly rejected its inclusion, citing both the absence of consensus and the need to prioritise the implementation of the eight CBMs agreed under the OEWG framework. Switzerland proposed relocating this reference to the capacity-building section, where states could voluntarily provide others with ICT tools to strengthen capacity. 

The standardised template for communication: Discussed in plenary for the first time

First circulated in April 2025, the standardised template developed by the Secretariat had not been discussed in plenary until this session. Some delegations—notably Qatar and the Republic of Korea—expressed their preference to keep the template flexible and voluntary. Thailand proposed enhancing the template by incorporating elements such as urgency and confidentiality to help states identify operational needs in sensitive contexts. Nevertheless, the proposal received a lukewarm reception from the EU and the Netherlands, with the latter calling for its removal from the final report.

Responsible reporting of ICT vulnerabilities, norm J)

A final point of contention that was excluded from the final report concerned the inclusion of norm J), which pertains to the responsible reporting of ICT vulnerabilities, under the CBM section. While El Salvador supported its inclusion, the Netherlands, the EU, and Israel strongly opposed this characterisation. The Netherlands questioned the logic of singling out this particular norm over others, while Israel argued that this issue had not been substantively deliberated and therefore should not appear under the CBM heading.

The result

While Iran’s proposal did not make it onto the formal list of CBMs, it remains referenced in the final report for potential consideration within the future permanent mechanism. Although the Chair initially intended to include the standardised template for communication, it was ultimately not retained. Norm J) was not included in the CBMs section.

Regular institutional dialogue: Framing the future

Thematic groups: Debating the design

One of the most significant debates during the session centred on the thematic groups to be established under the future mechanism. These groups were originally conceived as a means to allow delegations to deepen discussions on key issues. However, countries quickly ran into a stumbling block: how many thematic groups should there be, and what topics should they cover? While views varied, the vast majority of states, as well as the Chair, agreed that this was a matter that had to be resolved during this final substantive session of the OEWG. Deferring the decision to the future global mechanism, they warned, would risk unnecessary delays in getting the new process off the ground.

Zero Draft: The starting point for negotiations

The Chair’s Zero Draft was the basis for the discussions on this issue. His initial proposal envisaged three DTGs:

  • The first would focus on action-oriented measures to enhance state resilience and ICT security, protecting critical infrastructure, and promoting cooperative action to address threats in the ICT environment. (DTG1)
  • The second group would continue the discussions on how international law applies to the use of ICTs in the context of international security. (DTG2)
  • The third group would address capacity-building in the use of ICTs, with an emphasis on accelerating practical support and convening the Global Roundtable on ICT security capacity-building on a regular basis. (DTG3)

States discussed this proposal from Monday through Wednesday. A number of states, including Nigeria, Senegal, South Africa, Thailand, Colombia, Côte d’Ivoire, Indonesia, Brazil, El Salvador, and Botswana, expressed support for the creation of the three proposed DTGs. Some countries suggested minor changes: Indonesia, for example, suggested that DTG1 could be streamlined to focus on the resilience and ICT security of states, while South Africa suggested that clearly showing how time would be divided among the group’s workstreams in the illustrative timeline would be very helpful.

However, a number of countries were against DTG1. Nicaragua noted that the scope and approach of DTG1 are not clear, and that greater discussion is needed. Iran similarly noted that the mandate of DTG1 remains vague and overly complex and therefore requires further strengthening and clarification in line with the pillars of the OEWG. China cited the use of vague terms like ‘resilience’ that could undermine the OEWG’s agreed framework. Russia cautioned that the discussion of the three pillars of the mandate within the same group may be challenging. Russia also stated that norms and CBMs deserve separate groups. Nicaragua suggested establishing a separate thematic group on norms. South Africa was in favour of a DTG2 that would discuss norms in addition to international law. Belarus suggested a thematic group on standards and on CBMs.

DTG2 was much debated. A number of countries were in favour, for various reasons. For instance, Switzerland and Mauritius noted that such a group should discuss how existing international law applies in cyberspace. Mexico highlighted that states need to have a permanent space in which to review, when appropriate, their compatibility with the existing legal framework. Thailand noted that this group will enable focused and sustained discussion, including on related capacity building, aimed at bridging legal and technical gaps and promoting more inclusive participation by states on this specialised topic. On the other hand, Zimbabwe noted that the DTG could help elaborate a comprehensive legal instrument to codify the applicable rules and principles governing state conduct in cyberspace. 

However, various reasons against establishing DTG2 were also brought up. The EU emphasised that the OEWG’s five pillars are interdependent, and isolating one—such as international law—risks siloed, incoherent outcomes. Australia, Romania and Estonia echoed this view, arguing that international law should be addressed through cross-cutting DTGs. In China’s view, DTG 2 undermines the balance between norms and international law. 

The USA opposed DTG2, arguing that some states had already affirmed that they would seek to use conversations in the international law DTG to advance new legally binding obligations, contrary to the consensus spirit of the OEWG.

However, seemingly in response, Egypt stated that states should not preempt the discussions in DTGs. It stressed that the groups are intended for open dialogue, as has been the practice over the past four years, without any predetermined conclusions. Egypt underlined that, according to Paragraph 15 of the OEWG report, any recommendations emerging from the DTGs will remain draft and subject to consensus-based decision-making.

Much support was expressed for DTG3. Nigeria, on behalf of the African Group, said the group would offer a focused platform to strengthen developing countries and bridge the digital gap. Paraguay supported a specialised working group to facilitate national efforts in policy development and information exchange. Mexico emphasised that the DTG could help develop action-oriented recommendations, map needs and resources, follow up on implementation, coordinate with the global roundtable, and promote diversity and inclusion. El Salvador highlighted the importance of the DTG for Central America, noting it should not be limited to financing but also cover technical assistance and knowledge exchange. Botswana noted that the DTG will assist states in organising national cybersecurity efforts, developing policy frameworks, protecting critical and information infrastructures, implementing existing voluntary norms, and formulating national positions on the applicability of international law in cyberspace. Uruguay noted that DTG would go beyond training to identify specific needs and ensure targeted support, allowing for a more comprehensive approach to capacity building.

Indonesia said the group should focus on CBMs, technical training, capacity needs of developing countries, and strengthening initiatives like the Global PoC Directory and the new Global ICT Security Cooperation and Capacity Building Portal. South Africa suggested that discussions on CBMs could be placed under this DTG instead of DTG1, if states agreed. 

France’s detailed proposal was highly regarded by many delegations, such as Australia, the USA, Finland, Switzerland, Italy, South Korea, Denmark, Japan, Canada, Sweden, Romania, and Estonia. Seen as an honest bridging proposal, it suggested three thematic groups, which would draw on the pillars of the framework for responsible State behaviour in the use of ICTs. They would consider, in an integrated, policy-oriented and cross-cutting manner, action-oriented measures to:

  • Increase the resilience and ICT security of states, including the protection of critical infrastructure, with a focus on capacity-building in the use of ICTs in the context of international security, and to convene the dedicated Global Roundtable on ICT security capacity-building (DTG1)
  • Enhance concrete actions and cooperative measures to address ICT threats and to promote an open, secure, stable, accessible and peaceful ICT environment, including to continue the further development and operationalisation of the Global POC Directory (DTG2)
  • Promote maintaining peace, security and stability in the ICT environment (DTG3)

Australia noted that the proposal explicitly draws on the five pillars of the framework in each dedicated thematic group. Australia, the USA, Switzerland, and Estonia noted that the proposal is action-oriented. Per South Korea, the proposal would allow for more practical and integrated discussion. 

Rev2: Down to DTG1 and DTG2

However, the Chair’s Rev2 brought significant changes to DTGs. It suggested:

  • An integrated, policy-oriented and cross-cutting dedicated thematic group drawing on the five pillars of the framework to address specific challenges in the sphere of ICT security in the context of international security in order to promote an open, secure, stable, accessible, peaceful, and interoperable ICT environment, with the participation of, inter alia, technical experts and other stakeholders. (DTG 1) 
  • An integrated, policy-oriented and cross-cutting dedicated thematic group drawing on the five pillars of the framework to accelerate the delivery of ICT security capacity-building, with the participation of, inter alia, capacity-building experts, practitioners, and other stakeholders. (DTG 2)

DTG1 was not met with much enthusiasm. Ghana noted that the DTG1 lacks clarity on how the various focus areas will be discussed and effectively distributed within the allocated time frame. Russia also noted that it is unclear what exactly the group will work on. Nicaragua noted that the group’s scope is overstretched, while El Salvador warned against excessive generalisation of discussions. Nicaragua and Russia noted the risks of duplication of discussions in the DTG1 and the plenary sessions. France and the USA regretted the removal of language around cooperation, resilience, and stability.

Delegations made a few suggestions to improve DTG1. Canada called for clearer language and a focus on critical infrastructure. Ghana suggested that either a clearer framework for the internal distribution of time among the focus areas be established, or the OEWG revert to the three DTGs suggested in Rev1. Nicaragua suggested that the OEWG establish the DTG2 on capacity building and defer the decision on other possible DTGs to the organisational session of the future permanent mechanism in March 2026. 

A small number of countries, namely Indonesia, Türkiye, the Philippines, Ukraine, and Pakistan, accepted the new DTG1 as outlined in Rev2.

A number of countries expressed regret at the removal of the DTG on international law, among them Nigeria on behalf of the African Group, Egypt, Colombia, El Salvador, Russia, Brazil, and Mauritius. Nevertheless, the group did not make it into the Final report. Brazil, for instance, noted that it would be difficult to ensure the meaningful participation of legal experts when the issue of international law is so diluted in DTG1’s overly broad mandate. Egypt stated that the group on international law, along with the group on capacity building, had been the source of balance vis-a-vis DTG1 and its everything, everywhere, all at once approach. Tunisia, on behalf of the Arab Group, noted that it will ask the chair of the mechanism to hold a conference on the application of international law, while Egypt was in favour of a roundtable.

DTG2 on capacity building, which had been widely supported as DTG3 while countries were still discussing Rev1, was not much discussed, as countries seemed to be in favour of establishing it. Canada called for a clear link, and no duplication, between the global roundtable on capacity building and DTG2. France and Australia suggested that DTG2 be responsible for organising the global roundtable on capacity building as well as its follow-up. Costa Rica emphasised the need to include more operational detail, such as identifying, planning, and implementing capacity building, as well as improving the connection between providers and recipients. However, Egypt stressed that without concrete steps, such as establishing a UN-led capacity building vehicle, activating the Voluntary Fund and Sponsorship Program, and ensuring predictable resources, the DTG2 discussions would fall short of their potential and risk undermining the credibility of the new mechanism.

Additional ad hoc groups

Thailand, Côte d’Ivoire, South Africa, and Colombia supported the idea of creating additional ad hoc dedicated thematic groups with a fixed duration to engage in focused discussions on specific issues as necessary, while Iran noted that such groups must be created by consensus. Australia opposed ad hoc groups, noting that they could create additional uncertainties and potential burdens for smaller delegations. 

Multistakeholder engagement in UN cyber dialogue: An old issue persistently on the agenda

Should a state be able to object to a stakeholder participating in the OEWG? Opinions are divided.

Answer A: Yes, the principle of non-objection must be observed

A group of states is saying YES. Türkiye, Iran, Nigeria on behalf of the African Group, China, Zimbabwe, Nicaragua, Tunisia on behalf of the Arab Group, Indonesia, Egypt, Russia, and Cuba advocated for keeping the current modalities of stakeholder engagement. Under these modalities, ECOSOC-accredited stakeholders may attend formal OEWG meetings without addressing them, speak during a dedicated stakeholder session, and submit written inputs for the OEWG website. Other relevant stakeholders may also apply by providing information on their purpose and activities; they may be invited to participate as observers, subject to a non-objection process. A state may object to the accreditation of specific non-ECOSOC-accredited organisations and must notify the OEWG Chair that it is objecting. The state may, on a voluntary basis, share with the Chair the general basis of its objections.

Iran supported the proposal made by Russia during the town hall consultations to empower the chair and the secretariat of the future permanent mechanism to assess the relevance of ECOSOC-accredited NGOs that have applied to participate in the mechanism and to inform the state of the outcome of such assessment. Egypt stated that it does not see merits in the additional consultative layers that will overload the chairperson of the future permanent mechanism without necessarily resolving any potential divergence of views.

China questioned the push for increased NGO participation when member state concerns remain unresolved and highlighted the issue of inappropriate remarks by states, raising doubts about ensuring appropriate NGO contributions.

This group of states does not want experts participating in DTGs. Russia and Nicaragua noted that the DTGs are to provide a platform for dialogue, specifically for government experts. Iran stated that, given that technical experts from states will participate in the thematic groups and will engage in technical rather than political or diplomatic discussions, the expert briefings, as well as the participation of other stakeholders in DTGs, don’t offer additional value and could therefore be deleted. 

Answer B: No, multistakeholder participation cannot be limited

A markedly different position is outlined in the paper titled ‘Practical Modalities for Stakeholders’ Participation and Accreditation in the Future UN Mechanism on Cybersecurity,’ coordinated by Chile and Canada and supported by 42 states.

This group notes that a state may object to the accreditation of specific non-ECOSOC-accredited organisations. However, the notice of intention to object shall be made in writing and include, separately for each organisation, a detailed rationale for such objection(s). One week after the objection period ends, the Secretariat will publish two lists: one of accredited organisations and another of those with objections, including the objecting state(s) and their reasons. These lists will be made public. At the next substantive plenary session, any state that filed an objection may formally oppose the accreditation. If the Chair considers that every effort to reach an agreement by consensus has been exhausted, a majority vote of members present and voting may be held to decide on the contested accreditations, following the Rules of Procedure of the UN General Assembly.

This group has also proposed broader participation rights for stakeholders in the future mechanism. Their proposal includes:

  • Allowing stakeholders to deliver oral statements and participate remotely in plenary sessions, thematic groups, and review sessions.
  • Permitting non-accredited stakeholders to attend plenary sessions silently.
  • Granting the Chair (or Vice Chairs) the authority to organise technical briefings by stakeholders and states during key sessions, ensuring geographic balance and gender parity, and fostering two-way interaction.
  • Enabling Chairs (or Vice Chairs) of thematic groups to invite stakeholders to submit written reports, give presentations, and provide other forms of support.

The proposal, its proponents believe, is a fair and practical way to enhance stakeholder participation in the future mechanism by promoting transparency and inclusiveness.

Answer C: Yes, but!

The Chair’s proposal tried to bridge these two positions. If a member state objects to accrediting a stakeholder, it must inform the Chair and may voluntarily share the general reason for the objection. The Chair will then consult informally with all member states for up to three months to try to resolve the concern and facilitate accreditation. If consensus is reached after the consultations, the Chair may propose that the Global Mechanism confirm the accreditation. If consensus is not yet possible, the Chair will continue informal consultations as appropriate. The proposal thus retains the principle of objection, but an objection can also be overcome through continued consultations.

Accredited stakeholders will be able to attend key sessions, submit written inputs, and deliver oral statements during dedicated stakeholder sessions. They may also speak after member states at substantive plenary sessions and review conferences, time permitting and at the Chair’s discretion. The Chair will also hold informal or virtual meetings with stakeholders during intersessional periods. Participation is consultative only—stakeholders would engage in a technical and objective manner, and their contributions ‘shall remain apolitical in nature’. Negotiation and decision-making are exclusive prerogatives of member states.

What’s in a name?

Towards the end of the session, another disagreement popped up: the future permanent mechanism’s very name.

While France suggested that the future mechanism should ‘advance responsible state behaviour’, a proposal with quite a few proponents, Iran and Russia, for instance, insisted on using ‘security of and in the use of ICT’, the terminology used in the OEWG’s name.

The outcomes

The final report confirms the establishment of DTG1 on specific challenges and DTG2 on capacity building, as outlined in Rev2. It also acknowledges the possibility of establishing additional ad hoc dedicated thematic groups.

The Chair’s proposed modalities were adopted as part of the Final report. Nicaragua, Belarus, Venezuela, China, Cuba, Eritrea, Iran, Niger, Russia, Sudan, and Zimbabwe welcomed that accredited stakeholders will participate on a non-objection basis and obtain a solely consultative status, highlighting that the future permanent mechanism is strictly an intergovernmental process. 

This division on names resulted in the rather unwieldy name of the future permanent mechanism: ‘Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs’.

Next steps

The OEWG wrapped up its work on 11 July, but there is still work to be done before the Global Mechanism actually kicks off. Singapore will table a simple draft resolution in the First Committee to endorse the OEWG’s final report and enable its formal approval by the General Assembly and the Fifth Committee. Emphasising that the resolution should be seen as procedural, not an opportunity to reopen debates, the Chair urged delegations to support a single, unified resolution on ICT security, in line with the agreed single-track process. The organisational session of the Global Mechanism should be held no later than March 2026.

Mark your calendars!


On 23 July, Diplo will host a webinar titled ‘Five years on: Achievements, failures, and the future of the UN Cyber Dialogue’ to explore the OEWG’s achievements in advancing common understandings among states on responsible behaviour in cyberspace and the challenges encountered in bridging diverse national positions and operationalising agreed norms, as well as to provide an overall look at the process since 2021. Register for the event on the dedicated web page.




No judges, no appeals, no fairness: Wimbledon 2025 shows what happens when AI takes over

One of the world’s most iconic sporting events — and certainly the pinnacle of professional tennis — came to a close on Sunday, as Jannik Sinner lifted his first Wimbledon trophy and Iga Świątek triumphed in the women’s singles.

While the two new champions will remember this tournament for a lifetime, Wimbledon 2025 will also be recalled for another reason: the organisers’ decision to hand over crucial match decisions to AI-powered systems.

The leap into the future, however, came at a cost. System failures sparked considerable controversy both during the tournament and in its aftermath.

Beyond technical faults, the move disrupted one of Wimbledon’s oldest traditions — for the first time in 138 years, AI performed the role of line judge entirely. Several players have since pointed the finger not just at the machines, but directly at those who put them in charge.


Wimbledon as the turning point for AI in sport

The 2025 edition of Wimbledon introduced a radical shift: all line calls were entrusted exclusively to the Hawk-Eye Live system, eliminating the on-court officials. The sight of a human line judge, once integral to the rhythm and theatre of Grand Slam tennis, was replaced by automated sensors and disembodied voices.

Rather than a triumph of innovation, the tournament became a cautionary tale.

During the second round, Britain’s Sonay Kartal faced Anastasia Pavlyuchenkova in a match that became the focal point of AI criticism. Multiple points were misjudged due to a system error that required manual intervention mid-match. Kartal was visibly unsettled; Pavlyuchenkova even more so. ‘They stole the game from me,’ she said, a statement aimed not at her opponent but at the organisers.

Further problems emerged across the draw. From a serve wrongly ruled out in Taylor Fritz’s quarterfinal to delayed audio cues, the system’s imperfections became increasingly evident.

Athletes speak out when technology silences the human

Discontent was not confined to a few isolated voices. Across locker rooms and at press conferences, players voiced concerns about specific decisions and the underlying principle.

Kartal later said she felt ‘undone by silence’ — referring to the machine’s failure and the absence of any human presence. Emma Raducanu and Jack Draper raised similar concerns, describing the system as ‘opaque’ and ‘alienating’. Without the option to challenge or review a call, athletes felt disempowered.

Former line judge Pauline Eyre described the transformation as ‘mechanical’, warning that AI cannot replicate the subtle understanding of rhythm and emotion inherent to human judgement. ‘Hawk-Eye doesn’t breathe. It doesn’t feel pressure. That used to be part of the game,’ she noted.

Although Wimbledon is built on tradition, the value of human oversight seems to have slipped away.

Other sports, same problem: When AI misses the mark

Wimbledon’s situation is far from unique. In various sports, AI and automated systems have repeatedly demonstrated their limitations.

In the 2020 Premier League, goal-line technology failed during a match between Aston Villa and Sheffield United, overlooking a clear goal — an error that shaped the season’s outcome.

Irish hurling suffered a similar breakdown in 2013, when the Hawk-Eye system wrongly cancelled a valid point during an All-Ireland semi-final, prompting a public apology and a temporary suspension of the technology.

Even tennis has a history of scepticism towards Hawk-Eye. Players like Rafael Nadal and Andy Murray questioned line calls, with replay footage often proving them right.

Patterns begin to emerge. Minor AI malfunctions in high-stakes settings can lead to outsized consequences. Even more damaging is the perception that the technology is beyond reproach.

From umpire to overseer: When AI watches everything

The events at Wimbledon reflect a broader trend, one seen during the Paris 2024 Olympics. As outlined in our earlier analysis of the Olympic AI agenda, AI was used extensively in scoring and judging, crowd monitoring, behavioural analytics, and predictive risk assessment.

Rather than simply officiating, AI has taken on a supervisory role: watching, analysing, interpreting — but offering little to no explanation.

Vital questions arise as the boundary between sports technology and digital governance fades. Who defines suspicious movement? What triggers an alert? Just like with Hawk-Eye rulings, the decisions are numerous, silent, and largely unaccountable.

Traditionally, sport has relied on visible judgement and clear rule enforcement. AI introduces opacity and detachment, making it difficult to understand how and why decisions are made.

The AI paradox: Trust without understanding

The more sophisticated AI becomes, the less people seem to understand it. The so-called black box effect — where outputs are accepted without clarity on inputs — now exists across society, from medicine to finance. Sport is no exception.

At Wimbledon, players were not simply objecting to incorrect calls. They were reacting to a system that offered no explanation, human feedback, or room for dialogue. In previous tournaments, athletes could appeal or contest a decision. In 2025, they were left facing a blinking light and a pre-recorded announcement.

Such experiences highlight a growing paradox. As trust in AI increases, scrutiny declines, often precisely because people cannot question it.

That trust comes at a price. In sport, it can mean irreversible moments. In public life, it risks producing systems that are beyond challenge. Even the most accurate machine, if left unchecked, may render the human experience obsolete.

Dependency over judgement and the cost of trusting machines

The promise of AI lies in precision. But precision, when removed from context and human judgement, becomes fragile.

What Wimbledon exposed was not a failure in design, but a lapse in restraint — a human tendency to over-delegate. Players faced decisions without recourse, coaches adapted to algorithmic expectations, and fans were left outside the decision-making loop.

Whether AI can be accurate is no longer a question. It often is. The danger arises when accuracy is mistaken for objectivity — when the tool becomes the ultimate authority.

Sport has always embraced uncertainty: the unexpected volley, the marginal call, the human error. Strip that away, and something vital is lost.

A hybrid model — where AI supports but does not dictate — may help preserve fairness and trust.

Let AI enhance the game. Let humans keep it human.


Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!