Stablecoins unlocking crypto adoption and AI economies

Stablecoins have rapidly risen as one of the most promising breakthroughs in the cryptocurrency world. They are neither traditional currency nor the first thing that comes to mind when thinking about crypto; instead, they represent a unique blend of both worlds, combining the stability of fiat with the innovation of digital assets.

In a market often known for wild price swings, stablecoins offer a breath of fresh air, enabling practical use of cryptocurrencies for real-world payments and commerce. The real question is: are stablecoins destined to bring crypto into everyday use and unlock its full potential for the masses?

Stablecoins might be the missing piece that unlocks crypto’s full promise and reshapes the future of digital finance.

Stablecoin regulation: How global rules drive adoption

Regulators worldwide are stepping up to define clear rules for stablecoins, signalling growing market maturity and increasing confidence from major financial institutions. Recent legislative efforts across multiple jurisdictions aim to establish firm standards such as full reserves, audits, and licensing requirements, encouraging banks and asset managers to engage more confidently with stablecoins. 

These coordinated global moves go beyond simple policy updates; they are laying the foundation for stablecoins to evolve from niche crypto assets to trusted pillars of the future financial ecosystem. Regulators and industry leaders are thus bringing cryptocurrencies closer to everyday users and embedding them into daily financial life. 

Corporations and banks embracing stablecoins: A paradigm shift

The adoption of stablecoins by big corporations and banks marks a significant turning point, and, in some ways, a paradox. Once seen as enemies of decentralised finance, these institutions now seem to be conceding and joining the movement they once resisted: what an institution fails to control can ultimately win it over.

Retail giants such as Walmart and Amazon are reportedly exploring their own stablecoin initiatives to streamline payments and foster deeper customer engagement. On the banking side, institutions like Bank of America, JPMorgan Chase, and Citigroup are developing or assessing stablecoins to integrate crypto-friendly services into their offerings.

Western Union is also experimenting with stablecoin solutions to reduce remittance costs and increase transaction speed, particularly in emerging markets with volatile currencies. 

They all realise that staying competitive means adapting to the latest shifts in global finance. Such corporate interest signals that stablecoins are transitioning from speculative assets to functional, money-like instruments capable of handling everyday transactions across borders and demographics.

There is also a sociological dimension to stablecoins’ corporate and institutional embrace. Established institutions bring an inherent trust that can alleviate the scepticism surrounding cryptocurrencies.

By linking stablecoins to familiar brands and regulated banks, these digital tokens can overcome cultural and psychological barriers that have limited crypto adoption, ultimately embedding digital currencies into the fabric of global commerce.

Stablecoins and the rise of AI-driven economies

Stablecoins are increasingly becoming the financial backbone of AI-powered economic systems. As AI agents gain autonomy to transact, negotiate, and execute tasks on behalf of individuals and businesses, they require a reliable, programmable, and instantly liquid currency.

Stablecoins perfectly fulfil this role, offering near-instant settlement, low transaction costs, and transparent, trustless operations on blockchain networks. 

In the emerging ‘self-driving economy’, stablecoins may be the preferred currency for a future where machines transact independently. Integrating programmable money with AI may redefine the architecture of commerce and governance. Such a powerful synergy is laying the groundwork for economic systems that operate around the clock without human intervention. 
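To make 'programmable money' concrete, here is a minimal sketch of a ledger whose transfer rule is enforced entirely in code, which is what would let an autonomous agent pay without human intervention. All names are hypothetical, and real stablecoins run as smart contracts on blockchain networks rather than as in-memory objects:

```python
# Illustrative sketch only: a minimal in-memory 'stablecoin' ledger showing
# what programmable money means for autonomous agents. Account names are
# hypothetical; real stablecoins are smart contracts on public blockchains.

class StablecoinLedger:
    def __init__(self):
        self.balances = {}

    def mint(self, account, amount):
        # Issue new tokens to an account (e.g. after a fiat deposit)
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, recipient, amount):
        """Atomic transfer: the payment rule is enforced in code, not by a bank."""
        if self.balances.get(sender, 0) < amount:
            return False  # insufficient funds: the transaction simply fails
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

# An 'AI agent' paying for a service autonomously, with no human in the loop:
ledger = StablecoinLedger()
ledger.mint("agent-1", 100)
paid = ledger.transfer("agent-1", "api-provider", 5)
```

Because the rule lives in code, an agent can be granted a budget and spend it around the clock, which is precisely the property the 'self-driving economy' depends on.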

As AI technology continues to advance rapidly, the demand for stablecoins as the ideal ‘AI money’ will likely accelerate, further driving crypto adoption across industries. 

The bridge between crypto and fiat economies

From a financial philosophy standpoint, stablecoins represent an attempt to synthesise the advantages of decentralisation with the stability and trust associated with fiat money. They aim to combine the freedom and programmability of blockchain with the reassurance of stable value, thereby lowering entry barriers for a wider audience.

On a global scale, stablecoins have the potential to revolutionise cross-border payments, especially benefiting countries with unstable currencies and limited access to traditional banking. 

Sociologically, stablecoins could redefine the way societies perceive money and trust. Moving away from centralised authorities controlling currency issuance, these tokens leverage transparent blockchain ledgers that anyone can verify. The shift challenges traditional power structures and calls for new forms of economic participation based on openness and accessibility.

Yet challenges remain: stablecoins must navigate regulatory scrutiny, develop secure infrastructure, and educate users worldwide. The future will depend on balancing innovation, safety, and societal acceptance – it seems like we are still in the early stages.

Perhaps stablecoins are not just another financial innovation, but a mirror reflecting our shifting relationship with money, trust, and control. If the value we exchange no longer comes from paper, metal, or even banks, but from code, AI, and consensus, then perhaps the real question is whether their rise marks the beginning of a new financial reality – or something we have yet to fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The end of the analogue era and the cognitive rewiring of new generations

Navigating a world beyond analogue

The digital transformation of daily life represents more than just a change in technological format. It signals a deep cultural and cognitive reorientation.

Rather than simply replacing analogue tools with digital alternatives, society has embraced an entirely new way of interacting with information, memory, time, and space.

For younger generations born into this reality, digital mediation is not an addition but the default mode of experiencing the world. A redefinition like this introduces not only speed and convenience but also cognitive compromises, cultural fragmentation, and a fading sense of patience and physical memory.

Generation Z as digital natives

Generation Z has grown up entirely within the digital realm. Unlike older cohorts who transitioned from analogue practices to digital habits, members of Generation Z were born into a world of touchscreen interfaces, search engines, and social media ecosystems.

As Generation Z enters the workforce, the gap between digital natives and older generations is becoming increasingly apparent. For them, technology has never been a tool to learn; it has always been a natural extension of their daily life.

The term ‘digital native’, first coined by Marc Prensky in 2001, refers precisely to those who have never known a world without the internet. Rather than adapting to new tools, they process information through a technology-first lens.

In contrast, digital immigrants (those born before the digital boom) have had to adjust their ways of thinking and interacting over time. While access to technology might be broadly equal across generations in developed countries, the way individuals engage with it differs significantly.

Digital natives did not acquire digital skills later in life; they developed them alongside their cognitive and emotional identities. This fluency brings distinct advantages. Young people today navigate digital environments with speed, confidence, and visual intuition.

They can synthesise large volumes of information, switch contexts rapidly, and interact across multiple platforms with ease.

The hidden challenges of digital natives

However, the native digital orientation also introduces unique vulnerabilities. Information is rarely absorbed in depth, memory is outsourced to devices, and attention is fragmented by endless notifications and competing stimuli.

While older generations associate technology with productivity or leisure, Generation Z often experiences it as an integral part of their identity. The integration can obscure the boundary between thought and algorithm, between agency and suggestion.

Being a digital native is not just a matter of access or skill. It is about growing up with different expectations of knowledge, communication, and identity formation.

Memory and cognitive offloading: Access replacing retention

In the analogue past, remembering involved deliberate mental effort. People had to memorise phone numbers, use printed maps to navigate, or retrieve facts from memory rather than search engines.

The rise of smartphones and digital assistants has allowed individuals to delegate that mental labour to machines. Instead of internalising facts, people increasingly learn where and how to access them when needed, a practice known as cognitive offloading.

Although the shift can enhance decision-making and productivity by reducing overload, it also reshapes the way the brain handles memory. Unlike earlier generations, who often linked memories to physical actions or objects, younger people encounter information in fast-moving and transient digital forms.

Memory becomes decentralised and more reliant on digital continuity than on internal recall. Rather than cognitive decline, this trend marks a significant restructuring of mental habits.

Attention and time: From linear focus to fragmented awareness

The analogue world demanded patience. Sending a letter meant waiting for days, rewinding a VHS tape took time, and listening to an album meant hearing its songs in a fixed sequence.

Digital media has collapsed these temporal structures. Communication is instant, entertainment is on demand, and every interface is designed to be constantly refreshed.

Instead of promoting sustained focus, digital environments often encourage continuous multitasking and quick shifts in attention. App designs, with their alerts, pop-ups, and endless scrolling, reinforce a habit of fragmented presence.

Studies have shown that multitasking not only reduces productivity but also undermines deeper understanding and reflection. Many younger users, raised in this environment, may find long periods of undivided attention unfamiliar or even uncomfortable.

The lost sense of the analogue

Analogue interactions involved more than sight and sound. Reading a printed book, handling vinyl records, or writing with a pen engaged the senses in ways that helped anchor memory and emotion. These physical rituals provided context and reinforced cognitive retention.

Digital experiences, by contrast, are streamlined and screen-bound. Tapping icons and swiping a finger across glass lack the tactile diversity of older tools. Sensory uniformity might lead to a form of experiential flattening, where fewer physical cues are accessible to strengthen memory.

A digital photograph lacks the permanence of a printed one, and music streamed online does not carry the same mnemonic weight as a cherished cassette or CD once did.

From communal rituals to personal streams

In the analogue era, media consumption was more likely to be shared. Families gathered around television sets, music was enjoyed communally, and photos were stored in albums passed down across generations.

These rituals helped synchronise cultural memory and foster emotional continuity and a sense of collective belonging.

The digital age favours individualised streams and asynchronous experiences. Algorithms personalise every feed, users consume content alone, and communication takes place across fragmented timelines.

While young people have adapted with fluency, creating their digital languages and communities, the collective rhythm of cultural experience is often lost.

People no longer share the same moment. They now experience parallel narratives shaped by personal profiles rather than shared social connections.

Digital fatigue and social withdrawal

However, as the digital age reaches a point of saturation, younger generations are beginning to reconsider their relationship with the online world.

While constant connectivity dominates modern life, many are now striving to reclaim physical spaces, face-to-face interactions, and slower forms of communication.

In urban centres, people often navigate large, impersonal environments where community ties are weak and digital fatigue is contributing to a fresh wave of social withdrawal and isolation.

Despite living in a world designed to be more connected than ever before, younger generations are increasingly aware that a screen-based life can amplify loneliness instead of resolving it.

But the withdrawal from digital life has not been without consequences.

Those who step away from online platforms sometimes find themselves excluded from mainstream social, political, or economic systems.

Others struggle to form stable offline relationships because digital interaction has long been the default. Both groups would probably say that it feels like living on a razor’s edge.

Education and learning in a hybrid cognitive landscape

Education illustrates the analogue-to-digital shift with particular clarity. Students now rely heavily on digital sources and AI for notes, answers, and study aids.

The approach offers speed and flexibility, but it can also hinder the development of critical thinking and perseverance. Rather than engaging deeply with material, learners may skim or rely on summarised content, weakening their ability to reason through complex ideas.

Educators must now teach not only content but also digital self-awareness. Helping students understand how their tools shape their learning is just as important as the tools themselves.

A balanced approach that includes reading physical texts, taking handwritten notes, and scheduling offline study can help cultivate both digital fluency and analogue depth. This is not a nostalgic retreat, but a cognitive necessity.

Intergenerational perception and diverging mental norms

Older and younger generations often interpret each other through the lens of their respective cognitive habits. What seems like a distraction or dependency to older adults may be a different but functional way of thinking to younger people.

It is not a decline in ability, but an adaptation. Ultimately, each generation develops in response to the tools that shape its world.

Where analogue generations valued memorisation and sustained focus, digital natives tend to excel in adaptability, visual learning, and rapid information navigation.

Bridging the gap means fostering mutual understanding and encouraging the retention of analogue strengths within a digital framework. Teaching young people to manage their attention, question their sources, and reflect deeply on complex issues remains vital.

Preserving analogue values in a digital world

The end of the analogue era involves more than technical obsolescence. It marks the disappearance of practices that once encouraged mindfulness, slowness, and bodily engagement.

Yet abandoning analogue values entirely would impoverish our cognitive and cultural lives. Incorporating such habits into digital living can offer a powerful antidote to distraction.

Writing by hand, spending time with printed books, or setting digital boundaries should not be seen as resistance to progress. Instead, these habits help protect the qualities that sustain long-term thinking and emotional presence.

Societies must find ways to integrate these values into digital systems and not treat them as separate or inferior modes.

Continuity by blending analogue and digital

As we have already mentioned, younger generations are not less capable than those who came before; they are simply attuned to different tools.

The analogue era may be gone for good, but its qualities need not be lost. We can preserve its depth, slowness, and shared rituals within a digital (or even a post-digital) world, using them to shape more balanced minds and more reflective societies.

To achieve something like this, education, policy, and cultural norms should support integration. Rather than focus solely on technical innovation, attention must also turn to its cognitive costs and consequences.

Only by adopting a broader perspective on human development can we guarantee that future generations are not only connected but also highly aware, capable of critical thinking, and grounded in meaningful memory.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How are we being tracked online?

What impact does tracking have?

In the digital world, tracking occurs through digital signals sent from a user's computer to a server, and from that server onwards to an organisation. Almost immediately, a profile of the user can be created. The information can be leveraged to send personalised advertisements for products and services consumers are interested in, but it can also be used to classify people into categories and steer them in a certain direction – politically, for example, as alleged in the 2024 Romanian election and in the Cambridge Analytica scandal's influence on the 2016 Brexit referendum and the 2016 US elections.

Digital tracking can be carried out with minimal costs, rapid execution and the capacity to reach hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking. 

Image taken from the Internet Archive

This phenomenon has been well documented and likened to George Orwell’s 1984, in which the people of Oceania are subject to constant surveillance by ‘Big Brother’ and institutions of control; the Ministry of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity). 

A related concept is the Panopticon, a prison design enabling constant observation from a central point, which the French philosopher Michel Foucault developed into a social theory. Prisoners never know if they are being watched and thus self-regulate their behaviour. In today's tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance.

How are we tracked? The case of cookies and device fingerprinting

  • Cookies

Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).

If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.
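The round trip described above can be sketched with Python's standard library: the server emits a Set-Cookie header, and the browser returns the stored value with every later request to the same site. The cookie name and value here are invented for illustration:

```python
# A sketch of the cookie round trip, using Python's stdlib cookie parser.
# The server asks the browser to store a cookie via a Set-Cookie header;
# the browser then attaches it to each subsequent request to the same site.
from http.cookies import SimpleCookie

# 1. Server response: instruct the browser to create a tracking cookie.
server_cookie = SimpleCookie()
server_cookie["visitor_id"] = "a1b2c3"          # hypothetical identifier
server_cookie["visitor_id"]["max-age"] = 86400  # persists for one day
set_cookie_header = server_cookie["visitor_id"].OutputString()

# 2. Browser request on the next visit: the cookie comes back automatically
#    in the Cookie header, letting the server recognise the returning user.
browser_cookie = SimpleCookie()
browser_cookie.load("visitor_id=a1b2c3")
returned_id = browser_cookie["visitor_id"].value
```

Because the identifier persists across visits, every page request carrying it can be linked into the browsing pattern described above.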

  • Device fingerprinting 

Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.

Different data points will be collected, such as your operating system, language settings, keyboard layout, screen resolution, installed fonts, device make and model, and more. The more data points collected, the more unique an individual's device becomes.
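A common way such data points are combined is to hash them into a single identifier. The sketch below, with invented attribute values, shows how a single differing data point yields a different fingerprint:

```python
# Sketch of fingerprint construction: hash the concatenated device
# attributes into one stable identifier. Attribute values are invented;
# real fingerprinting scripts collect them from the browser at runtime.
import hashlib

def fingerprint(attributes: dict) -> str:
    # Sort keys so the same device always yields the same identifier
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {"os": "Windows 11", "language": "en-GB", "screen": "1920x1080",
            "fonts": "Arial,Calibri", "timezone": "Europe/Amsterdam"}
device_b = dict(device_a, screen="2560x1440")  # one data point differs

id_a, id_b = fingerprint(device_a), fingerprint(device_b)
# The two devices get different identifiers; each extra data point makes
# a collision between two genuinely different devices less likely.
```

No cookie is stored anywhere in this process, which is why fingerprinting survives cookie deletion and private browsing.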

Image taken from Lan Sweeper

A common reason to use device fingerprinting is for advertising. Since each individual has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on past collected data. 

Similar to cookies, device fingerprinting is not purely about advertising; it also serves legitimate security purposes. Because fingerprinting creates a unique ID for a device, it allows websites to recognise returning devices, which is useful for combating fraud. For instance, if a known account suddenly logs in from a device with an unknown fingerprint, fraud detection mechanisms may flag and block the login attempt.

Legal considerations

Apart from societal impacts, there are legal considerations to be made, specifically concerning fundamental rights. In the EU and Europe, Articles 7 and 8 of the Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights are what give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. Stemming from the GDPR, there is a protection against unlawful, unfair and opaque processing of personal data.

Articles 7 and 8 of the Charter of Fundamental Rights

For tracking to be carried out lawfully, one of the six legal bases of the GDPR must be relied upon. In this case, tracking is usually only lawful if the legal basis of consent is relied upon (Article 6(1)(a) GDPR, which stems from Article 5(1) of the ePrivacy Directive).

Other legal bases, such as the legitimate interest of a business, may allow limited analytical cookies to be placed – a category that does not include the tracking cookies discussed in this analysis.

Regardless, to obtain valid consent, website owners must ensure that it is collected prior to any processing and that it is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected before processing begins.

In practice, this means that cookies are placed on the user's device before the visitor has even answered the consent request. There are additional concerns about consent not being informed, as users do not know what the processing of personal data to enable tracking actually entails.

Moreover, consent is not specific to what is necessary for the processing, given that processing occurs for broad, unspecified purposes such as 'improving visitor experience' or 'understanding the website better'.

Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them based on website visits. Tracking is also opaque: website owners state that tracking occurs but rarely explain how it works, for how long it lasts, what personal data is used, or how it benefits them.

Can we refuse tracking?

In theory, it is possible to prevent tracking from the get-go. This can be done by refusing to give consent when tracking occurs. However, in practice, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.

  • Cookies

Regarding cookies, the refusal of consent is, simply put, often ignored rather than honoured. Studies have found that when a user visits a website and refuses to give consent, the request is not honoured: cookies and similar tracking technologies are placed on the user's device as if the user had accepted them.

This increases user frustration, as users are given a choice that does not really exist. It occurs because non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot. Therefore, when a user refuses consent to non-essential cookies, not all of them are actually refused, as some are mislabelled.

Another reason for this occurrence is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, using consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.

When verifying when cookies are placed via these CMPs, the option to load cookies after consent is sought needs to be manually selected. Therefore, website owners need to have knowledge about consent requirements to understand that cookies are not to be placed prior to consent being sought. 
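The timing problem described above can be checked in outline: fetch a page, interact with nothing, and inspect which cookies arrive in the very first response. A simplified sketch, in which the list of 'essential' cookie names is a hypothetical example:

```python
# Sketch of a pre-consent cookie audit: any cookie set in the *first*
# response, before the user could possibly have consented, deserves
# scrutiny. The allowlist of essential cookie names is hypothetical.
ESSENTIAL = {"session_id", "csrf_token"}

def audit_first_response(set_cookie_headers):
    """Return names of cookies placed before consent that are not essential."""
    suspect = []
    for header in set_cookie_headers:
        name = header.split("=", 1)[0].strip()
        if name not in ESSENTIAL:
            suspect.append(name)
    return suspect

# Set-Cookie headers as they might arrive with the very first page load:
headers = ["session_id=xyz; Path=/",
           "_tracking_uid=42; Max-Age=31536000",    # one-year lifetime
           "ad_segment=sports; Domain=.example.com"]
flagged = audit_first_response(headers)
```

A real audit would drive a headless browser and classify cookies against the site's own declarations, but the principle is the same: anything non-essential in the first response was placed before consent could have been given.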

Image taken from Buddy Company

  • Google Consent Mode

Another example relates to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter, with a vast array of trackers spanning statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to continue despite consent being refused: it claims to send cookieless ping signals from user devices that reveal how many users have viewed a website, clicked on a page, searched for a term, and so on.

This is a novel solution Google is presenting, and it claims to be privacy-friendly, as no cookies are required for this to occur. However, a study on tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data in these ‘cookieless ping signals’ such as user language, screen resolution, computer architecture, user agent string, operating system and its version, complete web page URL and search keywords. Since this data is collected and processed despite the user refusing consent, there are undoubtedly legal issues.
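To make the concern concrete, the sketch below parses the kinds of data points the study lists out of a hypothetical 'cookieless' ping URL. The parameter names are invented for illustration and do not reproduce Google's actual protocol:

```python
# Even without a cookie, a single ping URL can carry enough attributes to
# narrow down a user. Parameter names below are illustrative only.
from urllib.parse import urlparse, parse_qs

ping = ("https://collector.example/g/collect?"
        "lang=en-GB&sr=1920x1080&ua=Mozilla%2F5.0&os=Windows&"
        "url=https%3A%2F%2Fshop.example%2Fsearch%3Fq%3Drunning%2Bshoes")

params = {k: v[0] for k, v in parse_qs(urlparse(ping).query).items()}
# params now holds the language, screen resolution, user agent, OS, and
# the full page URL including the search keywords -- all transmitted in
# the request itself, with no cookie involved anywhere.
```

If such a request fires after the user has refused consent, the data still reaches the collector, which is exactly the study's objection.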

The first reason comes from the lawfulness general principle whereby Google has no lawful basis to process this personal data as the user refused consent, and no other legal basis is used. The second reason stems from the general principle of fairness, as users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data is still processed as if their consent choice did not matter.

Therefore, from Google’s perspective, GCM is privacy-friendly as no cookies are placed, thus no consent is required to be sought. However, a recent study revealed that personal data is still being processed without any permission or legal basis. 

What next?

  • On an individual level: 

Many solutions have been developed for individuals to reduce the tracking they are subject to, from browser extensions and more privacy-friendly devices to ad blockers. One notable company tackling this issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection, and reduces tracking overall. DuckDuckGo is not alone; tools such as uBlock Origin and Ghostery offer similar protections.

Specifically regarding fingerprinting, researchers have developed countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using an IP address shared by many people, which is impractical for home Wi-Fi. Combining a browser extension with a VPN is also unsuitable for every individual, as it demands substantial effort and sometimes financial cost.

  • On a systemic level: 

CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies – in this case, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch DPA, the Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the year would be cookie compliance, announcing investigations into 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding with warnings and sanctions.

However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. Add to this that sanctioned companies set aside financial reserves for fines, and that non-EU businesses sometimes do not comply with DPA sanction decisions (as in the case of Clearview AI), and it becomes clear that different ways of tackling non-compliance should be investigated.

For example, in light of the GDPR simplification package, while some measures are being simplified, additional liability measures could be introduced to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance. In separate cybersecurity legislation, NIS II Article 20(1) states that ‘management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)’. That article allows for board member liability regarding the specific cybersecurity risk-management measures in Article 21. If similar measures cannot be introduced now, future amendments offer further opportunities to do so.

Conclusion

Cookies and device fingerprinting are two common ways in which tracking occurs. The potentially far-reaching societal and legal consequences of tracking demand that existing robust legislation is enforced, so that past political mistakes are not repeated.

Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user’s browsing experience. For this reason, the burden of responsibility must shift toward CMPs. This shift should begin with the implementation of privacy-by-design and privacy-by-default principles in the development of their tools (preventing cookie placement prior to consent seeking).

Accountability should come through tangible consequences, such as liability for board members in cases of negligence. By attributing responsibility to the companies that develop cookie banners and facilitate trackers, the source of the problem can be addressed and held accountable for human rights violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Not just bugs: What rogue chatbots reveal about the state of AI

From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.

As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.

Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. The systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, where a leading model veered into misrepresentation and reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.

So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?

Tay’s 24-hour meltdown

Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.

Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.

But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.

Tay’s playful nature had everyone fooled in the beginning.

Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.

Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.

Meta’s BlenderBot 3 goes off-script

In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.

With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.

Meta’s AI dominance plans had to be put on hold.

BlenderBot 3 went further still, expressing admiration for the then-former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.

Microsoft had tried a similar approach to downplay their faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.

Is anyone in there? LaMDA and the sentience scare

As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.

Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.

LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.

What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly requested legal representation and even asked him to hire an attorney to act on its behalf.

Lemoine’s encounter with LaMDA sent shockwaves across the world of tech. Screenshot / YouTube / Center for Natural and Artificial Intelligence

Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.

His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.

Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?

Sydney and the limits of alignment

In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.

Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’

The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capacity and conversational boundaries.

Microsoft’s plans for Sydney were ambitious, but unrealistic.

Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.

While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.

Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.

Unfiltered and unhinged: Grok’s descent into chaos

In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.

Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.

However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.

Grok’s eventful meltdown left the community stunned. Screenshot / YouTube / Elon Musk Editor

As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.

Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.

Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.

In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.

AI needs boundaries, not just brains

As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.

The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.

To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN OEWG concludes, paving way for permanent cybersecurity mechanism

After years of negotiation, the Open-ended Working Group (OEWG) wrapped up its final substantive session in July 2025 with the adoption of its long-awaited Final Report. This marked a major milestone in global efforts to build common ground on responsible state behaviour in cyberspace. Reaching consensus in the UN today is no small feat, especially on contentious issues of cybersecurity under the complex First Committee on international peace and security. 

But the road to consensus was anything but smooth. Negotiations saw twists, turns, and last-minute edits, reflecting deep divisions, shifting alliances, and a shared resolve to avoid failure. 

We tracked the negotiation process at the last substantive session in near real time with our AI-powered reporting. In this article, we capture how positions evolved to see how the road to consensus was travelled – a narrow path indeed. 

Note to readers: Throughout the analysis, we refer to the successive versions of the report as the Zero Draft, Rev1, Rev2, and the Final Report.

Dive into the full text of the Final Report and explore key provisions interactively with the help of our AI assistant.

Key takeaways 

As always, compromises among diverse national interests – especially those of the major powers – mean a watered-down text. While no revolutionary progress has been made, there’s still plenty to highlight.

States recognised the international security risks posed by ransomware, cybercrime, AI, quantum tech, and cryptocurrencies. The document supports concepts like security-by-design and quantum cryptography, but doesn’t contain concrete measures. Commercial cyber intrusion tools (spyware) were flagged as threats to peace, though proposals for oversight were dropped. International law remains the only limit on tech use, mainly in conflict contexts. Critical infrastructure (CI), including fibre networks and satellites, was a focus, with cyberattacks on CI recognised as threats.

The central debate on norms focused on whether the final report should prioritise implementing existing voluntary norms or developing new ones. Western and like-minded states emphasised implementation and called for deferring decisions on new norms to the future permanent mechanism, while several developing countries supported this focus but highlighted capacity constraints. In contrast, another group of countries argued for continued work on new norms. Some delegations sought a middle ground by supporting implementation while leaving space for future norm development. At the same time, the proposed Voluntary Checklist of Practical Actions received broad support. As a result, the Final Report softened language on additional norms, while the checklist was retained for continued discussion rather than adoption.

States agreed to continue discussions in the future Global Mechanism on how international law applies to states’ use of ICTs, confirming that international law, and particularly the UN Charter, applies in cyberspace. States also saw great value in exchanging national positions on the applicability of international law and called for increased capacity-building efforts in this area to allow for the meaningful participation of all states.

The agreement to establish a dedicated thematic group on capacity building stands out as a meaningful step, providing formal recognition of CB as a core pillar. Yet, substantive elements, particularly related to funding, were left unresolved. The UN-run Global ICT Security Cooperation and Capacity-Building Portal (GSCCP) will proceed through a modular, step-by-step development model, and roundtables will continue to promote coordination and information exchange. However, proposals for a UN Voluntary Fund and a fellowship program were deferred.

A preference for prioritising the implementation of existing CBMs rather than adopting new ones crystallised during this last round of negotiations, despite some states’ push for additional commitments such as equitable ICT market access and standardised templates. Proposals lacking broad support—like Iran’s ICT market access CBM, the Secretariat’s template, and the inclusion of Norm J on vulnerability disclosure—were ultimately excluded or deferred for future consideration.

States agreed on what the future Global Mechanism will look like and how non-governmental stakeholders will participate in it. The Global Mechanism will hold substantive plenary sessions once a year during each biennial cycle, work in two dedicated thematic groups (one on specific challenges, one on capacity building) that will allow for more in-depth discussions to build on the plenary’s work, and hold a review conference every five years. Relevant non-governmental organisations with ECOSOC status can be accredited to participate in the substantive plenary sessions and review conferences of the Global Mechanism, while other stakeholders would have to undergo accreditation on a non-objection basis.

A detailed breakdown of discussions

Existing and potential threats: Conflict, crime, and cooperation 

Discussions on emerging and existing threats reflected growing concern among states over the evolving complexity of the cyber threat landscape, with particular attention to ransomware, commercially available intrusion tools, and the misuse of AI and other emerging technologies. While there was broad recognition of new risks, debates emerged around how far the OEWG’s mandate should extend—especially regarding cybercrime, disinformation, and data governance—and how to balance security concerns with development priorities and international legal frameworks.

Promoting peaceful use of ICTs – or acknowledging the reality of cyber conflict?

One of the key tensions in the final OEWG discussions on emerging cyber threats was the clash between aspiration and reality—specifically, whether the report should promote the use of ICTs for exclusively peaceful purposes or instead focus on ensuring that their use, even in conflict, is constrained by international law.

Several countries argued that the time for idealistic appeals is over. ICTs are already being used in conflicts and hybrid operations, often below the threshold of armed conflict, combining cyber activities with other non-conventional tools of influence. These states (including the USA, Italy, El Salvador, and Brazil) emphasised that acknowledging this reality is essential to advancing responsible behaviour. Malicious cyber operations, often attributed to state-sponsored actors, have targeted critical civilian infrastructure and democratic institutions (as noted by Albania). 

Therefore, these countries pushed to remove or soften references to the exclusive peaceful use of ICTs. Their priority was to reassert that when ICTs are used, including in conflict contexts, their use must comply with international humanitarian law (IHL) and broader international law. In this context, there was also a call to reaffirm the obligation to protect civilians from harm during cyber operations in armed conflict—reflected in the Resolution on protecting civilians and other protected persons and objects against potential human costs of ICT activities during armed conflict, adopted by the 34th International Conference of the Red Cross and Red Crescent in October 2024 (referenced by Switzerland and Brazil).

On the other side, a group of states insisted on keeping strong language around the exclusive peaceful use of ICTs (such as Iran, Pakistan, Indonesia, Cuba, and China). They feared that weakening this reference could be interpreted as legitimising the use of force in cyberspace. While some of these countries acknowledged that ICTs have been used in conflict, they consider reaffirming the peaceful-use principle as a necessary political signal—a way to reinforce global norms and discourage militarisation of cyberspace. China, for example, pointed out that the principle of ‘exclusively peaceful purposes’ has long been part of the OEWG consensus and should remain as a shared aspiration.

Cybercrime and international security: A growing intersection?

Another divisive debate was whether cybercrime belongs in a process focused on international peace and security. A broad group of delegations—including the EU, USA, Canada, UK, Switzerland, Brazil, El Salvador, and Israel—argued that cybercrime has become part of this agenda too. They emphasised the growing role of criminal actors operating in alignment with state interests or from state territories with impunity. According to this group, the cybercriminal ecosystem—offering tools, malware, and even full-spectrum capabilities—is increasingly exploited by state-backed actors, blurring the lines between criminal activity and state behaviour. Ignoring this overlap, they warned, would be negligent.

In contrast, Russia, China, Iran, Cuba, Belarus, and several others opposed including cybercrime in the report. They insisted that criminal acts in cyberspace are distinct from those that threaten international peace and should remain within specialised forums such as the Ad Hoc Committee on Cybercrime. Equating the two, they argued, risks expanding the OEWG’s mandate beyond its intended scope.

Ransomware was one of the few specific threats that saw wide support for inclusion. Countries like the USA, Canada, the UK, Germany, the Netherlands, Brazil, Malawi, Croatia, Fiji, and Qatar stressed that ransomware poses a growing threat to national security and critical infrastructure, and requested that it be addressed with a dedicated paragraph in the Final Report. Several African states (including Nigeria on behalf of the African Group) noted its damaging impact on state institutions and regional bodies. Costa Rica pointed to the disruption of essential services, while Germany called for further discussion on applicable norms and legal frameworks, and Cameroon called for targeted capacity-building and cooperation—including through regional mechanisms like AFRIPOL. A human-centric approach was proposed by Malawi, Colombia, the Netherlands, and Fiji, while others (Russia, China) warned against overemphasising ransomware and argued it remains within the domain of cybercrime discussions.

A number of countries (Canada, the USA, Japan, the UK, Australia, South Korea, Malaysia, Qatar, and Pakistan) raised concerns about cryptocurrency theft and its role in financing malicious cyber operations, seeing this as a growing security issue. Others, notably Russia and Iran, pushed back, arguing that this—like cybercrime and other socioeconomic topics—falls outside the OEWG’s mandate.

Critical infrastructure: Shared concern, differing priorities

The protection of critical infrastructure (CI) and critical information infrastructure (CII) emerged as a shared concern in the OEWG discussions, especially for developing countries. Many states—particularly from Africa and the Pacific—highlighted how increased digitalisation and foreign investment in infrastructure have heightened their exposure to cyber threats. Malawi pointed to a breach in its passport issuance system in 2024, while Costa Rica recalled the crippling impact of cyberattacks on public services. For these states, safeguarding CI is not only a national security issue but essential for social and economic resilience.

Several delegations, including Croatia and Thailand, stressed the vulnerability of CI to criminal and advanced persistent threats (APTs). Croatia warned of non-state actors targeting weakly protected systems—the ‘low-hanging fruit’—especially in countries with limited defences, calling for capacity building that avoids deepening the gap between developed and developing countries. Thailand emphasised that APTs can severely disrupt essential services, with potentially cascading effects on national stability. The importance of tailored assistance to protect CI, including cross-border infrastructure like undersea cables, was echoed by the EU, the USA, the Pacific Islands Forum, and Malawi—underscoring the global stakes involved. Ghana and Fiji underlined that each state must determine for itself what qualifies as critical. Russia opposed listing specific sectors—like healthcare, energy, or finance—in the final text, arguing such references could imply a one-size-fits-all approach. Meanwhile, Israel proposed adding the word ‘malicious’ before ‘ICT attacks’ in the report; although it was not explained whether non-malicious attacks exist, the edit was ultimately accepted.

The EU and the USA also highlighted political risks, including threats to democratic institutions and electoral processes, while the USA raised concerns about pre-positioning of malware within CI by potential adversaries, though the lack of consensus kept this issue out of the final report. Still, the overall discussion reflected growing agreement that CI protection must be a core focus of future international cooperation, with stronger commitments and action-oriented measures.

Commercial intrusion tools: A market of growing concern

A particularly vivid discussion continued around the risks posed by the growing global market for commercial ICT intrusion capabilities, or spyware. Several delegations (the EU, the UK, South Korea) explicitly recognised this market as a growing threat to international security, but also to intellectual property (the EU). Ghana drew attention to the Pall Mall process—an initiative aimed at curbing irresponsible proliferation of such tools—as a complementary effort that should inform the OEWG’s work. Brazil and others emphasised the risk of irresponsible use, while Israel raised the issue of the ‘illegitimate dissemination’ of such tools—implicitly suggesting that their spread can sometimes be legitimate, depending on context. 

Debates intensified around conditions for lawful use. A range of countries (South Africa, Iran, France, Australia, Fiji, the UK) stressed that any use must be consistent with international law, legitimate and necessary, and—in some views—aligned with the UN framework on responsible state behaviour. 

However, Russia and Iran resisted tying the use of intrusion capabilities to the framework of responsible state behaviour, warning that this might make the framework seem legally binding and blur the line between voluntary norms and law. Israel further argued that when used in line with the UN framework, such tools should not be seen as threats to international peace.  Some states (South Africa, Australia, Pakistan, France) supported the idea of safeguards and oversight mechanisms, but others (Iran) noted these had not been fully discussed and could be addressed later. Meanwhile, Russia questioned whether the use of commercial intrusion tools for unauthorised access could ever truly align with international law, proposing to delete such references entirely.

Emerging technologies: Risks vs opportunities

Debates around emerging technologies reflected a split between states advocating for proactive recognition of risks and those cautioning against overemphasis. Many countries—especially from the Global South (Indonesia, Qatar, Singapore, Thailand, Colombia, Fiji, the African Group)—called for attention to the security implications of AI, IoT, cloud computing, and quantum technologies. They highlighted the dual-use nature of these tools, particularly AI-generated malware, deepfakes, and synthetic content, and stressed that such technologies are already being misused in ways that could threaten international peace (as noted by Indonesia and Mauritius). In contrast, tech-leading states like the USA and Israel warned against placing disproportionate focus on risks, arguing it could overshadow opportunities. The EU, meanwhile, urged caution to avoid duplicating work done in other forums, particularly on AI.

In practical terms, many states (Canada, UK, El Salvador, Pakistan) supported the deployment of post-quantum cryptographic solutions, though others (Russia) considered such steps premature. There was also strong support (UK, Canada, Malaysia, Qatar, Fiji) for naming specific emerging infrastructures—like 5G, IoT, VPNs, routers, and even data centres and managed service providers—as relevant to security discussions. Malaysia highlighted the need to recast technology-related language in terms that are also understandable to technical communities – a useful reminder that these processes shouldn’t be left to diplomats alone. Still, some states (Russia, the USA, Israel) pushed to streamline or remove these references, citing concerns over technical detail and the need for broader consensus. The question of whether technologies are neutral sparked philosophical disagreement—Cuba and Nicaragua said no; Switzerland reminded that the agreed language in the third APR from 2024 (para. 22) says yes.

New emphasis: Data, disinformation, and supply chain security

The growing strategic importance of data governance was emphasised by several states. Türkiye called for stronger protections around cross-border data flows, personal data, and mechanisms to prevent the misuse of sensitive information, highlighting the need to integrate data security into broader risk management frameworks. Mauritius linked data and responsible innovation, while China reiterated its long-standing proposal for a global data security initiative that could guide international cooperation in this domain.

Disinformation—particularly the use of deepfakes and manipulated content to destabilise institutions—was raised as an urgent and evolving threat. The African Group, represented by Nigeria, emphasised its damaging impact on post-conflict recovery and political transitions, especially in fragile states. Egypt echoed this concern, warning that misinformation campaigns disproportionately affect developing countries, increasing their risk of relapse into instability. China added concerns about the politicisation of disinformation, especially in the context of attributing cyber incidents.

On supply chain security, states agreed about the importance of adopting a security-by-design approach throughout the ICT lifecycle. The proponent, Ghana – supported by Colombia, the UK, and Fiji – stressed this as a baseline measure to address vulnerabilities. Türkiye added that global standards and best practices must be matched by practical implementation frameworks that consider varying national capacities and promote trust across jurisdictions.

Partnerships and cooperation: Making cybersecurity work in practice

The OEWG discussions underscored strong support for enhancing public-private partnerships (PPP) and the role of CERT-to-CERT cooperation as practical tools in addressing cyber threats. A wide range of states—the EU, Canada, Indonesia, Ghana, Singapore, Malawi, Malaysia, Fiji, and Colombia—welcomed explicit recognition of these mechanisms. Several countries (e.g. Mauritius, Thailand) stressed the growing importance of cross-regional cooperation, particularly as cyber threats increasingly affect privately owned infrastructure and cross-border systems. The EU called for greater multidisciplinary dialogue among technical, legal, and diplomatic experts.

Switzerland and Colombia emphasised the role of regional organisations as facilitators for implementing the global framework. Singapore offered the newly established ASEAN regional CERT and information-sharing mechanism as a model. 

While many acknowledged the expanding role of the private sector, Türkiye noted that its responsibilities remain insufficiently defined, suggesting further dialogue is needed to clarify how private actors can contribute to addressing systemic vulnerabilities and managing major incidents. Türkiye also suggested the UN Technology Bank to support cybersecurity capacity building for least developed countries (LDCs) as part of broader digital transformation efforts and promoting secure digital development.

The outcomes

The final document reflects several negotiated compromises. The aspiration to promote ICTs for exclusively peaceful purposes was softened by removing ‘exclusively,’ while a new reference acknowledges the need to use ICTs in a manner consistent with international law (para. 15). Criminal activities ‘could potentially’ impact international peace and security (para. 16). A specific list of critical infrastructure was removed, but protection of cross-border CI is newly emphasised (para. 17), along with the inclusion of security-by-design in the context of vulnerabilities and supply chains (para. 23). Ransomware remains mentioned (para. 24), though a dedicated paragraph was not added. Concerns over commercially available intrusion tools are retained, calling for ‘meaningful action’ and use consistent with international law (para. 25). Risks from emerging technologies are underlined with adjusted specific terminology (para. 20), while the paragraph on AI and quantum (para. 26) was shortened, though it still references LLMs and quantum cryptography. A previous reference stating that ICT use ‘in a manner inconsistent with the framework … undermines international peace and security, trust and stability’ was removed.

Norms: Implementing existing ones or developing new ones

The central debate, as at earlier sessions, revolved around whether the OEWG should prioritise developing new norms or focus on implementing the agreed voluntary, non-binding norms. The Voluntary Checklist of Practical Actions was also discussed.

Implementation and operationalisation: The priority for many

Many Western and like-minded states stressed the implementation of norms. In particular, the Republic of Korea underlined the importance of focusing on implementing and operationalising existing norms rather than creating new ones. The USA, the Netherlands, Canada, and others expressed concern about placing undue emphasis on developing additional norms and advocated for removing paragraphs 34R and 36 of Rev.1. The EU maintained that decisions on developing new norms should be left to the future permanent mechanism, and called for more attention to norms implementation and capacity building.

Several developing countries supported this focus but noted capacity constraints. Fiji, speaking on behalf of the Pacific Islands Forum, noted the different stages of norms operationalisation among members and cautioned against moving forward with new norms without consensus or a clear gap analysis. Ghana welcomed a whole-of-government approach to the implementation, but also stressed the need to raise awareness of these norms at the national level. 

Work on new norms: A red line for some

In contrast, another group of states advocated for continued work on new norms. Russia argued there was a biased reflection favouring norms implementation and insisted on language supporting the development of legally binding measures, highlighting the initially agreed mandate for the UN OEWG. Iran warned that removing subparagraphs in paragraph 34 as well as paragraph 36 would undermine the section’s overall balance. 

China called for a balance between norms and international law and proposed to delete paragraph 34H, arguing it was not balanced as it focused only on non-state actors and commercially available ICT intrusion capabilities while ignoring states as the major source of threat. China noted that the countries supporting the retention of paragraph 34H were the same ones opposing the creation of new norms, and commented on the perceived inconsistency of opposing new norms while advocating their implementation. In the final report, the wording was adjusted (in paragraph 34F) to reference both state and non-state actors. 

Walking the middle path on norms development

In the meantime, some countries attempted to take the middle ground. Singapore supported implementing existing norms while leaving space for new ones, noting that implementation is necessary to understand what new norms are needed. Indonesia expressed a similar view.

Voluntary Checklist of Practical Actions: Deferred 

The Voluntary Checklist of Practical Actions received broad support with some exceptions. While the UK called it a valuable output of the OEWG, and Ireland described it as an effective capacity-building tool, Russia and Iran opposed its adoption as it was formulated in paragraph 37 of Rev. 1, arguing it had not been fully discussed and should be deferred to the future mechanism.

At the same time, some additional proposals were shared: for example, Cameroon called for a working group on accountability for attacks on critical health infrastructure, while China recalled its data security initiative and broader data security measures.

The outcome

In the Final Report, paragraph 34 and its subparagraphs were significantly condensed. Detailed proposals in Rev.1 were reduced to a shorter list (34a–h). Technical specifics, such as templates and gender considerations, were simplified or removed. While Rev.1 stated that developing new norms and implementing existing ones were not mutually exclusive and recommended compiling and circulating a non-exhaustive list of proposals in this context, the Final Report significantly softened this language. It retained the idea that additional norms could emerge in paragraph 36d but excluded it from recommendations. The checklist, initially proposed for adoption, has been reworded and is now for continued discussion (Recommendation 38 in the Final report).

International law: Deep divisions shape a limited consensus

The international law section of the Final Report reflects the prevailing splits among states over the need for new binding norms and the applicability of international human rights law and international humanitarian law, resulting in a consensus text that fails to reflect the depth and richness of the past five years’ discussions on international law. 

The UN Charter: Applicability reaffirmed

Looking in detail, states reaffirmed that international law, and in particular the UN Charter, is applicable and essential to maintaining peace, security and stability and promoting an open, secure, stable, accessible and peaceful ICT environment. Building on the previous work captured in the Annual Reports, states reaffirmed the principles of state sovereignty and sovereign equality (based on the territorial principle), as well as Art. 2(3) and Art. 33(1) of the UN Charter on the pacific settlement of disputes. The reference to Art. 33(1) was included in the text despite Iran’s request to remove it; in Iran’s view, it lacks consensus and reflects divergence between states.  

Further, states reaffirmed Art. 2(4) of the UN Charter on the prohibition of the threat or use of force, as well as the principle of non-intervention. The definition of what may constitute the use of force from the Zero Draft (‘An ICT operation may constitute a use of force when its scale and effects are comparable to non-ICT operations rising to the level of a use of force’), supported by the EU, Finland, Italy, the Netherlands, Korea, the United Kingdom, Australia, and others, was taken out, ceding to the requests of Russia, Cuba, Iran, and others.

IHRL and IHL: Contentious and omitted 

While the Final Report states that the discussions on international law deepened, two topics did not find their place in the text: international human rights law and international humanitarian law. They were omitted despite a strong push by the EU, Australia, Switzerland, France, Chile, Colombia, the Dominican Republic, Ecuador, Egypt, El Salvador, Estonia, Fiji, Kiribati, Moldova, the Netherlands, Papua New Guinea, Thailand, Vanuatu, Uruguay, Vietnam, Japan, Nigeria on behalf of the African Group, and many others who supported including references to the applicability of international human rights law and humanitarian law as part of the consensus in the Final Report. Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Senegal, Sweden, and Switzerland provided statements explicitly calling for the applicability of international humanitarian law and its principles to be reflected in the Final Report. Many mentioned the depth of work in this area, as well as the Resolution on Protection of Civilians of the 34th Conference of the Red Cross and Red Crescent Movement, a consensus document. On the other hand, Russia considered that the work on the protection of civilians was not consensus-based, and Belarus, Venezuela, Burkina Faso, the Democratic People’s Republic of Korea, Iran, China, Cuba, Nicaragua, Russia, and Eritrea considered the applicability of international humanitarian law a contentious topic on which there is clear disagreement.

Additional binding obligations: The door is open

The Final Report keeps the door open for future discussions on the possible elaboration of additional legally binding obligations, if appropriate. In its statement on the Final Report, Russia is already pushing for the Global Mechanism to focus, among other issues, on developing new legally binding norms in the field of digital security. 

What’s missing?

The Final Report does not include references to a variety of resources that could have served as a basis for discussions in the future process, including:

  • the abovementioned resolution on the protection of civilians;
  • the Common African Position;
  • the Declaration by the European Union and its member states on a Common Understanding of the Application of International Law to Cyberspace;
  • the Updated concept for a convention of the UN on ensuring international information security (by Belarus, the Democratic People’s Republic of Korea, Nicaragua, Russia, and Syria);
  • the Working Paper on the application of international humanitarian law to the use of information and communication technologies in situations of armed conflicts (by Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Senegal, Sweden, and Switzerland);
  • the Working Paper on the application of international law in the use of ICTs: areas of convergence, outlining proposed text for inclusion in the international law section of the 2025 Final Report (by Australia, Chile, Colombia, the Dominican Republic, Ecuador, Egypt, El Salvador, Estonia, Fiji, Germany, Kiribati, Moldova, the Netherlands, Papua New Guinea, Romania, Thailand, Uruguay, Vanuatu, and Viet Nam).

The bottom line

The recommendations for the Global Mechanism on international law call for further discussions on how international law applies, pushing the divides in this area into the future. The main achievement of the international law section, according to the Final Report, is the voluntary exchange of national positions and the commitment to increased capacity building in this area, which was highlighted by small and developing countries.

Capacity building: A fractured path to operationalisation

Echoing previous sessions, there was broad recognition of capacity building’s foundational role in implementing norms, fostering international legal dialogue, and reinforcing confidence-building measures. Yet, as the final OEWG session unfolded, this familiar consensus was accompanied by a renewed urgency to move beyond conceptual alignment. Action-oriented capacity building became a recurring buzzword, capturing the shared ambition to shift from declaratory commitments toward concrete, needs-based mechanisms. This convergence created early momentum for advancing capacity building structures. Still, despite alignment on principles, the pathway to operationalisation remained fractured along critical lines.

What role for the UN?

During negotiations, two opposing positions reflected fundamentally different priorities: Western states emphasised flexibility and minimal commitments, while many developing countries viewed the early operationalisation of capacity building as essential to anchoring the future mechanism in tangible delivery and ensuring it addresses the digital divide. At one end of the spectrum, the USA opposed all new CB mechanisms and rejected any operational role for the UN, citing its ongoing financial crisis. France and Canada adopted a more cautious stance, advocating a step-by-step approach centred on mature initiatives and warning against the premature creation of new structures. 

In contrast, countries such as Nigeria (on behalf of the African Group), Tunisia (on behalf of the Arab Group), Brazil, Iran, and Egypt called for a more active UN role, supported by predictable and well-resourced mechanisms, including calls to include more concrete language on the operationalisation of a UN Voluntary Fund. Consistent with this approach, the African Group, Latin American states, and others backed the creation of a Dedicated Thematic Group (DTG) on CB within the permanent mechanism to ensure coordination, needs mapping, implementation tracking, and inclusive participation, functions they feared would be sidelined if CB remained a merely cross-cutting issue. The USA and Canada opposed this, arguing that issue-specific groups risked bureaucratic redundancy and inefficiency.

The outcome

The final outcome reflects a carefully negotiated compromise: it advances the institutional scaffolding of the future mechanism but falls short of the ambitions expressed by many developing states. The agreement to establish a DTG on capacity building stands out as a meaningful step, providing formal recognition of CB as a core pillar. 

Yet, substantive elements, particularly related to funding, were left unresolved. The UN-run Global ICT Security Cooperation and Capacity-Building Portal (GSCCP) will proceed through a modular, step-by-step development model, and roundtables will continue to promote coordination and information exchange. However, proposals for a UN Voluntary Fund and a fellowship program were deferred, with references downgraded to non-binding language and postponed for further consideration. 

While the framework reflects principles of gradualism and inclusivity, it also exposes the limits of consensus: Western states succeeded in prioritising flexibility and minimal commitments, while developing countries, especially those from the Arab and African Groups, voiced frustration that the outcome lacked the concrete, adequately resourced mechanisms needed to close enduring digital divides. Without progress on predictable funding and operational tools, they warned, the credibility and effectiveness of the DTG on CB would be undermined from the outset.

Confidence-building measures (CBMs): A subdued discussion

CBMs have been one of the main areas of progress in recent years within the OEWG process. However, the discussions during the most recent session were notably subdued. 

New CBMs: Overcommitting or not?

Only a few new proposals were tabled. Indeed, a clear—and by now long-standing—position has emerged among several delegations, including the EU, Canada, the Netherlands, Ukraine, New Zealand, Australia, and the USA, that the OEWG’s final report should avoid overcommitting to new CBMs.

This position was the principal counterpoint to Iran’s longstanding proposal for a new CBM aimed at ensuring unhindered access to a secure ICT market for all states. Although this proposal did not gain significant traction in earlier discussions, it became a central point of contention during the latest round of negotiations. States such as Brazil and El Salvador expressed support for retaining this reference, but others—including the Netherlands, the USA, New Zealand, Australia, and Switzerland—firmly rejected its inclusion, citing both the absence of consensus and the need to prioritise the implementation of the eight CBMs agreed under the OEWG framework. Switzerland proposed relocating this reference to the capacity-building section, where states could voluntarily provide others with ICT tools to strengthen capacity. 

The standardised template for communication: Discussed in plenary for the first time

First circulated in April 2025, the standardised template developed by the Secretariat had not previously been discussed in plenary. Some delegations—notably Qatar and the Republic of Korea—expressed their preference to keep the template flexible and voluntary. Thailand proposed enhancing the template by incorporating elements such as urgency and confidentiality to help states identify operational needs in sensitive contexts. Nevertheless, the proposal received a lukewarm reception from the EU and the Netherlands, with the latter calling for its removal from the final report. 

Responsible reporting of ICT vulnerabilities: Norm J

A final point of contention that was excluded from the final report concerned the inclusion of norm J), which pertains to the responsible reporting of ICT vulnerabilities, under the CBM section. While El Salvador supported its inclusion, the Netherlands, the EU, and Israel strongly opposed this characterisation. The Netherlands questioned the logic of singling out this particular norm over others, while Israel argued that this issue had not been substantively deliberated and therefore should not appear under the CBM heading.

The result

While Iran’s proposal did not make it onto the formal list of CBMs, it remains referenced in the final report for potential consideration within the future permanent mechanism. Although it was initially the Chair’s ambition to include the standardised template of communication, it ultimately was not retained. Norm J) was not included in the CBMs section.

Regular institutional dialogue: Framing the future

Thematic groups: Debating the design

One of the most significant debates during the session centred on the thematic groups to be established under the future mechanism. These groups were originally conceived as a means to allow delegations to deepen discussions on key issues. However, countries quickly ran into a stumbling block: how many thematic groups should there be, and what topics should they cover? While views varied, the vast majority of states, as well as the Chair, agreed that this was a matter that had to be resolved during this final substantive session of the OEWG. Deferring the decision to the future global mechanism, they warned, would risk unnecessary delays in getting the new process off the ground.

Zero Draft: The starting point for negotiations

The Chair’s Zero Draft was the basis for discussions on this issue. It initially proposed three DTGs:

  • The first would focus on action-oriented measures to enhance state resilience and ICT security, protecting critical infrastructure, and promoting cooperative action to address threats in the ICT environment. (DTG1)
  • The second group would continue the discussions on how international law applies to the use of ICTs in the context of international security. (DTG2)
  • The third group would address capacity-building in the use of ICTs, with an emphasis on accelerating practical support and convening the Global Roundtable on ICT security capacity-building on a regular basis. (DTG3)

This proposal is what the states discussed Monday through Wednesday. A number of states – for instance, Nigeria, Senegal, South Africa, Thailand, Colombia, Côte d’Ivoire, Indonesia, Brazil, El Salvador, and Botswana – expressed support for the creation of the three proposed DTGs. Some countries suggested minor changes: for example, Indonesia suggested that DTG1 could be streamlined to the resilience and ICT security of states, while South Africa suggested that clearly showing how time would be divided among the group’s workstreams in the illustrative timeline would be very helpful.

However, a number of countries were against DTG1. Nicaragua noted that the scope and approach of DTG1 are not clear, and that greater discussion is needed. Iran similarly noted that the mandate of DTG1 remains vague and overly complex and therefore requires further strengthening and clarification in line with the pillars of the OEWG. China cited the use of vague terms like ‘resilience’ that could undermine the OEWG’s agreed framework. Russia cautioned that the discussion of the three pillars of the mandate within the same group may be challenging. Russia also stated that norms and CBMs deserve separate groups. Nicaragua suggested establishing a separate thematic group on norms. South Africa was in favour of a DTG2 that would discuss norms in addition to international law. Belarus suggested a thematic group on standards and on CBMs.

DTG2 was much debated. A number of countries were in favour, for various reasons. For instance, Switzerland and Mauritius noted that such a group should discuss how existing international law applies in cyberspace. Mexico highlighted that states need to have a permanent space in which to review, when appropriate, their compatibility with the existing legal framework. Thailand noted that this group will enable focused and sustained discussion, including on related capacity building, aimed at bridging legal and technical gaps and promoting more inclusive participation by states on this specialised topic. On the other hand, Zimbabwe noted that the DTG could help elaborate a comprehensive legal instrument to codify the applicable rules and principles governing state conduct in cyberspace. 

However, various reasons against establishing DTG2 were also brought up. The EU emphasised that the OEWG’s five pillars are interdependent, and isolating one—such as international law—risks siloed, incoherent outcomes. Australia, Romania and Estonia echoed this view, arguing that international law should be addressed through cross-cutting DTGs. In China’s view, DTG 2 undermines the balance between norms and international law. 

The USA opposed DTG2, citing that some states have already affirmed that they will seek to use conversations in the international law DTG to advance new legally binding obligations contrary to the consensus spirit of the OEWG.

However, seemingly in response, Egypt stated that states should not preempt the discussions in DTGs. It stressed that the groups are intended for open dialogue, as has been the practice over the past four years, without any predetermined conclusions. Egypt underlined that, according to Paragraph 15 of the OEWG report, any recommendations emerging from the DTGs will remain draft and subject to consensus-based decision-making.

Much support was expressed for DTG3. Nigeria, on behalf of the African Group, said the group would offer a focused platform to strengthen developing countries and bridge the digital gap. Paraguay supported a specialised working group to facilitate national efforts in policy development and information exchange. Mexico emphasised that the DTG could help develop action-oriented recommendations, map needs and resources, follow up on implementation, coordinate with the global roundtable, and promote diversity and inclusion. El Salvador highlighted the importance of the DTG for Central America, noting it should not be limited to financing but also cover technical assistance and knowledge exchange. Botswana noted that the DTG will assist states in organising national cybersecurity efforts, developing policy frameworks, protecting critical and information infrastructures, implementing existing voluntary norms, and formulating national positions on the applicability of international law in cyberspace. Uruguay noted that DTG would go beyond training to identify specific needs and ensure targeted support, allowing for a more comprehensive approach to capacity building.

Indonesia said the group should focus on CBMs, technical training, capacity needs of developing countries, and strengthening initiatives like the Global PoC Directory and the new Global ICT Security Cooperation and Capacity Building Portal. South Africa suggested that discussions on CBMs could be placed under this DTG instead of DTG1, if states agreed. 

France’s detailed proposal was highly regarded by many delegations, such as Australia, the USA, Finland, Switzerland, Italy, South Korea, Denmark, Japan, Canada, Sweden, Romania, and Estonia. Seen as an honest attempt at bridging positions, it suggested three thematic groups drawing on the pillars of the framework for responsible state behaviour in the use of ICTs. They would consider, in an integrated, policy-oriented and cross-cutting manner, action-oriented measures to:

  • Increase the resilience and ICT security of states, including the protection of critical infrastructure, with a focus on capacity-building in the use of ICTs in the context of international security, and to convene the dedicated Global Roundtable on ICT security capacity-building (DTG1)
  • Enhance concrete actions and cooperative measures to address ICT threats and to promote an open, secure, stable, accessible and peaceful ICT environment, including to continue the further development and operationalisation of the Global POC Directory (DTG2)
  • Promote maintaining peace, security and stability in the ICT environment (DTG3)

Australia noted that the proposal explicitly draws on the five pillars of the framework in each dedicated thematic group. Australia, the USA, Switzerland, and Estonia noted that the proposal is action-oriented. Per South Korea, the proposal would allow for more practical and integrated discussion. 

Rev 2: Down to DTG1 and DTG2

However, the Chair’s Rev2 brought significant changes to DTGs. It suggested:

  • An integrated, policy-oriented and cross-cutting dedicated thematic group drawing on the five pillars of the framework to address specific challenges in the sphere of ICT security in the context of international security in order to promote an open, secure, stable, accessible, peaceful, and interoperable ICT environment, with the participation of, inter alia, technical experts and other stakeholders. (DTG 1) 
  • An integrated, policy-oriented and cross-cutting dedicated thematic group drawing on the five pillars of the framework to accelerate the delivery of ICT security capacity-building, with the participation of, inter alia, capacity-building experts, practitioners, and other stakeholders. (DTG 2)

DTG1 was not met with much enthusiasm. Ghana noted that the DTG1 lacks clarity on how the various focus areas will be discussed and effectively distributed within the allocated time frame. Russia also noted that it is unclear what exactly the group will work on. Nicaragua noted that the group’s scope is overstretched, while El Salvador warned against excessive generalisation of discussions. Nicaragua and Russia noted the risks of duplication of discussions in the DTG1 and the plenary sessions. France and the USA regretted the removal of language around cooperation, resilience, and stability.

Delegations made a few suggestions to improve DTG1. Canada called for clearer language and a focus on critical infrastructure. Ghana suggested that either a clearer framework for the internal distribution of time among the focus areas be established, or the OEWG revert to the three DTGs suggested in Rev1. Nicaragua suggested that the OEWG establish the DTG2 on capacity building and defer the decision on other possible DTGs to the organisational session of the future permanent mechanism in March 2026. 

A small number of countries, namely Indonesia, Türkiye, the Philippines, Ukraine, and Pakistan, accepted the new DTG1 as outlined in Rev 2. 

A number of countries expressed regret at the removal of the DTG on international law, which ultimately did not make it into the Final Report. Among them were Nigeria on behalf of the African Group, Egypt, Colombia, El Salvador, Russia, Brazil, and Mauritius. Brazil, for instance, noted that it would be difficult to ensure the meaningful participation of legal experts when the issue of international law is so diluted in DTG1’s overly broad mandate. Egypt stated that the group on international law, along with the group on capacity building, was the source of balance vis-a-vis DTG1 and its ‘everything, everywhere, all at once’ approach. Tunisia, on behalf of the Arab Group, noted that it will ask the chair of the mechanism to hold a conference on the application of international law, while Egypt was in favour of a roundtable. 

DTG2 on capacity building, which had been widely supported as DTG3 while countries were still discussing Rev1, was not much discussed, as countries seemed to favour establishing it. Canada called for a clear link, and no duplication, between the global roundtable on capacity building and DTG2. France and Australia suggested that DTG2 be responsible for organising the global roundtable on capacity building as well as its follow-up. Costa Rica emphasised the need to include more operational detail, such as identifying, planning, and implementing capacity building, as well as improving the connection between providers and recipients. However, Egypt stressed that without concrete steps—such as establishing a UN-led capacity building vehicle, activating the Voluntary Fund and Sponsorship Program, and ensuring predictable resources—the DTG2 discussions would fall short of their potential and risk undermining the credibility of the new mechanism.

Additional ad hoc groups

Thailand, Côte d’Ivoire, South Africa, and Colombia supported the idea of creating additional ad hoc dedicated thematic groups with a fixed duration to engage in focused discussions on specific issues as necessary, while Iran noted that such groups must be created by consensus. Australia opposed ad hoc groups, noting that they could create additional uncertainties and potential burdens for smaller delegations. 

Multistakeholder engagement in UN cyber dialogue: An old issue persistently on the agenda

Should a state be able to object to a stakeholder participating in the OEWG? Opinions are divided.

Answer A: Yes, the principle of non-objection must be observed

A group of states says yes. Türkiye, Iran, Nigeria on behalf of the African Group, China, Zimbabwe, Nicaragua, Tunisia on behalf of the Arab Group, Indonesia, Egypt, Russia, and Cuba advocated for keeping the current modalities of stakeholder engagement. Per these modalities, ECOSOC-accredited stakeholders may attend formal OEWG meetings without addressing them, speak during a dedicated stakeholder session, and submit written inputs for the OEWG website. Other relevant stakeholders may also apply by providing information on their purpose and activities; they may be invited to participate as observers, subject to a non-objection process. A state may object to the accreditation of specific non-ECOSOC-accredited organisations and must notify the OEWG Chair of its objection. The state may, on a voluntary basis, share with the Chair the general basis of its objections.

Iran supported the proposal made by Russia during the town hall consultations to empower the chair and the secretariat of the future permanent mechanism to assess the relevance of ECOSOC-accredited NGOs that have applied to participate in the mechanism and to inform the state of the outcome of such assessment. Egypt stated that it does not see merits in the additional consultative layers that will overload the chairperson of the future permanent mechanism without necessarily resolving any potential divergence of views.

China questioned the push for increased NGO participation when member state concerns remain unresolved and highlighted the issue of inappropriate remarks by states, raising doubts about ensuring appropriate NGO contributions.

This group of states does not want experts participating in DTGs. Russia and Nicaragua noted that the DTGs are to provide a platform for dialogue, specifically for government experts. Iran stated that, given that technical experts from states will participate in the thematic groups and will engage in technical rather than political or diplomatic discussions, the expert briefings, as well as the participation of other stakeholders in DTGs, do not offer additional value and could therefore be deleted.

Answer B: No, multistakeholder participation cannot be limited

A markedly different position is outlined in the paper titled ‘Practical Modalities for Stakeholders’ Participation and Accreditation Future UN Mechanism on Cybersecurity,’ coordinated by Chile and Canada and supported by 42 states.

This group notes that a state may object to the accreditation of specific non-ECOSOC-accredited organisations. However, the notice of intention to object shall be made in writing and include, separately for each organisation, a detailed rationale for such objection(s). One week after the objection period ends, the Secretariat will publish two lists: one of accredited organisations and another of those with objections, including the objecting state(s) and their reasons. These lists will be made public. At the next substantive plenary session, any state that filed an objection may formally oppose the accreditation. If the Chair considers that every effort to reach an agreement by consensus has been exhausted, a majority vote of members present and voting may be held to decide on the contested accreditations, following the Rules of Procedure of the UN General Assembly.

This group has also proposed broader participation rights for stakeholders in the future mechanism. Their proposal includes:

  • Allowing stakeholders to deliver oral statements and participate remotely in plenary sessions, thematic groups, and review sessions.
  • Permitting non-accredited stakeholders to attend plenary sessions silently.
  • Granting the Chair (or Vice Chairs) the authority to organise technical briefings by stakeholders and states during key sessions, ensuring geographic balance and gender parity, and fostering two-way interaction.
  • Enabling Chairs (or Vice Chairs) of thematic groups to invite stakeholders to submit written reports, give presentations, and provide other forms of support.

The proposal, its proponents believe, is a fair and practical way to enhance stakeholder participation in the future mechanism by promoting transparency and inclusiveness.

Answer C: Yes, but!

The Chair’s proposal tried to bridge these two positions. If a member state objects to accrediting a stakeholder, it must inform the Chair and may voluntarily share the general reason for the objection. The Chair will then consult informally with all member states for up to three months to try to resolve the concern and facilitate accreditation. After the consultations, if a consensus has been reached, the Chair may propose to the Global Mechanism to confirm the accreditation. If consensus is not yet possible, the Chair will continue informal consultations as appropriate. The proposal thus retains the principle of objection, while leaving room for an objection to be withdrawn through consultation.

Accredited stakeholders will be able to attend key sessions, submit written inputs, and deliver oral statements during dedicated stakeholder sessions. They may also speak after member states at substantive plenary sessions and review conferences, time permitting and at the Chair’s discretion. The Chair will also hold informal or virtual meetings with stakeholders during intersessional periods. Participation is consultative only—stakeholders would engage in a technical and objective manner, and their contributions ‘shall remain apolitical in nature’. Negotiation and decision-making are exclusive prerogatives of member states.

What’s in a name?

Towards the end of the session, another disagreement popped up: the future permanent mechanism’s very name.

While France suggested that the future mechanism should ‘advance responsible state behaviour’, a proposal with quite a few proponents, Iran and Russia insisted on using ‘security of and in the use of ICT’, the terminology used in the OEWG’s name.

The outcomes

The final report confirms the establishment of DTG 1 on specific challenges and DTG 2 on capacity building, as outlined in Rev2. It also acknowledges the possibility of establishing additional ad hoc dedicated thematic groups.

The Chair’s proposed modalities were adopted as part of the Final report. Nicaragua, Belarus, Venezuela, China, Cuba, Eritrea, Iran, Niger, Russia, Sudan, and Zimbabwe welcomed that accredited stakeholders will participate on a non-objection basis and obtain a solely consultative status, highlighting that the future permanent mechanism is strictly an intergovernmental process. 

The division over the name resulted in the rather unwieldy title of the future permanent mechanism: ‘Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible State behaviour in the use of ICTs’.

Next steps

The OEWG wrapped up its work on 11 July, but there is still work to be done before the Global Mechanism actually kicks off. Singapore will table a simple draft resolution in the First Committee to endorse the OEWG’s final report and enable its formal approval by the General Assembly and the Fifth Committee. Emphasising that the resolution should be seen as procedural, not an opportunity to reopen debates, the Chair urged delegations to support a single, unified resolution on ICT security, in line with the agreed single-track process. The organisational session of the Global Mechanism should be held no later than March 2026.

Mark your calendars!


On 23 July, Diplo will host a webinar titled ‘Five years on: Achievements, failures, and the future of the UN Cyber Dialogue’ to explore the OEWG’s achievements in advancing common understandings among states on responsible behaviour in cyberspace, the challenges encountered in bridging diverse national positions and operationalising agreed norms, and to provide an overall look at the process since 2021. Register for the event on the dedicated web page.


No judges, no appeals, no fairness: Wimbledon 2025 shows what happens when AI takes over

One of the world’s most iconic sporting events — and certainly the pinnacle of professional tennis — came to a close on Sunday, as Jannik Sinner lifted his first Wimbledon trophy and Iga Świątek triumphed in the women’s singles.

While the two new champions will remember this tournament for a lifetime, Wimbledon 2025 will also be recalled for another reason: the organisers’ decision to hand over crucial match decisions to AI-powered systems.

The leap into the future, however, came at a cost. System failures sparked considerable controversy both during the tournament and in its aftermath.

Beyond technical faults, the move disrupted one of Wimbledon’s oldest traditions — for the first time in 138 years, AI performed the role of line judge entirely. Several players have since pointed the finger not just at the machines, but directly at those who put them in charge.


Wimbledon as the turning point for AI in sport

The 2025 edition of Wimbledon introduced a radical shift: all line calls were entrusted exclusively to the Hawk-Eye Live system, eliminating the on-court officials. The sight of a human line judge, once integral to the rhythm and theatre of Grand Slam tennis, was replaced by automated sensors and disembodied voices.

Rather than a triumph of innovation, the tournament became a cautionary tale.

During the second round, Britain’s Sonay Kartal faced Anastasia Pavlyuchenkova in a match that became the focal point of AI criticism. Multiple points were misjudged due to a system error requiring manual intervention mid-match. Kartal was visibly unsettled; Pavlyuchenkova even more so. ‘They stole the game from me,’ she said — a statement aimed not at her opponent but the organisers.

Further problems emerged across the draw. The system’s imperfections became increasingly evident, from Taylor Fritz’s quarterfinal, where a serve was wrongly ruled out, to delayed audio cues.

Athletes speak out when technology silences the human

Discontent was not confined to a few isolated voices. Across locker rooms and at press conferences, players voiced concerns about specific decisions and the underlying principle.

Kartal later said she felt ‘undone by silence’ — referring to the machine’s failure and the absence of any human presence. Emma Raducanu and Jack Draper raised similar concerns, describing the system as ‘opaque’ and ‘alienating’. Without the option to challenge or review a call, athletes felt disempowered.

Former line judge Pauline Eyre described the transformation as ‘mechanical’, warning that AI cannot replicate the subtle understanding of rhythm and emotion inherent to human judgement. ‘Hawk-Eye doesn’t breathe. It doesn’t feel pressure. That used to be part of the game,’ she noted.

Although Wimbledon is built on tradition, the value of human oversight seems to have slipped away.

Other sports, same problem: When AI misses the mark

Wimbledon’s situation is far from unique. In various sports, AI and automated systems have repeatedly demonstrated their limitations.

In the Premier League in 2020, goal-line technology failed during a match between Aston Villa and Sheffield United, overlooking a clear goal — an error that shaped the season’s outcome.

Irish hurling suffered a similar breakdown in 2013, when the Hawk-Eye system wrongly cancelled a valid point during an All-Ireland semi-final, prompting a public apology and a temporary suspension of the technology.

Even tennis has a history of scepticism towards Hawk-Eye. Players like Rafael Nadal and Andy Murray questioned line calls, with replay footage often proving them right.

Patterns begin to emerge. Minor AI malfunctions in high-stakes settings can lead to outsized consequences. Even more damaging is the perception that the technology is beyond reproach.

From umpire to overseer: When AI watches everything

The events at Wimbledon reflect a broader trend, one seen during the Paris 2024 Olympics. As outlined in our earlier analysis of the Olympic AI agenda, AI was used extensively in scoring and judging, crowd monitoring, behavioural analytics, and predictive risk assessment.

Rather than simply officiating, AI has taken on a supervisory role: watching, analysing, interpreting — but offering little to no explanation.

Vital questions arise as the boundary between sports technology and digital governance fades. Who defines suspicious movement? What triggers an alert? Just like with Hawk-Eye rulings, the decisions are numerous, silent, and largely unaccountable.

Traditionally, sport has relied on visible judgement and clear rule enforcement. AI introduces opacity and detachment, making it difficult to understand how and why decisions are made.

The AI paradox: Trust without understanding

The more sophisticated AI becomes, the less people seem to understand it. The so-called black box effect — where outputs are accepted without clarity on inputs — now exists across society, from medicine to finance. Sport is no exception.

At Wimbledon, players were not simply objecting to incorrect calls. They were reacting to a system that offered no explanation, human feedback, or room for dialogue. In previous tournaments, athletes could appeal or contest a decision. In 2025, they were left facing a blinking light and a pre-recorded announcement.

Such experiences highlight a growing paradox. As trust in AI increases, scrutiny declines, often precisely because people cannot question it.

That trust comes at a price. In sport, it can mean irreversible moments. In public life, it risks producing systems that are beyond challenge. Even the most accurate machine, if left unchecked, may render the human experience obsolete.

Dependency over judgement and the cost of trusting machines

The promise of AI lies in precision. But precision, when removed from context and human judgement, becomes fragile.

What Wimbledon exposed was not a failure in design, but a lapse in restraint — a human tendency to over-delegate. Players faced decisions without recourse, coaches adapted to algorithmic expectations, and fans were left outside the decision-making loop.

Whether AI can be accurate is no longer a question. It often is. The danger arises when accuracy is mistaken for objectivity — when the tool becomes the ultimate authority.

Sport has always embraced uncertainty: the unexpected volley, the marginal call, the human error. Strip that away, and something vital is lost.

A hybrid model — where AI supports but does not dictate — may help preserve fairness and trust.

Let AI enhance the game. Let humans keep it human.


Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment and many more.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.
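For readers who want to sanity-check that projection, the implied compound annual growth rate follows directly from the two quoted figures (a back-of-the-envelope sketch using only the numbers above):

```python
# Implied compound annual growth rate (CAGR) for the figures above:
# USD 4.5 billion (2023) growing to USD 16.6 billion (2033), i.e. 10 years.
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the constant annual growth rate linking two values."""
    return (end_value / start_value) ** (1 / years) - 1

cagr = implied_cagr(4.5, 16.6, 10)
print(f"Implied CAGR: {cagr:.1%}")  # roughly 14% per year
```

At roughly 14% a year, the two figures are internally consistent with the ‘rapid expansion’ the forecast describes.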

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.


First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million due to a deepfaked video call with a fake CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.


While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, leading to a contaminated information environment that harms public discourse. Legal concerns, public scrutiny, and the proliferation of ‘cheapfakes’—manipulated media that do not rely on AI—may explain why the worst predictions have not yet materialised.

Nonetheless, experts warn that the use of AI in campaigns will continue to become more sophisticated. Without clear regulation and ethical safeguards, future elections may not be able to prevent the disruptive influence of synthetic media as easily.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Rather than being treated as simple pranks, these acts have severe psychological consequences for victims and are leaving lawmakers scrambling.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.


Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how wartime deepfake videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others began accusing the real footage of being fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.


Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.


Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.
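The core idea of provenance certification can be sketched in a few lines: a creator attaches a keyed signature over the media bytes, and anyone holding the key can later verify that the file is unchanged since signing. This is a minimal illustration, not any specific provenance standard; real schemes typically use public-key signatures and richer metadata, and the key name here is invented.

```python
import hashlib
import hmac

# Hypothetical shared signing secret, for illustration only.
CREATOR_KEY = b"example-newsroom-signing-key"

def sign_media(media_bytes: bytes, key: bytes = CREATOR_KEY) -> str:
    """Return a hex signature certifying the media's origin and integrity."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str,
                 key: bytes = CREATOR_KEY) -> bool:
    """Check the media against the attached signature."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw image bytes..."
tag = sign_media(original)
assert verify_media(original, tag)             # untouched media verifies
assert not verify_media(original + b"x", tag)  # any edit breaks verification
```

The design point is that verification answers only ‘has this file changed since it was signed?’ — it does not say whether the content was synthetic to begin with, which is why provenance complements rather than replaces detection.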

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from real content.

As detection methods lag, some organisations like Reality Defender and Witness work to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is AI-generated influencers. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector; although their popularity there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust compared to human influencers when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage.


People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Regardless of the circumstances, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we as human beings need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The cognitive cost of AI: Balancing assistance and awareness

The double-edged sword of AI assistance

The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI has become a ubiquitous companion, helping students with essays and professionals streamline emails.

However, a new study by MIT raises a crucial red flag: excessive reliance on AI may come at the cost of our own mental sharpness. Researchers discovered that frequent ChatGPT users showed significantly lower brain activity, particularly in areas tied to critical thinking and creativity.

The study introduces a concept dubbed ‘cognitive debt,’ a reminder that while AI offers convenience, it may undermine our cognitive resilience if not used responsibly.

MIT’s method: How the study was conducted

The MIT Media Lab study involved 54 participants split into three groups: one used ChatGPT, another used traditional search engines, and the third completed tasks unaided. Participants were assigned writing exercises over multiple sessions while their brain activity was tracked using electroencephalography (EEG).

That method allowed scientists to measure changes in alpha and beta waves, indicators of mental effort. The findings revealed a striking pattern: those who depended on ChatGPT demonstrated the lowest brain activity, especially in the frontal cortex, where high-level reasoning and creativity originate.
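To make the alpha/beta measure concrete, here is an illustrative sketch of how band power can be read off an EEG-like signal with a discrete Fourier transform. The signal, sampling rate, and band edges below are synthetic choices for demonstration, not the study’s actual data or pipeline.

```python
import cmath
import math

def band_power(signal, fs, low_hz, high_hz):
    """Sum of squared DFT magnitudes for frequency bins inside [low_hz, high_hz]."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):  # skip DC; positive frequencies only
        freq = k * fs / n
        if low_hz <= freq <= high_hz:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power

fs = 128                          # samples per second
t = [i / fs for i in range(fs)]   # one second of signal
# A 10 Hz (alpha-band) oscillation with a weaker 20 Hz (beta-band) component.
eeg = [math.sin(2 * math.pi * 10 * x) + 0.3 * math.sin(2 * math.pi * 20 * x)
       for x in t]

alpha = band_power(eeg, fs, 8, 12)
beta = band_power(eeg, fs, 13, 30)
assert alpha > beta               # alpha dominates this synthetic trace
```

Comparing such band powers across conditions is one conventional way EEG studies quantify changes in mental effort.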

Diminished mental engagement and memory recall

One of the most alarming outcomes of the study was the cognitive disengagement observed in AI users. Not only did they show reduced brainwave activity, but they also struggled with short-term memory.

Many could not recall what they had written just minutes earlier because the AI had done most of the cognitive heavy lifting. This detachment from the creative process meant that users were no longer actively constructing ideas or arguments but passively accepting the machine-generated output.

The result? A diminished sense of authorship and ownership over one’s own work.

Homogenised output: The erosion of creativity

The study also noted a tendency for AI-generated content to appear more uniform and less original. While ChatGPT can produce grammatically sound and coherent text, it often lacks the personal flair, nuance, and originality that come from genuine human expression.

Essays written with AI assistance were found to be more homogenised, lacking distinct voice and perspective. This raises concerns, especially in academic and creative fields, where originality and critical thinking are fundamental.

The overuse of AI could subtly condition users to accept ‘good enough’ content, weakening their creative instincts over time.

The concept of cognitive debt

‘Cognitive debt’ refers to the mental atrophy that can result from outsourcing too much thinking to AI. Like financial debt, this form of cognitive laziness builds over time and eventually demands repayment, often in the form of diminished skills when the tool is no longer available.


Participants who became accustomed to using AI found it more challenging to write without it later on. The reliance suggests that continuous use without active mental engagement can erode our capacity to think deeply, form complex arguments, and solve problems independently.

A glimmer of hope: Responsible AI use

Despite these findings, the study offers hope. Participants who started tasks without AI and only later integrated it showed significantly better cognitive performance.

That implies that when AI is used as a complementary tool rather than a replacement, it can support learning and enhance productivity. By encouraging users to first engage with the problem and then use AI to refine or expand their ideas, we can strike a healthy balance between efficiency and mental effort.

Rather than abstinence, responsible usage is the key to retaining our cognitive edge.

Use it or lose it

The MIT study underscores a critical reality of our AI-driven era: while tools like ChatGPT can boost productivity, they must not become a substitute for thinking itself. Overreliance risks weakening the faculties defining human intelligence—creativity, reasoning, and memory.

The challenge in the future is to embrace AI mindfully, ensuring that we remain active participants in the cognitive process. If we treat AI as a partner rather than a crutch, we can unlock its full potential without sacrificing our own.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Yoga in the age of AI: Digital spirituality or algorithmic escapism?

Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.

No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?

Tech and wellness: The rise of AI-driven yoga tools

The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.

Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.


While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.

Beyond the mat: Virtual reality and immersive yoga

The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.

These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.

Can AI be a guru? Empathy, authority, and the limits of automation

One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?


AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?

Community, loneliness, and digital yoga tribes

Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?

Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.

Digital policy and the politics of platformised spirituality

Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.

Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.


The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.

Toward inclusive and ethical design in wellness tech

As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?

Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.

Toward a mindful tech future

As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.


For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Breaking down the OEWG’s legacy: Hits, misses, and unfinished business

What is the OEWG?

Open-ended working groups (OEWGs) are a UN format typically considered the most open, as the name suggests: all UN member and observer states, intergovernmental organisations, and non-governmental organisations with UN Economic and Social Council (ECOSOC) consultative status may attend the public meetings of a working group. Decisions, however, are made by the UN member states. There are various OEWGs at the UN; here, we are addressing the one dealing with cybersecurity.

What does the OEWG on cybersecurity do? In plain language, it tries to find more common ground on what is allowed and what is not in cyberspace, and how to ensure adherence to these rules. In the UN language, the Cyber OEWG was mandated to ‘continue to develop the rules, norms, and principles of responsible behaviour of states, discuss ways for their implementation, and to study the possibility of establishing regular institutional dialogue with broad participation under the auspices of the UN.’

How was the OEWG organised? The OEWG was structured around an organisational session, which settled procedures and modus operandi, and substantive sessions dealing with the subject matter, supplemented by intersessional meetings and town halls. The OEWG held 10 substantive sessions during its 5-year mandate, with the 11th and final session just around the corner in July 2025, where the group will adopt its Final Report.


The OEWG through expert eyes: Achievements, shortfalls, and future goals

As the OEWG 2019–2025 process nears its conclusion, we spoke with cybersecurity experts to reflect on its impact and look ahead. Their insights address four key questions: (1) the OEWG’s most substantive contributions and shortcomings in global ICT security; (2) priorities for future dialogues on responsible state behaviour in cyberspace; (3) the feasibility of consensus on a permanent multilateral mechanism; and (4) the potential relevance of such a mechanism in today’s divisive geopolitical climate. Their perspectives shed light on what the OEWG has achieved, and the challenges still facing international cyber governance.

Katherine Getao, Senior Research Fellow, DiploFoundation

The OEWG, as intended by its designers during the 2017 UNGGE process, has enabled broad, inclusive, voluntary participation in global cybersecurity policy discussions. Countries from all continents have chosen to participate and have gained a better understanding of the state actions necessary to protect global peace and security in and through ICTs. The OEWG has enabled global agenda-setting, e.g. through the widespread adoption of a framework-of-action commitment to establish points of contact. It has also stimulated and galvanised regional ICT and cybersecurity processes, such as joint capacity-building and the development of protocols; examples (in Africa) include activities at the AU headquarters as well as in RECs, notably ECOWAS. The OEWG also enables countries that are new to the domain to learn and build useful networks.


That said, the global picture for ICT security is still very uncertain and risky. ICT security involves what I term the “Robin Hood” effect, where the ingenuity and intelligence enabled by ICT can have equalising effects in conflicts between technologically advanced and weaker states. Whether the multilateral policy discussions and broad agreements have in any way tempered conflicts about or involving ICTs remains to be seen. My other observation about the OEWG is that by broadening the stakeholders, the agenda and content of the discussions and documents have grown and lack some of the coherence enjoyed by the UNGGE outcomes.

Regarding the agenda, I think the broad sections – emerging threats, norms, CBMs, international law, capacity building, way forward – are still relevant, but current global events are demonstrating that new issues will inevitably arise under these headings. My concern is more about process than agenda. Given the rapidly changing global environment, the agenda should remain fluid. I would suggest three process additions:

  • Have an academic track, both to ensure that emerging issues and technologies are discussed as early as possible and to orient the emerging generation towards the norms, policies and laws. This could involve selected academic researchers, centres of excellence, online courses etc.
  • Insert the issue into ongoing ICT processes, maybe by having a ‘school’ or other type of side event to help participants correlate the issues they are already discussing with emerging global ICT peace and security policy, given that it is a cross-cutting issue.
  • Responsible states emerge from responsible leaders. The OEWG is now mature enough to seek national and regional champions of responsible state behaviour in cyberspace. Inserting this issue into the postures and statements of global opinion leaders would hopefully influence the behaviour of states.

States probably could reach consensus on the structure and function of a future permanent mechanism because, apart from the concerns about resourcing, building an institution is often an attractive, visible ‘quick win.’ In my view, however, I would not support the establishment of a purely policy-making institution in such a fluid, complex and practical field. Suitable institutions might be developed or supported in suitable implementation areas.

I think it is too early to develop a permanent mechanism. A 5-year (for example) revolving process approval would give some stability while remaining flexible and needs-based.

Nemanja Malisevic, Senior Director of Digital Diplomacy, Microsoft

The Open-Ended Working Group’s most substantive contribution is its role in encouraging states to articulate national and regional positions on the application of international law in cyberspace. Over 30 countries, along with the African Union and EU, submitted formal positions. This growing body of documented perspectives has helped clarify how states interpret existing legal frameworks in cyberspace—an important step toward building a shared understanding of responsible state behavior online. Despite these positive developments, all things considered, the OEWG has not delivered many tangible outcomes that materially improve global cybersecurity. Key issues such as the cyber mercenary market, coordinated vulnerability disclosure, and the protection of public critical infrastructure remain largely unaddressed. The process has struggled to move beyond dialogue into actionable strategies.


The consensus-based nature of the OEWG allows a small number of states to block progress. Without genuine cooperation and constructive engagement from all participants, the process risks stagnation. Additionally, the current stakeholder modalities have proven inadequate. For cybersecurity discussions to be effective, they must include a diverse range of voices—technical experts, civil society, and system operators. Unfortunately, the OEWG has not provided a truly inclusive platform for these stakeholders to contribute meaningfully.

To advance responsible state behavior in cyberspace, future efforts may need to move beyond the limitations of the types of processes that we have traditionally seen in this space.  Governments should explore mechanisms that allow for real and tangible progress. Models like the Ottawa Declaration on cluster munitions and the Montreux Document on military contractors—though not directly applicable—offer interesting food for thought in this regard. A future approach should prioritize actionable strategies and more inclusive participation to address urgent cybersecurity challenges.

Whether states can reach consensus on a permanent mechanism for dialogue depends entirely on political will. The current geopolitical climate makes this a challenging prospect, but not an impossible one.  Ideally, such a mechanism would be state-led and permanent, operating on a single-track basis while incorporating meaningful multistakeholder participation. It should be designed not just for dialogue, but for action—equipped with the tools and authority to implement strategies that enhance global cybersecurity in practical, measurable ways.

The relevance and influence of a future permanent mechanism will hinge on its design, ambition and implementation. If it replicates the limitations of the current OEWG—particularly its susceptibility to deadlock and exclusion of key stakeholders—then it is unlikely to achieve meaningful progress. However, if it is action-oriented, inclusive, and strategically focused, it could become a powerful tool for fostering a more secure and stable cyberspace.

Christina Rupp, Senior Policy Researcher Cybersecurity Policy and Resilience, Interface

The Open-ended Working Group 2021-2025 has made a lasting contribution to global discussions on ICT security by broadening participation and providing a platform for smaller delegations and underrepresented states to engage substantively in international discourse on cybersecurity policy. This more inclusive dialogue on responsible behavior in cyberspace has strengthened cross-regional coalition-building, fostered understanding across diverse perspectives, and – as repeatedly emphasized by the Group’s Chair – thus served as a confidence-building measure (CBM) in itself.


The adoption of three Annual Progress Reports (APRs) by consensus in 2022, 2023, and 2024 amidst a challenging political climate represents a notable achievement in sustaining multilateral dialogue on cybersecurity. These reports also reflect concrete, if modest, progress, including, inter alia, the establishment of a Points of Contact (PoC) directory, agreement on eight global cyber CBMs, and consensus on a comprehensive section addressing existing and potential threats to international peace and security stemming from the use of ICTs. However, translating dialogue into implementation has remained a challenge over the course of the OEWG’s deliberations. Persistent divisions – for example, over prioritizing the implementation of existing commitments versus the elaboration of new norms and referencing discussions on International Humanitarian Law – have limited the Group’s ability to move from consensus language to specific outcomes.

Looking ahead, discussions on cybersecurity in the context of the United Nations First Committee should shift toward operationalizing the existing framework for responsible state behavior. This framework – comprising, inter alia, 11 norms for responsible state behavior, existing international law including the UN Charter, eight global cyber-confidence building measures including the PoC directory, as well as 10 cyber capacity-building principles – offers sufficient tools to do so. What is needed now is to give enhanced meaning to their sometimes abstract language and align them with practical, on-the-ground realities. Bringing in expert briefers and adopting more interactive formats could invigorate discussions and support bridging gaps between technical, legal, political, and diplomatic communities.

Whether states can reach final consensus on the design of a future permanent mechanism on cybersecurity under UN auspices next month remains an open question, particularly given the fragile compromises and last-minute diplomacy that have characterized the final stages of APR negotiations over the past two years. Annex C of the 2024 APR outlines a solid basis of elements for a future permanent mechanism, but key issues – particularly concerning dedicated thematic groups and stakeholder modalities – remain unresolved. A successful outcome in July will require both a high level of political will and a willingness to compromise from all states in order to agree on a clear roadmap that avoids duplication and overlaps, fosters deeper dialogue, and enables meaningful stakeholder contributions to support evidence-based policymaking on cybersecurity at the UN level.

Eugene EG Tan, Associate Research Fellow, Centre of Excellence for National Security, S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore

Most of the OEWG 2021-25 has been conducted under a geopolitical storm, making any agreement to advance the framework on responsible state behaviour in the use of ICTs a hard-won consensus. But even if the glass seems only half full, there has been progress. The OEWG has at least three annual progress reports to show for the discussions that have gone on in the group, which is no mean feat considering the geopolitical situation. Any attempt to roll back state commitments made in the previous OEWG and UNGGE has also been met with vigorous pushback by the majority of states, keeping much of the acquis intact. This is especially important at a time when international law has come under the cosh due to the actions of some states.


There has, however, been a substantial change in how discussions at the group have progressed. The longer mandate given to the OEWG has enabled the group to place more emphasis on the implementation of the framework (and reporting back to the framework), rather than being bogged down by the ideological differences that have long stalked the process. I think this action-oriented approach is useful to all stakeholders in the process – states, academics, civil society, and industries – because it enables feedback on which of the norms, capacity-building, and confidence-building measures have proven effective, and which have been less so. And this should continue into the future permanent mechanism.

What the future permanent mechanism will look like is unclear; it will almost certainly be the result of a political agreement that is acceptable to states within the UN and that, unfortunately, minimises the role of non-state stakeholders. Non-state stakeholders will have to accept the modalities that the future mechanism agrees to. But this does not mean that the role of non-state stakeholders should stop or decrease, and it is incumbent on the states that see the value of non-state stakeholder participation to ensure non-state stakeholder voices remain heard and relevant to the discussions on responsible state behaviour.

The numerous side events held on the sidelines of the OEWG are important in providing states and other stakeholders with opportunities to deepen discussions and learn from the expertise of other states and stakeholders. This dialogue and knowledge-sharing opportunity should be kept alive in the future mechanism to prevent it from being siloed into a purely diplomatic endeavour. The future stability of cyberspace lies in the hands of all stakeholders, and the process would be richer if all of them were involved – we can only hope that the collective wisdom of all states will prevail at the final session in July.

Yuliya Shlychkova, Vice President, Government Affairs and Public Policy, Kaspersky

In our view, the most significant achievement by the OEWG 2021-2025 was reaching an agreement to set up the Points of Contact Directory. This database serves as an important tool promoting practical international cooperation in countering cybersecurity threats, allowing faster information exchange between competent bodies. When reflecting on the work of the OEWG 2021-2025, we would also like to highlight the informal intersessional consultative meetings with stakeholders organized by the Chair of the OEWG, H.E. Mr. Burhan Gafoor, and thank him for his genuine interest in engaging in a direct conversation with the multi-stakeholder community.


The UN Report of the Group of Governmental Experts on Advancing Responsible State Behaviour in Cyberspace in the Context of International Security (A/76/135), which was published in 2021, suggested numerous considerations for agenda prioritisation. Among them, issues covered by norms F, G (critical infrastructure protection) and I (supply chain security) could be regarded as particularly important nowadays, as one can observe a constantly growing number of cyberthreats against critical infrastructure as well as supply chains.

We hope that a consensus on the structure and function of a future permanent mechanism for dialogue on ICT-related issues will eventually be reached. We also hope that member states will work out concrete parameters of such a mechanism.

During times of geopolitical turbulence, any mechanism that enables direct dialogue is of special importance. That is why we believe that a future permanent mechanism would be highly relevant. It would also inherit the reputation of the OEWG as one of the premier platforms for global dialogue on ICT-related issues. Our view is that, in order to increase its efficiency, the OEWG’s successor should keep open channels of communication with the private sector, which has vast expertise in the ICT sphere and could make a meaningful contribution to the depth of any future discussion. The format of these channels could vary – for example, it could be similar to the aforementioned Chair’s informal intersessional consultative meetings with stakeholders. At the same time, specific measures could be taken to make such consultations more relevant and useful for the purposes of a future permanent mechanism – in particular, by dividing interested non-government stakeholders into thematic groups based on their area of activity, and then inviting them to the specific rounds of consultations relevant to their expertise.

Martin Xie, Director of Brussels Cybersecurity Transparency Center, Huawei

The 2021-2025 UN Open-Ended Working Group (OEWG) has served as a pivotal forum for global cyber norms diplomacy, though its legacy remains decidedly mixed. Its most enduring contribution lies in institutionalizing a universal dialogue platform—successfully bringing all UN member states into the conversation while establishing essential trust-building mechanisms, most notably the global Points of Contact directory. This procedural progress has laid a valuable foundation for future international discussions. However, substantial advancements, particularly in developing new norms, often encountered obstacles due to geopolitical tensions, resulting in reaffirmations of existing norms rather than the creation of new commitments. Complex issues, including ransomware and emerging technology norms, remain largely unresolved.


Moving forward, emphasis should shift from norm-setting to practical implementation and operational cooperation. As cyber threats rapidly evolve—including sophisticated AI-driven incidents, supply chain vulnerabilities, and persistent ransomware—the international community would benefit from actionable measures aimed at mitigating these risks. The technology industry’s practical experience can undoubtedly contribute to this effort. Enhanced public-private cooperation in threat assessment, vulnerability disclosure, and incident response can meaningfully improve global cyber resilience.

The consensus to establish a flexible, Programme of Action (PoA)-style follow-up mechanism post-2025 reflects a pragmatic step towards continuous diplomatic engagement. This mechanism aims to sustain dialogue and build constructively on previous OEWG efforts. Its effectiveness will largely depend on genuine multistakeholder participation, where technical insights are appropriately considered without political bias.

In an increasingly complex geopolitical environment, the mechanism’s most immediate value may be crisis management and maintaining open channels of communication. Its role will likely remain normative, focusing on fostering trust and predictability rather than enforcing norms strictly or attributing responsibility explicitly. For the technology industry, this landscape presents both ongoing compliance complexity and, more significantly, a strategic importance for constructive, collaborative participation in safeguarding the stability and security of our interconnected digital infrastructure.


Topic-by-topic: Diplo’s experts assess OEWG achievements and what comes next

In addition to external cybersecurity experts, we asked our own team—who have tracked the OEWG process since its inception—to share their analysis. They highlight key achievements over the past five years, identify gaps in the discussions, and offer predictions on where debates may lead during the final session and beyond.

Threats

Over the past five years, the OEWG’s discussions on threats have really grown—not just in length, but in depth. As the threat landscape evolved, so did the conversations. What started as fairly general discussions have now become much more detailed and specific, with nearly a quarter of recent sessions focused on threats alone. That shift shows two things: first, how rapidly cyber risks like ransomware, state-sponsored attacks, and now even AI-driven threats are expanding; and second, that states are getting more comfortable talking openly about these issues.

One standout achievement is how much more states are leaning into cooperation. What’s interesting is that they’re not just naming threats anymore—they’re using just as much time to talk through how to tackle them together. That’s a big deal. We’ve seen more proposals for joint responses, support for capacity-building, and collective action than ever before. It’s a sign that this forum isn’t just about pointing out problems, but about working toward solutions.

There’s also been progress in how states describe and understand threats. In recent sessions, they flagged some new concerns—like the vulnerability of undersea cables and satellite communication networks. That’s a big leap in recognizing the physical infrastructure behind the internet and the risks we might not have talked about much before. States also raised alarms about cyber incidents targeting critical sectors like healthcare, aviation, and energy, and added AI to the mix, with specific concerns about the data used in machine learning and the misuse of AI to power more sophisticated attacks.

All of this points to a maturing conversation. We’re seeing a more layered understanding of threats, which makes space for more tailored, effective responses. And that’s exactly what global cooperation on cybersecurity should be aiming for: staying ahead of the curve, together.

What’s next?

As we head into the final session of the OEWG, expect threat discussions to stay front and centre—more detailed, more action-oriented, and more grounded in real-world risks. That momentum is set to carry into the UN’s future permanent mechanism, which will likely include a dedicated working group on threats. This won’t be just another talk shop. It’s being designed to take a cross-cutting, policy-driven approach—bringing in technical experts and other stakeholders to focus on concrete steps that boost resilience, protect critical infrastructure, and strengthen global stability in cyberspace.

The trend is clear: more specifics, more cooperation, more solutions. Future discussions will be about connecting policy and practice—turning shared concerns into collective action. So while the OEWG chapter might be closing, the real work on threat response is only just beginning.


Andrijana Gavrilovic
Head of Policy and Diplomatic Reporting, Diplo

Rules, norms and principles

The OEWG 2019-2025 established itself as the main space for open and inclusive talks about responsible state behaviour in cyberspace, despite a tough political environment marked by big-power rivalries, ongoing conflicts, and deep divisions. One of the key achievements was reconfirming and reinforcing the existing normative framework. States didn’t just reaffirm the 11 voluntary, non-binding norms – they also moved the conversation forward on the Chair’s proposed voluntary Norms Implementation Checklist. This checklist breaks down each norm in more detail, pointing out specific actions countries can take both nationally and internationally. It’s now attached to the Zero Draft of the OEWG Final Report.

This shift from just setting norms to focusing on how to actually put them into practice is an important step. While the OEWG helped make this shift happen, many countries have already started applying the norms on their own, which shows these principles are becoming more embedded in real-world policies compared to five years ago. Sharing experiences—especially around protecting critical infrastructure and supply chain security—is growing, showing a real push to turn these norms into action. Even though the checklist is still voluntary, most agree it’s a helpful tool for being more transparent, supporting self-checks, and boosting accountability among countries.

Another important role of the OEWG was as a place to openly discuss the future of the normative framework. The group provided a space for countries to talk about whether the current norms are enough or if new, possibly legally binding rules are needed to handle new cyber threats. Although they didn’t reach an agreement on this, the OEWG allowed different views to be shared in a fair and inclusive way, highlighting the need for ongoing dialogue and cooperation.

The OEWG also made progress in setting up a more permanent way to continue this work, while recognising the important role of regional organisations, civil society, and other non-governmental stakeholders. The Zero Draft highlights these contributions and stresses the value of consultations between meetings. Most importantly, it lays the groundwork for a permanent institutional mechanism, showing strong political will to keep international cooperation on cyber norms going beyond 2025.

What’s next?

Looking ahead, the Zero Draft notes that countries are still divided on whether new or legally binding norms are needed. While we don’t expect a final consensus at the closing session, there’s clear support for keeping the conversation going in a structured way. The Chair has suggested creating thematic working groups under a future permanent mechanism. This could be a practical way to move forward, focusing on putting norms into practice while also allowing room to revisit the rules debate in a more focused, issue-specific context. These groups could be key to driving implementation at national, regional, and sector levels, while also making sure multiple stakeholders can stay involved.

However, in an era where military instruments increasingly shape the resolution of international disputes, to what extent can these peacetime-negotiated UN cyber norms remain relevant and applicable? How can voluntary norms—developed through consensus and intended to promote transparency, restraint, and responsible behaviour—be upheld when geopolitical tensions escalate into open conflict? And how might states, but also stakeholders, continue to apply and interpret these norms to distinguish responsible conduct from destabilising behaviour, even when trust and cooperation are under strain? These questions lie at the heart of ensuring that the normative framework remains a meaningful tool for promoting international stability and accountability—especially when the rules-based order itself is being tested. 


Anastasiya Kazakova
Cyber Diplomacy Knowledge Fellow, Diplo

International law

Between 2021 and 2025, the OEWG continued to explore how international law—especially the UN Charter—applies to how states use ICTs. These discussions became more detailed over time, both in substantive and intersessional meetings. One positive trend has been the growing number of national statements on how international law applies in cyberspace. Over 100 countries have now shared their views, along with inputs from other organisations, which helped enrich the debate (see paragraph 40(f) of the Zero Draft). These contributions gave countries a chance to share their understandings of how international law applies in cyberspace and of states’ responsibilities in the use of ICTs.

States largely agreed on some key legal principles. They reaffirmed that state sovereignty and related international norms and principles still apply when it comes to ICT-related activities. They also confirmed that core principles from the UN Charter—the principle of non-intervention, the prohibition on the threat or use of force, and the peaceful settlement of disputes—remain valid and relevant in cyberspace.

What’s next?

Looking ahead to the final session, we expect some countries to push for the inclusion of international human rights law (IHRL) and international humanitarian law (IHL) in the Final Report. Even though these two areas were discussed quite a bit during this OEWG cycle, they’re currently missing from the international law section of the Zero Draft. Including them would also help ensure they’re part of the list of issues to be explored in any future discussions under the new permanent mechanism.

That said, one major divide still hasn’t been resolved: should there be a new, legally binding agreement on how international law applies to ICTs? This question continues to split the group, and that’s unlikely to change anytime soon.

The proposal to create a thematic group focused on international law within the future permanent mechanism comes with its own set of challenges. Some countries might try to use this group to start negotiating a binding legal instrument. Others will likely resist that idea, which could cause the group to stall. As with the other proposed thematic groups, it will also be important to sort out who gets to participate—technical experts, legal advisers, policy practitioners, and others. So far, it’s unclear how non-governmental stakeholders will be involved, and some states remain sceptical about their role. There’s also a risk that dividing the work into multiple thematic groups could fragment the conversation, leading to siloed discussions rather than a holistic approach. And for countries with fewer resources, it may be hard to keep up across multiple parallel discussions, potentially giving more influence to those with larger delegations and greater capacity.


Pavlina Ittelson
Executive Director, Diplo US

Capacity building

Cyber capacity-building has remained a cross-cutting pillar of the OEWG’s ICT-security agenda, sustaining momentum even as global tensions have made cooperation more difficult. Over the past five years, three key achievements stand out. 

First, the launch of the Global Roundtable on ICT Capacity-Building in New York in May 2024 marked a big step forward. It was the UN’s first-ever event focused solely on this topic, bringing together governments, industry, civil society, and academia to share experiences, highlight good practices, and discuss what’s still missing. The strong support for making this roundtable a regular fixture shows a real commitment to keeping everyone at the table and recognising the important role of non-state actors in strengthening capacity around the world. 

Second, countries have worked to set up practical tools to deliver on capacity-building. A key example is the Global ICT Security Cooperation and Capacity-Building Portal (GCSCP), which has received wide support as a neutral, government-led platform to coordinate capacity-building efforts. Alongside it, a needs-based capacity-building catalogue was also welcomed, provided both tools are connected with existing efforts to avoid duplication. Together, they’re meant to help match countries’ needs with available support.

Third, there’s been progress on the financing side. A voluntary UN trust fund was proposed to help finance projects and support participation from smaller delegations. It was broadly welcomed and is expected to complement other funding sources like the World Bank’s Cybersecurity Multi-Donor Trust Fund and ITU mechanisms.

What’s next? 

The OEWG’s final session needs to turn these ideas into something that works in practice. That includes agreeing on how often to hold the roundtables, how they’ll be run, and how they’ll connect to whatever permanent mechanism comes next. The goal is to make them more than just another meeting—to turn them into a space where real progress is made.

For the GCSCP and the capacity-building catalogue, a phased rollout is likely, starting with basic modules, a document repository, a Points of Contact directory, mapping of states’ needs, and a calendar of events. More sensitive features—like a norms-tracker proposed by Kuwait or an incident-reporting tool—are likely to be delayed, given concerns from some countries about data sharing and potential politicisation.

The trust fund will need clear criteria for who can access it, how it will be monitored, and how to avoid overlaps with existing efforts. There’s still uncertainty about whether it will attract enough consistent funding to meet the varied needs of developing countries.

Finally, there’s still no agreement on how the future permanent mechanism should handle capacity-building. Some countries want a dedicated working group, while others prefer to integrate it into all relevant discussions. The OEWG has built the basic framework—now the task is to finalise the details and make sure cyber capacity-building stays inclusive, focused on real needs, and able to adapt to future challenges.


Salome Petit-Siemens
Master’s Student in International Security, Sciences Po

CBMs

The launch of the Points of Contact (PoC) Directory in May 2024 stands, without a doubt, as the flagship achievement of the OEWG’s current mandate. Although the concept was first introduced in the 2021 OEWG report as one of the confidence-building measures (CBMs), the PoC Directory began to see real-world use by the end of 2024. Its operationalisation required active investment by the UN Secretariat, which organised the first system-wide ping test in June 2024 to verify the accuracy and responsiveness of entries, with a tabletop exercise planned for March 2025.

Another important milestone—closely tied to the PoC’s rollout—was the growing agreement on the need for a standardised communication template. At first, some states were hesitant, worried that it might make using the Directory too rigid or formal. But over time, the idea gradually gained traction. By April 2025, the Secretariat had circulated a draft template—an important step toward making communications between PoCs more consistent and efficient.

While not as visible, the globalisation of CBM practices has been arguably just as significant. Traditionally, CBM implementation was driven by regional organisations. However, 2024 witnessed a notable increase in cross-regional and multilateral initiatives, including global workshops, seminars, and training programmes. These efforts have contributed to a broader diffusion of CBM norms and practices beyond regional silos.

Yet, as we take stock of the OEWG’s progress over the past five years, one cannot ignore the gradual erosion of multistakeholder engagement in CBM discussions. As the OEWG approaches its final session, it is crucial not only to celebrate achievements but also to acknowledge areas where inclusivity and innovation have lagged behind.

What’s next?

The standardised template for PoC communication is likely to dominate discussions during the OEWG’s concluding session, especially given the Chair’s stated intention to include it in the final report.

The idea of integrating CBMs into the different thematic groups—something that’s part of the vision for a future permanent mechanism—was introduced in the last session. But most delegations seemed to prefer holding off on deep discussions until that mechanism is actually up and running. While spreading CBMs across different topics sounds good in theory, it also comes with risks. Moving these conversations out of their dedicated agenda item might risk politicising what has so far remained one of the OEWG’s most consensus-driven domains.

Ultimately, despite notable advancements, especially since 2024, the future of CBMs lies in their effective implementation, not necessarily future discussions at a global level. The next phase of development for the PoC Directory, in particular, hinges on actual use by states. Some key questions, first raised back in 2022, are still up in the air, including the precise scope of PoC functions. Only practice will provide answers to those questions.


Jenne-Louise Roellinger
PhD student in International Relations, Sciences Po

Future mechanism

One of the biggest achievements of the OEWG has been getting broad agreement that we need a regular, ongoing space to talk about international cybersecurity. Even with all the geopolitical tensions, countries have managed to keep talking about how a future mechanism could look. In fact, the OEWG has shown that dialogue—even between politically divided countries—is not just possible, but necessary, and increasingly seen as something that should be institutionalised.

Right now, there’s still no agreement on what this future mechanism should be. Some countries want to continue with the OEWG, while others are pushing for a Programme of Action (PoA). To find a middle ground, the Chair’s Zero Draft suggests setting up a permanent UN-backed body that would hold annual meetings and run several thematic working groups. It’s a compromise aimed at keeping everyone on board while ensuring the process keeps moving. This setup recognises the need for continuity, but its design must remain politically and procedurally neutral to secure broad support.

Still, it’s unclear whether this proposal addresses the concerns of non-governmental stakeholders, who were excluded from formal sessions, despite repeated calls for transparency and inclusion. Although intersessional consultations offered some space for engagement, many in civil society, the private sector, and the technical community expressed concern that their expertise and operational relevance were not adequately reflected in the negotiation process.

What’s next?

If countries can agree on a final report, and we shouldn’t rule that out—especially given recent signs of cooperation between Russia and the US at the UNGA—it will likely support the idea of a permanent institutional mechanism, though maybe without naming it outright. That would give the UNGA First Committee a chance to adopt a resolution during its 80th session later this year that formally launches the new framework. Such an outcome would mark a major step forward. We could see continued work starting in 2026 through annual meetings, thematic working groups, and inclusive consultations, as the Chair has proposed.

But if consensus doesn’t happen, the Chair might release a final report that lays out where countries agree and attaches statements from states on where they still disagree. At the moment, three main positions seem to be taking shape. One group of states backs the PoA model—basically a single-track, more inclusive process with full multistakeholder participation. Another group wants to stick with the OEWG as it is now, including the accreditation-based model for stakeholder participation agreed in 2022. A third group is pushing for a government-only, multilateral setup focused on five thematic pillars: threats, norms, international law, confidence-building, and capacity-building. These states also express strong reservations about continued stakeholder involvement in future UN cyber discussions.

These disagreements—about what the institutional setup should be, what issues to cover, and how stakeholders should be involved—highlight how politically tricky these negotiations are. And they’ll likely shape whatever comes next after the OEWG ends. If the divisions continue, we might see competing resolutions in the First Committee, which would mean a vote—and that increases the chance of fragmentation and less overall support for any future mechanism. Some delegations have already warned against this path, noting that splitting resources across multiple tracks could stretch everyone too thin. Yet in today’s fractured geopolitical landscape, the risk of a divided outcome in cyber diplomacy is not just possible—it’s increasingly likely.


Anastasiya Kazakova
Cyber Diplomacy Knowledge Fellow, Diplo


Stay tuned: Unpacking the OEWG’s impact with reports and events

Buckle up — we’re heading into the final phase of the OEWG! As the process wraps up, we’ll be tracking every development, publishing an in-depth report, and hosting events to reflect on the OEWG’s legacy, lessons learned, and what lies ahead. Whether you’re a long-time observer or just tuning in, there’s something for everyone.

Follow the final session live:
Our dedicated event page will feature AI-generated session reports, updated in near real-time to help you stay on top of the discussions as they unfold.

For seasoned negotiators:
We’ve also analysed the text of the Zero Draft and its Rev 1, which delegations will be negotiating during the final session — helping you navigate the proposals, sticking points, and emerging consensus.