ChatGPT and the rising pressure to commercialise AI in 2026

The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollout of its low-cost subscription tier, ChatGPT Go, in all countries where the model is supported. After debuting in India in August 2025 and expanding to Singapore the following month, the USD 8-per-month tier marks OpenAI’s most direct attempt yet to broaden paid access while maintaining assurances that advertising will not be embedded into ChatGPT’s prompts.

The move has been widely interpreted as a turning point in the way AI models are monetised. To date, most major AI providers have relied on a combination of external investment, strategic partnerships, and subscription offerings to sustain rapid development. Expectations of transformative breakthroughs and exponential growth have underpinned investor confidence, reinforcing what has come to be described as the AI boom.

Against this backdrop, OpenAI’s long-standing reluctance to embrace advertising takes on renewed significance. As recently as October 2024, chief executive Sam Altman described ads as a ‘last resort’ for the company’s business model. Does that position (still) reflect Altman’s confidence in alternative revenue streams, and is OpenAI simply the first company to bite the ad revenue bullet before other AI ventures have mustered the courage to do so?

ChatGPT, ads, and the integrity of AI responses

Regardless of one’s personal feelings about ad-based revenue, the numbers behind it are hard to dispute. According to Statista’s Market Insights research, the worldwide advertising market has surpassed USD 1 trillion in annual revenue. With such figures in mind, it seems like a no-brainer to integrate ads whenever and wherever possible.

Furthermore, relying solely on substantial but irregular cash injections is not a reliable way to keep the lights on for a USD 500 billion company, especially in the wake of the RAM crisis. As much as the average consumer would prefer to use digital services without ads, coming up with an alternative and well-grounded revenue stream is tantamount to financial alchemy. Advertising remains one of the few monetisation models capable of sustaining large-scale platforms without significantly raising user costs.

For ChatGPT users, however, the concern centres less on the mere presence of ads and more on how advertising incentives could reshape data use, profiling practices, and the handling of conversational inputs. OpenAI has pleaded with its users to ‘trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising’. Altman’s company has also guaranteed that user data and conversations will remain protected and will never be sold to advertisers.

Such bold statements are rarely made lightly, which suggests that Altman stands behind his company’s words and is prepared to face the repercussions should those promises be broken. Since OpenAI is privately held, shifts in investor confidence following the announcement are not visible through public market signals, unlike at publicly listed technology firms. User count therefore remains the most reliable metric for observing how ChatGPT is perceived by its target audience.

Competitive pressure behind ads in ChatGPT

Introducing ads to ChatGPT would be more than a simple change to how OpenAI makes money. Advertising can influence how the model responds to users, even if ads are not shown directly within the answers. Commercial pressure can still shape how information is framed in the answers to users’ prompts. For example, certain products or services could be described more positively than others, without clearly appearing as advertisements or endorsements.

Recommendations raise particular concern. Many users turn to ChatGPT for advice or comparisons before making important purchases. If advertising becomes part of the model’s business, it may become harder for users to tell whether a suggestion is neutral or influenced by commercial interests. Transparency is also an issue, as the influence is much harder to spot in a chat interface than on websites that clearly label ads with banners or sponsored tags.

Three runners at a starting line wearing bibs with AI company logos, symbolising competition over advertising and monetisation in AI models, initiated by ChatGPT

While these concerns are valid, competition remains the main force shaping decisions across the AI industry. No major company wants its model to fall behind rivals such as ChatGPT, Gemini, Claude, or other leading systems. Nearly all of these firms have faced public criticism or controversy at some point, forcing them to adjust their strategies and work to rebuild user trust.

The risk of public backlash has so far made companies cautious about introducing advertising. Still, this hesitation is unlikely to last forever. By moving first, OpenAI absorbs most of the initial criticism, while competitors get to stand back, watch how users respond, and adjust their plans accordingly. If advertising proves successful, others are likely to follow, drawing on OpenAI’s experience without bearing the brunt of the growing pains. To quote Arliss Howard’s character in Moneyball: ‘The first guy through the wall always gets bloody’.

ChatGPT advertising and governance challenges

Following the launch of ChatGPT Go, lawmakers and regulators may need to reconsider how existing legal safeguards apply to ad-supported LLMs. Most advertising rules are designed for websites, apps, and social media feeds, rather than systems that generate natural-language responses and present them as neutral or authoritative guidance.

The key question is: which rules should apply? Advertising in chatbots may not resemble traditional ads, muddying the waters for regulation under digital advertising rules, AI governance frameworks, or both. The uncertainty matters largely because different rules come with varying disclosure, transparency, and accountability requirements.

Disclosure presents a further challenge for regulators. On traditional websites, sponsored content is usually labelled and visually separated from editorial material. In an LLM interface such as ChatGPT, however, any commercial influence may appear in the flow of an answer itself. This makes it harder for users to distinguish content shaped by commercial considerations from neutral responses.

In the European Union, this raises questions about how existing regulatory frameworks apply. Advertising in conversational AI may intersect with rules on transparency, manipulation, and user protection under current digital and AI legislation, including the AI Act, the Digital Services Act, and the Digital Markets Act. Clarifying how these frameworks operate in practice will be important as conversational AI systems continue to evolve.

ChatGPT ads and data governance

In the context of ChatGPT, conversational interactions can be more detailed than clicks or browsing history. Prompts may include personal, professional, or sensitive information, which requires careful handling when introducing advertising models. Even without personalised targeting, conversational data still requires clear boundaries. As AI systems scale, maintaining user trust will depend on transparent data practices and strong privacy safeguards.

Then, there’s data retention. Advertising incentives can increase pressure to store conversations for longer periods or to find new ways to extract value from them. For users, this raises concerns about how their data is handled, who has access to it, and how securely it is protected. Even if OpenAI initially avoids personalised advertising, the lingering allure of monetising conversational data will remain a central issue in the discussion about advertising in ChatGPT, not a secondary one.

Clear policies around data use and retention will therefore play a central role in shaping how advertising is introduced. Limits on how long conversations are stored, how data is separated from advertising systems, and how access is controlled can help reduce user uncertainty. Transparency around these practices will be important in maintaining confidence as the platform evolves.
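
As an illustration only, the kinds of limits described above can be expressed as a simple configuration. Every field name and value below is hypothetical and does not describe OpenAI’s actual practices.

```python
# Hypothetical retention policy, for illustration only; all fields and
# values here are invented, not OpenAI's actual configuration.
RETENTION_POLICY = {
    "conversation_retention_days": 30,            # delete raw chat logs after 30 days
    "ad_systems_can_read_conversations": False,   # keep chats separated from ad systems
    "personalised_ad_targeting": False,
    "advertiser_reporting": "aggregate_only",     # advertisers see aggregates, never individual users
    "access_roles": ["trust_and_safety", "user_support"],
    "access_audit_log_retention_days": 365,
}
```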

Simultaneously, regulatory expectations and public scrutiny are likely to influence how far advertising models develop. As ChatGPT becomes more widely used across personal, professional, and institutional settings, decisions around data handling will carry broader implications. How OpenAI balances commercial sustainability with privacy and trust may ultimately shape wider norms for advertising in conversational AI.

How ChatGPT ads could reshape the AI ecosystem

We have touched on the potential drawbacks of AI models adopting an ad-revenue model, but what about the benefits? If ChatGPT successfully integrates advertising, it could set an important precedent for the broader industry. As the provider of one of the most widely used general-purpose AI systems, OpenAI’s decisions are closely watched by competitors, policymakers, and investors.

One likely effect would be the gradual normalisation of ad-funded AI assistants. If advertising proves to be a stable revenue source without triggering significant backlash, other providers may view it as a practical path to sustainability. Over time, this could shift user expectations, making advertising a standard feature rather than an exception in conversational AI tools.

Advertising may also intensify competitive pressure on open, academic, or non-profit AI models. Such systems often operate with more limited funding and may struggle to match the resources of ad-supported platforms such as ChatGPT. As a result, the gap between large commercial providers and alternative models could widen, especially in areas such as infrastructure, model performance, and distribution.

Taken together, these dynamics could strengthen the role of major AI providers as gatekeepers. Beyond controlling access to technology, they may increasingly influence which products, services, or ideas gain visibility through AI-mediated interactions. Such a concentration of influence would not be unique to AI, but it raises familiar questions about competition, diversity, and power in digital information ecosystems.

ChatGPT advertising and evolving governance frameworks

Advertising in ChatGPT is not simply a business decision. It highlights a broader shift in the way knowledge, economic incentives, and large-scale AI systems interact. As conversational AI becomes more embedded in everyday life, these developments offer an opportunity to rethink how digital services can remain both accessible and sustainable.

For policymakers and governance bodies, the focus is less on whether advertising appears and more on how it is implemented. Clear rules around transparency, accountability, and user protection can help ensure that conversational AI evolves in ways that support trust, choice, and fair competition, while allowing innovation to continue.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Inside NeurIPS 2025: How AI research is shifting focus from scale to understanding

For over three decades, the Conference on Neural Information Processing Systems (NeurIPS) has played a pivotal role in shaping the field of AI research. What appears at the conference often determines what laboratories develop, what companies implement, and what policymakers ultimately confront. In this sense, the conference functions not merely as an academic gathering, but as an early indicator of where AI is heading.

The 2025 awards reflected the field at a moment of reassessment. After years dominated by rapid scaling, larger datasets, and unprecedented computational power, researchers are increasingly questioning the consequences of that growth. This year’s most highly recognised papers did not focus on pushing benchmarks marginally higher. Instead, they examined whether today’s AI systems genuinely understand, generalise, and align with human expectations.

The following sections detail the award-winning research, highlighting the problems each study addresses, its significance, and its potential impact on the future of AI.

How one paper transformed computer vision over the course of a decade

Faster R‑CNN: Towards Real-Time Object Detection with Region Proposal Networks

One of the highlights of NeurIPS 2025 was the recognition of a paper published a decade earlier that has influenced modern computer vision. It introduced a new way of detecting objects in images that remains central to the field today.

Before this contribution, state‑of‑the‑art object detection systems relied on separate region proposal algorithms to suggest likely object locations, a step that was both slow and brittle. The authors changed that paradigm by embedding a region proposal network directly into the detection pipeline. By sharing full-image convolutional features between the proposal and detection stages, the system reduced the cost of generating proposals to almost zero while maintaining high accuracy.

The design proved highly effective on benchmark datasets and could run in near real time on contemporary GPUs, allowing fast and reliable object detection in practical settings. Its adoption paved the way for a generation of two-stage detectors and sparked a wave of follow-on research that has shaped both academic work and real-world applications, from autonomous driving to robotics.
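
For readers who want to try the idea, the sketch below loads the torchvision implementation of this detector and runs it on a dummy image; it assumes a recent torchvision release (0.13 or later) and downloads pretrained weights on first use.

```python
# Minimal usage sketch of the Faster R-CNN implementation shipped with torchvision.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)           # stand-in for a real RGB image scaled to [0, 1]

with torch.no_grad():
    prediction = model([image])[0]        # dict with 'boxes', 'labels', 'scores'

keep = prediction["scores"] > 0.8         # keep only confident detections
print(prediction["boxes"][keep], prediction["labels"][keep])
```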

The recognition of this paper, more than a decade after its publication, underscores how enduring engineering insights can lay the foundation for long-term progress in AI. Papers that continue to influence research and applications years after they first appeared offer a helpful reminder that the field values not just novelty but also lasting contribution.

Defining the true limits of learning in real time

Optimal Mistake Bounds for Transductive Online Learning

While much of NeurIPS 2025 focused on practical advances, the conference also highlighted the continued importance of theoretical research. One of the recognised studies addressed a fundamental question in a field called online learning theory, which studies how systems can make sequential predictions and improve over time as they receive feedback.

The paper considered a system known as a learner, meaning any entity that makes predictions on a series of problems, and examined how much it can improve if it has access to the problems in advance but does not yet know the correct answers for them, referred to as labels.

The study focused on a method called transductive learning, in which the learner can take into account all upcoming problems without knowing their labels, allowing it to make more accurate predictions. Through precise mathematical analysis, the authors derived tight limits on the number of mistakes a learner can make in this setting.

By measuring problem difficulty using the Littlestone dimension, they demonstrated precisely how transductive learning reduces errors compared to traditional step-by-step online learning, thereby solving a long-standing theoretical problem.

Although the contribution is theoretical, its implications are far from abstract. Many real-world systems operate in environments where data arrives continuously, but labels are scarce or delayed. Recommendation systems, fraud detection pipelines and adaptive security tools all depend on learning under uncertainty, making an understanding of fundamental performance limits essential.

The recognition of this paper at NeurIPS 2025 reflects its resolution of a long-standing open problem and its broader significance for the foundations of machine learning. At a time when AI systems are increasingly deployed in high-stakes settings, clear theoretical guarantees remain a critical safeguard against costly and irreversible errors.

How representation superposition explains why bigger models work better

Superposition Yields Robust Neural Scaling

The remarkable trend that larger language models tend to perform better has been well documented, but exactly why this happens has been less clear. Researchers explored this question by investigating the role of representation superposition, a phenomenon where a model encodes more features than its nominal dimensions would seem to allow.

By constructing a simplified model informed by real data characteristics, the authors demonstrated that when superposition is strong, loss decreases in a predictable manner as the model size increases. Under strong superposition, overlapping representations produce a loss that scales inversely with model dimension across a broad range of data distributions.

That pattern matches observations from open‑source large language models and aligns with recognised scaling laws such as those described in the Chinchilla paper.
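
As a hedged illustration, the behaviour described above can be written alongside the familiar Chinchilla-style form; the symbols and the strong-superposition expression below are generic notation chosen for this article, not the paper’s own.

```latex
% Chinchilla-style law in parameters N and training tokens D (Hoffmann et al., 2022),
% next to the inverse-width behaviour reported under strong superposition,
% where m denotes the model (representation) dimension.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad
L(m) \propto \frac{1}{m} \quad \text{(strong superposition)}
```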

The insight at the heart of the study is that overlap in representations can make large models more efficient learners. Rather than requiring each feature to occupy a unique space, models can pack information densely, allowing them to generalise better as they grow. This helps explain why simply increasing model size often yields consistent improvements in performance.

Understanding the mechanisms behind neural scaling laws is important for guiding future design choices. It provides a foundation for building more efficient models and clarifies when and why scaling may cease to deliver gains at higher capacities.

Questioning the limits of reinforcement learning in language models

Does Reinforcement Learning Really Incentivise Reasoning Capacity in LLMs Beyond the Base Model?

Reinforcement learning has been widely applied to large language models with the expectation that it can improve reasoning and decision-making. By rewarding desirable outputs, developers hope to push models beyond their base capabilities and unlock new forms of reasoning.

The study examines whether these improvements truly reflect enhanced reasoning or simply better optimisation within the models’ existing capacities. Through a systematic evaluation across tasks requiring logic, planning and multi-step inference, the authors find that reinforcement learning often does not create fundamentally new reasoning skills. Instead, the gains are largely confined to refining behaviours that the base model could already perform.

These findings carry important implications for the design and deployment of advanced language models. They suggest that current reinforcement learning techniques may be insufficient for developing models capable of independent or genuinely novel reasoning. As AI systems are increasingly tasked with complex decision-making, understanding the true limits of reinforcement learning becomes essential to prevent overestimating their capabilities.

The research encourages a more cautious and evidence-based approach, highlighting the need for new strategies if reinforcement learning is to deliver beyond incremental improvements.

Revealing a hidden lack of diversity in language model outputs

Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)

Large language models are often celebrated for their apparent creativity and flexibility. From essays to advice and storytelling, they appear capable of generating an almost limitless range of responses. Closer examination, however, reveals a more troubling pattern. Despite differences in architecture, scale and training data, many leading models tend to respond to open-ended prompts in strikingly similar ways.

The research examines this phenomenon through a carefully designed benchmark built around real-world questions that do not have a single correct answer. Rather than focusing on factual accuracy, the authors study how models behave when judgement, nuance, and interpretation are required.

Across a wide range of prompts, responses repeatedly converge on the same themes, tones and structures, producing what the authors describe as a form of collective behaviour rather than independent reasoning.

The study’s key contribution lies in its evaluation of existing assessment methods. Automated metrics commonly used to compare language models often fail to detect this convergence, even when human evaluators consistently prefer responses that display greater originality, contextual awareness, or diversity of perspective. As a result, models may appear to improve according to standard benchmarks while becoming increasingly uniform in practice.

The implications extend beyond technical evaluation. When language models are deployed at scale in education, media production, or public information services, the homogeneity of output risks narrowing the range of ideas and viewpoints presented to users. Instead of amplifying human creativity, such systems may quietly reinforce dominant narratives and suppress alternative framings.

The recognition of this paper signals a growing concern about how progress in language modelling is measured. Performance gains alone no longer suffice if they come at the cost of diversity, creativity, and meaningful variation. As language models play an increasingly important role in shaping public discourse, understanding and addressing collective behavioural patterns becomes a matter of both societal and technical importance.

Making large language models more stable by redesigning attention

Gated Attention for Large Language Models: Non-Linearity, Sparsity, and Attention-Sink-Free

As large language models grow in size and ambition, the mechanisms that govern how they process information have become a central concern. Attention, the component that allows models to weigh different parts of input, sits at the core of modern language systems.

Yet, the same mechanism that enables impressive performance can also introduce instability, inefficiency, and unexpected failure modes, particularly when models are trained on long sequences.

The research focuses on a subtle but consequential weakness in standard attention designs. In many large models, certain tokens accumulate disproportionate influence, drawing attention away from more relevant information. Over time, this behaviour can distort the way models reason across long contexts, leading to degraded performance and unpredictable outputs.

To address this problem, the authors propose a gated form of attention that enables each attention head to dynamically regulate its own contribution. By introducing non-linearity and encouraging sparsity, the approach reduces the dominance of pathological tokens and leads to more balanced information flow during training and inference.
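
A minimal sketch of the general idea is shown below: a standard attention head whose output is modulated by a learned sigmoid gate. The placement and parameterisation of the gate here are illustrative and may differ from the paper’s exact design.

```python
import torch
import torch.nn as nn

class GatedAttentionHead(nn.Module):
    """Toy single attention head with an output gate (illustrative only)."""
    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_head)
        self.k = nn.Linear(d_model, d_head)
        self.v = nn.Linear(d_model, d_head)
        self.gate = nn.Linear(d_model, d_head)   # per-token, per-channel gate
        self.scale = d_head ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v
        # The sigmoid gate lets the head scale down its own contribution,
        # which adds non-linearity and discourages 'attention sink' tokens.
        return torch.sigmoid(self.gate(x)) * out

head = GatedAttentionHead(d_model=64, d_head=16)
print(head(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 16])
```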

The results suggest that greater reliability does not necessarily require more data or larger models. Instead, careful architectural choices can significantly improve stability, efficiency, and performance. Such improvements are particularly relevant as language models are increasingly deployed in settings where long context understanding and consistent behaviour are essential.

At a time when language models are moving from experimental tools to everyday infrastructure, refinements of this kind highlight how progress can come from re-examining the foundations rather than simply scaling them further.

Understanding why models do not memorise their data

Why Diffusion Models Don’t Memorise: The Role of Implicit Dynamical Regularisation in Training

Generative AI has advanced at an extraordinary pace, with diffusion models now powering image generation, audio synthesis, and early video creation tools. A persistent concern has been that these systems might simply memorise their training data, reproducing copyrighted or sensitive material rather than producing genuinely novel content.

The study examines the training dynamics of diffusion models in detail, revealing a prolonged phase during which the models generate high-quality outputs that generalise beyond their training examples. Memorisation occurs later, and its timing grows predictably with the size of the dataset. In other words, generating new and creative outputs is not an accidental by-product but a natural stage of the learning process.

Understanding these dynamics has practical significance for both developers and regulators. It shows that memorisation is not an inevitable feature of powerful generative systems and can be managed through careful design of datasets and training procedures. As generative AI moves further into mainstream applications, knowing when and how models memorise becomes essential to ensuring trust, safety, and ethical compliance.

The findings provide a rare theoretical foundation for guiding policy and deployment decisions in a rapidly evolving landscape. By illuminating the underlying mechanisms of learning in diffusion models, the paper points to a future where generative AI can be both highly creative and responsibly controlled.

Challenging long-standing assumptions in reinforcement learning

1000 Layer Networks for Self-Supervised Reinforcement Learning: Scaling Depth Can Enable New Goal-Reaching Capabilities

Reinforcement learning has often been presented as a route to truly autonomous AI, yet practical applications frequently struggle due to fragile training processes and the need for carefully designed rewards. In a surprising twist, researchers have found that increasing the depth of neural networks alone can unlock new capabilities in self-supervised learning settings.

By constructing networks hundreds of layers deep, agents learn to pursue goals more effectively without explicit instructions or rewards. The study demonstrates that depth itself can act as a substitute for hand-crafted incentives, enabling the system to explore and optimise behaviour in ways that shallower architectures cannot.
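
The sketch below shows what such depth looks like in practice: a residual MLP stack that can be made hundreds or even a thousand blocks deep. The widths, layer counts, and normalisation choices are illustrative rather than the paper’s exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.LayerNorm(width),
            nn.Linear(width, width),
            nn.GELU(),
            nn.Linear(width, width),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # the skip connection keeps gradients usable at extreme depth

def deep_goal_network(obs_dim: int = 64, width: int = 256,
                      depth: int = 1000, out_dim: int = 8) -> nn.Module:
    layers = [nn.Linear(obs_dim, width)]
    layers += [ResidualBlock(width) for _ in range(depth)]
    layers += [nn.Linear(width, out_dim)]
    return nn.Sequential(*layers)

net = deep_goal_network()
print(sum(p.numel() for p in net.parameters()))   # parameter count at 1,000 residual blocks
```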

The findings challenge long-held assumptions about the limits of reinforcement learning and suggest a shift in focus from designing complex reward functions to designing more capable architectures. Potential applications span robotics, autonomous navigation, and simulated environments, where specifying every objective in advance is often impractical.

The paper underlines a broader lesson for AI, showing that complexity in structure can sometimes achieve what complexity in supervision cannot. For systems that must adapt and learn in dynamic environments, architectural depth may be a more powerful tool than previously appreciated.

What NeurIPS 2025 reveals about the state of AI

Taken together, research recognised at NeurIPS 2025 paints a picture of a field entering a more reflective phase. AI is no longer defined solely by the size of models. Instead, attention is turning to understanding learning dynamics, improving evaluation frameworks, and ensuring stability and reliability at scale.

The year 2025 did not simply reward technical novelty; it highlighted work that questions assumptions, exposes hidden limitations, and proposes more principled foundations for future systems. As AI becomes an increasingly influential force in society, this shift may prove to be one of the most important developments in the field’s evolution.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How AI agents are quietly rebuilding the foundations of the global economy 

AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search interest for ‘AI agents’ surged throughout the year, reflecting a broader shift in how businesses and institutions approach automation and decision-making.

Market forecasts suggest that 2026 and the years ahead will bring an even larger boom in AI agents, driven by massive global investment and expanding real-world deployment. As a result, AI agents are increasingly viewed as a foundational layer of the next phase of the digital economy.

What are AI agents, and why do they matter

AI agents are autonomous software systems designed to perceive information, make decisions, and act independently to achieve specific goals. Unlike traditional AI applications or conventional AI tools, which respond to prompts or perform single functions and often require direct supervision, AI agents are proactive and operate across multiple domains.

They can plan, adapt, and coordinate various steps across workflows, anticipating needs, prioritising tasks, and collaborating with other systems or agents without constant human intervention.

As a result, AI agents are not just incremental upgrades to existing software; they represent a fundamental change in how organisations leverage technology. By taking ownership of complex processes and decision-making workflows, AI agents enable businesses to operate at scale, adapt more rapidly to change, and unlock opportunities that were previously impossible with traditional AI tools alone. 

They fundamentally change how AI is applied in enterprise environments, moving from task automation to outcome-driven execution. 
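
To make the contrast with prompt-driven tools concrete, the sketch below shows a stripped-down perceive-plan-act loop of the kind described above. The class, method names, and the trivial planner are hypothetical and stand in for what a real system would delegate to an LLM or planning module.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy perceive-plan-act loop; all names here are illustrative."""
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, event: str) -> None:
        self.memory.append(event)                 # accumulate context

    def plan(self) -> list[str]:
        # A real agent would call an LLM or planner here; this stub just
        # derives two trivial steps from the stated goal.
        return [f"analyse: {self.goal}", f"act on: {self.goal}"]

    def act(self, step: str) -> str:
        return f"executed '{step}' using {len(self.memory)} observations"

agent = Agent(goal="reconcile this week's invoices")
agent.perceive("new invoices arrived in the shared inbox")
for step in agent.plan():
    print(agent.act(step))
```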

Behind the scenes, autonomous AI agents are moving into the core of economic systems, reshaping workflows, authority, and execution across the entire value chain.

Why AI agents became a breakout trend in 2025

Several factors converged in 2025 to push AI agents into the mainstream. Advances in large language models, improved reasoning capabilities, and lower computational costs made agent-based systems commercially viable. At the same time, enterprises faced growing pressure to increase efficiency amid economic uncertainty and labour constraints. 

The fact is that AI agents gained traction not because of their theoretical promise, but because they delivered measurable results. Companies deploying AI agents reported faster execution, lower operational overhead, and improved scalability across departments. As adoption accelerated, AI agents became one of the most visible indicators of where new technology was heading next.

Global investment is accelerating the AI agents boom

Investment trends underline the strategic importance of AI agents. Venture capital firms, technology giants, and state-backed innovation funds are allocating significant capital to agent-based platforms, orchestration frameworks, and AI infrastructure. These investments are not experimental in nature; they reflect long-term bets on autonomous systems as core business infrastructure.

Large enterprises are committing internal budgets to AI agent deployment, often integrating them directly into mission-critical operations. As funding flows into both startups and established players, competition is intensifying, further accelerating innovation and adoption across global markets. 

The AI agents market is projected to surge from approximately USD 7.92 billion in 2025 to more than USD 236 billion by 2034, a trajectory that implies a compound annual growth rate (CAGR) exceeding 45%.
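
A quick back-of-envelope check, using only the figures quoted above, confirms the implied growth rate.

```python
# Implied CAGR from the forecast cited above (nine-year horizon, 2025 to 2034).
start, end, years = 7.92e9, 236e9, 2034 - 2025
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 46%, consistent with 'exceeding 45%'
```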

Where AI agents are already being deployed at scale

Agent-based systems are no longer limited to experimental use, as adoption at scale is taking shape across various industries. In finance, AI agents manage risk analysis, fraud detection, reporting workflows, and internal compliance processes. Their ability to operate continuously and adapt to changing data makes them particularly effective in data-intensive environments.

In business operations, AI agents are transforming customer support, sales operations, procurement, and supply chain management. Autonomous agents handle inquiries, optimise pricing strategies, and coordinate logistics with minimal supervision.

One of the clearest areas of AI agent influence is software development, where teams are increasingly adopting autonomous systems for code generation, testing, debugging, and deployment. These systems reduce development cycles and allow engineers to focus on higher-level design and architecture. It is expected that by 2030, around 70% of developers will work alongside autonomous AI agents, shifting human roles toward planning, design, and orchestration.

Healthcare, research, and life sciences are also adopting AI agents for administrative automation, data analysis, and workflow optimisation, freeing professionals from repetitive tasks and improving operational efficiency.

The economic impact of AI agents on global productivity

The broader economic implications of AI agents extend far beyond individual companies. At scale, autonomous AI systems have the potential to boost global productivity by eliminating structural inefficiencies across various industries. By automating complex, multi-step processes rather than isolated tasks, AI agents compress decision timelines, lower transaction costs, and remove friction from business operations.

Unlike traditional automation, AI agents operate across entire workflows in real time. This enables organisations to respond more quickly to market changes and shifts in demand, thereby increasing operational agility and efficiency at a systemic level.

Labour markets will also evolve as agent-based systems become embedded in daily operations. Routine and administrative roles are likely to decline, while demand will rise for skills related to oversight, workflow design, governance, and strategic management of AI-driven operations. Human value is expected to shift toward planning, judgement, and coordination. 

Countries and companies that successfully integrate autonomous AI into their economic frameworks are likely to gain structural advantages in terms of efficiency and growth, while slower movers risk falling behind in an increasingly automated global economy.

AI agents and the future evolution of AI 

The momentum behind AI agents shows no signs of slowing. Forecasts indicate that adoption will expand rapidly in 2026 as costs decline, standards mature, and regulatory clarity improves. For organisations, the strategic question is no longer whether AI agents will become mainstream, but how quickly they can be integrated responsibly and effectively. 

As AI agents mature, their influence will extend beyond business operations to reshape global economic structures and societal norms. They will enable entirely new industries, redefine the value of human expertise, and accelerate innovation cycles, fundamentally altering how economies operate and how people interact with technology in daily life. 

The widespread integration of AI agents will also reshape the world we know. From labour markets to public services, education, and infrastructure, societies will experience profound shifts as humans and autonomous systems collaborate more closely.

Companies and countries that adopt these technologies strategically will gain a structural advantage, while laggards risk losing ground in both economic and social innovation.

Ultimately, AI agents are not just another technological advancement; they are becoming a foundational infrastructure for the future economy. Their autonomy, intelligence, and scalability position them to influence how value is created, work is organised, and global markets operate, marking a turning point in the evolution of AI and its role in shaping the modern world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

‘All is fair in RAM and war’: RAM price crisis in 2025 explained

If you are piecing together a new workstation or gaming rig, or just hunting for extra RAM or SSD storage, you have stumbled into the worst possible moment. With GPU prices already sky-high, the recent surge in RAM and storage costs has hit consumers hard, leaving wallets lighter and sparking fresh worries about where the tech market is headed.

On the surface, the culprit behind these soaring prices is a sudden RAM shortage. Prices for 32GB and 64GB sticks have skyrocketed by as much as 600 percent, shelves are emptying fast, and the balance between supply and demand has completely unravelled.

But blaming the sky-high prices on empty shelves only tells part of the story. Why has affordable RAM vanished? How long will this chaos last? And most intriguingly, what role does AI play in this pricing storm?

Tracing the causes of RAM pricing spikes

The US tariffs imposed on China on 1 August 2025 played a substantial role in the increase in DRAM prices. Global imports of various goods have become more costly, investments and workforce onboarding have been halted, and many businesses relying on imports have adopted a ‘wait-and-see’ approach to how they will do business going forward.

However, the worst was yet to come. On 3 December, Micron, one of the world’s leading manufacturers of data storage and computer memory components, announced its withdrawal from the consumer RAM market, citing a ‘surge in demand for memory and storage’ from AI data centres.

With Micron out of the picture, only two global suppliers of consumer RAM and high-bandwidth memory (HBM) remain: Samsung and SK Hynix. While there are countless RAM brands on the market, with Corsair, Kingston, and Crucial leading the charge, all of them rely on this small group of chipmakers for their memory chips.

Micron’s exit was likely met with barely concealed glee by Samsung and SK Hynix of South Korea, who seized the opportunity to take over Crucial’s surrendered territory and set the stage for their DRAM/HBM supply duel. The latter supplier was quick to announce the completion of its M15X semiconductor fabrication plant (fab), but warned that RAM supply constraints are likely to last until 2028 at the earliest.

Amid the ruckus, rumours surfaced that Samsung would be sunsetting its SATA SSD production, which the company quickly quashed. On the contrary, the Korean giant announced its intention to dethrone SK Hynix as the top global RAM provider, with more than 80 percent of its projected profits coming directly from Samsung Electronics.

Despite their established market shares, both enterprises were caught off guard when their main rival threw in the towel, and their production facilities are unable, at current capacity, to accommodate the resulting market void. It is nigh certain that the manufacturers will use their newly gained market dominance to their advantage, setting prices based on their profit margins and customers’ growing demand for their products. In a nutshell, they have the baton, and we must play to their tune.

AI infrastructure and the reallocation of RAM supply

Micron, deeming commodity RAM a manufacturing inconvenience, made a move that was anything but rash. In October, Samsung and SK Hynix joined forces with OpenAI to supply the AI giant with a monthly batch of 900,000 DRAM wafers. OpenAI’s push to enhance its AI infrastructure and development was presumably seen by Micron as a gauntlet thrown by its competitors, and Crucial’s parent company wasted no time in reallocating its forces to a newly opened front.

Lured by lucrative, long-term, high-volume contracts, all three memory suppliers saw AI as an opportunity to open new income streams that would not dry up for years to come. While fears of the AI bubble bursting are omnipresent and tangible, neither Samsung, SK Hynix, nor Micron are overly concerned about what the future holds for LLMs and AGI, as long as they continue to get their RAM money’s worth (literally).

AI has expanded across multiple industries, and the three competitors judged Q4 2025 the opportune time to put all their RAM eggs in one basket. AI as a business model has yet to reach profitability, but corporate investors poured more than USD 250 billion into AI in 2024 alone. Predictions for 2025 have surpassed the USD 500 billion mark, but financiers will inevitably grow more selective as the AI startup herd thins and predicted cash cows fail to deliver future profits.

To justify massive funding rounds, OpenAI, Microsoft, Google, and other major AI players need to keep their LLMs in a perpetual growth cycle by constantly expanding their memory capacity. A hyperscale AI data centre can contain tens of thousands to hundreds of thousands of GPUs, each with up to 180 gigabytes of VRAM. Multiply that by 1,134, the current number of hyperscale data centres, and it is easy to see why Micron was eager to ditch the standard consumer market for more bankable opportunities.
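
Some rough arithmetic with the figures quoted above shows the scale involved; the per-site GPU count is an illustrative upper-end assumption rather than a measured average.

```python
# Illustrative estimate based on the figures in the text above.
gpus_per_site = 100_000        # assumed upper-end count per hyperscale site
vram_per_gpu_gb = 180          # up to 180 GB of VRAM per GPU
sites = 1_134                  # cited number of hyperscale data centres

per_site_pb = gpus_per_site * vram_per_gpu_gb / 1e6    # GB -> PB
total_eb = per_site_pb * sites / 1000                  # PB -> EB
print(f"~{per_site_pb:.0f} PB of GPU memory per site, ~{total_eb:.1f} EB in total")
```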

The high demand for RAM has changed the ways manufacturers view risk and opportunity. AI infrastructure brings more volume, predictability, and stable contracts than consumer markets, especially during uncertain times and price swings. Even if some areas of AI do not meet long-term hopes, the need for memory in the near and medium term is built into data centre growth plans. For memory makers, shifting capacity to AI is a practical response to current market incentives, not just a risky bet on a single trend.

The aftermath of the RAM scarcity

The sudden price inflation and undersupply of RAM have affected more than just consumers building high-end gaming PCs and upgrading laptops. Memory components are critical to all types of devices, thereby affecting the prices of smartphones, tablets, TVs, game consoles, and many other IoT devices. To mitigate production costs and maintain profit margins, device manufacturers are tempted to offer their products with less RAM, resulting in substandard performance at the same price.

Businesses that rely on servers, cloud services, or data processing are also expected to get caught in the RAM crossfire. Higher IT costs are predicted to slow down software upgrades, digital services, and cybersecurity improvements. Every SaaS company, small or large, risks having its platforms overloaded or its customers’ data compromised.

Public institutions, such as schools, hospitals, and government agencies, will also have to bend over backwards to cover higher hardware costs due to more expensive RAM. Operating on fixed budgets allows only so much wiggle room to purchase the required software and hardware, likely leading to delays in public digital projects and the continued use of outdated electronic equipment.

Man putting up missing posters with a picture of RAM memory sticks on them.

Rising memory costs also influence innovation and competition. When basic components become more expensive, it is harder for new companies to enter the market or scale up their services. This can favour large, well-funded firms and reduce diversity in the tech ecosystem. Finally, higher RAM prices can indirectly affect digital access and inclusion. More expensive devices and services make it harder for individuals and communities to afford modern technology, widening existing digital divides.

In short, when RAM becomes scarce or expensive, the effects extend far beyond memory pricing, influencing how digital services are accessed, deployed, and maintained across the economy. While continued investment in more capable AI models is a legitimate technological goal, it also raises a practical tension.

Advanced systems deliver limited value if the devices and infrastructure most people rely on lack the memory capacity required to run them efficiently. The challenge of delivering advanced AI models and AI-powered apps to subpar devices is one that AI developers will have to take into account moving forward. After all, what good is a state-of-the-art LLM if a run-of-the-mill PC or smartphone lacks the RAM to handle it?

The road ahead for RAM supply and pricing

As mentioned earlier, some memory component manufacturers predict that the RAM shortage will remain a burr under consumers’ saddles for at least a few years. Pompous predictions of the AI bubble’s imminent bursting have mostly ended up in the ‘I’ll believe it when I see it’ archive section, across the hall from the ‘NFTs are the future of digital ownership’ district.

Should investments continue to fill the budgets of OpenAI, Perplexity, Anthropic, and the rest, they will have the resources to reinforce their R&D departments, acquire the necessary memory components, and further develop their digital infrastructure. In the long run, the technology powering AI models may become more sophisticated to the point where energy demands reach a plateau. In that case, opportunities for expansion would be limitless.

Even though one of the biggest RAM manufacturers has fully shifted to making AI infrastructure components, there is still a gap large enough to be filled by small- and medium-sized producers. Companies such as Nanya Technology from Taiwan or US-based Virtium hold a tenth of the overall market share, but they have been given the opportunity to carry Micron’s torch and maintain competitiveness in their own capacities.

The current RAM price crisis is not caused by a single event, but by the way new technologies are changing the foundations of the digital economy. As AI infrastructure takes up more of the global memory supply, higher prices and limited availability are likely to continue across consumer, business, and public-sector markets. How governments, manufacturers, and buyers respond will shape not only the cost of hardware but also how accessible and resilient digital systems remain.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

E-commerce transformation through blockchain technology

Understanding blockchain technology

Blockchain technology emerged from the 2008 Bitcoin white paper as a radical approach to storing and verifying information. A blockchain is a distributed ledger maintained across a decentralised network of computers.

Each participant holds a full or partial copy of the ledger, and each new record is grouped into a block that is linked to previous blocks through cryptographic hashing. The system ensures immutability because any alteration of a record demands the recalculation of every subsequent block.

That requirement becomes practically impossible when the ledger is distributed across thousands of nodes. Trust is achieved through consensus algorithms that validate transactions without a central authority.

The most widely used consensus mechanisms include Proof of Work and Proof of Stake. Both ensure agreement on transaction validity, although they differ significantly in computational intensity and energy consumption.
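
A toy sketch helps make the chaining and consensus ideas concrete: each block stores the hash of its predecessor, and a simplified proof-of-work search finds a nonce that satisfies a difficulty target. This is illustrative Python, not a production blockchain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents, including the previous block's hash,
    # so that changing any record invalidates every later block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(block: dict, difficulty: int = 3) -> dict:
    # Toy proof of work: search for a nonce whose hash starts with N zeros.
    block["nonce"] = 0
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    block["hash"] = block_hash(block)
    return block

chain = [mine({"index": 0, "prev": "0" * 64, "data": "genesis"})]
chain.append(mine({"index": 1, "prev": chain[0]["hash"], "data": "order #42: 2 items"}))

# Tampering with block 0 breaks the cryptographic link to block 1.
chain[0]["data"] = "order #42: 200 items"
recomputed = block_hash({k: v for k, v in chain[0].items() if k != "hash"})
print(recomputed == chain[1]["prev"])   # False: the alteration is detectable
```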

Encryption techniques and smart contracts provide additional features. Smart contracts operate as self-executing pieces of code recorded on a blockchain. Once agreed parameters are met, they automatically trigger actions such as payments or product releases.

Blockchain technology, therefore, functions not only as a secure ledger but as an autonomous execution environment for digital agreements.

The most valuable property arises from decentralisation. Instead of relying on a single organisation to safeguard information, the system spreads responsibility and ownership across the network.

Fraud becomes more difficult, data availability improves, and censorship resistance increases. These characteristics attracted early adopters in finance, although interest soon expanded into supply chain management, healthcare, digital identity systems and electronic commerce.

The transparency, traceability and programmability of blockchain technology introduced new possibilities for verifying transactions, enforcing rules, and reducing dependencies on intermediaries. These properties made it appealing for online markets that require trust between large numbers of strangers.

Overview of major global e-commerce platforms

An e-commerce platform is a digital environment that enables businesses and individuals to buy and sell goods or services online. It provides essential functions such as product listings, payment processing, inventory management, customer support and logistics integration.

Instead of handling each function independently, sellers rely on the platform’s infrastructure to reach customers, manage transactions and ensure secure and reliable delivery.

E-commerce platforms have evolved rapidly over the last two decades and now operate as global digital ecosystems. Companies such as Amazon, Alibaba, eBay, Shopify, and Mercado Libre dominate much of the global market.

Each platform has built its success on efficient logistics, secure payment systems, powerful search technologies, recommendation algorithms and extensive third-party seller networks. Yet each platform depends on centralised data systems that assign authority to the platform operator.

Amazon functions as an all-in-one marketplace, logistics provider, and cloud infrastructure supplier. Sellers rely on Amazon for product storage, fulfilment, payments, advertising and customer trust.

The centralised structure enables Amazon to deliver high service reliability and instant refunds, while granting Amazon significant control over pricing, competition and data.

Alibaba operates a two-tiered system with Alibaba.com serving business-to-business (B2B) trade and AliExpress catering to international consumers. Its platforms rely on Alipay for secure transactions and on vast networks of Chinese suppliers.

Alibaba uses AI-driven tools to manage inventory, fraud detection and personalised recommendations. The centralised model allows for strong coordination across sellers and logistics partners, although concerns often arise around counterfeits and data visibility.

eBay uses an auction and fixed-price model that supports both personal resales and professional merchants. It depends heavily on reputation systems and buyer protection schemes.

Dispute resolution and payment management were traditionally run through PayPal, later reintegrated into eBay’s own system. Although decentralised in terms of sellers, eBay remains centralised in its enforcement and decision-making.

Shopify functions as an infrastructure provider rather than a marketplace. Merchants build their own shops using Shopify’s tools, integrate third-party apps and manage independent payment gateways through Shopify Payments.

Although more decentralised on the surface, Shopify still holds the core infrastructure and retains ultimate authority over store policies.

Across all major e-commerce platforms, centralisation creates efficiency, but it also produces trust bottlenecks. Buyers depend on the platform operator to verify sellers, protect funds and manage refunds. Sellers depend on the operator for traffic, transaction processing and dispute management.

Power inequalities emerge because the platform controls data flows and marketplace rules. That environment encourages exploration of blockchain-based alternatives that seek to distribute trust, reduce intermediaries and automate verification.

How blockchain technology intersects with e-commerce

The relationship between blockchain technology and e-commerce can be divided into several major areas that reflect attempts to solve persistent problems within online marketplaces. Each area demonstrates how decentralised technology is reshaping trust and coordination instead of relying on central authorities.

Let’s dive into some examples.

Payments and digital currencies

The earliest impact arose from blockchain-based digital currencies. Platforms such as Overstock and Shopify began accepting Bitcoin and other cryptocurrencies as alternative payment methods.

Acceptance was driven by lower transaction fees compared to credit card networks, the elimination of chargebacks and faster cross-border payments. Buyers gained autonomy by being able to transact without banks, while sellers reduced exposure to fraudulent chargebacks.

Stablecoins further extended the utility of blockchain payments by reducing volatility through pegs to traditional currencies. Platforms started experimenting with stablecoin settlements that allow rapid international payments without the delays or costs of traditional banking infrastructure.

For cross-border commerce, stablecoins offer a major advantage because buyers and sellers located in different financial systems can transact directly.

While integration remains limited across mainstream platforms, blockchain wallets and cryptocurrency gateways illustrate how decentralised finance can complement e-commerce rather than replacing it.

Major challenges include regulatory uncertainty, fluctuating exchange rates, tax complexity and limited consumer familiarity.

Supply chain transparency and product authenticity

Blockchain technology provides auditable and immutable records that improve supply chain transparency. Companies such as Walmart, Carrefour and Alibaba have introduced blockchain-based tracking systems to verify product origins.

For high-value items including luxury goods, pharmaceuticals or speciality foods, authenticity is critical. A blockchain tracker records each stage of production and logistics from raw materials to retail delivery. Consumers can verify product history by scanning a QR code that accesses the ledger.

E-commerce platforms benefit because trust increases. Sellers find it easier to demonstrate the legitimacy of products, and counterfeit goods become easier to identify. Instead of depending solely on platform reputation systems, transparency is shifted to verifiable data that cannot be easily altered.

E-commerce, therefore, gains an additional trust layer through blockchain-backed provenance.

Decentralised marketplaces

A newer development involves decentralised e-commerce marketplaces built directly on blockchain networks. Platforms such as OpenBazaar, Origin Protocol, Boson Protocol and various Web3 retail experiments allow for peer-to-peer trade without central operators.

Smart contracts automate escrow, dispute handling, and payments. Buyers acquire goods by locking funds in a smart contract, sellers ship the items, and final confirmation releases the payment.
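
The flow just described can be simulated with a simple state machine; the Python below is a hypothetical, off-chain sketch of that escrow logic, not real smart-contract code (an on-chain version would typically be written in a language such as Solidity).

```python
class EscrowContract:
    """Toy escrow state machine mirroring the lock-ship-confirm flow described above."""

    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = "CREATED"

    def lock_funds(self, sender: str, amount: int) -> None:
        assert sender == self.buyer and amount == self.price and self.state == "CREATED"
        self.state = "FUNDED"            # funds now held by the contract

    def mark_shipped(self, sender: str) -> None:
        assert sender == self.seller and self.state == "FUNDED"
        self.state = "SHIPPED"

    def confirm_delivery(self, sender: str) -> dict:
        assert sender == self.buyer and self.state == "SHIPPED"
        self.state = "RELEASED"          # confirmation releases payment to the seller
        return {"pay_to": self.seller, "amount": self.price}

deal = EscrowContract(buyer="alice", seller="bob", price=100)
deal.lock_funds("alice", 100)
deal.mark_shipped("bob")
print(deal.confirm_delivery("alice"))    # {'pay_to': 'bob', 'amount': 100}
```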

The model reduces fees because no central operator takes commissions. Governance becomes community-driven through token-based voting. Control over seller data, reputation, and transactions is shared across the network instead of being held by a corporation.

Although adoption remains small compared to conventional platforms, decentralised marketplaces demonstrate how blockchain could transform current power structures in e-commerce.

Significant obstacles remain. Users must manage digital wallets, transaction costs fluctuate with network activity, and the user experience often feels less polished than that of mainstream platforms.

Without strong brand recognition, trust formation is slower. Nevertheless, the model indicates how blockchain could enable marketplaces that operate without dominant intermediaries.

Smart contracts and automated commerce

Smart contracts provide automated enforcement of agreements. Within e-commerce, they can manage warranties, subscriptions, service renewals, loyalty rewards and escrow arrangements.

Instead of relying on human moderators, refund conditions or service obligations can be encoded into smart contracts that release payment only when the conditions are met.

Automated commerce extends further when smart contracts interact with Internet of Things devices. A connected device could autonomously purchase replacement parts or consumables when necessary.

E-commerce platforms could integrate smart contract logic to handle inventory restocking, supplier payments or automated compliance checks.

The self-executing, tamper-resistant nature of smart contracts improves reliability because actions cannot be arbitrarily reversed by a platform operator. However, coding errors and rigidity create risks because smart contracts cannot easily adapt once deployed.

Governance frameworks such as decentralised autonomous organisations attempt to manage contract upgrades and dispute processes, although they remain experimental.

Tokenisation and loyalty systems

Blockchain technology also enables the tokenisation of loyalty points, vouchers and digital assets. Instead of centralised reward programmes that limit transferability, tokenised loyalty points can be traded, exchanged or used across multiple platforms.

Sellers gain marketing flexibility while buyers gain value portability.

E-commerce platforms have explored non-fungible tokens (NFTs) as digital certificates for physical goods, especially within luxury fashion, collectables and art-related markets. Instead of simple receipts, NFTs act as verifiable proof of ownership that can be transferred independently of the platform.


Although the market has experienced volatility, these experiments highlight how blockchain can merge physical and digital commerce.

Data ownership and privacy

Centralised e-commerce collects extensive customer data, including purchasing behaviour, preferences and browsing patterns. Blockchain technology introduces alternative models where users hold their own data and selectively grant access through cryptographic permissions.

Instead of businesses accumulating large datasets, consumers become the custodians of their personal information.

Self-sovereign identity solutions allow users to verify age, location or reputation without exposing full personal profiles. This approach could reduce data breaches and strengthen privacy protection.

E-commerce platforms could integrate verification without storing sensitive information. Adoption remains limited, although interest is growing as data protection regulations increase.

Assessment of combined impact

The combination of blockchain technology and e-commerce represents a gradual shift toward decentralised trust models. Traditional platforms depend on central authorities to enforce rules, settle disputes, and secure transactions.

Blockchain introduces alternatives that distribute these responsibilities across networks and algorithms. The synergy creates several potential impacts.

Traceability and transparency improve product trust. Automated contracts reduce operational complexity. Decentralised payments shorten cross-border settlement times. Tokenisation creates new commercial models where digital and physical goods are tied to verifiable ownership.

Data ownership frameworks give buyers greater control over information. Taken together, these features increase resilience and reduce reliance on single intermediaries.

However, integration also encounters notable challenges. User experience remains a critical barrier because decentralised systems often require technical understanding. Regulatory frameworks for cryptocurrency payments, smart contract disputes and decentralised marketplace governance remain uncertain.


Energy consumption concerns affect public perception, although newer blockchains use far more efficient consensus mechanisms. Large platforms may resist decentralisation because it reduces their control and revenue streams.

The most realistic pathway is hybrid rather than fully decentralised commerce. Mainstream marketplaces can incorporate blockchain features such as supply chain tracking, tokenised loyalty, and optional crypto payments while retaining central management for dispute resolution and customer support.

A combination like this delivers benefits without sacrificing the convenience of familiar interfaces.

Future outlook and complementary technologies

Blockchain technology will continue to shape e-commerce, although it will evolve alongside other technologies rather than acting alone. Several developments appear likely to influence the next decade of online commerce.

AI will integrate with blockchain to enhance fraud detection, automate dispute processes, and analyse supply chain data. Instead of opaque AI systems, blockchain can record decision rules or training data in transparent ways that improve accountability.

Internet of Things networks will use blockchain for device-to-device payments and micro-transactions. Connected appliances could automatically reorder supplies or arrange maintenance using autonomous smart contracts, a model that expands e-commerce from human-initiated purchases to machine-driven commerce.

Decentralised identity solutions will simplify verification for both buyers and sellers. Instead of uploading documents to multiple platforms, individuals will maintain portable digital identities controlled by cryptographic keys.

E-commerce platforms will verify the necessary attributes without storing personal information. Such an approach aligns with privacy regulations and reduces fraud.

Quantum-resistant cryptography will become essential as quantum computing advances. Blockchain networks will need upgrades to maintain security. E-commerce platforms built on blockchain will therefore rely on next-generation cryptographic systems.

AR and VR will integrate with blockchain through tokenised digital goods that move between immersive environments and real-world marketplaces.


Luxury brands already experiment with digital twins of physical products. That trend will only deepen as consumers spend more time in virtual spaces.

The future of e-commerce will not depend on a single technology. Instead of blockchain replacing conventional systems, it will act as a foundational layer that strengthens transparency, trust, and automation.

E-commerce platforms will selectively adopt decentralised features that complement their existing operations while retaining user-friendly interfaces and established logistics networks.

In conclusion, blockchain has reshaped expectations of trust within digital environments. Its decentralised architecture, immutability, and programmability have introduced new opportunities for secure payments, supply chain verification, automated agreements and data sovereignty.

E-commerce platforms recognised the potential and began integrating blockchain features to improve authenticity, reduce fraud and expand payment options. The combination offers a powerful pathway toward more transparent and efficient commerce.

Yet challenges remain, as user experience, regulation and scalability continue to influence adoption. The future of online transactions is likely to be hybrid, with blockchain supporting specific components of e-commerce rather than replacing established models.

Complementary technologies, including AI, IoT, decentralised identity and quantum-resistant security, will reinforce these developments. E-commerce will evolve toward ecosystems where automation, transparency and user empowerment become standard expectations.

Blockchain technology will play a central role in that transformation, although its greatest impact will emerge through careful integration rather than radical disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.


Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025


The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign


The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’


Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights


AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. That requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, accountability and governance


Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation


The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The fundamentals of AI

AI is no longer a concept confined to research laboratories or science fiction novels. From smartphones that recognise faces to virtual assistants that understand speech and recommendation engines that predict what we want to watch next, AI has become embedded in everyday life.

Behind this transformation lies a set of core principles, or the fundamentals of AI, which explain how machines learn, adapt, and perform tasks once considered the exclusive domain of humans.

At the heart of modern AI are neural networks, mathematical structures inspired by the human brain. They organise computation into layers of interconnected nodes, or artificial neurones, which process information and learn from examples.

Unlike traditional programming, where every rule must be explicitly defined, neural networks can identify patterns in data autonomously. The ability to learn and improve with experience underpins the astonishing capabilities of today’s AI.

Multi-layer perceptron networks

A neural network consists of multiple layers of interconnected neurons, not just a simple input and output layer. Each layer processes the data it receives from the previous layer, gradually building hierarchical representations.

In image recognition, early layers detect simple features, such as edges or textures, middle layers combine these into shapes, and later layers identify full objects, like faces or cars. In natural language processing, lower layers capture letters or words, while higher layers recognise grammar, context, and meaning.

Without multiple layers, the network would be shallow, limited in its ability to learn, and unable to handle complex tasks. Multi-layer, or deep, networks are what enable AI to perform sophisticated functions like autonomous driving, medical diagnosis, and language translation.

How mathematics drives artificial intelligence


The foundation of AI is mathematics. Without linear algebra, calculus, probability, and optimisation, modern AI systems would not exist. These disciplines allow machines to represent, manipulate, and learn from vast quantities of data.

Linear algebra allows inputs and outputs to be represented as vectors and matrices. Each layer of a neural network transforms these data structures, performing calculations that detect patterns in data, such as shapes in images or relationships between words in a sentence.

Calculus, especially the study of derivatives, is used to measure how small changes in a network’s parameters, called weights, affect its predictions. This information is critical for optimisation, which is the process of adjusting these weights to improve the network’s accuracy.

The loss function measures the difference between the network’s prediction and the actual outcome. It essentially tells the network how wrong it is. For example, the mean squared error measures the average squared difference between the predicted and actual values, while cross-entropy is used in classification tasks to measure how well the predicted probabilities match the correct categories.
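
In standard textbook notation, with y the true value or label, ŷ the prediction, n the number of examples and C the number of classes, the two losses mentioned above are usually written as:

```latex
\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^2,
\qquad
\text{Cross-entropy} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c}
```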

Gradient descent is an algorithm that uses the derivative of the loss function to determine the direction and magnitude of changes to each weight. By moving weights gradually in the direction that reduces the loss, the network learns over time to make more accurate predictions.
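
A minimal sketch of that loop, fitting a single weight w to toy data by minimising the mean squared error (the data points and learning rate are arbitrary illustrative values):

```python
# Gradient descent on a one-parameter model: predict y as w * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # generated by the "true" weight w = 2

w = 0.0                            # initial guess
learning_rate = 0.01

for step in range(200):
    # Derivative of the MSE loss with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad      # step against the gradient

print(round(w, 3))                 # approaches 2.0 as the loss shrinks
```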

Backpropagation is a method that makes learning in multi-layer neural networks feasible. Before its introduction in the 1980s, training networks with more than one or two layers was extremely difficult, as it was hard to determine how errors in the output layer should influence the earlier weights. Backpropagation systematically propagates this error information backwards through the network.

At its core, it applies the chain rule of calculus to compute gradients, indicating how much each weight contributes to the overall error and the direction it should be adjusted. Combined with gradient descent, this iterative process allows networks to learn hierarchical patterns, from simple edges in images to complex objects, or from letters to complete sentences.
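
A compact way to see the chain rule at work is a two-layer network written directly in NumPy; the data, layer sizes and learning rate below are arbitrary choices for the sketch rather than anything used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 8 examples, 3 features, 1 target.
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

# Two-layer network: linear -> ReLU -> linear.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr = 0.05

for _ in range(500):
    # Forward pass.
    h_pre = X @ W1                      # pre-activation of the hidden layer
    h = np.maximum(h_pre, 0.0)          # ReLU non-linearity
    y_hat = h @ W2                      # network prediction
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule propagates the error to each weight.
    d_y_hat = 2 * (y_hat - y) / len(X)  # dLoss/dy_hat
    d_W2 = h.T @ d_y_hat                # dLoss/dW2
    d_h = d_y_hat @ W2.T                # error flowing back into h
    d_h_pre = d_h * (h_pre > 0)         # ReLU derivative acts as a gate
    d_W1 = X.T @ d_h_pre                # dLoss/dW1

    # Gradient descent update.
    W1 -= lr * d_W1
    W2 -= lr * d_W2

print(round(loss, 4))                   # final training loss, reduced over the iterations
```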

Backpropagation has transformed neural networks from shallow, limited models into deep, powerful tools capable of learning sophisticated patterns and making human-like predictions.

Why neural network architecture matters


The arrangement of layers in a network, or its architecture, determines its ability to solve specific problems.

Activation functions introduce non-linearity, giving networks the ability to map complex, high-dimensional data. ReLU (Rectified Linear Unit), one of the most widely used activation functions, mitigates the vanishing-gradient problem and enables deep networks to learn efficiently.

Convolutional neural networks (CNNs) excel in image and video analysis. By applying filters across images, CNNs detect local patterns like edges and textures. Pooling layers reduce spatial dimensions, making computation faster while preserving essential features. Local connectivity ensures neurones process only relevant input regions, mimicking human vision.
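
The filtering step can be made concrete with a single hand-written vertical-edge kernel slid across a tiny synthetic image. This is a deliberately simplified sketch, since real CNNs learn their filters from data rather than having them specified by hand.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small kernel over the image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image": dark left half, bright right half.
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)

# Vertical edge detector: responds where brightness changes left to right.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

response = convolve2d(image, kernel)
print(response)   # large values only around the vertical edge
```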

Recurrent neural networks (RNNs) and their variants, such as LSTMs and GRUs, process sequential data like text or audio. They maintain a hidden state that acts as memory, capturing dependencies over time, a crucial feature for tasks such as speech recognition or predictive text.

Transformer revolution and attention mechanisms

In 2017, AI research took a major leap with the introduction of Transformer models. Unlike RNNs, which process sequences step by step, transformers use attention mechanisms to evaluate all parts of the input simultaneously.

The attention mechanism calculates which elements in a sequence are most relevant to each output. Using linear algebra, it compares query, key, and value vectors to assign weights, highlighting important information and suppressing irrelevant details.
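
A bare-bones NumPy version of scaled dot-product attention illustrates that comparison of queries, keys and values; the sequence length and vector size are arbitrary, and production implementations add learned projections, multiple heads and masking.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: weight the values V by how well
    each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # query-key similarity
    weights = softmax(scores)              # one weight per input position
    return weights @ V                     # weighted mix of the values

rng = np.random.default_rng(1)
seq_len, d_model = 4, 8                    # 4 tokens, 8-dimensional vectors
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

print(attention(Q, K, V).shape)            # (4, 8): one context vector per token
```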

That approach enabled the creation of large language models (LLMs) such as GPT and BERT, capable of generating coherent text, answering questions, and translating languages with unprecedented accuracy.

Transformers reshaped natural language processing and have since expanded into areas such as computer vision, multimodal AI, and reinforcement learning. Their ability to capture long-range context efficiently illustrates the power of combining deep learning fundamentals with innovative architectures.

How does AI learn and generalise?


One of the central challenges in AI is ensuring that networks learn meaningful patterns from data rather than simply memorising individual examples. The ability to generalise and apply knowledge learnt from one dataset to new, unseen situations is what allows AI to function reliably in the real world.

Supervised learning is the most widely used approach, where networks are trained on labelled datasets, with each input paired with a known output. The model learns to map inputs to outputs by minimising the difference between its predictions and the actual results.

Applications include image classification, where the system distinguishes cats from dogs, or speech recognition, where spoken words are mapped to text. The accuracy of supervised learning depends heavily on the quality and quantity of labelled data, making data curation critical for reliable performance.

Unsupervised learning, by contrast, works with unlabelled data and seeks to uncover hidden structures and patterns. Clustering algorithms, for instance, can group similar customer profiles in marketing, while dimensionality reduction techniques simplify complex datasets for analysis.

The paradigm enables organisations to detect anomalies, segment populations, and make informed decisions from raw data without explicit guidance.

Reinforcement learning allows machines to learn by interacting with an environment and receiving feedback in the form of rewards or penalties. Unlike supervised learning, the system is not told the correct action in advance; it discovers optimal strategies through trial and error.

That approach powers innovations in robotics, autonomous vehicles, and game-playing AI, enabling systems to learn long-term strategies rather than memorise specific moves.

A persistent challenge across all learning paradigms is overfitting, which occurs when a network performs exceptionally well on training data but fails to generalise to new examples. Techniques such as dropout, which temporarily deactivates random neurons during training, encourage the network to develop robust, redundant representations.

Similarly, weight decay penalises excessively large parameter values, preventing the model from relying too heavily on specific features. Achieving proper generalisation is crucial for real-world applications: self-driving cars must correctly interpret new road conditions, and medical AI systems must accurately assess patients with cases differing from the training dataset.
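
Both regularisation techniques are simple to express directly. The snippet below shows how a dropout mask and an L2 (weight-decay) penalty would typically enter a training step; the arrays and coefficients are made-up values, not tied to any particular framework.

```python
import numpy as np

rng = np.random.default_rng(42)

activations = rng.normal(size=(4, 5))    # hidden-layer outputs for a mini-batch
weights = rng.normal(size=(5, 3))

# Dropout: randomly silence neurons during training and rescale the rest,
# so the network cannot rely on any single unit.
keep_prob = 0.8
mask = rng.random(activations.shape) < keep_prob
dropped = activations * mask / keep_prob

# Weight decay: add an L2 penalty so the loss discourages large weights.
data_loss = 1.23                         # placeholder for the ordinary loss value
decay_strength = 1e-4
total_loss = data_loss + decay_strength * np.sum(weights ** 2)

print(dropped.shape, round(total_loss, 4))
```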

By learning patterns rather than memorising data, AI systems become adaptable, reliable, and capable of making informed decisions in dynamic environments.

The black box problem and explainable AI (XAI)


Deep learning and other advanced AI technologies rely on multi-layer neural networks that can process vast amounts of data. While these networks achieve remarkable accuracy in image recognition, language translation, and decision-making, their complexity often makes it extremely difficult to explain why a particular prediction was made. That phenomenon is known as the black box problem.

Though these systems are built on rigorous mathematical principles, the interactions between millions or billions of parameters create outputs that are not immediately interpretable. For instance, a healthcare AI might recommend a specific diagnosis, but without interpretability tools, doctors may not know what features influenced that decision.

Similarly, in finance or law, opaque models can inadvertently perpetuate biases or produce unfair outcomes.

Explainable AI (XAI) seeks to address this challenge. By combining the mathematical and structural fundamentals of AI with transparency techniques, XAI allows users to trace predictions back to input features, assess confidence, and identify potential errors or biases.

In practice, this means doctors can verify AI-assisted diagnoses, financial institutions can audit credit decisions, and policymakers can ensure fair and accountable deployment of AI.

Understanding the black box problem is therefore essential not only for developers but for society at large. It bridges the gap between cutting-edge AI capabilities and trustworthy, responsible applications, ensuring that as AI systems become more sophisticated, they remain interpretable, safe, and beneficial.

Data and computational power


Modern AI depends on two critical ingredients: large, high-quality datasets and powerful computational resources. Data provides the raw material for learning, allowing networks to identify patterns and generalise to new situations.

Image recognition systems, for example, require millions of annotated photographs to reliably distinguish objects, while language models like GPT are trained on billions of words from books, articles, and web content, enabling them to generate coherent, contextually aware text.

High-performance computation is equally essential. Training deep neural networks involves performing trillions of calculations, a task far beyond the capacity of conventional processors.

Graphics Processing Units (GPUs) and specialised AI accelerators enable parallel processing, reducing training times from months to days or even hours. This computational power enables real-time applications, such as self-driving cars interpreting sensor data instantly, recommendation engines adjusting content dynamically, and medical AI systems analysing thousands of scans within moments.

The combination of abundant data and fast computation also brings practical challenges. Collecting representative datasets requires significant effort and careful curation to avoid bias, while training large models consumes substantial energy.

Researchers are exploring more efficient architectures and optimisation techniques to reduce environmental impact without sacrificing performance.

The future of AI


The foundations of AI continue to evolve rapidly, driven by advances in algorithms, data availability, and computational power. Researchers are exploring more efficient architectures, capable of learning from smaller datasets while maintaining high performance.

For instance, self-supervised learning allows a model to learn from unlabelled data by predicting missing information within the data itself, while few-shot learning enables a system to understand a new task from just a handful of examples. These methods reduce the need for enormous annotated datasets and make AI development faster and more resource-efficient.

Transformer models, powered by attention mechanisms, remain central to natural language processing. The attention mechanism allows the network to focus on the most relevant parts of the input when making predictions.

For example, when translating a sentence, it helps the model determine which words are most important for understanding the meaning. Transformers have enabled the creation of large language models like GPT and BERT, capable of summarising documents, answering questions, and generating coherent text.

Beyond language, multimodal AI systems are emerging, combining text, images, and audio to understand context across multiple sources. For instance, a medical AI system might analyse a patient’s scan while simultaneously reading their clinical notes, providing more accurate and context-aware insights.

Ethics, transparency, and accountability remain critical. Explainable AI (XAI) techniques help humans understand why a model made a particular decision, which is essential in fields like healthcare, finance, and law. Detecting bias, evaluating fairness, and ensuring that models behave responsibly are becoming standard parts of AI development.

Energy efficiency and sustainability are also priorities, as training large models consumes significant computational resources.

Ultimately, the future of AI will be shaped by models that are not only more capable but also more efficient, interpretable, and responsible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gaming and Esports: A new frontier in diplomacy

From playrooms to global arenas

Video games have long since outgrown their roots as niche entertainment. What used to be arcades and casual play is now a global cultural phenomenon.

A recent systematic review of research argues that video games play a powerful role in cultural transmission. They allow players worldwide, regardless of language or origin, to absorb cultural, social, and historical references embedded in game narratives.

Importantly, games are not passive media. Their interactivity gives them unique persuasive power. As one academic work on ‘gaming in diplomacy’ puts it, video games stand out among cultural media because they allow for procedural rhetoric, meaning that players learn values, norms, and worldviews not just by watching or hearing, but by actively engaging with them.

As such, gaming has the capacity to transcend borders, languages and traditional media’s constraints. For many young players around the world, including those in developing regions, gaming has become a shared language, a means of connecting across cultures, geographies, and generations.

Esports as soft power and public diplomacy

Nation branding, cultural export and global influence

Several countries have recognised the diplomatic potential of esports and gaming. Waseda University researchers emphasise that esports can be systematically used to project soft power, engaging foreign publics, shaping favourable perceptions, and building cultural influence, rather than being mere entertainment or economic ventures.

A 2025 study shows that the use of ‘game-based cultural diplomacy’ is increasingly common. Countries such as Japan, Poland, and China are utilising video games and associated media to promote their national identity, cultural narratives, and values.

An article about the games Honor of Kings and Black Myth: Wukong describes how the state-backed Chinese gaming industry incorporates traditional Chinese cultural elements (myth, history, aesthetics) into globally consumed games, thereby reaching millions internationally and strengthening China’s soft-power footprint.

For governments seeking to diversify their diplomatic tools beyond traditional media (film, music, diplomatic campaigns), esports offers persistent, globally accessible, and youth-oriented engagement, particularly as global demographics shift toward younger, digital-native generations.

Esports diplomacy in practice: People-to-people exchange

Cross-cultural understanding, community, identity

In bilateral diplomacy, esports has already been proposed as a vehicle for ‘people-to-people exchange.’ For example, a commentary on US–South Korea relations argues that annual esports competitions between the two countries’ top players could serve as a modern, interactive form of public diplomacy, fostering mutual cultural exchange beyond the formalities of traditional diplomacy.

On the grassroots level, esports communities, being global, multilingual and cross-cultural, foster friendships, shared experiences, and identities that transcend geography. That dynamic democratises participation, because you don’t need diplomatic credentials or state backing. All you need is access and engagement.

Some analyses emphasise how digital competition and community-building in esports ‘bridge cultural differences, foster international collaboration and cultural diversity through shared language and competition.’


From a theoretical perspective, academic proposals to apply frameworks from sports diplomacy to esports offer a path to sustainable and legitimate global engagement through gaming, provided regulatory, equality and governance challenges are addressed.

Tensions & challenges: Not just a soft-power fairy tale

Risk of ‘techno-nationalism’ and propaganda

The use of video games in diplomacy is not purely benign. Some scholars warn of ‘digital nationalism’ or ‘techno-nationalism,’ where games become tools for propagating state narratives, shaping collective memory, and exporting political or ideological agendas.

The embedding of cultural or historical motifs in games (mythology, national heritage, symbols) can blur the line between cultural exchange and political messaging. While this can foster appreciation for a culture, it may also serve more strategic geopolitical or soft-power aims.

From a governance perspective, the rapid growth of esports raises legitimate concerns about inequality (access, digital divide), regulation, legitimacy of representation (who speaks for ‘a nation’), and possible exploitation of youth. Some academic literature argues that without proper regulation and institutionalisation, the ‘esports diplomacy gold rush’ risks being unsustainable.

Why this matters and what it means for Africa and the Global South

For regions such as Africa, gaming and esports represent not only recreation but potential platforms for youth empowerment, cultural expression, and international engagement. Even where traditional media or sports infrastructure may be limited, digital games can provide global reach and visibility. That aligns with the idea of ‘future pathways’ for youth, which includes creativity, community-building and cross-cultural exchange.

Because games can transcend language and geography, they offer a unique medium for diaspora communities, marginalised youth, and underrepresented cultures to project identity, share stories, and engage with global audiences. In that sense, gaming democratises cultural participation and soft-power capabilities.

On a geopolitical level, as game-based diplomacy becomes more recognised, Global South countries may leverage it to assert soft power, attract investment, and promote tourism or cultural heritage, provided they build local capacity (developers, esports infrastructure, regulation) and ensure inclusive access.

Gaming & esports as emerging diplomatic infrastructure

The trend suggests that video games and esports are steadily being institutionalised as instruments of digital diplomacy, soft power, and cultural diplomacy, not only by private companies, but increasingly by states and policymakers. Academic bibliometric analysis shows a growing number of studies (2015–2024) dedicated to ‘game-based cultural diplomacy.’

As esports ecosystems grow, with tournaments, global fan bases and expanding cultural exports, we may see a shift from occasional ‘cultural-diplomacy events’ to sustained, long-term strategies employing gaming to shape international perceptions, build transnational communities, and influence foreign publics.


However, for this potential to be realised responsibly, key challenges must be addressed. Those challenges include inequality of access (digital divide), transparency over cultural or political messaging, fair regulation, and safeguarding inclusivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum money meets Bitcoin: Building unforgeable digital currency

Quantum money might sound like science fiction, yet it is rapidly emerging as one of the most compelling frontiers in modern digital finance. Initially a theoretical concept, it was far ahead of the technology of its time, making practical implementation impossible. Today, thanks to breakthroughs in quantum computing and quantum communication, scientists are reviving the idea, investigating how the principles of quantum physics could finally enable unforgeable quantum digital money. 

Comparisons between blockchain and quantum money are frequent and, on the surface, appear logical, yet can these two visions of new-generation cash genuinely be measured by the same yardstick? 

Origins of quantum money 

Quantum money was first proposed by physicist Stephen Wiesner in the late 1960s. Wiesner envisioned a system in which each banknote would carry quantum particles encoded in specific states, known only to the issuing bank, making the notes inherently secure. 

Due to the peculiarities of quantum mechanics, these quantum states could not be copied, offering a level of security fundamentally impossible with classical systems. At the time, however, quantum technologies were purely theoretical, and devices capable of creating, storing, and accurately measuring delicate quantum states simply did not exist. 

For decades, Wiesner’s idea remained a fascinating thought experiment. Today, the rise of functional quantum computers, advanced photonic systems, and reliable quantum communication networks is breathing new life into the concept, allowing researchers to explore practical applications of quantum money in ways that were once unimaginable.

A new battle for the digital throne is emerging as quantum money shifts from theory to possibility, challenging whether Bitcoin’s decentralised strength can hold its ground in a future shaped by quantum technology.

The no-cloning theorem: The physics that makes quantum money impossible to forge

At the heart of quantum money lies the no-cloning theorem, a cornerstone of quantum mechanics. The principle establishes that it is physically impossible to create an exact copy of an unknown quantum state. Any attempt to measure a quantum state inevitably alters it, meaning that copying or scanning a quantum banknote destroys the very information that ensures its authenticity. 
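
The textbook argument behind the theorem is short enough to sketch. Suppose a single operation U could copy any unknown state onto a blank register; because quantum operations preserve inner products, comparing its action on two different states forces a contradiction:

```latex
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle,
\qquad
U\bigl(|\phi\rangle \otimes |0\rangle\bigr) = |\phi\rangle \otimes |\phi\rangle
\;\;\Longrightarrow\;\;
\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2}
\;\;\Longrightarrow\;\;
\langle\psi|\phi\rangle \in \{0, 1\}
```

The final condition holds only for identical or orthogonal states, so no device can duplicate an arbitrary, unknown quantum state, which is precisely the guarantee a quantum banknote relies on.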

This unique property makes quantum money exceptionally secure: unlike blockchain, which relies on cryptographic algorithms and distributed consensus, quantum money derives its protection directly from the laws of physics. In theory, a quantum banknote cannot be counterfeited, even by an attacker with unlimited computing resources, which is why quantum money is considered one of the most promising approaches to unforgeable digital currency.


How quantum money works in theory

Quantum money schemes are typically divided into two main types: private and public. 

In private quantum money systems, a central authority, such as a bank, creates quantum banknotes and remains the only entity capable of verifying them. Each note carries a classical serial number alongside a set of quantum states known solely to the issuer. The primary advantage of this approach is its absolute immunity to counterfeiting, as no one outside the issuing institution can replicate the banknote. However, such systems are fully centralised and rely entirely on the security and infrastructure of the issuing bank, which inherently limits scalability and accessibility.

Public quantum money, by contrast, pursues a more ambitious goal: allowing anyone to verify a quantum banknote without consulting a central authority. Developing this level of decentralisation has proven exceptionally difficult. Numerous proposed schemes have been broken by researchers who have managed to extract information without destroying the quantum states. Despite these challenges, public quantum money remains a major focus of quantum cryptography research, with scientists actively pursuing secure and scalable methods for open verification. 

Beyond theoretical appeal, quantum money faces substantial practical hurdles. Quantum states are inherently fragile and susceptible to decoherence, meaning they can lose their information when interacting with the surrounding environment. 

Maintaining stable quantum states demands highly specialised and costly equipment, including photonic processors, quantum memory modules, and sophisticated quantum error-correction systems. Any error or loss could render a quantum banknote completely worthless, and no reliable method currently exists to store these states over long periods. In essence, the concept of quantum money is groundbreaking, yet real-world implementation requires technological advances that are not yet mature enough for mass adoption. 


Bitcoin solves the duplication problem differently

While quantum money relies on the laws of physics to prevent counterfeiting, Bitcoin tackles the duplication problem through cryptography and distributed consensus. Each transaction is verified across thousands of nodes, and SHA-256 hash functions secure the blockchain against double spending without the need for a central authority. 

Unlike elliptic curve cryptography, which could eventually be vulnerable to large-scale quantum attacks, SHA-256 has proven remarkably resilient; even quantum algorithms such as Grover’s offer only a marginal advantage, reducing the effective search space from 2^256 to 2^128, which is still far beyond any realistic brute-force attempt.
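
The avalanche behaviour that makes such a search hopeless is easy to demonstrate with Python’s standard hashlib module; the transaction strings below are arbitrary examples.

```python
import hashlib

block_a = b"send 1 BTC to alice; nonce=41"
block_b = b"send 1 BTC to alice; nonce=42"   # a single character changed

print(hashlib.sha256(block_a).hexdigest())
print(hashlib.sha256(block_b).hexdigest())
# The two digests share no recognisable structure, so finding an input with a
# desired hash means exhaustive search over roughly 2^256 possibilities, or
# roughly 2^128 even with Grover's quadratic quantum speed-up.
```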

Bitcoin’s security does not hinge on unbreakable mathematics alone but on a combination of decentralisation, network verification, and robust cryptographic design. Many experts therefore consider Bitcoin effectively quantum-proof, with most of the dramatic threats predicted from quantum computers unlikely to materialise in practice.

Software-based and globally accessible, Bitcoin operates independently of specialised hardware, allowing users to send, receive, and verify value anywhere in the world without the fragility and complexity inherent in quantum systems. Furthermore, the network can evolve to adopt post-quantum cryptographic algorithms, ensuring long-term resilience, making Bitcoin arguably the most battle-hardened digital financial instrument in existence. 


Could quantum money be a threat to Bitcoin?

In reality, quantum money and Bitcoin address entirely different challenges, meaning the former is unlikely to replace the latter. Bitcoin operates as a global, decentralised monetary network with established economic rules and governance, while quantum money represents a technological approach to issuing physically unforgeable tokens. Bitcoin is not designed to be physically unclonable; its strength lies in verifiability, decentralisation, and network-wide trust.

However, SHA-256, the hashing algorithm that underpins Bitcoin mining and block creation, remains highly resistant to quantum threats. Quantum computers achieve only a quadratic speed-up through Grover’s algorithm, which is insufficient to break SHA-256 in practical terms. Bitcoin also retains the ability to adopt post-quantum cryptographic standards as they mature, whereas quantum money is limited by rigid physical constraints that are far harder to update.

Quantum money also remains too fragile, complex, and costly for widespread use. Its realistic applications are limited to state institutions, military networks, or highly secure financial environments rather than everyday payments. Bitcoin, by contrast, already benefits from extensive global infrastructure, strong market adoption, and deep liquidity, making it far more practical for daily transactions and long-term digital value transfer. 


Where quantum money and blockchain could coexist

Although fundamentally different, quantum money and blockchain technologies have the potential to complement one another in meaningful ways. Quantum key distribution could strengthen the security of blockchain networks by protecting communication channels from advanced attacks, while quantum-generated randomness may enhance cryptographic protocols used in decentralised systems. 

Researchers have also explored the idea of using ‘quantum tokens’ to provide an additional privacy layer within specialised blockchain applications. Both technologies ultimately aim to deliver secure and verifiable forms of digital value. Their coexistence may offer the most resilient future framework for digital finance, combining the physics-based protection of quantum money with the decentralisation, transparency, and global reach of blockchain technology. 


Quantum physics meets blockchain for the future of secure currency

Quantum money remains a remarkable concept, originally decades ahead of its time, and now revived by advances in quantum computing and quantum communication. Although it promises theoretically unforgeable digital currency, its fragility, technical complexity, and demanding infrastructure make it impractical for large-scale use. 

Bitcoin, by contrast, stands as the most resilient and widely adopted model of decentralised digital money, supported by a mature global network and robust cryptographic foundations. 

Quantum money and Bitcoin stand as twin engines of a new digital finance era, where quantum physics is reshaping value creation, powering blockchain innovation, and driving next-generation fintech solutions for secure and resilient digital currency. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What the Cloudflare outage taught us: Tracing the failures that shaped the internet of today

The internet has become part of almost everything we do. It helps us work, stay in touch with friends and family, buy things, plan trips, and handle tasks that would have felt impossible until recently. Most people cannot imagine getting through the day without it.

But there is a hidden cost to all this convenience. Most of the time, online services run smoothly, with countless systems working together in the background. Every now and then, though, a key cog slips out of place.

When that happens, the effects can spread fast, taking down apps, websites, and even entire industries within minutes. These moments remind us how much we rely on digital services, and how quickly everything can unravel when something goes wrong. It raises an uncomfortable question. Is digital dependence worth the convenience, or are we building a house of cards that could collapse, pulling us back into reality?

Warning shots of the dot-com era and the infancy of cloud services

In its early years, the internet saw several major malfunctions that disrupted key online services. Incidents like the Morris worm in 1988, which crashed about 10 percent of all internet-connected systems, and the 1996 AOL outage that left six million users offline, revealed how unprepared the early infrastructure was for growing digital demand.

A decade later, the weaknesses were still clear. In 2007, Skype, then with over 270 million users, went down for nearly two days after a surge in logins triggered by a Windows update overwhelmed its network. Since video calls were still in their early days, the impact was not as severe, and most users simply waited it out, postponing chats with friends and family until the issue was fixed.

As the dot-com era faded and the 2010s began, the shift to cloud computing introduced a new kind of fragility. When Amazon’s EC2 and EBS systems in the US-East region went down in 2011, the outage took down services like Reddit, Quora, and IMDb for days, exposing how quickly failures in shared infrastructure can cascade.

A year later, GoDaddy’s DNS failure took millions of websites offline, while large-scale Gmail disruptions affected users around the world, early signs that the cloud’s growing influence came with increasingly high stakes.

By the mid-2010s, it was clear that the internet had evolved from a patchwork of standalone services to a heavily interconnected ecosystem. When cloud or DNS providers stumbled, their failures rippled simultaneously across countless platforms. The move to centralised infrastructure made development faster and more accessible, but it also marked the beginning of an era where a single glitch could shake the entire web.

Centralised infrastructure and the age of cascading failures

The late 2000s and early 2010s saw a rapid rise in internet use, with nearly 2 billion people worldwide online. As access grew, more businesses moved into the digital space, offering e-commerce, social platforms, and new forms of online entertainment to a quickly expanding audience.

With so much activity shifting online, the foundation beneath these services became increasingly important, and increasingly centralised, setting the stage for outages that could ripple far beyond a single website or app.

The next major hit came in 2016, when a massive DDoS attack crippled major websites across the USA and Europe. Platforms like Netflix, Reddit, Twitter, and CNN were suddenly unreachable, not because they were directly targeted, but because Dyn, a major DNS provider, had been overwhelmed.

The attack used the Mirai botnet malware to hijack hundreds of thousands of insecure IoT devices and flood Dyn’s servers with traffic. It was one of the clearest demonstrations yet that knocking out a single infrastructure provider could take down major parts of the internet in one stroke.

In 2017, another major outage occurred, with Amazon at the centre once again. On 28 February, the company’s Simple Storage Service (S3) went down for about 4 hours, disrupting access across a large part of the US-EAST-1 region. While investigating a slowdown in the billing system, an Amazon engineer accidentally entered a typo in a command, taking more servers offline than intended.

That small error was enough to knock out services like Slack, Quora, Coursera, Expedia and countless other websites that relied on S3 for storage or media delivery. The financial impact was substantial; S&P 500 companies alone were estimated to have lost roughly 150 million dollars during the outage.

Amazon quickly published a clear explanation and apology, but transparency could not undo the economic damage, nor soften yet another sudden reminder that a single mistake in a centralised system could ripple across the entire web.

Outages in the roaring 2020s

The S3 incident made one thing clear. Outages were no longer just about a single platform going dark. As more services leaned on shared infrastructure, even small missteps could take down enormous parts of the internet. And this fragility did not stop at cloud storage.

Over the next few years, attention shifted to another layer of the online ecosystem: content delivery networks and edge providers that most people had never heard of but that nearly every website depended on.

The 2020s opened with one of the most memorable outages to date. On 4 October 2021, Facebook and its sister platforms, Instagram, WhatsApp, and Messenger, vanished from the internet for nearly 7 hours after a faulty BGP configuration effectively removed the company’s services from the global routing table.

Millions of users flocked to other platforms to vent their frustration, overwhelming Twitter, Telegram, Discord, and Signal’s servers and causing performance issues across the board. It was a rare moment when a single company’s outage sent measurable shockwaves across the entire social media ecosystem.

But what happens when outages hit industries far more essential than social media? In 2023, the Federal Aviation Administration was forced to delay more than 10,000 flights, the first nationwide grounding of air traffic since the aftermath of September 11.

A corrupted database file brought the agency’s Notice to Air Missions (NOTAM) system to a standstill, leaving pilots without critical safety updates and forcing the entire aviation network to pause. The incident sent airline stocks dipping and dealt another blow to public confidence, showing just how disruptive a single technical failure can be when it strikes at the heart of critical infrastructure.

Outages that defined 2025

The year 2025 saw an unprecedented wave of outages, with server overloads, software glitches and coding errors disrupting services across the globe. The Microsoft 365 suite outage in January, the Southwest Airlines and FAA synchronisation failure in April, and the Meta messaging blackout in July all stood out for their scale and impact.

But the most disruptive failures were still to come. In October, Amazon Web Services suffered a major outage in its US-East-1 region, knocking out everything from social apps to banking services and reminding the world that a fault in a single cloud region can ripple across thousands of platforms.

Just weeks later, the Cloudflare November outage became the defining digital breakdown of the year. A logic bug inside its bot management system triggered a cascading collapse that took down social networks, AI tools, gaming platforms, transit systems and countless everyday websites in minutes. It was the clearest sign yet that when core infrastructure falters, the impact is immediate, global and largely unavoidable.

And yet, we continue to place more weight on these shared foundations, trusting they will hold because they usually do. Every outage, whether caused by a typo, a corrupted file, or a misconfigured update, exposes how quickly things can fall apart when one key piece gives way.

Going forward, resilience needs to matter as much as innovation. That means reducing single points of failure, improving transparency, and designing systems that can fail without dragging everything down. The more clearly we see the fragility of the digital ecosystem, the better equipped we are to strengthen it.

Outages will keep happening, and no amount of engineering can promise perfect uptime. But acknowledging the cracks is the first step toward reinforcing what we’ve built — and making sure the next slipped cog does not bring the whole machine to a stop.

The smoke and mirrors of the digital infrastructure

The internet is far from destined to collapse, but resilience can no longer be an afterthought. Redundancy, decentralisation and smarter oversight need to be part of the discussion, not just for engineers, but for policymakers as well.

Outages do not just interrupt our routines. They reveal the systems we have quietly built our lives around. Each failure shows how deeply intertwined our digital world has become, and how fast everything can stop when a single piece gives way.

Will we learn enough from each one to build a digital ecosystem that can absorb the next shock instead of amplifying it? Only time will tell.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!