How AI agents are quietly rebuilding the foundations of the global economy 

AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search interest for ‘AI agents’ surged throughout the year, reflecting a broader shift in how businesses and institutions approach automation and decision-making.

Market forecasts suggest that 2026 and the years ahead will bring an even larger boom in AI agents, driven by massive global investment and expanding real-world deployment. As a result, AI agents are increasingly viewed as a foundational layer of the next phase of the digital economy.

What are AI agents, and why do they matter

AI agents are autonomous software systems designed to perceive information, make decisions, and act independently to achieve specific goals. Unlike conventional AI tools, which respond to prompts or perform single functions and often require direct supervision, AI agents are proactive and operate across multiple domains.

They can plan, adapt, and coordinate various steps across workflows, anticipating needs, prioritising tasks, and collaborating with other systems or agents without constant human intervention.

As a result, AI agents are not just incremental upgrades to existing software; they represent a fundamental change in how organisations leverage technology. By taking ownership of complex processes and decision-making workflows, AI agents enable businesses to operate at scale, adapt more rapidly to change, and unlock opportunities that were previously impossible with traditional AI tools alone. 

They fundamentally change how AI is applied in enterprise environments, moving from task automation to outcome-driven execution. 

Behind the scenes, autonomous AI agents are moving into the core of economic systems, reshaping workflows, authority, and execution across the entire value chain.

Why AI agents became a breakout trend in 2025

Several factors converged in 2025 to push AI agents into the mainstream. Advances in large language models, improved reasoning capabilities, and lower computational costs made agent-based systems commercially viable. At the same time, enterprises faced growing pressure to increase efficiency amid economic uncertainty and labour constraints. 

AI agents gained traction not because of their theoretical promise, but because they delivered measurable results. Companies deploying them reported faster execution, lower operational overhead, and improved scalability across departments. As adoption accelerated, AI agents became one of the clearest indicators of where technology was heading next.

Global investment is accelerating the AI agents boom

Investment trends underline the strategic importance of AI agents. Venture capital firms, technology giants, and state-backed innovation funds are allocating significant capital to agent-based platforms, orchestration frameworks, and AI infrastructure. These investments are not experimental in nature; they reflect long-term bets on autonomous systems as core business infrastructure.

Large enterprises are committing internal budgets to AI agent deployment, often integrating them directly into mission-critical operations. As funding flows into both startups and established players, competition is intensifying, further accelerating innovation and adoption across global markets. 

The AI agents market is projected to surge from approximately $7.92 billion in 2025 to surpass $236 billion by 2034, driven by a compound annual growth rate (CAGR) exceeding 45%.
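Taking the cited forecast at face value, the implied growth rate can be sanity-checked with simple compound-growth arithmetic (the dollar figures are the article's, not independently verified):

```python
# Sanity check of the cited forecast: does ~$7.92bn (2025) -> ~$236bn (2034)
# really imply a CAGR above 45%? Figures come from the article itself.
start, end, years = 7.92, 236.0, 9  # billions USD; 2034 - 2025 = 9 years

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 46%, consistent with the claim
```

The arithmetic confirms that the two endpoints and the stated growth rate above 45% are mutually consistent.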

Where AI agents are already being deployed at scale

Agent-based systems are no longer limited to experimental use, as adoption at scale is taking shape across various industries. In finance, AI agents manage risk analysis, fraud detection, reporting workflows, and internal compliance processes. Their ability to operate continuously and adapt to changing data makes them particularly effective in data-intensive environments.

In business operations, AI agents are transforming customer support, sales operations, procurement, and supply chain management. Autonomous agents handle inquiries, optimise pricing strategies, and coordinate logistics with minimal supervision.

One of the clearest areas of AI agent influence is software development, where teams are increasingly adopting autonomous systems for code generation, testing, debugging, and deployment. These systems reduce development cycles and allow engineers to focus on higher-level design and architecture. It is expected that by 2030, around 70% of developers will work alongside autonomous AI agents, shifting human roles toward planning, design, and orchestration.

Healthcare, research, and life sciences are also adopting AI agents for administrative automation, data analysis, and workflow optimisation, freeing professionals from repetitive tasks and improving operational efficiency.

The economic impact of AI agents on global productivity

The broader economic implications of AI agents extend far beyond individual companies. At scale, autonomous AI systems have the potential to boost global productivity by eliminating structural inefficiencies across various industries. By automating complex, multi-step processes rather than isolated tasks, AI agents compress decision timelines, lower transaction costs, and remove friction from business operations.

Unlike traditional automation, AI agents operate across entire workflows in real time. This enables organisations to respond more quickly to market changes and shifts in demand, increasing operational agility and efficiency at a systemic level.

Labour markets will also evolve as agent-based systems become embedded in daily operations. Routine and administrative roles are likely to decline, while demand will rise for skills related to oversight, workflow design, governance, and strategic management of AI-driven operations. Human value is expected to shift toward planning, judgement, and coordination. 

Countries and companies that successfully integrate autonomous AI into their economic frameworks are likely to gain structural advantages in efficiency and growth, while slower adopters risk being left behind in an increasingly automated global economy.

AI agents and the future evolution of AI 

The momentum behind AI agents shows no signs of slowing. Forecasts indicate that adoption will expand rapidly in 2026 as costs decline, standards mature, and regulatory clarity improves. For organisations, the strategic question is no longer whether AI agents will become mainstream, but how quickly they can be integrated responsibly and effectively. 

As AI agents mature, their influence will extend beyond business operations to reshape global economic structures and societal norms. They will enable entirely new industries, redefine the value of human expertise, and accelerate innovation cycles, fundamentally altering how economies operate and how people interact with technology in daily life. 

The widespread integration of AI agents will also reshape the world we know. From labour markets to public services, education, and infrastructure, societies will experience profound shifts as humans and autonomous systems collaborate more closely.

Companies and countries that adopt these technologies strategically will gain a structural advantage, while slower movers risk falling behind in both economic and social innovation.

Ultimately, AI agents are not just another technological advancement; they are becoming a foundational infrastructure for the future economy. Their autonomy, intelligence, and scalability position them to influence how value is created, work is organised, and global markets operate, marking a turning point in the evolution of AI and its role in shaping the modern world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

‘All is fair in RAM and war’: RAM price crisis in 2025 explained

If you are piecing together a new workstation or gaming rig, or just hunting for extra RAM or SSD storage, you have stumbled into the worst possible moment. With GPU prices already sky-high, the recent surge in RAM and storage costs has hit consumers hard, leaving wallets lighter and sparking fresh worries about where the tech market is headed.

On the surface, the culprit behind these soaring prices is a sudden RAM shortage. Prices for 32GB and 64GB sticks have skyrocketed by as much as 600 percent, shelves are emptying fast, and the balance between supply and demand has completely unravelled.

But blaming the sky-high prices on empty shelves only tells part of the story. Why has affordable RAM vanished? How long will this chaos last? And most intriguingly, what role does AI play in this pricing storm?

Tracing the causes of RAM pricing spikes

The US tariffs imposed on China on 1 August 2025 played a substantial role in the rise in DRAM prices. Global imports of various goods have become more costly, investments and workforce onboarding have been put on hold, and many businesses relying on imports have adopted a ‘wait-and-see’ approach to their future operations.

However, the worst was yet to come. On 3 December, Micron, one of the world’s leading manufacturers of data storage and computer memory components, announced its withdrawal from the consumer RAM market, citing a ‘surge in demand for memory and storage’ driven by AI data centres.

With Micron out of the picture, only two global consumer RAM and high-bandwidth memory (HBM) manufacturers remain: Samsung and SK Hynix. While there are countless RAM brands on the market, with Corsair, Kingston, and Crucial leading the charge, all of them rely on this handful of chipmakers for memory chips.

Micron’s exit was likely met with barely concealed glee by Samsung and SK Hynix of South Korea, who seized the opportunity to take over Crucial’s surrendered territory and set the stage for their DRAM/HBM supply duel. The latter supplier was quick to announce the completion of its M15X semiconductor fabrication plant (fab), but warned that RAM supply constraints are likely to last until 2028 at the earliest.

Amid the ruckus, rumours surfaced that Samsung would be sunsetting its SATA SSD production, which the company quickly quashed. Instead, the Korean giant announced its intention to dethrone SK Hynix as the top global RAM provider, with memory projected to generate more than 80 percent of Samsung Electronics’ profits.

Despite their established market shares, both enterprises were caught off guard when their main rival threw in the towel, and their production facilities cannot, at current capacity, fill the resulting market void. It is all but certain that the manufacturers will use their newly gained market dominance to their advantage, setting prices based on their profit margins and customers’ growing demand. In a nutshell, they hold the baton, and we must dance to their tune.

AI infrastructure and the reallocation of RAM supply

Micron, deeming commodity RAM a manufacturing inconvenience, made a move that was anything but rash. In October, Samsung and SK Hynix joined forces with OpenAI to supply the AI giant with a monthly batch of 900,000 DRAM wafers. OpenAI’s push to expand its AI infrastructure was presumably seen by Micron as a gauntlet thrown down by its competitors, and Crucial’s parent company lost no time in allocating its forces to the newly opened front.

Lured by lucrative, long-term, high-volume contracts, all three memory suppliers saw AI as an opportunity to open new income streams that would not dry up for years to come. While fears of the AI bubble bursting are omnipresent and tangible, neither Samsung, SK Hynix, nor Micron are overly concerned about what the future holds for LLMs and AGI, as long as they continue to get their RAM money’s worth (literally).

AI has expanded across multiple industries, and the three competitors judged Q4 2025 the opportune moment to put all their RAM eggs in one basket. AI as a business model has yet to reach profitability, but corporate investors poured more than USD 250 billion into AI in 2024 alone. Predictions for 2025 have surpassed the USD 500 billion mark, yet financiers will inevitably grow more selective as the AI startup herd thins and predicted cash cows fail to deliver.

To justify massive funding rounds, OpenAI, Microsoft, Google, and other major AI players need to keep their LLMs in a perpetual growth cycle by constantly expanding their memory capacity. A hyperscale AI data centre can contain tens of thousands to hundreds of thousands of GPUs, each with up to 180 gigabytes of VRAM. Multiply that by 1,134, the current number of hyperscale data centres, and it is easy to see why Micron was eager to ditch the standard consumer market for more bankable opportunities.
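Using only the article’s own figures, a back-of-envelope calculation illustrates the scale involved (the per-site GPU count is taken at the upper end of the quoted range, so this is an illustrative ceiling, not a market statistic):

```python
# Back-of-envelope estimate of aggregate GPU memory demand, using only
# the figures quoted in the article. Purely illustrative.
gpus_per_site = 100_000    # "hundreds of thousands of GPUs" (upper range)
vram_per_gpu_gb = 180      # "up to 180 gigabytes of VRAM"
hyperscale_sites = 1_134   # cited count of hyperscale data centres

total_vram_gb = gpus_per_site * vram_per_gpu_gb * hyperscale_sites
print(f"{total_vram_gb / 1e9:.1f} exabytes of GPU memory")  # 20.4 exabytes
```

Even as a rough ceiling, tens of exabytes of high-bandwidth memory dwarfs consumer demand, which helps explain why suppliers reallocated capacity.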

The high demand for RAM has changed the ways manufacturers view risk and opportunity. AI infrastructure brings more volume, predictability, and stable contracts than consumer markets, especially during uncertain times and price swings. Even if some areas of AI do not meet long-term hopes, the need for memory in the near and medium term is built into data centre growth plans. For memory makers, shifting capacity to AI is a practical response to current market incentives, not just a risky bet on a single trend.

The aftermath of the RAM scarcity

The sudden price inflation and undersupply of RAM have affected more than just consumers building high-end gaming PCs and upgrading laptops. Memory components are critical to all types of devices, thereby affecting the prices of smartphones, tablets, TVs, game consoles, and many other IoT devices. To mitigate production costs and maintain profit margins, device manufacturers are tempted to offer their products with less RAM, resulting in substandard performance at the same price.

Businesses that rely on servers, cloud services, or data processing are also expected to get caught in the RAM crossfire. Higher IT costs are predicted to slow down software upgrades, digital services, and cybersecurity improvements. Every SaaS company, small or large, risks having its platforms overloaded or its customers’ data compromised.

Public institutions, such as schools, hospitals, and government agencies, will also have to bend over backwards to cover higher hardware costs driven by more expensive RAM. Operating on fixed budgets leaves only so much wiggle room to purchase the required software and hardware, likely leading to delays in public digital projects and continued reliance on outdated electronic equipment.

Rising memory costs also influence innovation and competition. When basic components become more expensive, it is harder for new companies to enter the market or scale up their services. This can favour large, well-funded firms and reduce diversity in the tech ecosystem. Finally, higher RAM prices can indirectly affect digital access and inclusion. More expensive devices and services make it harder for individuals and communities to afford modern technology, widening existing digital divides.

In short, when RAM becomes scarce or expensive, the effects extend far beyond memory pricing, influencing how digital services are accessed, deployed, and maintained across the economy. While continued investment in more capable AI models is a legitimate technological goal, it also raises a practical tension.

Advanced systems deliver limited value if the devices and infrastructure most people rely on lack the memory capacity required to run them efficiently. The challenge of delivering advanced AI models and AI-powered apps to subpar devices is one that AI developers will have to take into account moving forward. After all, what good is a state-of-the-art LLM if a run-of-the-mill PC or smartphone lacks the RAM to handle it?

The road ahead for RAM supply and pricing

As mentioned earlier, some memory component manufacturers predict that the RAM shortage will remain a burr under consumers’ saddles for at least a few years. Pompous predictions of the AI bubble’s imminent bursting have mostly ended up in the ‘I’ll believe it when I see it’ archive section, across the hall from the ‘NFTs are the future of digital ownership’ district.

Should investments continue to fill the budgets of OpenAI, Perplexity, Anthropic, and the rest, they will have the resources to reinforce their R&D departments, acquire the necessary memory components, and further develop their digital infrastructure. In the long run, the technology powering AI models may become sophisticated enough for energy demands to plateau, in which case opportunities for expansion would widen considerably.

Even though one of the biggest RAM manufacturers has fully shifted to making AI infrastructure components, there is still a gap large enough to be filled by small- and medium-sized producers. Companies such as Nanya Technology from Taiwan or US-based Virtium hold a tenth of the overall market share, but they have been given the opportunity to carry Micron’s torch and maintain competitiveness in their own capacities.

The current RAM price crisis is not caused by a single event, but by the way new technologies are changing the foundations of the digital economy. As AI infrastructure takes up more of the global memory supply, higher prices and limited availability are likely to continue across consumer, business, and public-sector markets. How governments, manufacturers, and buyers respond will shape not only the cost of hardware but also how accessible and resilient digital systems remain.

E-commerce transformation through blockchain technology

Understanding blockchain technology

Blockchain technology emerged from the 2008 Bitcoin white paper as a radical approach to storing and verifying information. A blockchain is a distributed ledger maintained across a decentralised network of computers.

Each participant holds a full or partial copy of the ledger, and each new record is grouped into a block that is linked to previous blocks through cryptographic hashing. The system ensures immutability because any alteration of a record demands the recalculation of every subsequent block.

That requirement becomes practically impossible when the ledger is distributed across thousands of nodes. Trust is achieved through consensus algorithms that validate transactions without a central authority.
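The hash-linking described above can be sketched in a few lines. This is a toy illustration of why tampering breaks the chain, not a production design (real blockchains add consensus, signatures, and Merkle trees on top):

```python
# Minimal illustration of why altering one record breaks a hash-linked chain.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash commits to its predecessor's hash plus its own data.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a three-block chain from a genesis placeholder.
chain = []
prev = "0" * 64
for record in ["alice pays bob 5", "bob pays carol 2", "carol pays dan 1"]:
    prev = block_hash(prev, record)
    chain.append({"data": record, "hash": prev})

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["data"]) != block["hash"]:
            return False  # tampering detected
        prev = block["hash"]
    return True

print(verify(chain))                      # True: the chain is intact
chain[0]["data"] = "alice pays bob 500"   # tamper with an early record
print(verify(chain))                      # False: later hashes no longer match
```

Changing one early record invalidates every subsequent block, which is exactly the recalculation burden the paragraph above describes.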

The most widely used consensus mechanisms include Proof of Work and Proof of Stake. Both ensure agreement on transaction validity, although they differ significantly in computational intensity and energy consumption.
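The computational asymmetry behind Proof of Work, expensive to produce but cheap to verify, can be shown with a toy example (the difficulty here is trivially low; real networks require vastly more work):

```python
# Toy Proof of Work: find a nonce so the hash starts with n zero hex digits.
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # the "work" is the brute-force search for this nonce
        nonce += 1

nonce = mine("block payload")
# Verification is one hash, versus the many attempts needed to find the nonce.
check = hashlib.sha256(f"block payload:{nonce}".encode()).hexdigest()
print(check.startswith("0000"))  # True
```

Proof of Stake replaces this energy-intensive search with validator selection weighted by staked funds, which is why its energy footprint is far smaller.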

Encryption techniques and smart contracts provide additional features. Smart contracts operate as self-executing pieces of code recorded on a blockchain. Once agreed parameters are met, they automatically trigger actions such as payments or product releases.

Blockchain technology, therefore, functions not only as a secure ledger but as an autonomous execution environment for digital agreements.

The valuable property arises from decentralisation. Instead of relying on a single organisation to safeguard information, the system spreads responsibility and ownership across the network.

Fraud becomes more difficult, data availability improves, and censorship resistance increases. These characteristics attracted early adopters in finance, although interest soon expanded into supply chain management, healthcare, digital identity systems and electronic commerce.

The transparency, traceability and programmability of blockchain technology introduced new possibilities for verifying transactions, enforcing rules, and reducing dependencies on intermediaries. These properties made it appealing for online markets that require trust between large numbers of strangers.

Overview of major global e-commerce platforms

An e-commerce platform is a digital environment that enables businesses and individuals to buy and sell goods or services online. It provides essential functions such as product listings, payment processing, inventory management, customer support and logistics integration.

Instead of handling each function independently, sellers rely on the platform’s infrastructure to reach customers, manage transactions and ensure secure and reliable delivery.

E-commerce platforms have evolved rapidly over the last two decades and now operate as global digital ecosystems. Companies such as Amazon, Alibaba, eBay, Shopify, and Mercado Libre dominate much of the global market.

Each platform has built its success on efficient logistics, secure payment systems, powerful search technologies, recommendation algorithms and extensive third-party seller networks. Yet each platform depends on centralised data systems that assign authority to the platform operator.

Amazon functions as an all-in-one marketplace, logistics provider, and cloud infrastructure supplier. Sellers rely on Amazon for product storage, fulfilment, payments, advertising and customer trust.

The centralised structure enables Amazon to deliver high service reliability and instant refunds, while granting Amazon significant control over pricing, competition and data.

Alibaba operates a two-tiered system with Alibaba.com serving business-to-business (B2B) trade and AliExpress catering to international consumers. Its platforms rely on Alipay for secure transactions and on vast networks of Chinese suppliers.

Alibaba uses AI-driven tools to manage inventory, detect fraud and personalise recommendations. The centralised model allows strong coordination across sellers and logistics partners, although concerns often arise around counterfeits and data visibility.

eBay uses an auction and fixed-price model that supports both personal resales and professional merchants. It depends heavily on reputation systems and buyer protection schemes.

Dispute resolution and payment management were traditionally run through PayPal, later replaced by eBay’s own managed payments system. Although decentralised in terms of sellers, eBay remains centralised in its enforcement and decision-making.

Shopify functions as an infrastructure provider rather than a marketplace. Merchants build their own shops using Shopify’s tools, integrate third-party apps and manage independent payment gateways through Shopify Payments.

Although more decentralised on the surface, Shopify still holds the core infrastructure and retains ultimate authority over store policies.

Across all major e-commerce platforms, centralisation creates efficiency, but it also produces trust bottlenecks. Buyers depend on the platform operator to verify sellers, protect funds and manage refunds. Sellers depend on the operator for traffic, transaction processing and dispute management.

Power inequalities emerge because the platform controls data flows and marketplace rules. That environment encourages exploration of blockchain-based alternatives that seek to distribute trust, reduce intermediaries and automate verification.

How blockchain technology intersects with e-commerce

The relationship between blockchain technology and e-commerce can be divided into several major areas that reflect attempts to solve persistent problems within online marketplaces. Each area demonstrates how decentralised technology is reshaping trust and coordination instead of relying on central authorities.

Let’s dive into some examples.

Payments and digital currencies

The earliest impact arose from blockchain-based digital currencies. Platforms such as Overstock and Shopify began accepting Bitcoin and other cryptocurrencies as alternative payment methods.

Acceptance was driven by lower transaction fees compared to credit card networks, the elimination of chargebacks and faster cross-border payments. Buyers gained autonomy by being able to transact without banks, while sellers reduced exposure to fraudulent chargebacks.

Stablecoins further extended the utility of blockchain payments by reducing volatility through pegs to traditional currencies. Platforms started experimenting with stablecoin settlements that allow rapid international payments without the delays or costs of traditional banking infrastructure.

For cross-border commerce, stablecoins offer a major advantage because buyers and sellers located in different financial systems can transact directly.

While integration remains limited across mainstream platforms, blockchain wallets and cryptocurrency gateways illustrate how decentralised finance can complement e-commerce rather than replacing it.

Major challenges include regulatory uncertainty, fluctuating exchange rates, tax complexity and limited consumer familiarity.

Supply chain transparency and product authenticity

Blockchain technology provides auditable and immutable records that improve supply chain transparency. Companies such as Walmart, Carrefour and Alibaba have introduced blockchain-based tracking systems to verify product origins.

For high-value items including luxury goods, pharmaceuticals or speciality foods, authenticity is critical. A blockchain tracker records each stage of production and logistics from raw materials to retail delivery. Consumers can verify product history by scanning a QR code that accesses the ledger.

E-commerce platforms benefit because trust increases. Sellers find it easier to demonstrate the legitimacy of products, and counterfeit goods become easier to identify. Instead of depending solely on platform reputation systems, transparency is shifted to verifiable data that cannot be easily altered.

E-commerce, therefore, gains an additional trust layer through blockchain-backed provenance.

Decentralised marketplaces

A newer development involves decentralised e-commerce marketplaces built directly on blockchain networks. Platforms such as OpenBazaar, Origin Protocol, Boson Protocol and various Web3 retail experiments allow for peer-to-peer trade without central operators.

Smart contracts automate escrow, dispute handling, and payments. Buyers acquire goods by locking funds in a smart contract, sellers ship items and final confirmation releases payment.
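The escrow flow just described can be sketched as a small state machine. In practice this logic would live in an on-chain contract (e.g. Solidity); the class and account names below are hypothetical illustrations, not any platform’s actual API:

```python
# Illustrative escrow state machine mirroring the flow above: buyer locks
# funds, seller ships, buyer confirmation releases payment. All names are
# hypothetical; a production version would be an on-chain smart contract.
class Escrow:
    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = "created"

    def lock_funds(self, amount: int):
        assert self.state == "created" and amount == self.price
        self.state = "funded"          # funds now held by the contract

    def mark_shipped(self):
        assert self.state == "funded"  # seller ships only once funds are locked
        self.state = "shipped"

    def confirm_delivery(self) -> str:
        assert self.state == "shipped"
        self.state = "released"        # confirmation releases payment
        return f"{self.price} released to {self.seller}"

deal = Escrow("buyer.eth", "seller.eth", 100)
deal.lock_funds(100)
deal.mark_shipped()
print(deal.confirm_delivery())  # 100 released to seller.eth
```

The state guards are the key design point: each transition is only valid from one prior state, so neither party can skip a step or reverse a completed payment.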

The model reduces fees because no central operator takes commissions. Governance becomes community-driven through token-based voting. Control over seller data, reputation, and transactions is shared across the network instead of being held by a corporation.

Although adoption remains small compared to conventional platforms, decentralised marketplaces demonstrate how blockchain could transform current power structures in e-commerce.

Significant obstacles remain. Users must manage digital wallets, transaction costs fluctuate with network activity, and the user experience often feels less polished than that of mainstream platforms.

Without strong brand recognition, trust formation is slower. Nevertheless, the model indicates how blockchain could enable marketplaces that operate without dominant intermediaries.

Smart contracts and automated commerce

Smart contracts provide automated enforcement of agreements. Within e-commerce, they can manage warranties, subscriptions, service renewals, loyalty rewards and escrow arrangements.

Instead of relying on human moderators, refund conditions or service obligations can be encoded into smart contracts that release payment only when the conditions are met.

Automated commerce extends further when smart contracts interact with Internet of Things devices. A connected device could autonomously purchase replacement parts or consumables when necessary.

E-commerce platforms could integrate smart contract logic to handle inventory restocking, supplier payments or automated compliance checks.

The deterministic, tamper-resistant nature of smart contracts improves reliability because actions cannot be arbitrarily reversed by a platform operator. However, coding errors and rigidity create risks, because smart contracts cannot easily be changed once deployed.

Governance frameworks such as decentralised autonomous organisations attempt to manage contract upgrades and dispute processes, although they remain experimental.

Tokenisation and loyalty systems

Blockchain technology also enables the tokenisation of loyalty points, vouchers and digital assets. Instead of centralised reward programmes that limit transferability, tokenised loyalty points can be traded, exchanged or used across multiple platforms.

Sellers gain marketing flexibility while buyers gain value portability.

E-commerce platforms have explored non-fungible tokens (NFTs) as digital certificates for physical goods, especially within luxury fashion, collectables and art-related markets. Instead of simple receipts, NFTs act as verifiable proof of ownership that can be transferred independently of the platform.

Although the market has experienced volatility, the experiment highlighted how blockchain can merge physical and digital commerce.

Data ownership and privacy

Centralised e-commerce collects extensive customer data, including purchasing behaviour, preferences and browsing patterns. Blockchain technology introduces alternative models where users hold their own data and selectively grant access through cryptographic permissions.

Instead of businesses accumulating large datasets, consumers become the custodians of their personal information.

Self-sovereign identity solutions allow users to verify age, location or reputation without exposing full personal profiles. This approach could reduce data breaches and strengthen privacy protection.

E-commerce platforms could integrate verification without storing sensitive information. Adoption remains limited, although interest is growing as data protection regulations increase.

Assessment of combined impact

The combination of blockchain technology and e-commerce represents a gradual shift toward decentralised trust models. Traditional platforms depend on central authorities to enforce rules, settle disputes, and secure transactions.

Blockchain introduces alternatives that distribute these responsibilities across networks and algorithms. The synergy creates several potential impacts.

Traceability and transparency improve product trust. Automated contracts reduce operational complexity. Decentralised payments shorten cross-border settlement times. Tokenisation creates new commercial models where digital and physical goods are tied to verifiable ownership.

Data ownership frameworks give buyers greater control over information. Taken together, these features increase resilience and reduce reliance on single intermediaries.

However, integration also encounters notable challenges. User experience remains a critical barrier because decentralised systems often require technical understanding. Regulatory frameworks for cryptocurrency payments, smart contract disputes and decentralised marketplace governance remain uncertain.

Energy consumption concerns affect public perception, although newer blockchains use far more efficient consensus mechanisms. Large platforms may resist decentralisation because it reduces their control and revenue streams.

The most realistic pathway is hybrid rather than fully decentralised commerce. Mainstream marketplaces can incorporate blockchain features such as supply chain tracking, tokenised loyalty, and optional crypto payments while retaining central management for dispute resolution and customer support.

A combination like this delivers benefits without sacrificing the convenience of familiar interfaces.

Future outlook and complementary technologies

Blockchain technology will continue to shape e-commerce, although it will evolve alongside other technologies rather than acting alone. Several developments appear likely to influence the next decade of online commerce.

AI will integrate with blockchain to enhance fraud detection, automate dispute processes, and analyse supply chain data. Instead of opaque AI systems, blockchain can record decision rules or training data in transparent ways that improve accountability.

Internet of Things networks will use blockchain for device-to-device payments and micro-transactions. Connected appliances could automatically reorder supplies or arrange maintenance using autonomous smart contracts, a model that expands e-commerce from human-initiated purchases to machine-driven commerce.

Decentralised identity solutions will simplify verification for both buyers and sellers. Instead of uploading documents to multiple platforms, individuals will maintain portable digital identities controlled by cryptographic keys.

E-commerce platforms will verify the necessary attributes without storing personal information. Such an approach aligns with privacy regulations and reduces fraud.

Quantum-resistant cryptography will become essential as quantum computing advances. Blockchain networks will need upgrades to maintain security. E-commerce platforms built on blockchain will therefore rely on next-generation cryptographic systems.

AR and VR will integrate with blockchain through tokenised digital goods that move between immersive environments and real-world marketplaces.

Luxury brands already experiment with digital twins of physical products. That trend will only deepen as consumers spend more time in virtual spaces.

The future of e-commerce will not depend on a single technology. Instead of blockchain replacing conventional systems, it will act as a foundational layer that strengthens transparency, trust, and automation.

E-commerce platforms will selectively adopt decentralised features that complement their existing operations while retaining user-friendly interfaces and established logistics networks.

In conclusion, blockchain has reshaped expectations of trust within digital environments. Its decentralised architecture, immutability, and programmability have introduced new opportunities for secure payments, supply chain verification, automated agreements and data sovereignty.

E-commerce platforms recognised the potential and began integrating blockchain features to improve authenticity, reduce fraud and expand payment options. The combination offers a powerful pathway toward more transparent and efficient commerce.

Yet challenges remain, as user experience, regulation and scalability continue to influence adoption. Online transactions are likely to remain hybrid, with blockchain supporting specific components of e-commerce rather than replacing established models.

Complementary technologies, including AI, IoT, decentralised identity and quantum-resistant security, will reinforce these developments. E-commerce will evolve toward ecosystems where automation, transparency and user empowerment become standard expectations.

Blockchain technology will play a central role in that transformation, although its greatest impact will emerge through careful integration rather than radical disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025

The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign

The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’

Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights

AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. That requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance

Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

The fundamentals of AI

AI is no longer a concept confined to research laboratories or science fiction novels. From smartphones that recognise faces to virtual assistants that understand speech and recommendation engines that predict what we want to watch next, AI has become embedded in everyday life.

Behind this transformation lies a set of core principles, or the fundamentals of AI, which explain how machines learn, adapt, and perform tasks once considered the exclusive domain of humans.

At the heart of modern AI are neural networks, mathematical structures inspired by the human brain. They organise computation into layers of interconnected nodes, or artificial neurons, which process information and learn from examples.

Unlike traditional programming, where every rule must be explicitly defined, neural networks can identify patterns in data autonomously. The ability to learn and improve with experience underpins the astonishing capabilities of today’s AI.

Multi-layer perceptron networks

A neural network consists of multiple layers of interconnected neurons, not just a simple input and output layer. Each layer processes the data it receives from the previous layer, gradually building hierarchical representations.

In image recognition, early layers detect simple features, such as edges or textures, middle layers combine these into shapes, and later layers identify full objects, like faces or cars. In natural language processing, lower layers capture letters or words, while higher layers recognise grammar, context, and meaning.

Without multiple layers, the network would be shallow, limited in its ability to learn, and unable to handle complex tasks. Multi-layer, or deep, networks are what enable AI to perform sophisticated functions like autonomous driving, medical diagnosis, and language translation.
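The layered forward pass described above can be sketched in a few lines of plain Python. The weights below are arbitrary illustrative values, not a trained network; each layer computes a weighted sum per neuron and applies a ReLU activation:

```python
def relu(x):
    # Non-linear activation: negative inputs become zero
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One fully connected layer: each neuron takes a weighted sum
    # of all inputs plus a bias, then passes it through ReLU
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical weights for a 2-input -> 3-hidden -> 1-output network
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5]]
out_b = [0.5]

x = [1.0, 2.0]
h = layer(x, hidden_w, hidden_b)  # hidden representation
y = layer(h, out_w, out_b)        # final output
print(h, y)
```

Stacking more such layers is what turns this shallow sketch into a deep network: each layer re-represents the previous layer's output at a higher level of abstraction.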

How mathematics drives artificial intelligence

The foundation of AI is mathematics. Without linear algebra, calculus, probability, and optimisation, modern AI systems would not exist. These disciplines allow machines to represent, manipulate, and learn from vast quantities of data.

Linear algebra allows inputs and outputs to be represented as vectors and matrices. Each layer of a neural network transforms these data structures, performing calculations that detect patterns in data, such as shapes in images or relationships between words in a sentence.

Calculus, especially the study of derivatives, is used to measure how small changes in a network’s parameters, called weights, affect its predictions. This information is critical for optimisation, which is the process of adjusting these weights to improve the network’s accuracy.

The loss function measures the difference between the network’s prediction and the actual outcome. It essentially tells the network how wrong it is. For example, the mean squared error measures the average squared difference between the predicted and actual values, while cross-entropy is used in classification tasks to measure how well the predicted probabilities match the correct categories.
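Both loss functions can be computed directly. A minimal sketch, assuming a pair of regression targets for mean squared error and a one-hot label for cross-entropy:

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: average squared gap between target and prediction
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(p_true, p_pred):
    # Cross-entropy for one example: penalises confident wrong predictions.
    # p_true is a one-hot label, p_pred the predicted class probabilities.
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred) if t > 0)

print(mse([3.0, 5.0], [2.5, 5.5]))                 # 0.25
print(cross_entropy([0, 1, 0], [0.1, 0.7, 0.2]))   # -ln(0.7) ≈ 0.357
```

The smaller the loss, the closer the network's predictions are to the truth, which is exactly the quantity gradient descent will try to reduce.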

Gradient descent is an algorithm that uses the derivative of the loss function to determine the direction and magnitude of changes to each weight. By moving weights gradually in the direction that reduces the loss, the network learns over time to make more accurate predictions.
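A minimal gradient-descent loop for a one-parameter model y = w·x makes the idea concrete. The data point and learning rate below are illustrative choices, not from any real training setup:

```python
# Fit y = w * x to a single data point (x=2, y=6), so the ideal w is 3
x, y = 2.0, 6.0
w = 0.0     # initial guess
lr = 0.1    # learning rate: how large a step to take along the gradient

for _ in range(50):
    pred = w * x
    grad = 2 * x * (pred - y)   # derivative of the loss (w*x - y)^2 w.r.t. w
    w -= lr * grad              # step downhill, reducing the loss

print(round(w, 4))  # converges towards 3.0
```

Real networks repeat exactly this update, only simultaneously across millions or billions of weights.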

Backpropagation is a method that makes learning in multi-layer neural networks feasible. Before its introduction in the 1980s, training networks with more than one or two layers was extremely difficult, as it was hard to determine how errors in the output layer should influence the earlier weights. Backpropagation systematically propagates this error information backwards through the network.

At its core, it applies the chain rule of calculus to compute gradients, indicating how much each weight contributes to the overall error and the direction it should be adjusted. Combined with gradient descent, this iterative process allows networks to learn hierarchical patterns, from simple edges in images to complex objects, or from letters to complete sentences.

Backpropagation has transformed neural networks from shallow, limited models into deep, powerful tools capable of learning sophisticated patterns and making human-like predictions.
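The chain rule at the heart of backpropagation can be shown on a deliberately tiny two-layer network with no activations, so that every gradient term stays visible. All values here are illustrative:

```python
# Tiny two-layer network: h = w1 * x, then y = w2 * h.
# Target t, loss L = (y - t)^2.
x, t = 1.5, 3.0
w1, w2 = 0.8, 1.2
lr = 0.05

for _ in range(200):
    h = w1 * x                  # forward pass, layer 1
    y = w2 * h                  # forward pass, layer 2
    dL_dy = 2 * (y - t)         # dL/dy at the output
    dL_dw2 = dL_dy * h          # chain rule: dL/dw2 = dL/dy * dy/dw2
    dL_dh = dL_dy * w2          # error propagated back to the hidden layer
    dL_dw1 = dL_dh * x          # chain rule continued to the first weight
    w1 -= lr * dL_dw1           # gradient-descent update, layer 1
    w2 -= lr * dL_dw2           # gradient-descent update, layer 2

print(w1 * x * w2)  # prediction approaches the target 3.0
```

The key step is `dL_dh`: the output error is passed backwards through `w2` so that the earlier weight `w1` also learns, which is precisely what was infeasible before backpropagation.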

Why neural network architecture matters

The arrangement of layers in a network, or its architecture, determines its ability to solve specific problems.

Activation functions introduce non-linearity, giving networks the ability to map complex, high-dimensional data. ReLU (Rectified Linear Unit), one of the most widely used activation functions, addresses critical training issues and enables deep networks to learn efficiently.

Convolutional neural networks (CNNs) excel in image and video analysis. By applying filters across images, CNNs detect local patterns like edges and textures. Pooling layers reduce spatial dimensions, making computation faster while preserving essential features. Local connectivity ensures neurons process only relevant input regions, mimicking human vision.

Recurrent neural networks (RNNs) and their variants, such as LSTMs and GRUs, process sequential data like text or audio. They maintain a hidden state that acts as memory, capturing dependencies over time, a crucial feature for tasks such as speech recognition or predictive text.

Transformer revolution and attention mechanisms

In 2017, AI research took a major leap with the introduction of Transformer models. Unlike RNNs, which process sequences step by step, transformers use attention mechanisms to evaluate all parts of the input simultaneously.

The attention mechanism calculates which elements in a sequence are most relevant to each output. Using linear algebra, it compares query, key, and value vectors to assign weights, highlighting important information and suppressing irrelevant details.
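A single-query version of scaled dot-product attention can be sketched in plain Python. The query, key, and value vectors below are arbitrary illustrative numbers, not taken from any real model:

```python
import math

def softmax(scores):
    # Turn raw scores into weights that sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for one query: score each key
    # against the query, normalise with softmax, then return the
    # weighted average of the values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Hypothetical 2-dimensional keys/values for three sequence positions
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
V = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(attention(q, K, V))
```

Because the query aligns best with the first key, the first value dominates the output; in a transformer, this computation runs for every position and every attention head in parallel.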

That approach enabled the creation of large language models (LLMs) such as GPT and BERT, capable of generating coherent text, answering questions, and translating languages with unprecedented accuracy.

Transformers reshaped natural language processing and have since expanded into areas such as computer vision, multimodal AI, and reinforcement learning. Their ability to capture long-range context efficiently illustrates the power of combining deep learning fundamentals with innovative architectures.

How does AI learn and generalise?

One of the central challenges in AI is ensuring that networks learn meaningful patterns from data rather than simply memorising individual examples. The ability to generalise and apply knowledge learnt from one dataset to new, unseen situations is what allows AI to function reliably in the real world.

Supervised learning is the most widely used approach, where networks are trained on labelled datasets, with each input paired with a known output. The model learns to map inputs to outputs by minimising the difference between its predictions and the actual results.

Applications include image classification, where the system distinguishes cats from dogs, or speech recognition, where spoken words are mapped to text. The accuracy of supervised learning depends heavily on the quality and quantity of labelled data, making data curation critical for reliable performance.

Unsupervised learning, by contrast, works with unlabelled data and seeks to uncover hidden structures and patterns. Clustering algorithms, for instance, can group similar customer profiles in marketing, while dimensionality reduction techniques simplify complex datasets for analysis.

The paradigm enables organisations to detect anomalies, segment populations, and make informed decisions from raw data without explicit guidance.

Reinforcement learning allows machines to learn by interacting with an environment and receiving feedback in the form of rewards or penalties. Unlike supervised learning, the system is not told the correct action in advance; it discovers optimal strategies through trial and error.

That approach powers innovations in robotics, autonomous vehicles, and game-playing AI, enabling systems to learn long-term strategies rather than memorise specific moves.

A persistent challenge across all learning paradigms is overfitting, which occurs when a network performs exceptionally well on training data but fails to generalise to new examples. Techniques such as dropout, which temporarily deactivates random neurons during training, encourage the network to develop robust, redundant representations.

Similarly, weight decay penalises excessively large parameter values, preventing the model from relying too heavily on specific features. Achieving proper generalisation is crucial for real-world applications: self-driving cars must correctly interpret new road conditions, and medical AI systems must accurately assess patients with cases differing from the training dataset.
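Dropout itself is simple to sketch. This version uses the common "inverted dropout" scaling, assuming the layer's activations arrive as a plain list of numbers:

```python
import random

def dropout(activations, p, training=True):
    # During training, zero each activation with probability p and
    # scale the survivors by 1/(1-p) ("inverted dropout") so the
    # expected value of the layer's output stays the same.
    if not training:
        return list(activations)  # inference: pass through unchanged
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

random.seed(0)
h = [1.0] * 10
print(dropout(h, p=0.5))                   # roughly half zeroed, rest scaled to 2.0
print(dropout(h, p=0.5, training=False))   # inference: unchanged
```

Because a different random subset of neurons is silenced on every training step, no single neuron can be relied upon, which is what forces the redundant representations described above.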

By learning patterns rather than memorising data, AI systems become adaptable, reliable, and capable of making informed decisions in dynamic environments.

The black box problem and explainable AI (XAI)

Deep learning and other advanced AI technologies rely on multi-layer neural networks that can process vast amounts of data. While these networks achieve remarkable accuracy in image recognition, language translation, and decision-making, their complexity often makes it extremely difficult to explain why a particular prediction was made. That phenomenon is known as the black box problem.

Though these systems are built on rigorous mathematical principles, the interactions between millions or billions of parameters create outputs that are not immediately interpretable. For instance, a healthcare AI might recommend a specific diagnosis, but without interpretability tools, doctors may not know what features influenced that decision.

Similarly, in finance or law, opaque models can inadvertently perpetuate biases or produce unfair outcomes.

Explainable AI (XAI) seeks to address this challenge. By combining the mathematical and structural fundamentals of AI with transparency techniques, XAI allows users to trace predictions back to input features, assess confidence, and identify potential errors or biases.

In practice, this means doctors can verify AI-assisted diagnoses, financial institutions can audit credit decisions, and policymakers can ensure fair and accountable deployment of AI.

Understanding the black box problem is therefore essential not only for developers but for society at large. It bridges the gap between cutting-edge AI capabilities and trustworthy, responsible applications, ensuring that as AI systems become more sophisticated, they remain interpretable, safe, and beneficial.

Data and computational power

Modern AI depends on two critical ingredients: large, high-quality datasets and powerful computational resources. Data provides the raw material for learning, allowing networks to identify patterns and generalise to new situations.

Image recognition systems, for example, require millions of annotated photographs to reliably distinguish objects, while language models like GPT are trained on billions of words from books, articles, and web content, enabling them to generate coherent, contextually aware text.

High-performance computation is equally essential. Training deep neural networks involves performing trillions of calculations, a task far beyond the capacity of conventional processors.

Graphics Processing Units (GPUs) and specialised AI accelerators enable parallel processing, reducing training times from months to days or even hours. This computational power enables real-time applications, such as self-driving cars interpreting sensor data instantly, recommendation engines adjusting content dynamically, and medical AI systems analysing thousands of scans within moments.

The combination of abundant data and fast computation also brings practical challenges. Collecting representative datasets requires significant effort and careful curation to avoid bias, while training large models consumes substantial energy.

Researchers are exploring more efficient architectures and optimisation techniques to reduce environmental impact without sacrificing performance.

The future of AI

The foundations of AI continue to evolve rapidly, driven by advances in algorithms, data availability, and computational power. Researchers are exploring more efficient architectures, capable of learning from smaller datasets while maintaining high performance.

For instance, self-supervised learning allows a model to learn from unlabelled data by predicting missing information within the data itself, while few-shot learning enables a system to understand a new task from just a handful of examples. These methods reduce the need for enormous annotated datasets and make AI development faster and more resource-efficient.

Transformer models, powered by attention mechanisms, remain central to natural language processing. The attention mechanism allows the network to focus on the most relevant parts of the input when making predictions.

For example, when translating a sentence, it helps the model determine which words are most important for understanding the meaning. Transformers have enabled the creation of large language models like GPT and BERT, capable of summarising documents, answering questions, and generating coherent text.

Beyond language, multimodal AI systems are emerging, combining text, images, and audio to understand context across multiple sources. For instance, a medical AI system might analyse a patient’s scan while simultaneously reading their clinical notes, providing more accurate and context-aware insights.

Ethics, transparency, and accountability remain critical. Explainable AI (XAI) techniques help humans understand why a model made a particular decision, which is essential in fields like healthcare, finance, and law. Detecting bias, evaluating fairness, and ensuring that models behave responsibly are becoming standard parts of AI development.

Energy efficiency and sustainability are also priorities, as training large models consumes significant computational resources.

Ultimately, the future of AI will be shaped by models that are not only more capable but also more efficient, interpretable, and responsible.

Gaming and Esports: A new frontier in diplomacy

From playrooms to global arenas

Video games have long since outgrown their roots as niche entertainment. What used to be arcades and casual play is now a global cultural phenomenon.

A recent systematic review of research argues that video games play a powerful role in cultural transmission. They allow players worldwide, regardless of language or origin, to absorb cultural, social, and historical references embedded in game narratives.

Importantly, games are not passive media. Their interactivity gives them unique persuasive power. As one academic work on ‘gaming in diplomacy’ puts it, video games stand out among cultural media because they allow for procedural rhetoric, meaning that players learn values, norms, and worldviews not just by watching or hearing, but by actively engaging with them.

As such, gaming has the capacity to transcend borders, languages and traditional media’s constraints. For many young players around the world, including those in developing regions, gaming has become a shared language, a means of connecting across cultures, geographies, and generations.

Esports as soft power and public diplomacy

Nation branding, cultural export and global influence

Several countries have recognised the diplomatic potential of esports and gaming. Waseda University researchers emphasise that esports can be systematically used to project soft power, engaging foreign publics, shaping favourable perceptions, and building cultural influence, rather than being mere entertainment or economic ventures.

A 2025 study shows that the use of ‘game-based cultural diplomacy’ is increasingly common. Countries such as Japan, Poland, and China are utilising video games and associated media to promote their national identity, cultural narratives, and values.

An article about the games Honor of Kings and Black Myth: Wukong describes how the state-backed Chinese gaming industry incorporates traditional Chinese cultural elements (myth, history, aesthetics) into globally consumed games, thereby reaching millions internationally and strengthening China’s soft-power footprint.

For governments seeking to diversify their diplomatic tools beyond traditional media (film, music, diplomatic campaigns), esports offers persistent, globally accessible, and youth-oriented engagement, particularly as global demographics shift toward younger, digital-native generations.

Esports diplomacy in practice: People-to-people exchange

Cross-cultural understanding, community, identity

In bilateral diplomacy, esports has already been proposed as a vehicle for ‘people-to-people exchange.’ For example, a commentary on US–South Korea relations argues that annual esports competitions between the two countries’ top players could serve as a modern, interactive form of public diplomacy, fostering mutual cultural exchange beyond the formalities of traditional diplomacy.

On the grassroots level, esports communities, being global, multilingual and cross-cultural, foster friendships, shared experiences, and identities that transcend geography. That dynamic democratises participation: no diplomatic credentials or state backing are required, only access and engagement.

Some analyses emphasise how digital competition and community-building in esports ‘bridge cultural differences, foster international collaboration and cultural diversity through shared language and competition.’

From a theoretical perspective, academic proposals suggest that applying frameworks from sports diplomacy to esports offers a path to sustainable and legitimate global engagement through gaming, provided that regulatory, equality and governance challenges are addressed.

Tensions & challenges: Not just a soft-power fairy tale

Risk of ‘techno-nationalism’ and propaganda

The use of video games in diplomacy is not purely benign. Some scholars warn of ‘digital nationalism’ or ‘techno-nationalism,’ where games become tools for propagating state narratives, shaping collective memory, and exporting political or ideological agendas.

The embedding of cultural or historical motifs in games (mythology, national heritage, symbols) can blur the line between cultural exchange and political messaging. While this can foster appreciation for a culture, it may also serve more strategic geopolitical or soft-power aims.

From a governance perspective, the rapid growth of esports raises legitimate concerns about inequality (access, digital divide), regulation, legitimacy of representation (who speaks for ‘a nation’), and possible exploitation of youth. Some academic literature argues that without proper regulation and institutionalisation, the ‘esports diplomacy gold rush’ risks being unsustainable.

Why this matters and what it means for Africa and the Global South

For regions such as Africa, gaming and esports represent not only recreation but potential platforms for youth empowerment, cultural expression, and international engagement. Even where traditional media or sports infrastructure may be limited, digital games can provide global reach and visibility. That aligns with the idea of ‘future pathways’ for youth, which includes creativity, community-building and cross-cultural exchange.

Because games can transcend language and geography, they offer a unique medium for diaspora communities, marginalised youth, and underrepresented cultures to project identity, share stories, and engage with global audiences. In that sense, gaming democratises cultural participation and soft-power capabilities.

On a geopolitical level, as game-based diplomacy becomes more recognised, Global South countries may leverage it to assert soft power, attract investment, and promote tourism or cultural heritage, provided they build local capacity (developers, esports infrastructure, regulation) and ensure inclusive access.

Gaming & esports as emerging diplomatic infrastructure

The trend suggests that video games and esports are steadily being institutionalised as instruments of digital diplomacy, soft power, and cultural diplomacy, not only by private companies, but increasingly by states and policymakers. Academic bibliometric analysis shows a growing number of studies (2015–2024) dedicated to ‘game-based cultural diplomacy.’

As esports ecosystems grow, with tournaments, global fans and cultural exports, we may see a shift from occasional ‘cultural-diplomacy events’ to sustained, long-term strategies employing gaming to shape international perceptions, build transnational communities, and influence foreign publics.

However, for this potential to be realised responsibly, key challenges must be addressed. Those challenges include inequality of access (digital divide), transparency over cultural or political messaging, fair regulation, and safeguarding inclusivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum money meets Bitcoin: Building unforgeable digital currency

Quantum money might sound like science fiction, yet it is rapidly emerging as one of the most compelling frontiers in modern digital finance. Initially a theoretical concept, it was far ahead of the technology of its time, making practical implementation impossible. Today, thanks to breakthroughs in quantum computing and quantum communication, scientists are reviving the idea, investigating how the principles of quantum physics could finally enable unforgeable quantum digital money. 

Comparisons between blockchain and quantum money are frequent and, on the surface, appear logical, yet can these two visions of new-generation cash genuinely be measured by the same yardstick? 

Origins of quantum money 

Quantum money was first proposed by physicist Stephen Wiesner in the late 1960s. Wiesner envisioned a system in which each banknote would carry quantum particles encoded in specific states, known only to the issuing bank, making the notes inherently secure. 

Due to the peculiarities of quantum mechanics, these quantum states could not be copied, offering a level of security fundamentally impossible with classical systems. At the time, however, quantum technologies were purely theoretical, and devices capable of creating, storing, and accurately measuring delicate quantum states simply did not exist. 

For decades, Wiesner’s idea remained a fascinating thought experiment. Today, the rise of functional quantum computers, advanced photonic systems, and reliable quantum communication networks is breathing new life into the concept, allowing researchers to explore practical applications of quantum money in ways that were once unimaginable.

A new battle for the digital throne is emerging as quantum money shifts from theory to possibility, challenging whether Bitcoin’s decentralised strength can hold its ground in a future shaped by quantum technology.

The no-cloning theorem: The physics that makes quantum money impossible to forge

At the heart of quantum money lies the no-cloning theorem, a cornerstone of quantum mechanics. The principle establishes that it is physically impossible to create an exact copy of an unknown quantum state. Any attempt to measure a quantum state inevitably alters it, meaning that copying or scanning a quantum banknote destroys the very information that ensures its authenticity. 

This unique property makes quantum money exceptionally secure: unlike blockchain, which relies on cryptographic algorithms and distributed consensus, quantum money derives its protection directly from the laws of physics. In theory, a quantum banknote cannot be counterfeited, even by an attacker with unlimited computing resources, which is why quantum money is considered one of the most promising approaches to unforgeable digital currency.
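
The theorem itself follows from nothing more than the linearity of quantum mechanics; a standard short sketch: suppose a unitary U could clone any unknown state. Then:

```latex
% Suppose a cloning unitary U exists with U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle for all |\psi\rangle.
U\,|0\rangle|0\rangle = |0\rangle|0\rangle, \qquad
U\,|1\rangle|0\rangle = |1\rangle|1\rangle .

% Linearity then fixes U's action on the superposition |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}:
U\,|{+}\rangle|0\rangle
  = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|0\rangle + |1\rangle|1\rangle\bigr),

% but cloning would instead require
|{+}\rangle|{+}\rangle
  = \tfrac{1}{2}\bigl(|0\rangle|0\rangle + |0\rangle|1\rangle + |1\rangle|0\rangle + |1\rangle|1\rangle\bigr).

% The two states differ, so no such U exists: arbitrary unknown states cannot be copied.
```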

How quantum money works in theory

Quantum money schemes are typically divided into two main types: private and public. 

In private quantum money systems, a central authority, such as a bank, creates quantum banknotes and remains the only entity capable of verifying them. Each note carries a classical serial number alongside a set of quantum states known solely to the issuer. The primary advantage of this approach is its absolute immunity to counterfeiting, as no one outside the issuing institution can replicate the banknote. However, such systems are fully centralised and rely entirely on the security and infrastructure of the issuing bank, which inherently limits scalability and accessibility.
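
Wiesner's private scheme can be illustrated with a toy classical simulation (a minimal sketch: the function names are invented, and real qubits are replaced by a coin-flip model of measurement). Each note position stores a bit in one of two conjugate bases known only to the bank; a forger measuring in a randomly guessed basis passes each position with probability 3/4, so the chance of a successful forgery decays as (3/4)^n:

```python
import random

def make_note(n):
    """The issuer picks a secret basis ('+' or 'x') and a bit for each position."""
    return [(random.choice("+x"), random.randint(0, 1)) for _ in range(n)]

def forge_and_verify(note):
    """A counterfeiter measures each position in a random basis, then the
    bank checks the copy against its secret record.

    Right basis guess: the copy passes that position.
    Wrong basis guess: the re-prepared state gives the correct outcome
    only half the time when the bank measures in the true basis.
    """
    for basis, bit in note:
        guess = random.choice("+x")
        if guess == basis:
            continue  # forger happened to measure correctly; position passes
        if random.randint(0, 1) != bit:
            return False  # wrong basis: 50% chance the bank detects the forgery
    return True

def forgery_success_rate(n_qubits, trials=20000):
    """Estimate the probability that a naive forger passes verification."""
    note = make_note(n_qubits)
    wins = sum(forge_and_verify(note) for _ in range(trials))
    return wins / trials

# Theory: each position passes with probability 3/4, so an n-position
# note is successfully forged with probability (3/4) ** n.
```

Even a modest note length makes forgery vanishingly unlikely, which is the whole point of the scheme.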

Public quantum money, by contrast, pursues a more ambitious goal: allowing anyone to verify a quantum banknote without consulting a central authority. Developing this level of decentralisation has proven exceptionally difficult. Numerous proposed schemes have been broken by researchers who have managed to extract information without destroying the quantum states. Despite these challenges, public quantum money remains a major focus of quantum cryptography research, with scientists actively pursuing secure and scalable methods for open verification. 

Beyond theoretical appeal, quantum money faces substantial practical hurdles. Quantum states are inherently fragile and susceptible to decoherence, meaning they can lose their information when interacting with the surrounding environment. 

Maintaining stable quantum states demands highly specialised and costly equipment, including photonic processors, quantum memory modules, and sophisticated quantum error-correction systems. Any error or loss could render a quantum banknote completely worthless, and no reliable method currently exists to store these states over long periods. In essence, the concept of quantum money is groundbreaking, yet real-world implementation requires technological advances that are not yet mature enough for mass adoption. 

Bitcoin solves the duplication problem differently

While quantum money relies on the laws of physics to prevent counterfeiting, Bitcoin tackles the duplication problem through cryptography and distributed consensus. Each transaction is verified across thousands of nodes, and SHA-256 hash functions secure the blockchain against double spending without the need for a central authority. 

Unlike elliptic curve cryptography, which could eventually be vulnerable to large-scale quantum attacks, SHA-256 has proven remarkably resilient; even quantum algorithms such as Grover’s offer only a marginal advantage, reducing the search space from 2^256 to 2^128, still far beyond any realistic brute-force attempt. 
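
The arithmetic behind that claim is simple, and the double-SHA-256 construction below is the hashing Bitcoin actually applies to block headers (the input bytes here are placeholders):

```python
import hashlib

def btc_hash(data: bytes) -> str:
    """Bitcoin applies SHA-256 twice ('double SHA-256') to block headers."""
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

# Classical brute force over SHA-256 preimages searches ~2**256 candidates.
# Grover's algorithm gives only a quadratic speed-up, halving the exponent:
classical_bits = 256
grover_bits = classical_bits // 2  # effective search space ~2**128

# A 64-hex-character (256-bit) digest, deterministic for a given input.
digest = btc_hash(b"placeholder block header bytes")
```

2^128 operations remains far outside the reach of any foreseeable machine, quantum or classical, which is why Grover's speed-up does not meaningfully threaten SHA-256.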

Bitcoin’s security does not hinge on unbreakable mathematics alone but on a combination of decentralisation, network verification, and robust cryptographic design. Many experts therefore consider Bitcoin effectively quantum-proof, with most of the dramatic threats predicted from quantum computers likely to be impossible in practice. 

Software-based and globally accessible, Bitcoin operates independently of specialised hardware, allowing users to send, receive, and verify value anywhere in the world without the fragility and complexity inherent in quantum systems. Furthermore, the network can evolve to adopt post-quantum cryptographic algorithms, ensuring long-term resilience, making Bitcoin arguably the most battle-hardened digital financial instrument in existence. 

Could quantum money be a threat to Bitcoin?

In reality, quantum money and Bitcoin address entirely different challenges, meaning the former is unlikely to replace the latter. Bitcoin operates as a global, decentralised monetary network with established economic rules and governance, while quantum money represents a technological approach to issuing physically unforgeable tokens. Bitcoin is not designed to be physically unclonable; its strength lies in verifiability, decentralisation, and network-wide trust.

Moreover, SHA-256, the hashing algorithm that underpins Bitcoin mining and block creation, remains highly resistant to quantum threats. Quantum computers achieve only a quadratic speed-up through Grover’s algorithm, which is insufficient to break SHA-256 in practical terms. Bitcoin also retains the ability to adopt post-quantum cryptographic standards as they mature, whereas quantum money is limited by rigid physical constraints that are far harder to update.

Quantum money also remains too fragile, complex, and costly for widespread use. Its realistic applications are limited to state institutions, military networks, or highly secure financial environments rather than everyday payments. Bitcoin, by contrast, already benefits from extensive global infrastructure, strong market adoption, and deep liquidity, making it far more practical for daily transactions and long-term digital value transfer. 

Where quantum money and blockchain could coexist

Although fundamentally different, quantum money and blockchain technologies have the potential to complement one another in meaningful ways. Quantum key distribution could strengthen the security of blockchain networks by protecting communication channels from advanced attacks, while quantum-generated randomness may enhance cryptographic protocols used in decentralised systems. 

Researchers have also explored the idea of using ‘quantum tokens’ to provide an additional privacy layer within specialised blockchain applications. Both technologies ultimately aim to deliver secure and verifiable forms of digital value. Their coexistence may offer the most resilient future framework for digital finance, combining the physics-based protection of quantum money with the decentralisation, transparency, and global reach of blockchain technology. 

Quantum physics meets blockchain for the future of secure currency

Quantum money remains a remarkable concept, originally decades ahead of its time, and now revived by advances in quantum computing and quantum communication. Although it promises theoretically unforgeable digital currency, its fragility, technical complexity, and demanding infrastructure make it impractical for large-scale use. 

Bitcoin, by contrast, stands as the most resilient and widely adopted model of decentralised digital money, supported by a mature global network and robust cryptographic foundations. 

Quantum money and Bitcoin stand as twin engines of a new digital finance era, where quantum physics is reshaping value creation, powering blockchain innovation, and driving next-generation fintech solutions for secure and resilient digital currency. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What the Cloudflare outage taught us: Tracing the outages that shaped the internet of today

The internet has become part of almost everything we do. It helps us work, stay in touch with friends and family, buy things, plan trips, and handle tasks that would have felt impossible until recently. Most people cannot imagine getting through the day without it.

But there is a hidden cost to all this convenience. Most of the time, online services run smoothly, with countless systems working together in the background. Every now and then, though, a key cog slips out of place.

When that happens, the effects can spread fast, taking down apps, websites, and even entire industries within minutes. These moments remind us how much we rely on digital services, and how quickly everything can unravel when something goes wrong. It raises an uncomfortable question. Is digital dependence worth the convenience, or are we building a house of cards that could collapse, pulling us back into reality?

Warning shots of the dot-com era and the infancy of cloud services

In its early years, the internet saw several major malfunctions that disrupted key online services. Incidents like the Morris worm in 1988, which crashed about 10 percent of all internet-connected systems, and the 1996 AOL outage that left six million users offline, revealed how unprepared the early infrastructure was for growing digital demand.

A decade later, the weaknesses were still clear. In 2007, Skype, then with over 270 million users, went down for nearly two days after a surge in logins triggered by a Windows update overwhelmed its network. Since video calls were still in their early days, the impact was not as severe, and most users simply waited it out, postponing chats with friends and family until the issue was fixed.

As the dot-com era faded and the 2010s began, the shift to cloud computing introduced a new kind of fragility. When Amazon’s EC2 and EBS systems in the US-East region went down in 2011, the outage took down services like Reddit, Quora, and IMDb for days, exposing how quickly failures in shared infrastructure can cascade.

A year later, GoDaddy’s DNS failure took millions of websites offline, while large-scale Gmail disruptions affected users around the world, early signs that the cloud’s growing influence came with increasingly high stakes.

By the mid-2010s, it was clear that the internet had evolved from a patchwork of standalone services to a heavily interconnected ecosystem. When cloud or DNS providers stumbled, their failures rippled simultaneously across countless platforms. The move to centralised infrastructure made development faster and more accessible, but it also marked the beginning of an era where a single glitch could shake the entire web.

Centralised infrastructure and the age of cascading failures

The late 2000s and early 2010s saw a rapid rise in internet use, with nearly 2 billion people worldwide online. As access grew, more businesses moved into the digital space, offering e-commerce, social platforms, and new forms of online entertainment to a quickly expanding audience.

With so much activity shifting online, the foundation beneath these services became increasingly important, and increasingly centralised, setting the stage for outages that could ripple far beyond a single website or app.

The next major hit came in 2016, when a massive DDoS attack crippled major websites across the USA and Europe. Platforms like Netflix, Reddit, Twitter, and CNN were suddenly unreachable, not because they were directly targeted, but because Dyn, a major DNS provider, had been overwhelmed.

The attack used the Mirai botnet malware to hijack hundreds of thousands of insecure IoT devices and flood Dyn’s servers with traffic. It was one of the clearest demonstrations yet that knocking out a single infrastructure provider could take down major parts of the internet in one stroke.

In 2017, another major outage occurred, with Amazon at the centre once again. On 28 February, the company’s Simple Storage Service (S3) went down for about 4 hours, disrupting access across a large part of the US-EAST-1 region. While investigating a slowdown in the billing system, an Amazon engineer accidentally entered a typo in a command, taking more servers offline than intended.

That small error was enough to knock out services like Slack, Quora, Coursera, Expedia and countless other websites that relied on S3 for storage or media delivery. The financial impact was substantial; S&P 500 companies alone were estimated to have lost roughly 150 million dollars during the outage.

Amazon quickly published a clear explanation and apology, but transparency could not undo the economic damage, nor erase (yet another) sudden reminder that a single mistake in a centralised system could ripple across the entire web.

Outages in the roaring 2020s

The S3 incident made one thing clear. Outages were no longer just about a single platform going dark. As more services leaned on shared infrastructure, even small missteps could take down enormous parts of the internet. And this fragility did not stop at cloud storage.

Over the next few years, attention shifted to another layer of the online ecosystem: content delivery networks and edge providers that most people had never heard of but that nearly every website depended on.

The 2020s opened with one of the most memorable outages to date. On 4 October 2021, Facebook and its sister platforms, Instagram, WhatsApp, and Messenger, vanished from the internet for nearly 7 hours after a faulty BGP configuration effectively removed the company’s services from the global routing table.

Millions of users flocked to other platforms to vent their frustration, overwhelming Twitter, Telegram, Discord, and Signal’s servers and causing performance issues across the board. It was a rare moment when a single company’s outage sent measurable shockwaves across the entire social media ecosystem.

But what happens when outages hit industries far more essential than social media? In 2023, the Federal Aviation Administration was forced to delay more than 10,000 flights, the first nationwide grounding of air traffic since the aftermath of September 11.

A corrupted database file brought the agency’s Notice to Air Missions (NOTAM) system to a standstill, leaving pilots without critical safety updates and forcing the entire aviation network to pause. The incident sent airline stocks dipping and dealt another blow to public confidence, showing just how disruptive a single technical failure can be when it strikes at the heart of critical infrastructure.

Outages that defined 2025

The year 2025 saw an unprecedented wave of outages, with server overloads, software glitches and coding errors disrupting services across the globe. The Microsoft 365 suite outage in January, the Southwest Airlines and FAA synchronisation failure in April, and the Meta messaging blackout in July all stood out for their scale and impact.

But the most disruptive failures were still to come. In October, Amazon Web Services suffered a major outage in its US-East-1 region, knocking out everything from social apps to banking services and reminding the world that a fault in a single cloud region can ripple across thousands of platforms.

Just weeks later, the Cloudflare November outage became the defining digital breakdown of the year. A logic bug inside its bot management system triggered a cascading collapse that took down social networks, AI tools, gaming platforms, transit systems and countless everyday websites in minutes. It was the clearest sign yet that when core infrastructure falters, the impact is immediate, global and largely unavoidable.

And yet, we continue to place more weight on these shared foundations, trusting they will hold because they usually do. Every outage, whether caused by a typo, a corrupted file, or a misconfigured update, exposes how quickly things can fall apart when one key piece gives way.

Going forward, resilience needs to matter as much as innovation. That means reducing single points of failure, improving transparency, and designing systems that can fail without dragging everything down. The more clearly we see the fragility of the digital ecosystem, the better equipped we are to strengthen it.
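
The ‘fail without dragging everything down’ principle can be sketched in a few lines (a hypothetical illustration, not any provider’s real code; the function names and the size limit are assumptions): a service validates each pushed configuration and falls back to the last known-good version instead of crashing on unexpected input.

```python
import json

MAX_RULES = 200  # illustrative limit on how large a pushed config may be

def load_config(raw: str, last_good: dict) -> dict:
    """Validate a pushed config; fall back to the last known-good one.

    A hard crash here would take the whole service down with it, so
    every failure path degrades gracefully instead of raising.
    """
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError:
        return last_good  # malformed push: keep serving with the old config
    if not isinstance(cfg, dict):
        return last_good  # unexpected shape: keep serving
    rules = cfg.get("rules")
    if not isinstance(rules, list) or len(rules) > MAX_RULES:
        return last_good  # oversized or invalid rule set: keep serving
    return cfg

known_good = {"rules": ["allow"]}
# An oversized push is rejected rather than crashing the service.
oversized = json.dumps({"rules": ["r"] * 500})
active = load_config(oversized, known_good)
```

The design choice is deliberate: rejecting bad input and logging it is recoverable, while letting an unhandled error propagate through a shared layer is exactly how a single faulty file takes down thousands of sites.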

Outages will keep happening, and no amount of engineering can promise perfect uptime. But acknowledging the cracks is the first step toward reinforcing what we’ve built — and making sure the next slipped cog does not bring the whole machine to a stop.

The smoke and mirrors of the digital infrastructure

The internet is far from destined to collapse, but resilience can no longer be an afterthought. Redundancy, decentralisation and smarter oversight need to be part of the discussion, not just for engineers, but for policymakers as well.

Outages do not just interrupt our routines. They reveal the systems we have quietly built our lives around. Each failure shows how deeply intertwined our digital world has become, and how fast everything can stop when a single piece gives way.

Will we learn enough from each one to build a digital ecosystem that can absorb the next shock instead of amplifying it? Only time will tell.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. A total of six omnibus packages have been announced.

The latest (no. 4) targets small mid-caps and digitalisation. Package no. 4 covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, with its first mention buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to this Omnibus debate: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of the EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Similar to the industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given these varied opinions, it cannot yet be said what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications suggests a direction: on examination, it is hard to dispute that the views of less privacy-friendly entities have served as a strong guide.

Leaked draft document main changes

The leaked draft introduces several core changes.

Those changes include a new definition of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights being conditional on motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive, and only explicit statements similar to ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment may be allowed whenever it is carried out by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for the development and operation of AI systems. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from ‘risk’ to ‘high risk’ to the rights and freedoms of data subjects.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change as it would clarify which PAs would automatically require a DPIA and which would not. The list would be updated every 3 years.

What should be noted here is that some controllers may not see their PA on this list and assume or argue that a DPIA is not required. Therefore, the language on this should make it clear that it is not a closed list.

Access request denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject exploits the right of access by using that material against the controller, the controller may charge or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, see the in-depth analysis carried out by noyb.

The Commission’s updated version

On 19 November, the Commission published its digital omnibus proposal. Most of the amendments from the leaked draft remain. One measure that was dropped is the revised definition of sensitive data, which means that inferences could still amount to sensitive data.

However, the final document keeps three key changes that erode fundamental rights protections:

  • A narrower, subjective definition of personal data;
  • An intertwining of the ePD and the GDPR, which also allows processing for aggregated and security purposes;
  • Reliance on LI as a legal basis for AI processing of personal data.

Still, positive changes remain:

  • A single entry point for EU data breach reporting. This welcome measure streamlines reporting and eases some compliance obligations for EU businesses.
  • The whitelist/blacklist of processing activities that would, or would not, require a DPIA is also welcome, although the earlier caveat about the wording of that list still applies.

Overall, these two measures are examples of simplification measures with concrete benefits.

Now the European Parliament has the task of dissecting the proposal and debating what to keep and what to reject. Some experts have suggested that this may take at least a year given the number of changes, but this is not certain.

We can also expect the Commission to publish a revised version of the proposal to correct the errors in language, numbering and article referencing that have been observed; this would not entail any content changes.

Final remarks

Simplification in itself is a good idea, and businesses need to have enough freedom to operate without being suffocated with red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms have already been raised after the previous Omnibus package on green due diligence obligations was scrapped. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are on the 19 November proposal, which could reshape not only EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. The next evolution, however, is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capability is blurring the boundary between the human and the machine, redefining what it means to fight, and to feel, in war.

Today’s ‘AI soldiers’ are more than just enhanced: they are networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without the basic principles and mechanisms of accountability in warfare, states risk the very foundation of the rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war, but comes with the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while compelling defence firms to maintain high standards of ethical transparency.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!