Gaming and Esports: A new frontier in diplomacy

From playrooms to global arenas

Video games have long since outgrown their roots as niche entertainment. What used to be arcades and casual play is now a global cultural phenomenon.

A recent systematic review of research argues that video games play a powerful role in cultural transmission. They allow players worldwide, regardless of language or origin, to absorb cultural, social, and historical references embedded in game narratives.

Importantly, games are not passive media. Their interactivity gives them unique persuasive power. As one academic work on ‘gaming in diplomacy’ puts it, video games stand out among cultural media because they allow for procedural rhetoric, meaning that players learn values, norms, and worldviews not just by watching or hearing, but by actively engaging with them.

As such, gaming has the capacity to transcend borders, languages and traditional media’s constraints. For many young players around the world, including those in developing regions, gaming has become a shared language and a means of connecting across cultures, geographies, and generations.

Esports as soft power and public diplomacy

Nation branding, cultural export and global influence

Several countries have recognised the diplomatic potential of esports and gaming. Waseda University researchers emphasise that esports can be systematically used to project soft power, engaging foreign publics, shaping favourable perceptions, and building cultural influence, rather than being mere entertainment or economic ventures.

A 2025 study shows that the use of ‘game-based cultural diplomacy’ is increasingly common. Countries such as Japan, Poland, and China are utilising video games and associated media to promote their national identity, cultural narratives, and values.

An article about the games Honor of Kings and Black Myth: Wukong describes how the state-backed Chinese gaming industry incorporates traditional Chinese cultural elements (myth, history, aesthetics) into globally consumed games, thereby reaching millions internationally and strengthening China’s soft-power footprint.

For governments seeking to diversify their diplomatic tools beyond traditional media (film, music, diplomatic campaigns), esports offers persistent, globally accessible, and youth-oriented engagement, particularly as global demographics shift toward younger, digital-native generations.

Esports diplomacy in practice: People-to-people exchange

Cross-cultural understanding, community, identity

In bilateral diplomacy, esports has already been proposed as a vehicle for ‘people-to-people exchange.’ For example, a commentary on US–South Korea relations argues that annual esports competitions between the two countries’ top players could serve as a modern, interactive form of public diplomacy, fostering mutual cultural exchange beyond the formalities of traditional diplomacy.

On the grassroots level, esports communities, being global, multilingual and cross-cultural, foster friendships, shared experiences, and identities that transcend geography. That dynamic democratises participation, because you don’t need diplomatic credentials or state backing. All you need is access and engagement.

Some analyses emphasise how digital competition and community-building in esports ‘bridge cultural differences, foster international collaboration and cultural diversity through shared language and competition.’

From a theoretical perspective, applying frameworks from sports diplomacy to esports, supported by academic proposals, offers a path to sustainable and legitimate global engagement through gaming, if regulatory, equality and governance challenges are addressed.

Tensions & challenges: Not just a soft-power fairy tale

Risk of ‘techno-nationalism’ and propaganda

The use of video games in diplomacy is not purely benign. Some scholars warn of ‘digital nationalism’ or ‘techno-nationalism,’ where games become tools for propagating state narratives, shaping collective memory, and exporting political or ideological agendas.

The embedding of cultural or historical motifs in games (mythology, national heritage, symbols) can blur the line between cultural exchange and political messaging. While this can foster appreciation for a culture, it may also serve more strategic geopolitical or soft-power aims.

From a governance perspective, the rapid growth of esports raises legitimate concerns about inequality (access, digital divide), regulation, legitimacy of representation (who speaks for ‘a nation’), and possible exploitation of youth. Some academic literature argues that without proper regulation and institutionalisation, the ‘esports diplomacy gold rush’ risks being unsustainable.

Why this matters and what it means for Africa and the Global South

For regions such as Africa, gaming and esports represent not only recreation but potential platforms for youth empowerment, cultural expression, and international engagement. Even where traditional media or sports infrastructure may be limited, digital games can provide global reach and visibility. That aligns with the idea of ‘future pathways’ for youth, which includes creativity, community-building and cross-cultural exchange.

Because games can transcend language and geography, they offer a unique medium for diaspora communities, marginalised youth, and underrepresented cultures to project identity, share stories, and engage with global audiences. In that sense, gaming democratises cultural participation and soft-power capabilities.

On a geopolitical level, as game-based diplomacy becomes more recognised, Global South countries may leverage it to assert soft power, attract investment, and promote tourism or cultural heritage, provided they build local capacity (developers, esports infrastructure, regulation) and ensure inclusive access.

Gaming & esports as emerging diplomatic infrastructure

The trend suggests that video games and esports are steadily being institutionalised as instruments of digital diplomacy, soft power, and cultural diplomacy, not only by private companies, but increasingly by states and policymakers. Academic bibliometric analysis shows a growing number of studies (2015–2024) dedicated to ‘game-based cultural diplomacy.’

As esports ecosystems grow, with tournaments, global fan bases and cultural exports, we may see a shift from occasional ‘cultural-diplomacy events’ to sustained, long-term strategies employing gaming to shape international perceptions, build transnational communities, and influence foreign publics.

However, for this potential to be realised responsibly, key challenges must be addressed. Those challenges include inequality of access (digital divide), transparency over cultural or political messaging, fair regulation, and safeguarding inclusivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum money meets Bitcoin: Building unforgeable digital currency

Quantum money might sound like science fiction, yet it is rapidly emerging as one of the most compelling frontiers in modern digital finance. Initially a theoretical concept, it was far ahead of the technology of its time, making practical implementation impossible. Today, thanks to breakthroughs in quantum computing and quantum communication, scientists are reviving the idea, investigating how the principles of quantum physics could finally enable unforgeable quantum digital money. 

Comparisons between blockchain and quantum money are frequent and, on the surface, appear logical, yet can these two visions of new-generation cash genuinely be measured by the same yardstick? 

Origins of quantum money 

Quantum money was first proposed by physicist Stephen Wiesner in the late 1960s. Wiesner envisioned a system in which each banknote would carry quantum particles encoded in specific states, known only to the issuing bank, making the notes inherently secure. 

Due to the peculiarities of quantum mechanics, these quantum states could not be copied, offering a level of security fundamentally impossible with classical systems. At the time, however, quantum technologies were purely theoretical, and devices capable of creating, storing, and accurately measuring delicate quantum states simply did not exist. 

For decades, Wiesner’s idea remained a fascinating thought experiment. Today, the rise of functional quantum computers, advanced photonic systems, and reliable quantum communication networks is breathing new life into the concept, allowing researchers to explore practical applications of quantum money in ways that were once unimaginable.

A new battle for the digital throne is emerging as quantum money shifts from theory to possibility, challenging whether Bitcoin’s decentralised strength can hold its ground in a future shaped by quantum technology.

The no-cloning theorem: The physics that makes quantum money impossible to forge

At the heart of quantum money lies the no-cloning theorem, a cornerstone of quantum mechanics. The principle establishes that it is physically impossible to create an exact copy of an unknown quantum state. Any attempt to measure a quantum state inevitably alters it, meaning that copying or scanning a quantum banknote destroys the very information that ensures its authenticity. 

This unique property makes quantum money exceptionally secure: unlike blockchain, which relies on cryptographic algorithms and distributed consensus, quantum money derives its protection directly from the laws of physics. In theory, a quantum banknote cannot be counterfeited, even by an attacker with unlimited computing resources, which is why quantum money is considered one of the most promising approaches to unforgeable digital currency.
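For readers curious why copying is forbidden rather than merely difficult, the standard argument takes two lines of linear algebra. The sketch below is the generic textbook derivation of the no-cloning theorem, assuming a hypothetical universal copying machine U; it is not a construction from any particular quantum money proposal.

```latex
% Suppose a single unitary U could copy any unknown state onto a blank register:
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\psi\rangle \otimes \lvert\psi\rangle,
\qquad
U\bigl(\lvert\varphi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\varphi\rangle \otimes \lvert\varphi\rangle
% Unitaries preserve inner products, so comparing the two sides gives
\langle\psi\vert\varphi\rangle \;=\; \langle\psi\vert\varphi\rangle^{2}
\;\Longrightarrow\;
\langle\psi\vert\varphi\rangle \in \{0,\,1\}
% Only identical or mutually orthogonal states could ever be copied, so no device can
% clone an arbitrary unknown state such as a quantum banknote.
```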

How quantum money works in theory

Quantum money schemes are typically divided into two main types: private and public. 

In private quantum money systems, a central authority, such as a bank, creates quantum banknotes and remains the only entity capable of verifying them. Each note carries a classical serial number alongside a set of quantum states known solely to the issuer. The primary advantage of this approach is its absolute immunity to counterfeiting, as no one outside the issuing institution can replicate the banknote. However, such systems are fully centralised and rely entirely on the security and infrastructure of the issuing bank, which inherently limits scalability and accessibility.
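To illustrate how such a private scheme behaves, here is a minimal classical simulation in the spirit of Wiesner’s original proposal. It models only the measurement statistics (a matching basis returns the encoded bit, a mismatched basis returns a coin flip); the 16-qubit note length, the forger’s strategy and all function names are illustrative assumptions, not a faithful quantum implementation.

```python
import random

def mint_note(n_qubits=16):
    """Bank mints a note: a random basis ('+' or 'x') and bit per qubit, stored secretly."""
    return [(random.choice('+x'), random.randint(0, 1)) for _ in range(n_qubits)]

def measure(state, basis):
    """Matching bases return the encoded bit; mismatched bases give a random outcome."""
    prep_basis, bit = state
    return bit if basis == prep_basis else random.randint(0, 1)

def counterfeit(note_states):
    """A forger guesses a basis for each qubit, measures, and re-prepares what was observed."""
    forged = []
    for state in note_states:
        guess = random.choice('+x')
        forged.append((guess, measure(state, guess)))
    return forged

def bank_verifies(secret_record, presented_states):
    """The bank measures every qubit in the secretly stored basis and checks the bit."""
    return all(measure(state, basis) == bit
               for (basis, bit), state in zip(secret_record, presented_states))

record = mint_note()
genuine = list(record)  # the physical note carries exactly the minted states
print(bank_verifies(record, genuine))  # True: genuine notes always pass

# 10,000 independent forgery attempts: each qubit passes with probability 3/4,
# so a whole 16-qubit forgery passes with roughly (3/4)**16, about 1%.
fakes_passing = sum(bank_verifies(record, counterfeit(genuine)) for _ in range(10_000))
print(fakes_passing / 10_000)
```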

Public quantum money, by contrast, pursues a more ambitious goal: allowing anyone to verify a quantum banknote without consulting a central authority. Developing this level of decentralisation has proven exceptionally difficult. Numerous proposed schemes have been broken by researchers who have managed to extract information without destroying the quantum states. Despite these challenges, public quantum money remains a major focus of quantum cryptography research, with scientists actively pursuing secure and scalable methods for open verification. 

Beyond theoretical appeal, quantum money faces substantial practical hurdles. Quantum states are inherently fragile and susceptible to decoherence, meaning they can lose their information when interacting with the surrounding environment. 

Maintaining stable quantum states demands highly specialised and costly equipment, including photonic processors, quantum memory modules, and sophisticated quantum error-correction systems. Any error or loss could render a quantum banknote completely worthless, and no reliable method currently exists to store these states over long periods. In essence, the concept of quantum money is groundbreaking, yet real-world implementation requires technological advances that are not yet mature enough for mass adoption. 

Bitcoin solves the duplication problem differently

While quantum money relies on the laws of physics to prevent counterfeiting, Bitcoin tackles the duplication problem through cryptography and distributed consensus. Each transaction is verified across thousands of nodes, and SHA-256 hash functions secure the blockchain against double spending without the need for a central authority. 

Unlike elliptic curve cryptography, which could eventually be vulnerable to large-scale quantum attacks, SHA-256 has proven remarkably resilient; even quantum algorithms such as Grover’s offer only a marginal advantage, reducing the effective search space from 2²⁵⁶ to 2¹²⁸, which is still far beyond any realistic brute-force attempt. 
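To make those numbers tangible, the sketch below shows, in simplified form, the kind of double SHA-256 proof-of-work check that secures Bitcoin blocks. The header string, toy target and loop bound are illustrative assumptions; real block headers follow a fixed 80-byte binary layout and real difficulty targets are far stricter.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers twice with SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    """A header is valid proof-of-work if its hash, read as a 256-bit integer, is below the target."""
    return int.from_bytes(double_sha256(header), "big") < target

# Illustrative toy values, not real consensus parameters:
toy_target = 2**240  # the lower the target, the harder the search
header = b"previous-hash|merkle-root|timestamp|nonce=0"

# Miners iterate a nonce until the hash falls below the target. An attacker faces the
# same 2^256-sized output space, which Grover's algorithm would only shrink to roughly
# 2^128 evaluations.
for nonce in range(1_000_000):
    candidate = header.replace(b"nonce=0", b"nonce=%d" % nonce)
    if meets_target(candidate, toy_target):
        print("found nonce", nonce)
        break
```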

Bitcoin’s security does not hinge on unbreakable mathematics alone but on a combination of decentralisation, network verification, and robust cryptographic design. Many experts therefore consider Bitcoin effectively quantum-proof, with most of the dramatic threats predicted for quantum computers unlikely to materialise in practice. 

Software-based and globally accessible, Bitcoin operates independently of specialised hardware, allowing users to send, receive, and verify value anywhere in the world without the fragility and complexity inherent in quantum systems. Furthermore, the network can evolve to adopt post-quantum cryptographic algorithms, ensuring long-term resilience, making Bitcoin arguably the most battle-hardened digital financial instrument in existence. 

Could quantum money be a threat to Bitcoin?

In reality, quantum money and Bitcoin address entirely different challenges, meaning the former is unlikely to replace the latter. Bitcoin operates as a global, decentralised monetary network with established economic rules and governance, while quantum money represents a technological approach to issuing physically unforgeable tokens. Bitcoin is not designed to be physically unclonable; its strength lies in verifiability, decentralisation, and network-wide trust.

However, SHA-256, the hashing algorithm that underpins Bitcoin mining and block creation, remains highly resistant to quantum threats. Quantum computers achieve only a quadratic speed-up through Grover’s algorithm, which is insufficient to break SHA-256 in practical terms. Bitcoin also retains the ability to adopt post-quantum cryptographic standards as they mature, whereas quantum money is limited by rigid physical constraints that are far harder to update.

Quantum money also remains too fragile, complex, and costly for widespread use. Its realistic applications are limited to state institutions, military networks, or highly secure financial environments rather than everyday payments. Bitcoin, by contrast, already benefits from extensive global infrastructure, strong market adoption, and deep liquidity, making it far more practical for daily transactions and long-term digital value transfer. 

Where quantum money and blockchain could coexist

Although fundamentally different, quantum money and blockchain technologies have the potential to complement one another in meaningful ways. Quantum key distribution could strengthen the security of blockchain networks by protecting communication channels from advanced attacks, while quantum-generated randomness may enhance cryptographic protocols used in decentralised systems. 

Researchers have also explored the idea of using ‘quantum tokens’ to provide an additional privacy layer within specialised blockchain applications. Both technologies ultimately aim to deliver secure and verifiable forms of digital value. Their coexistence may offer the most resilient future framework for digital finance, combining the physics-based protection of quantum money with the decentralisation, transparency, and global reach of blockchain technology. 

Quantum physics meets blockchain for the future of secure currency

Quantum money remains a remarkable concept, originally decades ahead of its time, and now revived by advances in quantum computing and quantum communication. Although it promises theoretically unforgeable digital currency, its fragility, technical complexity, and demanding infrastructure make it impractical for large-scale use. 

Bitcoin, by contrast, stands as the most resilient and widely adopted model of decentralised digital money, supported by a mature global network and robust cryptographic foundations. 

Quantum money and Bitcoin stand as twin engines of a new digital finance era, where quantum physics is reshaping value creation, powering blockchain innovation, and driving next-generation fintech solutions for secure and resilient digital currency. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What the Cloudflare outage taught us: Tracing the outages that shaped the internet of today

The internet has become part of almost everything we do. It helps us work, stay in touch with friends and family, buy things, plan trips, and handle tasks that would have felt impossible until recently. Most people cannot imagine getting through the day without it.

But there is a hidden cost to all this convenience. Most of the time, online services run smoothly, with countless systems working together in the background. Every now and then, though, a key cog slips out of place.

When that happens, the effects can spread fast, taking down apps, websites, and even entire industries within minutes. These moments remind us how much we rely on digital services, and how quickly everything can unravel when something goes wrong. It raises an uncomfortable question. Is digital dependence worth the convenience, or are we building a house of cards that could collapse, pulling us back into reality?

Warning shots of the dot-com era and the infancy of cloud services

In its early years, the internet saw several major malfunctions that disrupted key online services. Incidents like the Morris worm in 1988, which crashed about 10 percent of all internet-connected systems, and the 1996 AOL outage that left six million users offline, revealed how unprepared the early infrastructure was for growing digital demand.

A decade later, the weaknesses were still clear. In 2007, Skype, then with over 270 million users, went down for nearly two days after a surge in logins triggered by a Windows update overwhelmed its network. Since video calls were still in their early days, the impact was not as severe, and most users simply waited it out, postponing chats with friends and family until the issue was fixed.

As the dot-com era faded and the 2010s began, the shift to cloud computing introduced a new kind of fragility. When Amazon’s EC2 and EBS systems in the US-East region went down in 2011, the outage took down services like Reddit, Quora, and IMDb for days, exposing how quickly failures in shared infrastructure can cascade.

A year later, GoDaddy’s DNS failure took millions of websites offline, while large-scale Gmail disruptions affected users around the world, early signs that the cloud’s growing influence came with increasingly high stakes.

By the mid-2010s, it was clear that the internet had evolved from a patchwork of standalone services to a heavily interconnected ecosystem. When cloud or DNS providers stumbled, their failures rippled simultaneously across countless platforms. The move to centralised infrastructure made development faster and more accessible, but it also marked the beginning of an era where a single glitch could shake the entire web.

Centralised infrastructure and the age of cascading failures

The late 2000s and early 2010s saw a rapid rise in internet use, with nearly 2 billion people worldwide online. As access grew, more businesses moved into the digital space, offering e-commerce, social platforms, and new forms of online entertainment to a quickly expanding audience.

With so much activity shifting online, the foundation beneath these services became increasingly important, and increasingly centralised, setting the stage for outages that could ripple far beyond a single website or app.

The next major hit came in 2016, when a massive DDoS attack crippled major websites across the USA and Europe. Platforms like Netflix, Reddit, Twitter, and CNN were suddenly unreachable, not because they were directly targeted, but because Dyn, a major DNS provider, had been overwhelmed.

The attack used the Mirai botnet malware to hijack hundreds of thousands of insecure IoT devices and flood Dyn’s servers with traffic. It was one of the clearest demonstrations yet that knocking out a single infrastructure provider could take down major parts of the internet in one stroke.

In 2017, another major outage occurred, with Amazon at the centre once again. On 28 February, the company’s Simple Storage Service (S3) went down for about 4 hours, disrupting access across a large part of the US-EAST-1 region. While investigating a slowdown in the billing system, an Amazon engineer accidentally entered a typo in a command, taking more servers offline than intended.

That small error was enough to knock out services like Slack, Quora, Coursera, Expedia and countless other websites that relied on S3 for storage or media delivery. The financial impact was substantial; S&P 500 companies alone were estimated to have lost roughly 150 million dollars during the outage.

Amazon quickly published a clear explanation and apology, but transparency could not undo the economic damage nor (yet another) sudden reminder that a single mistake in a centralised system could ripple across the entire web.

Outages in the roaring 2020s

The S3 incident made one thing clear. Outages were no longer just about a single platform going dark. As more services leaned on shared infrastructure, even small missteps could take down enormous parts of the internet. And this fragility did not stop at cloud storage.

Over the next few years, attention shifted to another layer of the online ecosystem: content delivery networks and edge providers that most people had never heard of but that nearly every website depended on.

The 2020s opened with one of the most memorable outages to date. On 4 October 2021, Facebook and its sister platforms, Instagram, WhatsApp, and Messenger, vanished from the internet for nearly 7 hours after a faulty BGP configuration effectively removed the company’s services from the global routing table.

Millions of users flocked to other platforms to vent their frustration, overwhelming Twitter, Telegram, Discord, and Signal’s servers and causing performance issues across the board. It was a rare moment when a single company’s outage sent measurable shockwaves across the entire social media ecosystem.

But what happens when outages hit industries far more essential than social media? In 2023, the Federal Aviation Administration was forced to delay more than 10,000 flights, the first nationwide grounding of air traffic since the aftermath of September 11.

A corrupted database file brought the agency’s Notice to Air Missions (NOTAM) system to a standstill, leaving pilots without critical safety updates and forcing the entire aviation network to pause. The incident sent airline stocks dipping and dealt another blow to public confidence, showing just how disruptive a single technical failure can be when it strikes at the heart of critical infrastructure.

Outages that defined 2025

The year 2025 saw an unprecedented wave of outages, with server overloads, software glitches and coding errors disrupting services across the globe. The Microsoft 365 suite outage in January, the Southwest Airlines and FAA synchronisation failure in April, and the Meta messaging blackout in July all stood out for their scale and impact.

But the most disruptive failures were still to come. In October, Amazon Web Services suffered a major outage in its US-East-1 region, knocking out everything from social apps to banking services and reminding the world that a fault in a single cloud region can ripple across thousands of platforms.

Just weeks later, the Cloudflare November outage became the defining digital breakdown of the year. A logic bug inside its bot management system triggered a cascading collapse that took down social networks, AI tools, gaming platforms, transit systems and countless everyday websites in minutes. It was the clearest sign yet that when core infrastructure falters, the impact is immediate, global and largely unavoidable.

And yet, we continue to place more weight on these shared foundations, trusting they will hold because they usually do. Every outage, whether caused by a typo, a corrupted file, or a misconfigured update, exposes how quickly things can fall apart when one key piece gives way.

Going forward, resilience needs to matter as much as innovation. That means reducing single points of failure, improving transparency, and designing systems that can fail without dragging everything down. The more clearly we see the fragility of the digital ecosystem, the better equipped we are to strengthen it.

Outages will keep happening, and no amount of engineering can promise perfect uptime. But acknowledging the cracks is the first step toward reinforcing what we’ve built — and making sure the next slipped cog does not bring the whole machine to a stop.

The smoke and mirrors of the digital infrastructure

The internet is far from destined to collapse, but resilience can no longer be an afterthought. Redundancy, decentralisation and smarter oversight need to be part of the discussion, not just for engineers, but for policymakers as well.

Outages do not just interrupt our routines. They reveal the systems we have quietly built our lives around. Each failure shows how deeply intertwined our digital world has become, and how fast everything can stop when a single piece gives way.

Will we learn enough from each one to build a digital ecosystem that can absorb the next shock instead of amplifying it? Only time will tell.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. Six omnibus packages have been announced in total.

The latest (no. 4) targets small mid-caps and digitalisation. Package no. 4 covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, with its first mention buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to this Omnibus debate: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of the EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Similar to the industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Therefore, given the varied opinions, it cannot be said what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications suggests otherwise: on examination, it is hard to dispute that the views of less privacy-friendly entities have served as a strong guiding path.

Leaked draft document main changes

The leaked draft introduces several core changes.

Those changes include new definitions of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and the GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights becoming conditional on the motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive, and only explicit statements similar to ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment is allowed whenever it is done by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI processing. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from situations where the rights and freedoms of data subjects are at ‘risk’ to those where they are at ‘high risk’.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change as it would clarify which PAs would automatically require a DPIA and which would not. The list would be updated every 3 years.

What should be noted here is that some controllers may not see their PA on this list and assume or argue that a DPIA is not required. Therefore, the language on this should make it clear that it is not a closed list.

Access requests denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject exploits the right of access by using that material against the controller, the controller may charge or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, an in-depth analysis by noyb can be accessed here.

The Commission’s updated version

As of 19 November, the Commission has published its digital omnibus proposal. Most of the amendments in the leaked draft have remained. One of the measures dropped is the new definition of sensitive data, which means that inferences can still amount to sensitive data.

However, the final document keeps three key changes that erode fundamental rights protections:

  • Changing the definition of personal data to be a subjective and narrow one;
  • An intertwining of the ePD and the GDPR, which also allows processing for aggregation and security purposes;
  • LI being relied upon as a legal basis for AI processing of personal data.

Still, positive changes remain:

  • A single entry point for EU data breach notifications. This is a welcome measure which streamlines reporting and eases some compliance obligations for EU businesses.
  • Another welcome measure is the whitelist/blacklist of processing activities which would or would not require a DPIA. The earlier note, that the wording should make clear it is not a closed list, still applies.

Overall, these two measures are examples of simplification measures with concrete benefits.

Now, the European Parliament has the task of dissecting this proposal and debating what to keep and what to reject. Some experts have suggested that this may take a minimum of one year given how many changes there are, but this is not certain.

We can also expect a revised version of the Commission’s proposal to be published, owing to the errors in language, numbering and article referencing that have been observed. This would not entail any changes to the content.

Final remarks

Simplification in itself is a good idea, and businesses need to have enough freedom to operate without being suffocated with red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms have already been raised after the previous Omnibus package on green due diligence obligations was scrapped. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are on 19 November, a date that could reshape not only the EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. However, the next evolution is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between human roles and machine learning, redefining what it means to fight and to feel in war.

Today’s ‘AI soldier’ is more than just enhanced. They are networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without the basic principles and mechanisms of accountability in warfare, states risk the very foundation of rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.  

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war, but comes with the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, video, audio, and more.

The development of these large language models has reshaped AI from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructures for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology that will, undoubtedly, shape our societies.

Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.
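As a concrete illustration of what ‘open weights’ means in practice, the sketch below loads a publicly released checkpoint with the Hugging Face transformers library and generates text locally. The model identifier and generation settings are assumptions for the example, and gated releases such as Llama 3 additionally require accepting the provider’s licence terms before download.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint name; any open-weights causal LM on the Hugging Face Hub
# can be swapped in here.
model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # downloads the weights locally

inputs = tokenizer("Open-weights models let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```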

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

Such a tension is not merely technical. It embodies two competing visions of knowledge production. One is oriented toward collective benefit and transparency, and the other toward commercial exclusivity and security of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications in diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems similar to what Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome of that has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.

Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

As such, open source LLMs represent a counterweight to Big Tech’s dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them, but also to how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious content, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.

They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.

The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We need to ask ourselves: if data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.

The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face a difficult task of regulating a technology that evolves faster than policy. For example, the EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on the source model itself but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.

Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, models shape collective understanding of truth, expression, and imagination for our societies.

Toward a hybrid future

The polarisation we are presenting here, between open and closed approaches, may be unsustainable in the long run. A hybrid model is emerging, where partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine accountability with safety and commercial viability with transparency.

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.

Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

So, in conclusion, the debate between open and closed source LLMs is not merely technical.

As we have already mentioned, it embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentive, yet risk monopolising a transformative resource so crucial in our quest for constant human progression.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Most transformative decade begins as Kurzweil’s AI vision unfolds

AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translation and real-time voice synthesis to medical diagnostics and language generation, today’s systems perform tasks once reserved for human cognition. For those watching closely, this shift feels less like a surprise and more like a milestone reached.

Ray Kurzweil, one of the most prominent futurists of the past half-century, predicted much of what is now unfolding. In 1999, his book The Age of Spiritual Machines laid a roadmap for how computers would grow exponentially in power and eventually match and surpass human capabilities. Over two decades later, many of his projections for the 2020s have materialised with unsettling accuracy.

The futurist who measured the future

Kurzweil’s work stands out not only for its ambition but for its precision. Rather than offering vague speculation, he produced a set of quantifiable predictions, 147 in total, with a claimed accuracy rate of over 85 percent. These ranged from the growth of mobile computing and cloud-based storage to real-time language translation and the emergence of AI companions.

Since 2012, he has worked at Google as Director of Engineering, contributing to the development of natural language understanding systems. He believes that exponential growth in computing power, driven by Moore’s Law and its successors, will eventually transform both our tools and our biology.

Reprogramming the body with code

One of Kurzweil’s most controversial but recurring ideas is that human ageing is, at its core, a software problem. He believes that by the early 2030s, advancements in biotechnology and nanomedicine could allow us to repair or even reverse cellular damage.

The logic is straightforward: if ageing results from accumulated biological errors, then precise intervention at the molecular level might prevent those errors or correct them in real time.

Some of these ideas are already being tested, though results remain preliminary. For now, claims about extending life remain speculative, but the research trend is real.

Kurzweil’s perspective places biology and computation on a converging path. His view is not that we will become machines, but that we may learn to edit ourselves with the same logic we use to program them.

The brain, extended

Another key milestone in Kurzweil’s roadmap is merging biological and digital intelligence. He envisions a future where nanorobots circulate through the bloodstream and connect our neurons directly to cloud-based systems. In this vision, the brain becomes a hybrid processor, part organic, part synthetic.

By the mid-2030s, he predicts we may no longer rely solely on internal memory or individual thought. Instead, we may access external information, knowledge, and computation in real time. Some current projects, such as brain–computer interfaces and neuroprosthetics, point in this direction, but remain in early stages of development.

Kurzweil frames this not as a loss of humanity but as an expansion of its potential.

The singularity hypothesis

At the centre of Kurzweil’s long-term vision lies the idea of a technological singularity. By 2045, he believes, AI will surpass the combined intelligence of all humans, leading to a phase shift in human evolution. This moment, often misunderstood, is not a single event but a threshold after which change accelerates beyond human comprehension.

The singularity, in Kurzweil’s view, does not erase humanity. Instead, it integrates us into a system where biology no longer limits intelligence. The implications are vast, from ethics and identity to access and inequality. Who participates in this future, and who is left out, remains an open question.

Between vision and verification

Critics often label Kurzweil’s forecasts as too optimistic or detached from scientific constraints. Some argue that while trends may be exponential, progress in medicine, cognition, and consciousness cannot be compressed into neat timelines. Others worry about the philosophical consequences of merging with machines.

Still, it is difficult to ignore the number of predictions that have already come true. Kurzweil’s strength lies not in certainty, but in pattern recognition. His work forces a reckoning with what might happen if the current pace of change continues unchecked.

Whether or not we reach the singularity by 2045, the present moment already feels like the future he described.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is the world ready for AI to rule justice?

AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As technology reshapes the way we work, communicate, and make decisions, its potential to transform legal processes is becoming increasingly difficult to ignore. The justice system, however, is one of the most ethically sensitive and morally demanding fields in existence. 

For AI to play a meaningful role in it, it must go beyond algorithms and data. It needs to understand the principles of fairness, context, and morality that guide every legal judgement. And perhaps more challengingly, it must do so within a system that has long been deeply traditional and conservative, one that values precedent and human reasoning above all else. Yet, from courts to prosecutors to lawyers, AI promises speed, efficiency, and smarter decision-making, but can it ever truly replace the human touch?

AI is reshaping the justice system with unprecedented efficiency, but true progress depends on whether humanity is ready to balance innovation with responsibility and ethical judgement.

AI in courts: Smarter administration, not robot judges… yet

Courts across the world are drowning in paperwork, delays, and endless procedural tasks, challenges that are well within AI’s capacity to solve efficiently. From classifying cases and managing documentation to identifying urgent filings and analysing precedents, AI systems are beginning to serve as silent assistants within courtrooms. 

The German judiciary, for example, has already shown what this looks like in practice. AI tools such as OLGA and Frauke have helped categorise thousands of cases, extract key facts, and even draft standardised judgments in air passenger rights claims, cutting processing times by more than half. For a system long burdened by backlogs, such efficiency is revolutionary.
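
The triage step such tools perform can be illustrated with a deliberately simple sketch; this is not the OLGA or Frauke software, only a toy example of routing incoming filings by detected claim type.

```python
# A minimal sketch of automated case triage, in the spirit of the German
# pilots described above. NOT the actual OLGA or Frauke systems; it only
# illustrates the idea of categorising filings before a human reviews them.

import re

CASE_RULES = {
    "air_passenger_rights": re.compile(r"flight|delay|cancellation|boarding", re.I),
    "rental_dispute":       re.compile(r"rent|lease|landlord|tenant", re.I),
    "consumer_credit":      re.compile(r"loan|interest rate|credit agreement", re.I),
}

def categorise_filing(text: str) -> str:
    """Assign a filing to the first matching category, else flag it for a human."""
    for label, pattern in CASE_RULES.items():
        if pattern.search(text):
            return label
    return "manual_review"

print(categorise_filing("Claim for compensation after a five-hour flight delay"))
# -> air_passenger_rights
```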

Still, the conversation goes far beyond convenience. Justice is not a production line; it is built on fairness, empathy, and the capacity to interpret human intent. Even the most advanced algorithm cannot grasp the nuance of remorse, the context of equality, or the moral complexity behind each ruling. The question is whether societies are ready to trust machine intelligence to participate in moral reasoning.

The final, almost utopian scenario would be a world where AI itself serves as a judge who is unbiased, tireless, and immune to human error or emotion. Yet even as this vision fascinates technologists, legal experts across Europe, including the EU Commission and the OECD, stress that such a future must remain purely theoretical. Human judges, they argue, must always stay at the heart of justice: AI may assist in the process, but it must never be the one to decide it. The idea is not to replace judges but to help them navigate the overwhelming sea of information that modern justice generates.

Courts may soon become smarter, but true justice still depends on something no algorithm can replicate: the human conscience. 

AI for prosecutors: Investigating with superhuman efficiency

Prosecutors today are also sifting through thousands of documents, recordings, and messages for every major case. AI can act as a powerful investigative partner, highlighting connections, spotting anomalies, and bringing clarity to complex cases that would take humans weeks to unravel. 

Especially in criminal law, cases can involve terabytes of documents, evidence that humans can hardly process within tight legal deadlines or between hearings, yet must be reviewed thoroughly. AI tools can sift through this massive data, flag inconsistencies, detect hidden links between suspects, and reveal patterns that might otherwise remain buried. Subtle details that might escape the human eye can be detected by AI, making it an invaluable ally in uncovering the full picture of a case. By handling these tasks at superhuman speed, AI could also help accelerate the notoriously slow pace of legal proceedings, giving prosecutors more time to focus on strategy and courtroom preparation. 
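
A toy sketch can illustrate one of these tasks, finding which suspects are mentioned together across evidence documents; real investigative systems are far more sophisticated, and the documents and names below are invented.

```python
# Illustrative only: a toy version of "finding hidden links between suspects"
# by counting which names co-occur in the same evidence documents.

from collections import Counter
from itertools import combinations

def co_mentions(documents: list[str], names: list[str]) -> Counter:
    """Count how often each pair of names appears in the same document."""
    pair_counts: Counter = Counter()
    for doc in documents:
        present = [n for n in names if n.lower() in doc.lower()]
        for pair in combinations(sorted(present), 2):
            pair_counts[pair] += 1
    return pair_counts

docs = [
    "Transfer authorised by A. Novak and countersigned by B. Horvat.",
    "Meeting notes: B. Horvat discussed the shipment with C. Riva.",
    "Invoice issued to C. Riva; copy sent to A. Novak.",
]
print(co_mentions(docs, ["A. Novak", "B. Horvat", "C. Riva"]).most_common())
```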

More advanced systems are already being tested in Europe and the US, capable of generating detailed case summaries and predicting which evidence is most likely to hold up in court. Some experimental tools can even evaluate witness credibility based on linguistic cues and inconsistencies in testimony. In this sense, AI becomes a strategic partner, guiding prosecutors toward stronger, more coherent arguments. 

AI for lawyers: Turning routine into opportunity

The adoption of AI and its capabilities might reach their maximum when it comes to the work of lawyers, where transforming information into insight and strategy is at the core of the profession. AI can take over repetitive tasks: reviewing contracts, drafting documents, or scanning case files, freeing lawyers to focus on the work that AI cannot replace, such as strategic thinking, creative problem-solving, and providing personalised client support. 

AI can be incredibly useful for analysing publicly available cases, helping lawyers see how similar situations have been handled, identify potential legal opportunities, and craft stronger, more informed arguments. By recognising patterns across multiple cases, it can suggest creative questions for witnesses and suspects, highlight gaps in the evidence, and even propose potential defence strategies. 

AI also transforms client communication. Chatbots and virtual assistants can manage routine queries, schedule meetings, and provide concise updates, giving lawyers more time to understand clients’ needs and build stronger relationships. By handling the mundane, AI allows lawyers to spend their energy on reasoning, negotiation, and advocacy.

Balancing promise with responsibility

AI is transforming the way courts, prosecutors, and lawyers operate, but its adoption is far from straightforward. While it can make work significantly easier, the technology also carries risks that legal professionals cannot ignore. Historical bias in data can shape AI outputs, potentially reinforcing unfair patterns if humans fail to oversee its use. Similarly, sensitive client information must be protected at all costs, making data privacy a non-negotiable responsibility. 

Training and education are therefore crucial. It is essential to understand not only what AI can do but also its limits: how to interpret suggestions, check for hidden biases, and decide when human judgement must prevail. Without this understanding, AI risks being a tool that misleads rather than empowers.

The promise of AI lies in its ability to free humans from repetitive work, allowing professionals to focus on higher-value tasks. But its power is conditional: efficiency and insight mean little without the ethical compass of the human professionals guiding it.

Ultimately, the justice system is more than a process. It is about fairness, empathy, and moral reasoning. AI can assist, streamline, and illuminate, but the responsibility for decisions, for justice itself, remains squarely with humans. In the end, the true measure of AI’s success in law will be how it enhances human judgement, not how it replaces it.

So, is the world ready for AI to rule justice? The answer remains clear. While AI can transform how justice is delivered, the human mind, heart, and ethical responsibility must remain at the centre. AI may guide the way, but it cannot and should not hold the gavel.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The AI gold rush where the miners are broke

The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.

Yet despite these bloated investments, none of the leading players have managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?

Where the AI money flows

Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.

The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.

The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.

OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.

Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators in 2026 show how funding moves into capacity rather than near-term earnings.

Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.

AI makes money – it’s just not enough (yet)

Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.

No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.

To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.

OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.

Microsoft has also benefited from the surge in AI adoption. Azure grew 39 percent year-over-year in Q4 FY2025, reaching 29.9 billion dollars. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.

NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars, yet overall industry profits still lag behind massive investment levels due to maintenance costs and a mismatch between investment and earnings.

Why AI projects crash and burn

While the major AI players earn enough to offset at least some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.

GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
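
A back-of-envelope calculation shows how quickly such bills accumulate. Only the roughly 30,000-dollar H100 price comes from the figures above; the cluster size, power draw, and electricity price are illustrative assumptions.

```python
# Back-of-envelope sketch of why serving costs escalate. Only the ~USD 30,000
# H100 price comes from the text above; every other figure is an assumption
# chosen purely for illustration.

GPU_UNIT_PRICE = 30_000          # USD per H100-class accelerator (from the article)
GPUS = 1_000                     # assumed cluster size
POWER_DRAW_KW_PER_GPU = 0.7      # assumed average draw per GPU, in kW
ELECTRICITY_USD_PER_KWH = 0.10   # assumed industrial electricity price
HOURS_PER_YEAR = 24 * 365

hardware_capex = GPUS * GPU_UNIT_PRICE
annual_power_cost = GPUS * POWER_DRAW_KW_PER_GPU * HOURS_PER_YEAR * ELECTRICITY_USD_PER_KWH

print(f"Hardware outlay:   ${hardware_capex:,.0f}")     # $30,000,000
print(f"Yearly power bill: ${annual_power_cost:,.0f}")  # ~$613,200
# And that is before facilities, cooling, networking, staff, and security.
```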

Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.

The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.

Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.

All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.

How AI can become profitable

While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.

Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
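
As a minimal sketch of two of these levers, response caching and routing simple queries to a cheaper model, the example below uses stand-in functions in place of real models; the routing rule and cache size are illustrative assumptions.

```python
# A minimal sketch of two cost levers named above: caching repeated queries
# and routing easy ones to a cheaper model. The routing heuristic, cache
# size, and model stand-ins are illustrative assumptions.

from functools import lru_cache

def call_small_model(prompt: str) -> str:
    return f"[small model] answer to: {prompt}"     # stand-in for a cheap model

def call_large_model(prompt: str) -> str:
    return f"[large model] answer to: {prompt}"     # stand-in for an expensive model

@lru_cache(maxsize=10_000)                          # identical prompts hit the cache
def answer(prompt: str) -> str:
    # Crude routing rule: short, simple questions go to the small model.
    if len(prompt.split()) < 20 and "?" in prompt:
        return call_small_model(prompt)
    return call_large_model(prompt)

print(answer("What time zone is Geneva in?"))       # routed to the small model
print(answer("What time zone is Geneva in?"))       # second call served from cache
```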

Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.

Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.

Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.

Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.

Charting the future of AI profitability

The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.

That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.

Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Vast numbers of children around the world hold smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has signalled its willingness to explore outright bans on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.

Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.

Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Across the whole world, similar conversations are gaining traction. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.

Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although some of these measures face First Amendment challenges that have drawn the attention of the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to platforms, including livestreaming and adult content services.

They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

However, the legislation also does not impose a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 one hour, and those under eight 40 minutes.

It is important to add that parents could opt out of the restrictions if they wish.

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors can create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.

At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.
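
One hedged illustration of the idea: a platform accepts a signed 'over 16' attestation from a trusted verifier without ever seeing a birthdate or identity document. The shared-secret scheme below is a simplification for demonstration; real deployments would rely on verifiable credentials or public-key signatures.

```python
# Illustrative sketch of a privacy-preserving age check: the platform sees
# only a signed "over 16" attestation from a trusted verifier, never the
# user's birthdate or ID. The shared-secret HMAC used here is a deliberate
# simplification of what real systems would do.

import hashlib
import hmac

ISSUER_SECRET = b"demo-secret-shared-with-platform"   # placeholder key material

def issue_attestation(user_token: str) -> str:
    """Run by the age-verification provider after checking an ID privately."""
    claim = f"{user_token}:over16"
    return hmac.new(ISSUER_SECRET, claim.encode(), hashlib.sha256).hexdigest()

def platform_accepts(user_token: str, attestation: str) -> bool:
    """Run by the platform: verifies the claim without learning personal data."""
    expected = issue_attestation(user_token)
    return hmac.compare_digest(expected, attestation)

token = "anonymous-session-42"
print(platform_accepts(token, issue_attestation(token)))   # True
```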

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.

In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, that is our duty as human beings and responsible citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!