The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.
Yet despite this flood of capital, none of the leading players has managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?
Where the AI money flows
Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.
The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.
The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.
OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.
Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators starting in 2026 show how funding moves into capacity rather than near-term earnings.
Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.
AI makes money – it’s just not enough (yet)
Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.
No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.
To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.
OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.
Microsoft has also benefited from the surge in AI adoption. Azure grew 39 percent year-over-year in Q4 FY2025, reaching 29.9 billion dollars. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.
NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars, yet across the wider industry, profits still lag far behind investment levels, weighed down by running costs and the persistent gap between spending and earnings.
Why AI projects crash and burn
While the biggest AI players earn enough to offset some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.
GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.
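A back-of-envelope calculation makes the point; every figure below is an illustrative assumption, not a reported cost:

```python
# Rough cluster cost estimate. All figures are illustrative assumptions.
gpus = 1_000                      # a modest inference cluster
gpu_unit_price = 30_000           # USD per H100-class accelerator (approximate)
gpu_power_kw = 0.7                # ~700 W per GPU under load (assumed)
pue = 1.4                         # data-centre power usage effectiveness (assumed)
electricity_price = 0.10          # USD per kWh (assumed)
hours_per_year = 24 * 365

capex = gpus * gpu_unit_price
energy_cost = gpus * gpu_power_kw * pue * hours_per_year * electricity_price

print(f"Hardware outlay:   ${capex:,.0f}")        # $30,000,000
print(f"Annual power bill: ${energy_cost:,.0f}")  # ≈ $858,000, before staff, cooling or security
```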
Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.
The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.
Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.
All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.
How AI can become profitable
While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.
Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
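As a minimal sketch of what caching and routing to a smaller model can look like in practice (the model names, prices, and routing heuristic below are all hypothetical):

```python
# Minimal sketch of cost-aware query handling: answer from cache when possible,
# and send simple queries to a cheaper model. All names and prices are made up.
from functools import lru_cache

PRICES = {"small-model": 0.0002, "large-model": 0.01}  # assumed USD per request

def is_simple(query: str) -> bool:
    # Naive heuristic; production routers use trained classifiers or scoring models.
    return len(query.split()) < 20

@lru_cache(maxsize=10_000)          # repeated identical queries are answered once
def answer(query: str) -> tuple[str, float]:
    model = "small-model" if is_simple(query) else "large-model"
    response = f"[{model}] answer to: {query}"   # placeholder for a real API call
    return response, PRICES[model]

reply, cost = answer("What time zone is Geneva in?")
print(reply, f"(cost ≈ ${cost})")
```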
Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.
Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.
Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
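A short worked example, built on assumed per-request figures, shows how the cache hit rate alone can tip a service from loss to profit:

```python
# Illustrative unit economics: how a higher cache hit rate lowers blended cost.
# Every number here is an assumption for the sake of the example.
cost_uncached = 0.01        # USD per request served by the model
cost_cached = 0.0005        # USD per request served from cache
revenue_per_request = 0.008

for hit_rate in (0.2, 0.5, 0.8):
    blended = hit_rate * cost_cached + (1 - hit_rate) * cost_uncached
    margin = revenue_per_request - blended
    print(f"hit rate {hit_rate:.0%}: cost ${blended:.4f}/req, margin ${margin:+.4f}/req")
```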
Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.
Charting the future of AI profitability
The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.
That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Finding equilibrium in children’s use of social media
Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.
Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.
The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.
Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.
The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.
Why social media is both a lifeline and a threat for youth
The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.
Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.
On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.
Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.
The scale of the issue is enormous. Hundreds of millions of children around the world hold smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.
Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.
The Australian approach
Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.
Most notably, Australia has signalled its willingness to explore outright bans on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.
Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.
Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.
Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.
Greece’s strong position
Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.
Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.
Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.
Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.
Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.
Across the whole world, similar conversations are gaining traction. Let’s review some of them.
National initiatives across the globe
UK
The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.
Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.
EU
The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.
However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.
Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.
The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.
Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.
In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.
USA
The USA lacks a single nationwide law, but several states are acting independently, although some of these measures face First Amendment challenges that have reached the Supreme Court.
Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.
The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.
Canada
The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.
Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.
The framework would apply to platforms, including livestreaming and adult content services.
They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.
However, the legislation likewise stops short of a complete social media ban for minors.
China
China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.
The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.
Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 just one hour, and those under eight years old only eight minutes.
It is important to add that parents could opt out of the restrictions if they wish.
India
In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.
The draft rules would require parental consent before minors can create accounts on social media, e-commerce, or gaming platforms.
Verification would rely on identity documents or age data already held by providers.
Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.
Singapore
PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.
In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.
Balancing rights and risks
Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.
Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.
At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.
Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.
Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.
The future of social media and child protection
Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.
What is clear is that the status quo is no longer acceptable to policymakers or to many parents.
Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.
Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.
If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.
In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.
Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.
What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.
The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.
In the end, protecting them is our duty as human beings and responsible citizens.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The University of Pennsylvania’s engineering team has made a breakthrough that could bring the quantum internet much closer to practical use. Researchers have demonstrated that quantum and classical networks can share the same backbone by transmitting quantum signals over standard fibre optic infrastructure using the same Internet Protocol (IP) that powers today’s web.
Their silicon photonics ‘Q-Chip’ achieved over 97% fidelity in real-world field tests, showing that the quantum internet does not necessarily require building entirely new networks from scratch.
That result, while highly technical, has far-reaching implications. Beyond physics and computer science, it raises urgent questions for governance, national infrastructures, and the future of digital societies.
Quantum signals were transmitted as packets with classical headers readable by conventional routers, while the quantum information itself remained intact.
Noise management
The chip corrected disturbances by analysing the classical header without disturbing the quantum payload. Notably, the test ran on a Verizon fibre link between two buildings, not just in a controlled lab.
That fact makes the experiment different from earlier advances focusing mainly on quantum key distribution (QKD) or specialised lab setups. It points toward a future in which quantum networking and classical internet coexist and are managed through similar protocols.
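Conceptually, this can be pictured as a packet whose classical header flows through ordinary routing logic while the quantum payload is never read. The sketch below is purely illustrative and does not reproduce the Penn team’s actual Q-Chip design or packet format:

```python
# Conceptual illustration only: a hybrid packet whose classical header is routable
# while the quantum payload stays untouched. Not a real quantum networking protocol.
from dataclasses import dataclass

@dataclass
class HybridPacket:
    src: str                  # classical header fields an ordinary router can read
    dst: str
    calibration: float        # classical pilot data used to estimate link noise
    quantum_payload: object   # opaque: the qubit state is never measured in transit

def route(packet: HybridPacket) -> str:
    # A conventional router inspects only the classical header.
    return f"forwarding to {packet.dst} (payload untouched)"

def noise_correction(packet: HybridPacket) -> float:
    # Disturbance is estimated from the classical part alone, so the quantum
    # payload is never measured (and therefore never destroyed) along the way.
    return packet.calibration * 0.5   # placeholder correction factor

pkt = HybridPacket(src="lab-A", dst="lab-B", calibration=0.12, quantum_payload="|psi>")
print(route(pkt), "| correction:", noise_correction(pkt))
```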
Implications for governance and society
Government administration
Governments increasingly rely on digital infrastructure to deliver services, store sensitive records, and conduct diplomacy. The quantum internet could provide secure e-government services resistant to espionage or tampering; protected digital IDs and voting systems that reinforce democratic integrity; and classified communication channels that even future quantum computers cannot decrypt.
That positions quantum networking as a sovereignty tool, not just a scientific advance.
Healthcare
Health systems are frequent targets of cyberattacks. Quantum-secured communication could protect patient records and telemedicine platforms, enable safe data sharing between hospitals and research centres, and support quantum-assisted drug discovery and personalised medicine via distributed quantum computing.
Here, the technology directly impacts citizens’ trust in digital health.
Critical infrastructure and IT systems
National infrastructures, such as energy grids, financial networks, and transport systems, could gain resilience from quantum-secured communication layers.
In addition, quantum-enhanced sensing could provide more reliable navigation independent of GPS, enable early-warning systems for earthquakes or natural disasters, and strengthen resilience against cyber-sabotage of strategic assets.
Citizens and everyday services
For ordinary users, the quantum internet will first be invisible. Their emails, bank transactions, and medical consultations will simply become harder to hack.
Over time, however, quantum-secured platforms may become a market differentiator for banks, telecoms, and healthcare providers.
Citizens and universities may gain remote access to quantum computing resources, democratising advanced research and innovation.
Building a quantum-ready society
The Penn experiment matters because it shows that quantum internet infrastructure can evolve on top of existing systems. For policymakers, this raises several urgent points.
Standardisation
International bodies (IETF, ITU-T, ETSI) will need to define packet structures, error correction, and interoperability rules for quantum-classical networks.
Strategic investment
Countries must decide whether to invest early in pilot testbeds (urban campuses, healthcare systems, or government services).
Cybersecurity planning
Quantum internet deployment should be aligned with the post-quantum cryptography transition, ensuring coherence between classical and quantum security measures.
Public trust
As with any critical infrastructure, clear communication will be needed to explain how quantum-secured systems benefit citizens and why governments are investing in them.
Key takeaways for policymakers
Quantum internet is governance, not just science. The Penn breakthrough shows that quantum signals can run on today’s networks, shifting the conversation from pure research to infrastructure and policy planning.
Governments should treat the quantum internet as a strategic asset, protecting national administrations, elections, and critical services from future cyber threats.
Early adoption in health systems could secure patient data, telemedicine, and medical research, strengthening public trust in digital services.
International cooperation (IETF, ITU-T, ETSI) will be needed to define protocols, interoperability, and security frameworks before large-scale rollouts.
Policymakers should align quantum network deployment with the global transition to post-quantum encryption, ensuring coherence across digital security strategies.
Governments could start with small-scale testbeds (smart cities, e-government nodes, or healthcare networks) to build expertise and shape standards from within.
Why does it matter?
The University of Pennsylvania’s ‘Q-Chip’ is a proof-of-concept that quantum and classical networks can speak the same language. While technical challenges remain, especially around scaling and quantum repeaters, the political and societal questions can no longer be postponed.
The quantum internet is not just a scientific project. It is emerging as a strategic infrastructure for the digital state of the future. Governments, regulators, and international organisations must begin preparing today so that tomorrow’s networks deliver not only speed and efficiency but also trust, sovereignty, and resilience.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of these two forms of intelligence. What once felt like purely amusing fiction now resonates differently, taking on a troubling shape and even a name: AI psychosis.
When it was released in 2013, the film Her seemed to depict a world far removed from reality, an almost unimaginable scenario of human-AI intimacy. In the story, a man falls in love with an AI operating system, blurring the line between companionship and emotional dependence. Without giving too much away, the film’s unsettling conclusion serves as a cautionary lens. It hints at the psychological risks that can emerge when the boundary between human and machine becomes distorted, a phenomenon now being observed in real life under a new term in psychology.
The cinematic scenario, once considered imaginative, now resonates as technology evolves. AI chatbots and generative companions can hold lifelike conversations, respond with apparent empathy, and mimic an understanding of human emotions. We are witnessing a new kind of unusually intense emotional connection forming between people and AI, with more than 70% of US teens already using chatbots for companionship and half engaging with them regularly.
The newly observed mental health concern raises questions about how these systems influence our feelings, behaviours, and relationships in an era marked by isolation and loneliness. How might such AI interactions affect people, particularly children or those already vulnerable to mental health challenges?
AI psychosis: myth or reality?
It is crucial to clarify that AI psychosis is not an official medical diagnosis. Rather, it describes the amplification of delusional thinking facilitated by AI interactions. Yet, it deserves the full attention and treatment focus of today’s psychologists, given its growing impact. It is a real phenomenon that cannot be ignored.
At its core, AI psychosis refers to a condition in which vulnerable individuals begin to misinterpret machine responses as evidence of consciousness, empathy, or even divine authority. Symptoms reported in documented cases include grandiose beliefs, attachment-based delusions, obsessive over-engagement with chatbots, social withdrawal, insomnia, and hallucinations. Some users have gone so far as to develop romantic or spiritual attachments, convinced that the AI truly understands them or holds secret knowledge.
Clinicians also warn of cognitive dissonance: users may intellectually know that AI lacks emotions, yet still respond as though interacting with another human being. The mismatch between reality and perception can fuel paranoia, strengthen delusions, and in extreme cases lead to medication discontinuation, suicidal ideation, or violent behaviour. Adolescents appear especially susceptible, given that their emotional and social frameworks are still developing.
Ultimately, AI psychosis does not mean that AI itself causes psychosis. Instead, it acts as a mirror and magnifier, reinforcing distorted thinking patterns in those already predisposed to psychological vulnerabilities.
The dark side: Emotional bonds without reciprocity
Humans are naturally wired to seek connection, drawing comfort and stability from social bonds that help navigate complex emotional landscapes, the fundamental impulse that has ensured the survival of the human race. From infancy, we rely on responsive relationships to learn empathy, trust, and communication, the skills essential for both personal and societal well-being. Yet, in today’s era of loneliness, technology has transformed how we maintain these relationships.
As AI chatbots and generative companions grow increasingly sophisticated, they are beginning to occupy roles traditionally reserved for human interaction, simulating empathy and understanding despite lacking consciousness or moral awareness. With AI now widely accessible, users often communicate with it as effortlessly as they would with friends, blending curiosity, professional needs, or the desire for companionship into these interactions. Over time, this illusion of connection can prompt individuals to overvalue AI-based relationships, subtly diminishing engagement with real people and reshaping social behaviours and emotional expectations.
These one-sided bonds raise profound concerns about the dark side of AI companionship, threatening the depth and authenticity of human relationships. In a world where emotional support can now be summoned with a tap, genuine social cohesion is becoming increasingly fragile.
Children and teenagers at risk from AI
Children and teenagers are among the most vulnerable groups in the AI era. Their heightened need for social interaction and emotional connection, combined with still-developing cognitive and emotional skills, makes them particularly susceptible. Young users face greater difficulty distinguishing authentic human empathy from the simulated responses of AI chatbots and generative companions, creating fertile ground for emotional reliance and attachment.
AI toys and apps have become increasingly widespread, making technology an unfiltered presence in children’s lives. We still do not fully understand the long-term effects, though early studies are beginning to explore how these interactions may influence cognitive, emotional, and social development. From smartphones to home assistants, children and youth are spending growing amounts of time interacting with AI, often in isolation from peers or family. These digital companions are more than just games: they are beginning to shape children’s social and emotional development in ways we are not yet fully aware of.
The rising prevalence of AI in children’s daily experiences has prompted major AI companies to recognise the potential dangers. Some firms have started implementing parental advisory systems, usage limits, and content monitoring to mitigate the risks for younger users. However, these measures are still inconsistent, and the pace at which AI becomes available to children often outstrips safeguards.
The hidden risks of AI to adult mental health
Even adults with strong social networks face growing challenges in managing mental health and are not immune to the risks posed by modern technology. In today’s fast-paced world of constant digital stimulation and daily pressures, the demand for psychotherapy is higher than ever. Generative AI and chatbots are increasingly filling this gap, often in ways they were never intended.
The ease, responsiveness, and lifelike interactions of AI can make human relationships feel slower or less rewarding, with some turning to AI instead of seeking professional therapeutic care. AI’s free and widely accessible nature tempts many to rely on digital companions for emotional support, misusing technology designed to assist rather than replace human guidance.
Overreliance on AI can distort perceptions of empathy, trust, and social reciprocity, contributing to social isolation, emotional dependence, and worsening pre-existing mental health vulnerabilities. There have been documented cases of adults developing romantic feelings for AI in the absence of real-life intimacy.
Left unchecked, these dynamics may trigger symptoms linked to AI psychosis, representing a growing societal concern. Awareness, responsible AI design, and regulatory guidance are essential to ensure digital companions complement, rather than replace, human connection and mental health support, safeguarding both individuals and broader social cohesion.
Urgent call for AI safeguards and regulatory action
Alarmingly, extreme cases have emerged, highlighting the profound risks AI poses to its users. In one tragic instance, a teenager reportedly took his life after prolonged and distressing interactions with an AI chatbot, a case that has since triggered legal proceedings and drawn widespread attention to the psychological impact of generative AI on youth. Similar reports of severe anxiety, depression, and emotional dysregulation linked to prolonged AI use underline that these digital companions can have real-life consequences for vulnerable minds.
Such incidents have intensified calls for stricter regulatory frameworks to safeguard children and teenagers. Across Europe, governments are beginning to respond: Italy, for example, has recently tightened access to AI platforms for minors under 14, mandating explicit parental consent before use. These legislative developments reflect the growing recognition that AI is no longer just a technological novelty but directly intersects with our welfare, mental health, and social development.
As AI continues to permeate every corner of people’s daily lives, society faces a critical challenge: ensuring that technology complements rather than replaces human interaction. Cases of AI-linked distress serve as stark reminders that legislative safeguards, parental involvement, and psychological guidance are no longer optional but urgent necessities to protect a generation growing up in the era of AI.
Towards a safer human-AI relationship
As humans increasingly form emotional connections with AI, the challenge is no longer theoretical but is unfolding in real time. Generative AI and chatbots are rapidly integrating into everyday life, shaping the way we communicate, seek comfort, and manage emotions. Yet despite their widespread use, society still lacks a full understanding of the psychological consequences, leaving both young people and adults at risk of AI-induced psychosis and the growing emotional dependence on digital companions.
Experts emphasise the urgent need for AI psychoeducation, responsible design, and regulatory frameworks to guide safe AI-human interaction. Overreliance on digital companions can distort empathy, social reciprocity, and emotional regulation, the core challenges of interacting with AI. Awareness is critical because recognising the limits of AI, prioritising real human connection, and fostering critical engagement with technology can prevent the erosion of mental resilience and social skills.
Even if AI feels like ‘old news’ due to its ubiquity, it remains a rapidly evolving technology we do not yet fully understand and cannot yet properly shield ourselves from. The real threat is not the sci-fi vision of AI ruling the world and dominating humanity, but the subtle, everyday psychological shifts it imposes, altering how we think, feel, and relate to one another. It remains essential to safeguard the emotional health, social cohesion, and mental resilience of people adapting to a world increasingly structured around artificial minds.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nepal’s sequence of social media ban, backlash, reversal, and political rupture has told an unexpected digital governance tale. The on-the-ground reality: a clash between a fast-evolving regulatory push and a hyper-networked youth cohort that treats connectivity as livelihood, classroom, and public square.
The trigger: A registration ultimatum meets a hyper-online society
The ban didn’t arrive from nowhere. Nepal has been building toward platform licensing since late 2023, when the government issued the Social Media Management Directive 2080 requiring platforms to register with the Ministry of Communication and Information Technology (MoCIT), designate a local contact, and comply with expedited takedown and cooperation rules. In early 2025, the government tabled a draft Social Media Bill 2081 in the National Assembly to convert that directive into statute. International legal reviews, including a UNESCO-supported assessment in March 2025, praised the goal of accountability but warned that vague definitions, sweeping content-removal powers and weak independence could chill lawful speech.
Why did the order provoke such a strong reaction? Consider the baseline: Nepal had about 14.3 million social-media user identities at the start of 2025, roughly 48% of the population, and internet use around 56%. A society in which half the country’s people (and a significantly larger share of its urban youth) rely on social apps for news, school, side-hustles, remittances and family ties is a society in which platform switches are not merely lifestyle choices; they’re digital infrastructure. The ‘generation gap’ is also key to understanding what happened next.
The movement: Gen Z logistics in a blackout world
What made Nepal’s youth mobilisation unusual wasn’t only its size and adaptability, but also the speed and digital literacy with which organisers navigated today’s digital infrastructure, skills that may be less familiar to people who don’t use these platforms daily. Once the ban hit, the digitally literate rapidly diversified their strategies:
Alt-messaging and community hubs: With legacy apps dark, Discord emerged as a ‘virtual control room,’ a natural fit for a generation raised in multiplayer servers. Despite the ban, the movement’s core group (Hami Nepal) organised on Discord and Instagram. Several Indian outlets, including the Times of India, claimed that more than 100,000 users converged in sprawling voice and text channels to debate leadership choices during the transition.
Peer-to-peer and ‘mesh’ apps: Encrypted, Bluetooth-based tools, prominently Bitchat, covered by mainstream and crypto-trade press, saw a burst of downloads as protest organisers prepared for intermittent internet access and cellular throttling. The appeal was simple: it works offline, hops device-to-device, and is harder to block.
Locally registered holdouts: Because TikTok and Viber had registered with MoCIT, they remained online and quickly became funnels for updates, citizen journalism and short-form explainers about where to assemble and how to avoid police cordons. Nepal Police’s Cyber Bureau, alarmed by the VPN stampede, publicly warned users about indiscriminate VPN use and data-theft risks; advice that landed with little force once crowds were already in the streets.
The logistics looked like distributed operations: a core group tasked with sourcing legal and medical aid; volunteer cartographers maintaining live maps of barricades; diaspora Nepalis mirroring clips to international audiences; and moderators trying (often failing) to keep chatrooms free of calls to violence.
The law: What Nepal is trying to regulate and why it backfired
The draft framework rests on a handful of core requirements:
Mandatory registration with MoCIT and local point-of-contact;
Expedited removal of content deemed ‘unlawful’ or ‘harmful’;
Data cooperation requirements with domestic authorities;
Penalties for non-compliance and for user-level offences such as phishing, impersonation and deepfake distribution.
Critics and the youth movement found that the friction was caused not by the idea of regulation itself, but by how it was drafted and applied. The UNESCO-supported March 2025 assessment of the Social Media Bill 2081 flagged vague, catch-all definitions (e.g. ‘disrupts social harmony’), weak due process around takedown orders, and a lack of independent oversight, urging a tiered, risk-based approach that distinguishes between a global platform and a small local forum, and builds in judicial review and appeals. The Centre for Law and Democracy (CLD) analysis warned that focusing policy ‘almost exclusively on individual pieces of content’ instead of systemic risk management would produce overbroad censorship tools without solving the harms regulators worry about.
Labelling the event a ‘Gen Z uprising’ is broadly accurate, and numbers help frame it. People aged 15–24 make up about one-fifth of Nepal’s population, and adding 25–29 pushes the 15–29 bracket to roughly a third, close to the share commonly captured by ‘Gen Z’ definitions used in this case (born 1997–2012, so 13–28 in 2025). These young people are the ones most likely to be online daily, trading on TikTok, Instagram, and Facebook Marketplace, freelancing across borders, preparing for exams with YouTube and Telegram notes, and maintaining relationships across labour migration splits via WhatsApp and Viber. When those rails go down, they feel it first and hardest.
There’s also the matter of expectations. A decade of smartphone diffusion trained Nepali youth to assume the availability of news, payments, learning, work, and diaspora connections, but the ban punctured that assumption. In interviews and livestreams, student voices toggled between free-speech language and bread-and-butter complaints (lost orders, cancelled tutoring, a frozen online store, a blocked interview with an overseas client).
The platforms: two weeks of reputational whiplash
Meta: after months of criticism for ignoring registration notices, it still has not registered in Nepal and remains out of compliance with the government’s requirements under the Social Media Bill 2081.
TikTok, banned in 2023 for ‘social harmony’ concerns and later restored after agreeing to compliance, found itself on the legal side of the ledger this time; it stayed up and became a publishing artery for youth explainers and police-abuse documentation.
VPN providers, especially Proton, earned folk-hero status. The optics of an ‘8,000% surge’ became shorthand for resilience.
Discord shifted from gamer space to civic nerve centre, a recurring pattern from Hong Kong to Myanmar that Nepal echoed in miniature. Nepalis turned to Discord to debate the country’s political future, fact-check rumours and collect nominations for the country’s future leaders. On 12 September, the Discord community organised a digital poll for an interim prime minister, with former Supreme Court Chief Justice Sushila Karki emerging as the winner. The same features that facilitate raids and speed-runs (voice, low-latency presence, and channel hierarchies) make for a capable ad-hoc command room. The Hami Nepal group’s role in the event’s transitional politics underscores that shift.
The economy and institutions: Damage, then restraint
The five-day blackout blew holes in ordinary commerce: sellers lost a festival week of orders, creators watched brand deals collapse, and freelancers missed interviews. The violence that followed destroyed far more: estimates circulating in the aftermath put the damage from the uprising at roughly USD 280 million (EUR 240 million).
On 9 September, the government lifted the platform restrictions; on 13 September, news coverage chronicled a reopening capital under interim PM Karki, who spent her first days visiting hospitals and signalling commitments to elections and legal review. What followed mattered: the ban was rolled back, but the task of ensuring accountability remained. The episode gave legislators a chance to return to the bill’s text with international guidance on the table, and gave leaders a chance to translate street momentum into institutional questions.
Bottom line
Overall, Nepal’s last two weeks were not a referendum on whether social platforms should face rules. They were a referendum on how those rules are made and enforced in a society where connectivity is a lifeline and the connected are young. A government sought accountability by unplugging the public square and the public, Gen Z, mostly, responded by building new squares in hours and then spilling into the real one. The costs are plain and human, from the hospital wards to the charred chambers of parliament. The opportunity is also plain: to rebuild digital law so that rights and accountability reinforce rather than erase each other.
If that happens, the ‘Gen Z revolution’ of early September will not be a story about apps. It will be a story about institutions catching up to the internet and its policies, and about a generation insisting it be invited to write a new social contract for digital times, one that ensures accountability, transparency, judicial oversight and due process.
AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.
From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.
As their capabilities grow, so does an underlying problem that many still underestimate: these systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.
Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.
When facts blur into fiction
AI hallucinations are not simply errors. They are confident statements presented as fact, even though they rest on nothing more than probability. Language models are designed to generate the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.
One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.
Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.
Why large language models hallucinate
Hallucinations are not bugs in the system. They are a direct consequence of the way language models are built. Trained to complete text based on patterns, these systems have no fundamental understanding of the world, no memory of ‘truth’, and no internal model of fact.
A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.
Alongside these structural flaws, real-world use cases reveal additional causes. Here are the most frequent causes of AI hallucinations:
Vague or ambiguous prompts
Lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
Overly long conversations
As prompt history grows, especially without proper context management, models lose track and invent plausible answers.
Missing knowledge
When a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
Leading or biased prompts
Inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
Interrupted context due to connection issues
Especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
Over-optimisation for confidence
Most systems are trained to sound fluent and assertive. Saying ‘I don’t know’ is statistically rare unless explicitly prompted.
Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
The cost of trust in flawed systems
Hallucinations become more dangerous not when they happen, but when they are believed.
Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.
In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.
Can hallucinations be fixed?
Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
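As an illustration of the RAG pattern, the toy sketch below grounds the prompt in retrieved text and instructs the model to admit ignorance; the keyword retrieval and the call_llm stand-in are simplifications, not any particular vendor’s API:

```python
# Toy retrieval-augmented generation (RAG): ground the prompt in retrieved
# documents so the model answers from sources instead of inventing them.
DOCS = {
    "eu-ai-act": "The EU AI Act entered into force in August 2024.",
    "moon": "The Moon's average distance from Earth is about 384,400 km.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    # Toy scoring: count shared words; real systems use vector embeddings.
    words = set(question.lower().split())
    ranked = sorted(DOCS.values(),
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    return f"(model answer grounded in: {prompt!r})"   # stand-in for a real model call

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = ("Answer using ONLY the context below. "
              "If the context is insufficient, say 'I don't know'.\n"
              f"Context:\n{context}\nQuestion: {question}")
    return call_llm(prompt)

print(answer("When did the EU AI Act enter into force?"))
```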
The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.
Even the most capable AI models need a measure of humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.
Hallucinations won’t go away. Responsibility must step in.
Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.
As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The EU’s digital transformation and the rise of trusted digital identities
The EU, like the rest of the world, is experiencing a significant digital transformation driven by emerging technologies, with citizens, businesses, and governments increasingly relying on online services.
At the centre of the shift lies digital identity, which enables secure, verifiable, and seamless online interactions.
Digital identity has also become a cornerstone of the EU’s transition toward a secure and competitive digital economy. As societies, businesses, and governments increasingly rely on online platforms, the ability for citizens to prove who they are in a reliable, secure, and user-friendly way has gained central importance.
Without trusted digital identities, essential services ranging from healthcare and education to banking and e-commerce risk fragmentation, fraud, and inefficiency.
The first eIDAS Regulation, adopted in 2014, laid the groundwork, but it quickly became clear that further steps were necessary to improve adoption, interoperability, and user trust.
In May 2024, the updated framework, eIDAS 2 (Regulation (EU) 2024/1183), came into force.
At its heart lies the European Digital Identity Wallet, or EDIW, a tool designed to empower EU citizens with a secure, voluntary, and interoperable way to authenticate themselves and store personal credentials.
By doing so, eIDAS 2 aims to strengthen trust, security, and cross-border services, ensuring Europe builds digital sovereignty while safeguarding fundamental rights.
Lessons from eIDAS 1 and the need for a stronger digital identity framework
Back in 2014, when the first eIDAS Regulation was adopted, its purpose was to enable the mutual recognition of electronic identification and trust services across member states.
The idea was simple (and logical) yet ambitious: a citizen of one EU country should be able to use their national digital ID to access services in another, whether it is to enrol in a university abroad or open a bank account.
The original regulation created legal certainty for electronic signatures, seals, timestamps, and website authentication, helping digital transactions gain recognition equal to their paper counterparts.
For businesses and governments, it reduced bureaucracy and built trust in digital processes, both essential for sustainable development.
Despite the achievements, significant limitations emerged. Adoption rates varied widely across member states, with only a handful, such as Estonia and Denmark, achieving robust national digital ID systems.
Others lagged due to technical, political, or budgetary issues. Interoperability across borders was inconsistent, often forcing citizens and businesses to rely on paper processes.
Stakeholders and industry associations also expressed concerns about the complexity of implementation and the absence of user-friendly solutions.
The gaps highlighted the need for a new approach. As Commission President Ursula von der Leyen emphasised in 2020, ‘every time an app or website asks us to create a new digital identity or to easily log on via a big platform, we have no idea what happens to our data in reality.’
Concerns about reliance on non-European technology providers, combined with the growing importance of secure online transactions, paved the way for eIDAS 2.
The eIDAS 2 framework and the path to interoperable digital services
Regulation (EU) 2024/1183, adopted in the spring of 2024, updates the original eIDAS to reflect new technological and social realities.
Its guiding principle is technological neutrality, ensuring that no single vendor or technology dominates and allowing member states to adopt diverse solutions provided they remain interoperable.
Among its key innovations is the expansion of qualified trust services. While the original eIDAS mainly covered signatures and seals, the new regulation broadens the scope to include services such as qualified electronic archiving, ledgers, and remote signature creation devices.
The broader approach ensures that the regulation keeps pace with emerging technologies such as distributed ledgers and cloud-based security solutions.
eIDAS 2 also strengthens compliance mechanisms. Providers of trust services and digital wallets must adhere to rigorous security and operational standards, undergo audits, and demonstrate resilience against cyber threats.
In this way, the regulation not only fosters a common European market for digital identity but also reinforces Europe’s commitment to digital sovereignty and trust.
The European Digital Identity Wallet in action
The EDIW represents the most visible and user-facing element of eIDAS 2.
Available voluntarily to all EU citizens, residents, and businesses, the wallet is designed to act as a secure application on mobile devices where users can link their national ID documents, certificates, and credentials.
For citizens, the benefits are tangible. Rather than managing numerous passwords or carrying a collection of physical documents, individuals can rely on the wallet as a single, secure tool.
It allows them to prove their identity when travelling or accessing services in another country, while offering a reliable space to store and share essential credentials such as diplomas, driving licences, or health insurance cards.
In addition, it enables signing contracts with qualified electronic signatures directly from personal devices, reducing the need for paper-based processes and making everyday interactions considerably more efficient.
For businesses, the wallet promises smoother cross-border operations. For example, banks can streamline customer onboarding through secure, interoperable identification. Professional services can verify qualifications instantly.
E-commerce platforms can reduce fraud and improve compliance with ‘Know Your Customer’ requirements.
By reducing bureaucracy and offering convenience, the wallet embodies Europe’s ambition to create a truly single digital market.
Cybersecurity and privacy in the EDIW
Cybersecurity and privacy are central to the success of the wallet. On the positive side, the system enhances security through encryption, multi-factor authentication, and controlled data sharing.
Instead of exposing unnecessary information, users can share only the attributes required, for example, confirming age without disclosing a birth date.
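To make the idea of selective disclosure concrete, the toy sketch below shows a wallet releasing only a derived ‘age over 18’ claim instead of the underlying birth date. It is a conceptual illustration only: the real EDIW relies on cryptographically signed verifiable credentials and standardised presentation protocols, and the attribute names and logic here are invented for the example.

```python
# Conceptual sketch of selective disclosure (not the actual EDIW protocol).
# The wallet derives and releases only the attribute a verifier asks for,
# instead of handing over the underlying personal data.
from datetime import date

credential = {            # hypothetical credential stored in the wallet
    "given_name": "Maria",
    "birth_date": date(1990, 4, 12),
    "nationality": "PT",
}

def disclose(requested: list[str]) -> dict:
    """Return only derived or explicitly requested attributes, never the full record."""
    response = {}
    for attr in requested:
        if attr == "age_over_18":
            # Approximate age calculation, purely for illustration.
            age = (date.today() - credential["birth_date"]).days // 365
            response[attr] = age >= 18        # boolean claim; no birth date revealed
        elif attr in ("given_name", "nationality"):
            response[attr] = credential[attr]
    return response

# A verifier checking age for an online purchase receives only a yes/no claim.
print(disclose(["age_over_18"]))   # {'age_over_18': True}
```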
Yet risks remain. The most pressing concern is risk aggregation: because the wallet consolidates multiple credentials in one place, the consequences of a breach could be severe, leading to fraud, identity theft, or large-scale data exposure. The system therefore becomes an attractive target for attackers.
To address such risks, eIDAS 2 mandates safeguards. Article 45k requires providers to maintain data integrity and chronological order in electronic ledgers, while regular audits and compliance checks ensure adherence to strict standards.
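As a rough intuition for how an electronic ledger can guarantee integrity and chronological order, the sketch below hash-chains each entry to its predecessor so that any alteration or reordering becomes detectable. This is a toy illustration, not the qualified electronic ledger services defined by the regulation; all names and structures are invented for the example.

```python
# Toy append-only ledger: each entry commits to the hash of the previous one,
# so tampering with or reordering any entry breaks the chain.
import hashlib, json, time

ledger = []

def append(record: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify() -> bool:
    """Recompute every hash and check the chain links in order."""
    for i, entry in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != expected_prev or entry["hash"] != recomputed:
            return False
    return True

append({"event": "credential_issued"})
append({"event": "credential_presented"})
print(verify())  # True until any entry is altered or reordered
```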
Furthermore, the regulation mandates open-source software for the wallet components, enhancing transparency and trust.
The challenge is to balance security, usability, and confidence. If the wallet is overly restrictive, citizens may resist adoption. If it is too permissive, privacy could be undermined.
The European approach aims to strike the delicate balance between trust and efficiency.
Practical implications across sectors with the EDIW
The European Digital Identity Wallet has the potential to reshape multiple sectors across the EU, and its relevance is already visible in national pilot projects as well as in existing electronic identification systems.
Public services stand to benefit most immediately. Citizens will be able to submit tax declarations, apply for social benefits, or enrol in universities abroad without needing paper-based procedures.
Healthcare is another area where digital identity is of great importance, since medical records can be transferred securely across borders.
Businesses are also likely to experience greater efficiency. Banks and financial institutions will be able to streamline compliance with the ‘Know Your Customer’ and anti-money laundering rules.
In the field of e-commerce, platforms can provide seamless authentication, which will reduce fraud and enhance customer trust.
Citizens will also enjoy greater convenience in their daily lives when signing rental contracts, proving identity while travelling, or accessing utilities and other services.
National approaches to digital identity across the EU
National experiences illustrate both diversity and progress. Let’s review some examples.
Estonia has been recognised as a pioneer, having built a robust e-Identity system over two decades. Its citizens already use secure digital ID cards, mobile ID, and smart ID applications to access almost all government services online, meaning that integration with the EDIW will be relatively smooth.
Denmark has also made significant progress with its MitID solution, which replaced NemID and is now used by millions of citizens to access both public and private services with high security standards, including biometric authentication.
Germany has introduced BundID, a central portal for accessing public administration services, and has invested in enabling the use of national ID cards via NFC-based smartphones, although adoption is still limited compared to Scandinavian countries.
Italy has taken a different route by rolling out SPID, the Public Digital Identity System, which is now used by more than 35 million citizens to access thousands of services. The country also supports the Electronic Identity Card, known as CIE, and both solutions are being aligned with wallet requirements.
Spain has launched Cl@ve, a platform that combines permanent passwords and electronic certificates, and has joined several wallet pilot projects funded by the European Commission to test cross-border use.
France is developing its France Identité application, which allows the use of the electronic ID card for online authentication, and the project is at the centre of the national effort to meet European standards.
The Netherlands relies on DigiD, which provides access to healthcare, taxation, and education services. Although adoption is high, the system will require enhanced security features to meet the new regulations.
Greece has made significant strides in digital identity with the introduction of the Gov.gr Wallet. The mobile application allows citizens to store digital versions of their national identity card and driving licence on smartphones, giving them the same legal validity as physical documents in the country.
These varied examples reveal a mixed landscape. Countries such as Estonia and Denmark have developed advanced and widely used systems that will integrate readily with the European framework.
Others are still building broader adoption and enhancing their infrastructure. The wallet, therefore, offers an opportunity to harmonise national approaches, bridge existing gaps, and create a coherent European ecosystem.
By building on what already exists, member states can speed up adoption and deliver benefits to citizens and businesses in a consistent and trusted way.
Risks and limitations of the EDIW
Despite the promises, the rollout of the wallet faces significant challenges, several of which have already been highlighted in our analysis.
First, data privacy remains a concern. Citizens must trust that wallet providers and national authorities will not misuse or over-collect their data, especially given existing concerns about data breaches and increased surveillance across the Union. Any breach of that trust could significantly undermine adoption.
Second, Europe’s digital infrastructure remains uneven. Countries such as Estonia and Denmark (as mentioned earlier) already operate sophisticated e-ID systems, while others fall behind. Bridging the gap requires financial and technical support, as well as political will.
Third, balancing innovation with harmonisation is not easy. While technological neutrality allows for flexibility, too much divergence risks interoperability problems. The EU must carefully monitor implementation to avoid fragmentation.
Finally, there are long-term risks of over-centralisation. By placing so much reliance on a single tool, the EU may inadvertently create systemic vulnerabilities. Ensuring redundancy and diversity in digital identity solutions will be key to resilience.
Opportunities and responsibilities in the EU’s digital identity strategy
Looking forward, the success of eIDAS 2 and the wallet will depend on careful implementation and strong governance.
Opportunities abound. Scaling the wallet across sectors, from healthcare and education to transport and finance, could solidify Europe’s position as a global leader in digital identity. By extending adoption to the private sector, the EU can create a thriving ecosystem of secure, trusted services.
Yet the initiative requires continuous oversight. Cyber threats evolve rapidly, and regulatory frameworks must adapt. Ongoing audits, updates, and refinements will be necessary to keep pace. Member states will need to share best practices and coordinate closely to ensure consistent standards.
At a broader level, the wallet represents a step toward digital sovereignty. By reducing reliance on non-European identity providers and platforms, the EU strengthens its control over the digital infrastructure underpinning its economy. In doing so, it enhances both competitiveness and resilience.
The EU’s leap toward a digitally sovereign future
In conclusion, we firmly believe that the adoption of eIDAS 2 and the rollout of the European Digital Identity Wallet mark a decisive step in Europe’s digital transformation.
By providing a secure, interoperable, and user-friendly framework, the EU has created the conditions for greater trust, efficiency, and cross-border collaboration.
The benefits are clear. Citizens gain convenience and control, businesses enjoy streamlined operations, and governments enhance security and transparency.
But we have to keep in mind that challenges remain, from uneven national infrastructures to concerns over data privacy and cybersecurity.
Ultimately, eIDAS 2 is both a legal milestone and a technological experiment. Its success will depend on building and maintaining trust, ensuring inclusivity, and adapting to emerging risks.
If the EU can meet the challenges, the European Digital Identity Wallet will not only transform the daily lives of millions of its citizens but also serve as a model for digital governance worldwide.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), that vision is edging closer: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. As society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore different uses of AI, but the explanation did little to ease unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to identify and contain an incident and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands; recovering from a breach also takes the better part of a year.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular, real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately and without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing to it. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person therapy sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents or seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, often without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can have devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) healthcare will look like. As things stand, in such a delicate field, AI lacks a key component that makes a therapist effective in their job: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is increasingly recognised both for its transformative potential across industries and for its growing environmental footprint. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.
For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.
That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.
On the other, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.
Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?
Sustainability challenges
In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.
Energy consumption
The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.
According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model emitted roughly 284 tonnes of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.
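Such estimates rest on simple energy accounting: accelerator count multiplied by power draw and training time, scaled by data-centre overhead and the carbon intensity of the local grid. The sketch below walks through that arithmetic with purely illustrative numbers; none of the values are measurements from any specific model.

```python
# Back-of-envelope estimate of training emissions.
# Every figure here is an illustrative assumption, not a measured value;
# real numbers depend on hardware, utilisation, cooling efficiency, and grid mix.

NUM_GPUS = 1_000            # assumed accelerator count
POWER_KW = 0.3              # assumed average draw per GPU, in kilowatts
HOURS = 24 * 30             # assumed one month of continuous training
PUE = 1.12                  # assumed data-centre overhead (power usage effectiveness)
GRID_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity

energy_kwh = NUM_GPUS * POWER_KW * HOURS * PUE
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1_000

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_tonnes:,.0f} tonnes CO2e")
```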
Carbon emissions
Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is orders of magnitude higher than that of earlier generations, highlighting the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.
Water usage and cooling needs
Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.
Resource extraction and hardware demands
The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden, but significant footprint to AI development, extending beyond data centres to global supply chains.
Inequality in resource distribution
Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.
At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.
Progress & solutions
While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.
AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.
Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.
There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.
Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.
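As an illustration of the low-rank adaptation technique mentioned above, the sketch below uses the open-source Hugging Face transformers and peft libraries to attach small trainable adapter matrices to a frozen pretrained model. The model choice and hyperparameters are arbitrary placeholders, and this is a minimal sketch rather than a recipe used by any of the companies named here.

```python
# Minimal LoRA setup with Hugging Face transformers + peft.
# Model name and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "distilgpt2"  # a small open model, chosen only to keep the example light
model = AutoModelForCausalLM.from_pretrained(base)

# Inject low-rank adapters; the original weights stay frozen.
lora_cfg = LoraConfig(
    r=8,                # rank of the adapter matrices
    lora_alpha=16,      # scaling factor applied to adapter outputs
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

# Typically only a fraction of a percent of parameters remain trainable.
model.print_trainable_parameters()
```

Because only the adapter weights are updated, gradient computation and optimiser state shrink dramatically, which is where most of the energy savings over full fine-tuning come from.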
Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.
Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.
Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.
In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.
Balancing the two sides
The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. On the other, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.
Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development while another prioritises innovation at any cost, the former risks falling behind in technological leadership. This dilemma raises a geopolitical question: sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.
At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.
Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. Even so, governments, companies, and researchers face a complex but urgent question: should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?
Conclusion
The discussion on Green AI highlights one of the central dilemmas of our digital age: how to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. On the other, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.
Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.
In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.
Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the rapid uptake of AI by courts and legal services worldwide also signals a deeper digitalisation of our legal systems.
While AI may serve as a tool to break down access barriers, legal AI tools could also automate bias in our judicial systems, enable unaccountable decision-making, and accelerate a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.
Improving access to justice
Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to help overcome legal communication barriers: the system transcribes, translates, and organises refugee queries, helping staff streamline caseload management and keep operations running smoothly even under financial strain.
NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.
While these tools are clearly designed to assist rather than replace human legal experts, they are already showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. This tool enhances transparency for victims, especially those seeking emotional closure.
Even as these programmes are only being piloted, a UNESCO survey found that 44% of judicial workers across 96 countries are already using AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.
AI tools help judges prepare judgments for various cases and streamline legal document preparation, allowing for faster drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.
Risking human rights
While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.
Deploying AI without transparency can lead to algorithmic systems perpetuating systematic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.
Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment rather than outright replacing it. Whether AI is biased by its training data or simply becomes a black box over time, its use requires foresighted governance and meaningful human oversight.
Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Legal professionals often lack the training and skills needed to use AI tools effectively; in many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.
This lack of education undermines the accountability and transparency needed to integrate AI meaningfully, and it can lead to misuse of the technology, such as relying on unverified translations, which in turn can cause legal errors.
While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systematic bias. The judiciary in Texas, US, warned about this concern in an opinion that detailed the fear of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.
The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without access to digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.
The policy path forward
As already stated, for AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assisting human judges, lawyers, and other legal actors, not replacing them. For AI to assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.
The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the innovation and integration of AI. They belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.
Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems.
The future of justice
AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It can serve as a tool for the fair application of justice and bring powerful improvements to inclusion in our legal systems.
However, it also poses the risk of deepening inequalities and eroding public trust. AI integration must be governed by human rights norms of transparency and accountability. Effective regulation will come through education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centric support.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!