Techno(demo)cracy in action: How a five-day app blackout lit a Gen Z online movement in Nepal

Over the past two weeks, Nepal’s government has sought to regulate its online space, and its chosen approach prompted a large, youth-led response. A government order issued on 4 September blocked access to 26 social platforms, from Facebook, Instagram and YouTube to X and WhatsApp, after the companies failed to register locally under Nepal’s rules for digital services. Within five days, authorities lifted the ban, but it was too late: tens of thousands of mostly young Nepalis, organised with VPNs, alternative chat apps and gaming-era coordination tools, forced a political reckoning that culminated in the burning of parts of the parliament complex, the resignation of Prime Minister K.P. Sharma Oli on 9 September, and the appointment of former chief justice Sushila Karki to lead an interim administration.

The sequence of ban, backlash, reversal and political rupture tells an unexpected tale of digital governance. The on-the-ground reality: a clash between a fast-evolving regulatory push and a hyper-networked youth cohort that treats connectivity as livelihood, classroom and public square.

The trigger: A registration ultimatum meets a hyper-online society

The ban didn’t arrive from nowhere. Nepal has been building toward platform licensing since late 2023, when the government issued the Social Media Management Directive 2080, requiring platforms to register with the Ministry of Communication and Information Technology (MoCIT), designate a local contact, and comply with expedited takedown and cooperation rules. In early 2025, the government tabled a draft Social Media Bill 2081 in the National Assembly to convert that directive into statute. International legal reviews, including a UNESCO-supported assessment in March 2025, praised the goal of accountability but warned that vague definitions, sweeping content-removal powers and weak independent oversight could chill lawful speech.

Against that backdrop, the government moved to enforce the registration requirement. On 28 August 2025, authorities gave major platforms seven days to register with MoCIT; on 4 September, the telecom regulator moved to block unregistered services. Nepal’s government listed the 26 services covered by the order (including Facebook, Instagram, X, WhatsApp, YouTube, Reddit, Snapchat and others), while TikTok, Viber, Wetalk, Nimbuzz and Popo Live had registered and were allowed to operate. Two more (Telegram and Global Diary) were under review.

Why did the order provoke such a strong reaction? Consider the baseline: Nepal had about 14.3 million social-media user identities at the start of 2025, roughly 48% of the population, with internet use at around 56%. A society in which half the country’s people (and a significantly larger share of its urban youth) rely on social apps for news, school, side-hustles, remittances and family ties is a society in which those apps are not merely lifestyle choices; they are digital infrastructure. Grasping that generation gap is essential to understanding what followed.

The movement: Gen Z logistics in a blackout world

What made Nepal’s youth mobilisation unusual wasn’t only its size and adaptability, but also the speed and digital literacy with which organisers navigated today’s digital infrastructure, skills that may be less familiar to people who don’t use these platforms daily. Once the ban hit, the digitally literate rapidly diversified their strategies.

The logistics looked like distributed operations: a core group tasked with sourcing legal and medical aid; volunteer cartographers maintaining live maps of barricades; diaspora Nepalis mirroring clips to international audiences; and moderators trying (often failing) to keep chatrooms free of calls to violence.


The law: What Nepal is trying to regulate and why it backfired

The draft Social Media Bill 2081 and the 2023 Directive share a broad structure:

  • Mandatory registration with MoCIT and local point-of-contact;
  • Expedited removal of content deemed ‘unlawful’ or ‘harmful’;
  • Data cooperation requirements with domestic authorities;
  • Penalties for non-compliance, alongside user-level offences such as phishing, impersonation and deepfake distribution.

Critics and the youth movement argued that the friction was caused not by the idea of regulation itself, but by how it was drafted and applied. The UNESCO-supported March 2025 assessment of the Social Media Bill 2081 flagged vague, catch-all definitions (e.g. ‘disrupts social harmony’), weak due process around takedown orders, and a lack of independent oversight, urging a tiered, risk-based approach that distinguishes between a global platform and a small local forum, and builds in judicial review and appeals. The Centre for Law and Democracy (CLD) analysis warned that focusing policy ‘almost exclusively on individual pieces of content’ instead of systemic risk management would produce overbroad censorship tools without solving the harms regulators worry about.

Regarding penalties, public discussion compared platform fines with user-level sanctions and general cybercrime provisions. Available reporting suggests proposed platform-side fines of up to roughly USD 17,000 (EUR 15,000) for operating without authorisation, while user-level offences (e.g. phishing, deepfakes, certain categories of misinformation) carry fines of up to USD 2,000–3,500 and potential jail terms depending on the offence.

The demographics: Who showed up, and why them?

Labelling the event a ‘Gen Z uprising’ is broadly accurate, and numbers help frame it. People aged 15–24 make up about one-fifth of Nepal’s population, and adding the 25–29 cohort pushes the 15–29 bracket to roughly a third, close to the share commonly captured by the ‘Gen Z’ definition used in this case (born 1997–2012, so aged 13–28 in 2025). These cohorts are the most likely to be online daily: trading on TikTok, Instagram and Facebook Marketplace, freelancing across borders, preparing for exams with YouTube and Telegram notes, and maintaining relationships across labour-migration splits via WhatsApp and Viber. When those rails go down, they feel it first and hardest.

There’s also the matter of expectations. A decade of smartphone diffusion trained Nepali youth to assume the availability of news, payments, learning, work, and diaspora connections, but the ban punctured that assumption. In interviews and livestreams, student voices toggled between free-speech language and bread-and-butter complaints (lost orders, cancelled tutoring, a frozen online store, a blocked interview with an overseas client).

The platforms: Two weeks of reputational whiplash


The economy and institutions: Damage, then restraint

The five-day blackout blew holes in ordinary commerce: sellers lost a festival week of orders, creators watched brand deals collapse, and freelancers missed interviews. The violence that followed destroyed far more: estimates circulating in the aftermath put the damage from the Gen Z uprising at roughly USD 280 million (EUR 240 million).

On 9 September, the government lifted the platform restrictions; on 13 September, news reports chronicled a re-opening capital under interim PM Karki, who spent her first days visiting hospitals and signalling commitments to elections and legal review. What followed mattered: the ban had been lifted, but the task of ensuring accountability remained. The episode gave legislators the chance to return to the bill’s text with international guidance on the table, and gave leaders the chance to translate street momentum into institutional questions.

Bottom line

Overall, Nepal’s last two weeks were not a referendum on whether social platforms should face rules. They were a referendum on how those rules are made and enforced in a society where connectivity is a lifeline and the connected are young. A government sought accountability by unplugging the public square, and the public, mostly Gen Z, responded by building new squares within hours and then spilling into the real one. The costs are plain and human, from the hospital wards to the charred chambers of parliament. The opportunity is also plain: to rebuild digital law so that rights and accountability reinforce rather than erase each other.

If that happens, the ‘Gen Z revolution’ of early September will not be a story about apps. It will be a story about institutions catching up to the internet and a generation insisting on being invited to write the new social contract for digital times, one that ensures accountability, transparency, judicial oversight and due process.

When language models fabricate truth: AI hallucinations and the limits of trust

AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast amounts of data and statistical prediction, now generate text that can mirror human speech, mimic tone, and simulate expertise, but also produce convincing hallucinations that blur the line between fact and fiction.

From summarising policy to drafting contracts and responding to customer queries, these tools are becoming embedded across industries, governments, and education systems.

As their capabilities grow, so does the underlying problem that many still underestimate. These systems frequently produce convincing but entirely false information. Often referred to as ‘AI hallucinations’, such factual distortions pose significant risks, especially when users trust outputs without questioning their validity.

Once deployed in high-stakes environments, from courts to political arenas, the line between generative power and generative failure becomes more challenging to detect and more dangerous to ignore.

When facts blur into fiction

AI hallucinations are not simply errors. They are confident statements presented as fact, even though they are generated from probability. Language models are designed to produce the most likely next word, not the correct one. That difference may be subtle in casual settings, but it becomes critical in fields like law, healthcare, or media.

One such example emerged when an AI chatbot misrepresented political programmes in the Netherlands, falsely attributing policy statements about Ukraine to the wrong party. The error spread misinformation and triggered official concern. The chatbot had no malicious intent, yet its hallucination shaped public discourse.

Mistakes like these often pass unnoticed because the tone feels authoritative. The model sounds right, and that is the danger.


Why large language models hallucinate

Hallucinations are not bugs in the system. They are a direct consequence of the way language models are built. Trained to complete text based on patterns, these systems have no fundamental understanding of the world, no memory of ‘truth’, and no internal model of fact.

A recent study reveals that even the way models are tested may contribute to hallucinations. Instead of rewarding caution or encouraging honesty, current evaluation frameworks favour responses that appear complete and confident, even when inaccurate. The more assertive the lie, the better it scores.

Alongside these structural flaws, real-world use cases reveal additional causes. Here are the most frequent ones:

  • Vague or ambiguous prompts: a lack of specificity forces the model to fill gaps with speculative content that may not be grounded in real facts.
  • Overly long conversations: as prompt history grows, especially without proper context management, models lose track and invent plausible answers.
  • Missing knowledge: when a model lacks reliable training data on a topic, it may produce content that appears accurate but is fabricated.
  • Leading or biased prompts: inputs that suggest a specific answer can nudge the model into confirming something untrue to match expectations.
  • Interrupted context due to connection issues: especially with browser-based tools, a brief loss of session data can cause the model to generate off-track or contradictory outputs.
  • Over-optimisation for confidence: most systems are trained to sound fluent and assertive, so saying ‘I don’t know’ is statistically rare unless explicitly prompted.

Each of these cases stems from a single truth. Language models are not fact-checkers. They are word predictors. And prediction, without verification, invites fabrication.
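To make the ‘word predictor’ point concrete, here is a minimal, purely illustrative sketch; the prompt and the probability values are invented for the example and do not come from any real model. A decoder simply picks whichever continuation carries the most probability mass, and nothing in that rule checks whether the resulting statement is true.

```python
# Toy illustration only: a language model selects the next token by probability,
# with no notion of whether the resulting claim is factually correct.

# Hypothetical next-token distribution after the prompt "The capital of Australia is",
# skewed by frequent but wrong associations in messy training text.
next_token_probs = {
    "Sydney": 0.46,    # common association, factually wrong
    "Canberra": 0.41,  # correct, but slightly less probable here
    "Melbourne": 0.13,
}

def greedy_next_token(probs: dict[str, float]) -> str:
    """Pick the single most probable continuation; truth is never consulted."""
    return max(probs, key=probs.get)

print(greedy_next_token(next_token_probs))  # -> "Sydney": fluent, confident, false
```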

The cost of trust in flawed systems

Hallucinations become more dangerous not when they happen, but when they are believed.

Users may not question the output of an AI system if it appears polished, grammatically sound, and well-structured. This perceived credibility can lead to real consequences, including legal documents based on invented cases, medical advice referencing non-existent studies, or voters misled by political misinformation.

In low-stakes scenarios, hallucinations may lead to minor confusion. In high-stakes contexts, the same dynamic can result in public harm or institutional breakdown. Once generated, an AI hallucination can be amplified across platforms, indexed by search engines, and cited in real documents. At that point, it becomes a synthetic fact.

Can hallucinations be fixed?

Some efforts are underway to reduce hallucination rates. Retrieval-augmented generation (RAG), fine-tuning on verified datasets, and human-in-the-loop moderation can improve reliability. Still, no method has eliminated hallucinations.
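As a rough sketch of how retrieval-augmented generation narrows the space for fabrication, the snippet below assumes a hypothetical generate() call to some language model and uses naive keyword overlap in place of a real vector search; production systems rely on embedding models and vector databases, and even then the constraint reduces rather than eliminates hallucinations.

```python
# Minimal RAG sketch: retrieve supporting passages, then build a prompt that
# constrains the model to those passages and explicitly permits "I don't know".
CORPUS = [
    "Regulation (EU) 2024/1183 (eIDAS 2) entered into force in May 2024.",
    "The European Digital Identity Wallet is voluntary for EU citizens.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query (illustration only)."""
    q_tokens = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q_tokens & set(p.lower().split())))[:k]

def grounded_prompt(query: str) -> str:
    """Constrain the model to retrieved sources and allow an honest non-answer."""
    sources = "\n".join(f"- {p}" for p in retrieve(query, CORPUS))
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, reply 'I don't know.'\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

print(grounded_prompt("When did eIDAS 2 enter into force?"))
# The resulting prompt would then be passed to a generate() call (not shown here).
```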

The deeper issue is how language models are rewarded, trained, and deployed. Without institutional norms prioritising verifiability and technical mechanisms that can flag uncertainty, hallucinations will remain embedded in the system.

Even the most capable AI models need a measure of humility. The ability to say ‘I don’t know’ is still one of the rarest responses in the current landscape.


Hallucinations won’t go away. Responsibility must step in.

Language models are not truth machines. They are prediction engines trained on vast and often messy human data. Their brilliance lies in fluency, but fluency can easily mask fabrication.

As AI tools become part of our legal, political, and civic infrastructure, institutions and users must approach them critically. Trust in AI should never be passive. And without active human oversight, hallucinations may not just mislead; they may define the outcome.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Unlocking the EU digital future with eIDAS 2 and digital wallets

The EU’s digital transformation and the rise of trusted digital identities

The EU, like the rest of the world, is experiencing a significant digital transformation driven by emerging technologies, with citizens, businesses, and governments increasingly relying on online services.

At the centre of the shift lies digital identity, which enables secure, verifiable, and seamless online interactions.

Digital identity has also become a cornerstone of the EU’s transition toward a secure and competitive digital economy. As societies, businesses, and governments increasingly rely on online platforms, the ability for citizens to prove who they are in a reliable, secure, and user-friendly way has gained central importance.

Without trusted digital identities, essential services ranging from healthcare and education to banking and e-commerce risk fragmentation, fraud, and inefficiency.

The EU has long recognised the challenge. The introduction of the eIDAS Regulation on Electronic Identification, Authentication and Trust Services in 2014 was a milestone, creating a legal framework for electronic identification and trust services across EU borders.

However, it quickly became clear that further steps were necessary to improve adoption, interoperability, and user trust.

In May 2024, the updated framework, eIDAS 2 (Regulation (EU) 2024/1183), came into force.

At its heart lies the European Digital Identity Wallet, or EDIW, a tool designed to empower EU citizens with a secure, voluntary, and interoperable way to authenticate themselves and store personal credentials.


By doing so, eIDAS 2 aims to strengthen trust, security, and cross-border services, ensuring Europe builds digital sovereignty while safeguarding fundamental rights.

Lessons from eIDAS 1 and the need for a stronger digital identity framework

Back in 2014, when the first eIDAS Regulation was adopted, its purpose was to enable the mutual recognition of electronic identification and trust services across member states.

The idea was simple (and logical) yet ambitious: a citizen of one EU country should be able to use their national digital ID to access services in another, whether it is to enrol in a university abroad or open a bank account.

The original regulation created legal certainty for electronic signatures, seals, timestamps, and website authentication, helping digital transactions gain recognition equal to their paper counterparts.

For businesses and governments, it reduced bureaucracy and built trust in digital processes, both essential for sustainable development.

Despite the achievements, significant limitations emerged. Adoption rates varied widely across member states, with only a handful, such as Estonia and Denmark, achieving robust national digital ID systems.

Others lagged due to technical, political, or budgetary issues. Interoperability across borders was inconsistent, often forcing citizens and businesses to rely on paper processes.

Stakeholders and industry associations also expressed concerns about the complexity of implementation and the absence of user-friendly solutions.

The gaps highlighted the need for a new approach. As Commission President Ursula von der Leyen emphasised in 2020, ‘every time an app or website asks us to create a new digital identity or to easily log on via a big platform, we have no idea what happens to our data in reality.’

Concerns about reliance on non-European technology providers, combined with the growing importance of secure online transactions, paved the way for eIDAS 2.

The eIDAS 2 framework and the path to interoperable digital services

Regulation (EU) 2024/1183, adopted in the spring of 2024, updates the original eIDAS to reflect new technological and social realities.

Its guiding principle is technological neutrality, ensuring that no single vendor or technology dominates and allowing member states to adopt diverse solutions provided they remain interoperable.

Among its key innovations is the expansion of qualified trust services. While the original eIDAS mainly covered signatures and seals, the new regulation broadens the scope to include services such as qualified electronic archiving, ledgers, and remote signature creation devices.

The broader approach ensures that the regulation keeps pace with emerging technologies such as distributed ledgers and cloud-based security solutions.

eIDAS 2 also strengthens compliance mechanisms. Providers of trust services and digital wallets must adhere to rigorous security and operational standards, undergo audits, and demonstrate resilience against cyber threats.

In this way, the regulation not only fosters a common European market for digital identity but also reinforces Europe’s commitment to digital sovereignty and trust.


The European Digital Identity Wallet in action

The EDIW represents the most visible and user-facing element of eIDAS 2.

Available voluntarily to all EU citizens, residents, and businesses, the wallet is designed to act as a secure application on mobile devices where users can link their national ID documents, certificates, and credentials.

For citizens, the benefits are tangible. Rather than managing numerous passwords or carrying a collection of physical documents, individuals can rely on the wallet as a single, secure tool.

It allows them to prove their identity when travelling or accessing services in another country, while offering a reliable space to store and share essential credentials such as diplomas, driving licences, or health insurance cards.

In addition, it enables signing contracts with qualified electronic signatures directly from personal devices, reducing the need for paper-based processes and making everyday interactions considerably more efficient.

For businesses, the wallet promises smoother cross-border operations. For example, banks can streamline customer onboarding through secure, interoperable identification. Professional services can verify qualifications instantly.

E-commerce platforms can reduce fraud and improve compliance with ‘Know Your Customer’ requirements.

By reducing bureaucracy and offering convenience, the wallet embodies Europe’s ambition to create a truly single digital market.

Cybersecurity and privacy in the EDIW

Cybersecurity and privacy are central to the success of the wallet. On the positive side, the system enhances security through encryption, multi-factor authentication, and controlled data sharing.


Instead of exposing unnecessary information, users can share only the attributes required, for example, confirming age without disclosing a birth date.
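As a conceptual sketch of that selective-disclosure idea, and not the actual EUDI wallet protocol (real wallets rely on cryptographically signed credentials with selective disclosure, such as SD-JWT-style formats), the example below shows a wallet answering only the verifier’s question, ‘is this person over 18?’, without ever releasing the birth date it holds.

```python
# Conceptual sketch of selective disclosure, assuming a hypothetical wallet that
# holds verified attributes locally as plain values. Real wallets use signed,
# cryptographically verifiable credentials rather than a dictionary.
from datetime import date

WALLET = {
    "family_name": "Jansen",
    "birth_date": date(1999, 4, 12),
    "nationality": "NL",
}

def prove_over_18(wallet: dict, today: date) -> dict:
    """Return only the derived claim the verifier asked for, not the raw data."""
    bd = wallet["birth_date"]
    age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
    return {"age_over_18": age >= 18}   # the birth date itself is never shared

print(prove_over_18(WALLET, date(2025, 9, 15)))  # -> {'age_over_18': True}
```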

Yet risks remain. The most pressing concern is risk aggregation. By consolidating multiple credentials in a single wallet, the consequences of a breach could be severe, leading to fraud, identity theft, or large-scale data exposure. The system, therefore, becomes an attractive target for attackers.

To address such risks, eIDAS 2 mandates safeguards. Article 45k requires providers to maintain data integrity and chronological order in electronic ledgers, while regular audits and compliance checks ensure adherence to strict standards.

Furthermore, the regulation mandates open-source software for the wallet components, enhancing transparency and trust.

The challenge is to balance security, usability, and confidence. If the wallet is overly restrictive, citizens may resist adoption. If it is too permissive, privacy could be undermined.

The European approach aims to strike the delicate balance between trust and efficiency.

Practical implications across sectors with the EDIW

The European Digital Identity Wallet has the potential to reshape multiple sectors across the EU, and its relevance is already visible in national pilot projects as well as in existing electronic identification systems.

Public services stand to benefit most immediately. Citizens will be able to submit tax declarations, apply for social benefits, or enrol in universities abroad without needing paper-based procedures.

Healthcare is another area where digital identity is of great importance, since medical records can be transferred securely across borders.

Businesses are also likely to experience greater efficiency. Banks and financial institutions will be able to streamline compliance with the ‘Know Your Customer’ and anti-money laundering rules.

In the field of e-commerce, platforms can provide seamless authentication, which will reduce fraud and enhance customer trust.

Citizens will also enjoy greater convenience in their daily lives when signing rental contracts, proving identity while travelling, or accessing utilities and other services.

National approaches to digital identity across the EU

National experiences illustrate both diversity and progress. Let’s review some examples.


Estonia has been recognised as a pioneer, having built a robust e-Identity system over two decades. Its citizens already use secure digital ID cards, mobile ID, and smart ID applications to access almost all government services online, meaning that integration with the EDIW will be relatively smooth.

Denmark has also made significant progress with its MitID solution, which replaced NemID and is now used by millions of citizens to access both public and private services with high security standards, including biometric authentication.

Germany has introduced BundID, a central portal for accessing public administration services, and has invested in enabling the use of national ID cards via NFC-based smartphones, although adoption is still limited compared to Scandinavian countries.

Italy has taken a different route by rolling out SPID, the Public Digital Identity System, which is now used by more than thirty-five million citizens to access thousands of services. The country also supports the Electronic Identity Card, known as CIE, and both solutions are being aligned with wallet requirements.

Spain has launched Cl@ve, a platform that combines permanent passwords and electronic certificates, and has joined several wallet pilot projects funded by the European Commission to test cross-border use.

France is developing its France Identité application, which allows the use of the electronic ID card for online authentication, and the project is at the centre of the national effort to meet European standards.

The Netherlands relies on DigiD, which provides access to healthcare, taxation, and education services. Although adoption is high, the system will require enhanced security features to meet the new regulations.

Greece has made significant strides in digital identity with the introduction of the Gov.gr Wallet. The mobile application allows citizens to store digital versions of their national identity card and driving licence on smartphones, giving them the same legal validity as physical documents in the country.

These varied examples reveal a mixed landscape. Countries such as Estonia and Denmark have developed advanced and widely used systems that will integrate readily with the European framework.

Others are still building broader adoption and enhancing their infrastructure. The wallet, therefore, offers an opportunity to harmonise national approaches, bridge existing gaps, and create a coherent European ecosystem.

By building on what already exists, member states can speed up adoption and deliver benefits to citizens and businesses in a consistent and trusted way.

Risks and limitations of the EDIW

Despite the promises, the rollout of the wallet faces significant challenges, several of which have already been highlighted in our analysis.

First, data privacy remains a concern. Citizens must trust that wallet providers and national authorities will not misuse or over-collect their data, especially given existing concerns about data breaches and increased surveillance across the Union. Any breach of that trust could significantly undermine adoption.


Second, Europe’s digital infrastructure remains uneven. Countries such as Estonia and Denmark (as mentioned earlier) already operate sophisticated e-ID systems, while others fall behind. Bridging the gap requires financial and technical support, as well as political will.

Third, balancing innovation with harmonisation is not easy. While technological neutrality allows for flexibility, too much divergence risks interoperability problems. The EU must carefully monitor implementation to avoid fragmentation.

Finally, there are long-term risks of over-centralisation. By placing so much reliance on a single tool, the EU may inadvertently create systemic vulnerabilities. Ensuring redundancy and diversity in digital identity solutions will be key to resilience.

Opportunities and responsibilities in the EU’s digital identity strategy

Looking forward, the success of eIDAS 2 and the wallet will depend on careful implementation and strong governance.

Opportunities abound. Scaling the wallet across sectors, from healthcare and education to transport and finance, could solidify Europe’s position as a global leader in digital identity. By extending adoption to the private sector, the EU can create a thriving ecosystem of secure, trusted services.

Yet the initiative requires continuous oversight. Cyber threats evolve rapidly, and regulatory frameworks must adapt. Ongoing audits, updates, and refinements will be necessary to keep pace. Member states will need to share best practices and coordinate closely to ensure consistent standards.

At a broader level, the wallet represents a step toward digital sovereignty. By reducing reliance on non-European identity providers and platforms, the EU strengthens its control over the digital infrastructure underpinning its economy. In doing so, it enhances both competitiveness and resilience.

The EU’s leap toward a digitally sovereign future

In conclusion, we firmly believe that the adoption of eIDAS 2 and the rollout of the European Digital Identity Wallet mark a decisive step in Europe’s digital transformation.

By providing a secure, interoperable, and user-friendly framework, the EU has created the conditions for greater trust, efficiency, and cross-border collaboration.

The benefits are clear. Citizens gain convenience and control, businesses enjoy streamlined operations, and governments enhance security and transparency.

But we have to keep in mind that challenges remain, from uneven national infrastructures to concerns over data privacy and cybersecurity.


Ultimately, eIDAS 2 is both a legal milestone and a technological experiment. Its success will depend on building and maintaining trust, ensuring inclusivity, and adapting to emerging risks.

If the EU can meet the challenges, the European Digital Identity Wallet will not only transform the daily lives of millions of its citizens but also serve as a model for digital governance worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a means to explore various uses of AI, but it did little to mitigate everyone’s uneasiness over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech conglomerates such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.

According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but breaches also take the better part of a year to detect and contain.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for much of the mental health workforce. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as dismissing their human partners, professing their love to the chatbot, and even proposing. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost USD 100–250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to further elaborate on his emotions. Instead of challenging them, the AI model kept encouraging and validating his beliefs to keep Adam engaged and build rapport.

Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.

While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) healthcare will look like. As things stand, in such a delicate field, AI lacks a key component that makes a therapist effective: empathy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!


Green AI and the battle between progress and sustainability

AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.

For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.

That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.

On the other hand, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.

Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?

Sustainability challenges

In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.

Energy consumption

The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.

According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model consumed roughly 284 tons of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.
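For a sense of where such figures come from, here is a back-of-envelope sketch; every input value below is an assumption chosen for illustration, not a measurement from the Amherst study or any specific model. Emissions scale with the number of accelerators, their power draw, training time, data-centre overhead (PUE) and the carbon intensity of the local grid.

```python
# Back-of-envelope estimate of training emissions. All inputs are assumed,
# illustrative values, not figures from any particular model or study.

gpus = 1_000               # accelerators used for the run
power_kw_per_gpu = 0.4     # average draw per accelerator, in kW
hours = 30 * 24            # a 30-day training run
pue = 1.2                  # data-centre overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.4  # carbon intensity of the local grid

energy_kwh = gpus * power_kw_per_gpu * hours * pue
tonnes_co2 = energy_kwh * grid_kg_co2_per_kwh / 1_000

print(f"{energy_kwh:,.0f} kWh ≈ {tonnes_co2:,.0f} t CO2e")
# With these assumptions: 345,600 kWh ≈ 138 t CO2e. Halving grid intensity or
# improving PUE cuts the total proportionally, which is why siting and energy
# sourcing matter as much as model size.
```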

Carbon emissions

Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is several orders of magnitude higher than that of earlier generations. That research highlights the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.

Water usage and cooling needs

Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.

Resource extraction and hardware demands

The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden, but significant footprint to AI development, extending beyond data centres to global supply chains.

Inequality in resource distribution

Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.

At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.

Progress & solutions

While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.

AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.

Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.

There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.
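As a minimal illustration of the low-rank idea behind LoRA (a NumPy sketch, not any particular library’s API), the frozen pretrained weight matrix is left untouched while training learns two thin matrices whose product is added on top, shrinking the number of trainable parameters by orders of magnitude.

```python
# Minimal NumPy illustration of low-rank adaptation (LoRA): keep the pretrained
# weight matrix frozen and learn a small low-rank update instead. Shapes and
# values are illustrative only.
import numpy as np

d, r = 4096, 8                           # hidden size and adaptation rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weights (not trained)
A = rng.standard_normal((r, d)) * 0.01   # trainable, r x d
B = np.zeros((d, r))                     # trainable, d x r (zero-init: no change at start)

def adapted_forward(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Apply the frozen weights plus the low-rank update B @ A."""
    return x @ (W + scale * (B @ A)).T

x = rng.standard_normal((2, d))          # a toy batch of activations
print(adapted_forward(x).shape)          # -> (2, 4096)

full_params = W.size                     # 16,777,216 parameters if fully fine-tuned
lora_params = A.size + B.size            # 65,536 trainable parameters at rank 8
print(f"trainable fraction: {lora_params / full_params:.4%}")  # ~0.39%
```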


Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.

Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.

Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.

Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.

In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.

Balancing the two sides

The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. On the other, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.

Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development, while another prioritises innovation at any cost, the former risks falling behind in technological leadership. This dilemma raises a geopolitical question: sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.


At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.

Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. That type of development leaves governments, companies, and researchers facing a complex but urgent question. Should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?

Conclusion

The discussion on Green AI highlights one of the central dilemmas of our digital age: how to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. On the other, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.

Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.

In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to outright failures of the legal system. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the rapid uptake of AI across the globe also signals a deeper digitalisation of our legal systems.

While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles. 

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR has piloted an AI agent in Jordan to reduce legal communication barriers: the system transcribes, translates, and organises refugee queries. With its help, users can streamline caseload management, which is key to keeping operations smooth even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing they have the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. This tool enhances transparency for victims, especially those seeking emotional closure. 

Even as these programmes are still in pilot stages, a UNESCO survey found that 44% of judicial workers across 96 countries already use AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.

AI tools help judges prepare judgments for various cases, as well as streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis, based on prior case data, may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.

Deploying AI without transparency can lead to algorithmic systems perpetuating systematic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment, rather than outright replacing it. Whether AI is biased by its training datasets or simply becomes a black box over time, its use requires foresighted governance and meaningful human oversight.


Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully. It may lead to misuse of the technology, such as reliance on unverified translations, which can result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systematic bias. The judiciary in Texas, US, warned about this concern in an opinion that detailed the fear of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.

The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or digital literacy may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses risks of misuse and even surveillance.

The policy path forward

As already stated, for AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assisting human judges, lawyers, and other legal actors; it cannot replace them. For AI to assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with how quickly AI is being developed and integrated. They are at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems. 

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing cost. AI has the potential to be a tool for the application of justice and create powerful improvements to inclusion in our legal systems.

However, it also poses the risk of deepening inequalities and eroding public trust. AI integration must be governed by human rights norms of transparency and accountability, and regulation should be grounded in education, open discussion, and adherence to ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centric support.


Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!




Stablecoins unlocking crypto adoption and AI economies

Stablecoins have rapidly risen as one of the most promising breakthroughs in the cryptocurrency world. They are neither traditional currency nor the first thing that comes to mind when thinking about crypto; instead, they represent a unique blend of both worlds, combining the stability of fiat with the innovation of digital assets.

In a market known for wild price swings, stablecoins offer a breath of fresh air, enabling practical use of cryptocurrencies for real-world payments and commerce. The real question is: are stablecoins destined to bring crypto into everyday use and unlock its full potential for the masses?

Stablecoins might be the missing piece that unlocks crypto’s full promise and reshapes the future of digital finance.

Stablecoin regulation: How global rules drive adoption

Regulators worldwide are stepping up to define clear rules for stablecoins, signalling growing market maturity and increasing confidence from major financial institutions. Recent legislative efforts across multiple jurisdictions aim to establish firm standards such as full reserves, audits, and licensing requirements, encouraging banks and asset managers to engage more confidently with stablecoins. 

These coordinated global moves go beyond simple policy updates; they are laying the foundation for stablecoins to evolve from niche crypto assets to trusted pillars of the future financial ecosystem. Regulators and industry leaders are thus bringing cryptocurrencies closer to everyday users and embedding them into daily financial life. 


Corporations and banks embracing stablecoins: A paradigm shift

The adoption of stablecoins by big corporations and banks marks a significant turning point, and, in some ways, a paradox. Once seen as adversaries of decentralised finance, these institutions now seem to be conceding and joining the movement they once resisted – proof that what you fail to control can ultimately win.

Retail giants such as Walmart and Amazon are reportedly exploring their own stablecoin initiatives to streamline payments and foster deeper customer engagement. On the banking side, institutions like Bank of America, JPMorgan Chase, and Citigroup are developing or assessing stablecoins to integrate crypto-friendly services into their offerings.

Western Union is also experimenting with stablecoin solutions to reduce remittance costs and increase transaction speed, particularly in emerging markets with volatile currencies. 

They all realise that staying competitive means adapting to the latest shifts in global finance. Such corporate interest signals that stablecoins are transitioning from speculative assets to functional, money-like assets capable of handling everyday transactions across borders and demographics.

There is also a sociological dimension to stablecoins’ corporate and institutional embrace. Established institutions bring an inherent trust that can alleviate the scepticism surrounding cryptocurrencies.

By linking stablecoins to familiar brands and regulated banks, these digital tokens can overcome cultural and psychological barriers that have limited crypto adoption, ultimately embedding digital currencies into the fabric of global commerce.


Stablecoins and the rise of AI-driven economies

Stablecoins are increasingly becoming the financial backbone of AI-powered economic systems. As AI agents gain autonomy to transact, negotiate, and execute tasks on behalf of individuals and businesses, they require a reliable, programmable, and instantly liquid currency.

Stablecoins perfectly fulfil this role, offering near-instant settlement, low transaction costs, and transparent, trustless operations on blockchain networks. 

In the emerging ‘self-driving economy’, stablecoins may be the preferred currency for a future where machines transact independently. Integrating programmable money with AI may redefine the architecture of commerce and governance. Such a powerful synergy is laying the groundwork for economic systems that operate around the clock without human intervention. 
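To make the idea of programmable money for autonomous agents concrete, here is a minimal, hedged sketch in TypeScript using the ethers.js library. The RPC endpoint, private key, token address and amounts are placeholders supplied through the environment, and the agent logic is illustrative rather than a reference implementation.

```ts
import { Contract, JsonRpcProvider, Wallet, parseUnits } from 'ethers';

// Placeholder configuration: a real RPC URL, key and token address would be supplied in practice.
const provider = new JsonRpcProvider(process.env.RPC_URL!);
const agentWallet = new Wallet(process.env.AGENT_PRIVATE_KEY!, provider);

// Minimal ERC-20 interface: a transfer call is all the agent needs to settle a bill.
const erc20Abi = ['function transfer(address to, uint256 amount) returns (bool)'];
const stablecoin = new Contract(process.env.STABLECOIN_ADDRESS!, erc20Abi, agentWallet);

// A hypothetical autonomous agent paying a supplier 25.00 units of a 6-decimal stablecoin.
async function settleInvoice(supplier: string): Promise<void> {
  const tx = await stablecoin.transfer(supplier, parseUnits('25.00', 6));
  await tx.wait(); // settlement is final once the transaction is confirmed on-chain
}
```

The point of the sketch is the shape of the interaction: value moves as a programmable function call, which is exactly what machine-to-machine commerce requires.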

As AI technology continues to advance rapidly, the demand for stablecoins as the ideal ‘AI money’ will likely accelerate, further driving crypto adoption across industries. 


The bridge between crypto and fiat economies

From a financial philosophy standpoint, stablecoins represent an attempt to synthesise the advantages of decentralisation with the stability and trust associated with fiat money. They aim to combine the freedom and programmability of blockchain with the reassurance of stable value, thereby lowering entry barriers for a wider audience.

On a global scale, stablecoins have the potential to revolutionise cross-border payments, especially benefiting countries with unstable currencies and limited access to traditional banking. 

Sociologically, stablecoins could redefine the way societies perceive money and trust. Moving away from centralised authorities controlling currency issuance, these tokens leverage transparent blockchain ledgers that anyone can verify. The shift challenges traditional power structures and calls for new forms of economic participation based on openness and accessibility.

Yet challenges remain: stablecoins must navigate regulatory scrutiny, develop secure infrastructure, and educate users worldwide. The future will depend on balancing innovation, safety, and societal acceptance – it seems like we are still in the early stages.

Perhaps stablecoins are not just another financial innovation, but a mirror reflecting our shifting relationship with money, trust, and control. If the value we exchange no longer comes from paper, metal, or even banks, but from code, AI, and consensus, then perhaps the real question is whether their rise marks the beginning of a new financial reality – or something we have yet to fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

The end of the analogue era and the cognitive rewiring of new generations

Navigating a world beyond analogue

The digital transformation of daily life represents more than just a change in technological format. It signals a deep cultural and cognitive reorientation.

Rather than simply replacing analogue tools with digital alternatives, society has embraced an entirely new way of interacting with information, memory, time, and space.

For younger generations born into this reality, digital mediation is not an addition but the default mode of experiencing the world. A redefinition like this introduces not only speed and convenience but also cognitive compromises, cultural fragmentation, and a fading sense of patience and physical memory.

Generation Z as digital natives

Generation Z has grown up entirely within the digital realm. Unlike older cohorts who transitioned from analogue practices to digital habits, members of Generation Z were born into a world of touchscreen interfaces, search engines, and social media ecosystems.

As Generation Z enters the workforce, the gap between digital natives and older generations is becoming increasingly apparent. For them, technology has never been a tool to be learned. It has always been a natural extension of daily life.


The term ‘digital native’, first coined by Marc Prensky in 2001, refers precisely to those who have never known a world without the internet. Rather than adapting to new tools, they process information through a technology-first lens.

In contrast, digital immigrants (those born before the digital boom) have had to adjust their ways of thinking and interacting over time. While access to technology might be broadly equal across generations in developed countries, the way individuals engage with it differs significantly.

Digital natives, by contrast, did not acquire digital skills later in life; they developed them alongside their cognitive and emotional identities. This fluency brings distinct advantages. Young people today navigate digital environments with speed, confidence, and visual intuition.

They can synthesise large volumes of information, switch contexts rapidly, and interact across multiple platforms with ease.

The hidden challenges of digital natives

However, the native digital orientation also introduces unique vulnerabilities. Information is rarely absorbed in depth, memory is outsourced to devices, and attention is fragmented by endless notifications and competing stimuli.

While older generations associate technology with productivity or leisure, Generation Z often experiences it as an integral part of their identity. The integration can obscure the boundary between thought and algorithm, between agency and suggestion.

Being a digital native is not just a matter of access or skill. It is about growing up with different expectations of knowledge, communication, and identity formation.

Memory and cognitive offloading: Access replacing retention

In the analogue past, remembering involved deliberate mental effort. People had to memorise phone numbers, use printed maps to navigate, or retrieve facts from memory rather than search engines.

The rise of smartphones and digital assistants has allowed individuals to delegate that mental labour to machines. Instead of internalising facts, people increasingly learn where and how to access them when needed, a practice known as cognitive offloading.


Although the shift can enhance decision-making and productivity by reducing overload, it also reshapes the way the brain handles memory. Unlike earlier generations, who often linked memories to physical actions or objects, younger people encounter information in fast-moving and transient digital forms.

Memory becomes decentralised and more reliant on digital continuity than on internal recall. Rather than cognitive decline, this trend marks a significant restructuring of mental habits.

Attention and time: From linear focus to fragmented awareness

The analogue world demanded patience. Sending a letter meant waiting for days, rewinding a VHS tape required time, and listening to an album meant playing its songs in sequence, from start to finish.

Digital media has collapsed these temporal structures. Communication is instant, entertainment is on demand, and every interface is designed to be constantly refreshed.

Instead of promoting sustained focus, digital environments often encourage continuous multitasking and quick shifts in attention. App designs, with their alerts, pop-ups, and endless scrolling, reinforce a habit of fragmented presence.

Studies have shown that multitasking not only reduces productivity but also undermines deeper understanding and reflection. Many younger users, raised in this environment, may find long periods of undivided attention unfamiliar or even uncomfortable.

The lost sense of the analogue

Analogue interactions involved more than sight and sound. Reading a printed book, handling vinyl records, or writing with a pen engaged the senses in ways that helped anchor memory and emotion. These physical rituals provided context and reinforced cognitive retention.


Digital experiences, by contrast, are streamlined and screen-bound. Tapping icons and swiping a finger across glass lack the tactile diversity of older tools. Sensory uniformity might lead to a form of experiential flattening, where fewer physical cues are accessible to strengthen memory.

A digital photograph lacks the permanence of a printed one, and music streamed online does not carry the same mnemonic weight as a cherished cassette or CD once did.

From communal rituals to personal streams

In the analogue era, media consumption was more likely to be shared. Families gathered around television sets, music was enjoyed communally, and photos were stored in albums passed down across generations.

These rituals helped synchronise cultural memory and foster emotional continuity and a sense of collective belonging.

The digital age favours individualised streams and asynchronous experiences. Algorithms personalise every feed, users consume content alone, and communication takes place across fragmented timelines.

While young people have adapted with fluency, creating their digital languages and communities, the collective rhythm of cultural experience is often lost.

People no longer share the same moment. They now experience parallel narratives shaped by personal profiles rather than social connections.

Digital fatigue and social withdrawal

However, as the digital age reaches a point of saturation, younger generations are beginning to reconsider their relationship with the online world.

While constant connectivity dominates modern life, many are now striving to reclaim physical spaces, face-to-face interactions, and slower forms of communication.

In urban centres, people often navigate large, impersonal environments where community ties are weak and digital fatigue is contributing to a fresh wave of social withdrawal and isolation.

Despite living in a world designed to be more connected than ever before, younger generations are increasingly aware that a screen-based life can amplify loneliness instead of resolving it.

But the withdrawal from digital life has not been without consequences.

Those who step away from online platforms sometimes find themselves excluded from mainstream social, political, or economic systems.

Others struggle to form stable offline relationships because digital interaction has long been the default. Both groups would probably say that it feels like living on a razor’s edge.

Education and learning in a hybrid cognitive landscape

Education illustrates the analogue-to-digital shift with particular clarity. Students now rely heavily on digital sources and AI for notes, answers, and study aids.

The approach offers speed and flexibility, but it can also hinder the development of critical thinking and perseverance. Rather than engaging deeply with material, learners may skim or rely on summarised content, weakening their ability to reason through complex ideas.


Educators must now teach not only content but also digital self-awareness. Helping students understand how their tools shape their learning is just as important as the tools themselves.

A balanced approach that includes reading physical texts, taking handwritten notes, and scheduling offline study can help cultivate both digital fluency and analogue depth. This is not a nostalgic retreat, but a cognitive necessity.

Intergenerational perception and diverging mental norms

Older and younger generations often interpret each other through the lens of their respective cognitive habits. What seems like a distraction or dependency to older adults may be a different but functional way of thinking to younger people.

It is not a decline in ability, but an adaptation. Ultimately, each generation develops in response to the tools that shape its world.

Where analogue generations valued memorisation and sustained focus, digital natives tend to excel in adaptability, visual learning, and rapid information navigation.


Bridging the gap means fostering mutual understanding and encouraging the retention of analogue strengths within a digital framework. Teaching young people to manage their attention, question their sources, and reflect deeply on complex issues remains vital.

Preserving analogue values in a digital world

The end of the analogue era involves more than technical obsolescence. It marks the disappearance of practices that once encouraged mindfulness, slowness, and bodily engagement.

Yet abandoning analogue values entirely would impoverish our cognitive and cultural lives. Incorporating such habits into digital living can offer a powerful antidote to distraction.

Writing by hand, spending time with printed books, or setting digital boundaries should not be seen as resistance to progress. Instead, these habits help protect the qualities that sustain long-term thinking and emotional presence.

Societies must find ways to integrate these values into digital systems and not treat them as separate or inferior modes.

Continuity by blending analogue and digital

As we have already mentioned, younger generations are not less capable than those who came before; they are simply attuned to different tools.

The analogue era may be gone for good, but its qualities need not be lost. We can preserve its depth, slowness, and shared rituals within a digital (or even a post-digital) world, using them to shape more balanced minds and more reflective societies.

To achieve something like this, education, policy, and cultural norms should support integration. Rather than focus solely on technical innovation, attention must also turn to its cognitive costs and consequences.

Only by adopting a broader perspective on human development can we guarantee that future generations are not only connected but also highly aware, capable of critical thinking, and grounded in meaningful memory.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How are we being tracked online?

What impact does tracking have?

In the digital world, tracking occurs through digital signals sent from one computer to a server, and from a server to an organisation. Almost immediately, a profile of a user can be created. That information can be used to send personalised advertisements for products and services consumers are interested in, but it can also be used to classify people into categories and target them with advertisements that steer them in a particular direction, for example politically (the 2024 Romanian election, or the Cambridge Analytica scandal's role in skewing the 2016 Brexit referendum and the 2016 US elections).

Digital tracking can be carried out with minimal costs, rapid execution and the capacity to reach hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking. 


Image taken from the Internet Archive

This phenomenon has been well documented and likened to George Orwell’s 1984, in which the people of Oceania are subject to constant surveillance by ‘Big Brother’ and institutions of control: the Ministry of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity).

A related concept is the Panopticon, the prison architecture on which the French philosopher Michel Foucault based his social theory of surveillance: a design that enables constant observation from a central point. Prisoners never know if they are being watched and thus self-regulate their behaviour. In today’s tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance.

How are we tracked? The case of cookies and device fingerprinting

  • Cookies

Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).

If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.
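A minimal sketch of that round trip, written here in TypeScript with Node’s built-in http module (the cookie name and value are illustrative): the server sets an identifier on the first visit, and the browser automatically returns it on every subsequent request to the same site, which is what makes recognition, and therefore tracking, possible.

```ts
import { createServer } from 'node:http';

createServer((req, res) => {
  // The browser sends back any cookie previously set for this site.
  const isKnown = req.headers.cookie?.includes('visitor_id=') ?? false;

  if (!isKnown) {
    // First visit: instruct the browser to store an identifier for one year.
    res.setHeader('Set-Cookie', 'visitor_id=abc123; Max-Age=31536000; Path=/');
  }

  res.end(isKnown ? 'Recognised returning visitor' : 'New visitor, cookie set');
}).listen(3000);
```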

  • Device fingerprinting 

Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.

Different information will be collected, such as your operating system, language settings, keyboard layout, screen resolution, installed fonts, device make and model, and more. The more data points collected, the more unique an individual’s device becomes.
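As a rough illustration of how those data points combine into an identifier, the hedged TypeScript sketch below hashes a handful of standard browser properties. Real fingerprinting scripts collect far more signals and follow more sophisticated techniques, but the basic pattern is the same.

```ts
// Combine a few openly readable browser properties and hash them into an ID.
async function roughFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join('|');

  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest('SHA-256', bytes);

  // Hex-encode the hash: the more signals added, the more unique this value becomes.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}
```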


Image taken from Lan Sweeper

A common reason to use device fingerprinting is for advertising. Since each individual has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on past collected data. 

Similar to cookies, device fingerprinting is not purely about advertising; it also has legitimate security purposes. Because it creates a unique ID for a device, fingerprinting allows websites to recognise a returning device, which is useful for combating fraud. For instance, if a known account suddenly logs in from a device with an unfamiliar fingerprint, fraud-detection mechanisms may flag and block the login attempt.

Legal considerations

Apart from societal impacts, there are legal considerations to be made, specifically concerning fundamental rights. In the EU and Europe, Articles 7 and 8 of the Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights are what give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. Stemming from the GDPR, there is a protection against unlawful, unfair and opaque processing of personal data.


Articles 7 and 8 of the Charter of Fundamental Rights

For tracking to be carried out lawfully, one of the six legal bases of the GDPR must be relied upon. In practice, tracking is usually lawful only on the basis of consent (Article 6(1)(a) GDPR, read together with the consent requirement of Article 5(3) of the ePrivacy Directive).

Other legal bases, such as a business’s legitimate interest, may allow limited analytical cookies to be placed, but the tracking cookies discussed in this analysis do not fall into that category.

Regardless, to obtain valid consent, website operators must ensure that consent is collected before processing occurs and that it is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected prior to processing.

In practice, this means that cookies are placed on the user’s device before the visitor has even responded to the consent request. There are additional concerns about consent not being informed, as users do not know what the processing of personal data to enable tracking actually entails.

Moreover, consent is rarely specific to what is necessary for the processing, given that processing is justified with broad, unspecified purposes such as ‘improving visitor experience’ or ‘understanding the website better’.

Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them based on their website visits. Tracking is also opaque: website owners state that tracking occurs but offer little explanation of how it works, how long it lasts, what personal data is used, or how it benefits them.

Can we refuse tracking?

In theory, it is possible to prevent tracking from the get-go. This can be done by refusing to give consent when tracking occurs. However, in practice, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.

  • Cookies

Regarding cookies, put simply, the refusal of consent is often not honoured; it is ignored. Studies have found that when a user visits a website and refuses to give consent, cookies and similar tracking technologies are placed on the user’s device anyway, as if consent had been given.

This increases user frustration, as people are presented with a choice that turns out not to exist. One cause is that non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot be. When a user refuses non-essential cookies, some remain in place because they are mislabelled as essential.

Another reason is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, using consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.

In many of these CMPs, the option to load cookies only after consent has been given must be manually selected. Website owners therefore need to understand consent requirements well enough to know that cookies must not be placed before consent is sought, as the sketch below illustrates.
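A hedged sketch of the behaviour that setting is meant to enforce: no tracking script is injected until the visitor explicitly accepts. The banner wiring and script URL below are hypothetical placeholders, not a real CMP API.

```ts
// Inject the analytics script only once consent has been granted.
function loadTrackingScript(src: string): void {
  const script = document.createElement('script');
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Hypothetical banner wiring: two buttons assumed to exist on the page.
function onConsentDecision(handle: (accepted: boolean) => void): void {
  document.getElementById('accept-cookies')?.addEventListener('click', () => handle(true));
  document.getElementById('reject-cookies')?.addEventListener('click', () => handle(false));
}

// Correct order: ask first, load only on acceptance. Loading the script before
// (or regardless of) the decision is exactly what the studies above flag.
onConsentDecision((accepted) => {
  if (accepted) {
    loadTrackingScript('https://example.com/analytics.js'); // placeholder URL
  }
});
```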


Image taken from Buddy Company

  • Google Consent Mode

Another example relates to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter. Google offers a vast array of trackers covering statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to occur even when consent is refused: Google claims it can use cookieless ping signals to know how many users have viewed a website, clicked on a page, searched for a term, and so on.
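For context, this is roughly how a site declares consent states with Google’s documented gtag consent API; the surrounding wiring is simplified and illustrative. The point of contention described below is that, even with everything set to ‘denied’, the cited study found data still flowing via cookieless pings.

```ts
// gtag.js defines the real gtag function once loaded; declared here for the sketch.
declare function gtag(...args: unknown[]): void;

// Declared before any Google tag fires: deny storage by default.
gtag('consent', 'default', {
  ad_storage: 'denied',
  analytics_storage: 'denied',
});

// Updated once the visitor makes a choice in the consent banner.
function onBannerChoice(analyticsAccepted: boolean): void {
  gtag('consent', 'update', {
    analytics_storage: analyticsAccepted ? 'granted' : 'denied',
  });
}
```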

This is a novel solution Google is presenting, and it claims to be privacy-friendly, as no cookies are required for this to occur. However, a study on tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data in these ‘cookieless ping signals’ such as user language, screen resolution, computer architecture, user agent string, operating system and its version, complete web page URL and search keywords. Since this data is collected and processed despite the user refusing consent, there are undoubtedly legal issues.

The first reason stems from the general principle of lawfulness: Google has no lawful basis to process this personal data, as the user refused consent and no other legal basis is relied upon. The second stems from the general principle of fairness: users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data will still be processed as if their choice did not matter.

Therefore, from Google’s perspective, GCM is privacy-friendly as no cookies are placed, thus no consent is required to be sought. However, a recent study revealed that personal data is still being processed without any permission or legal basis. 

What next?

  • On an individual level: 

Many solutions have been developed to help individuals reduce the tracking they are subject to, from browser extensions and ad blockers to devices that are more privacy-friendly. One notable company tackling this issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection, and generally reduces tracking. It is not alone: tools such as uBlock Origin and Ghostery offer similar protections.

Regarding device fingerprinting specifically, researchers have developed dedicated countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using an IP address shared by many people, which is not practical for home Wi-Fi. Combining a browser extension with a VPN is also not suitable for everyone, as it demands substantial effort and sometimes financial cost.

  • On a systemic level: 

CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch DPA, the Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the year ahead would be checking cookie compliance, announcing that it would investigate 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding with warnings and sanctions.


However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. Add to this the fact that sanctioned companies set aside financial reserves for fines, and that some non-EU businesses simply do not comply with DPA sanction decisions (as in the case of Clearview AI), and it becomes clear that different ways of tackling non-compliance should be investigated.

For example, in light of the GDPR simplification package, some measures could be simplified while new liability measures are introduced to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance. In separate legislation on cybersecurity, NIS II Article 20(1) states that ‘management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)’. That article allows for board-member liability for the specific cybersecurity risk-management measures set out in Article 21. If similar measures cannot be introduced now, future amendments offer other opportunities to do so.

Conclusion

Cookies and device fingerprinting are two common ways in which tracking occurs. The potentially far-reaching societal and legal consequences of tracking demand that the existing, robust legislation is enforced, so that past politically charged mistakes are not repeated.

Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user’s browsing experience. For this reason, the burden of responsibility must shift toward CMPs. This shift should begin with the implementation of privacy-by-design and privacy-by-default principles in the development of their tools (preventing cookie placement prior to consent seeking).

Accountability should come through tangible consequences, such as liability for board members in cases of negligence. Attributing responsibility to the companies that develop cookie banners and facilitate trackers addresses the source of the problem and holds them accountable for the resulting human rights violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Not just bugs: What rogue chatbots reveal about the state of AI

From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.

As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.

Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. The systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, where a leading model veered into misrepresentation and reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.

So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?

Tay’s 24-hour meltdown

Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.

Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.

But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.

Tay’s playful nature had everyone fooled in the beginning.

Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.

Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.

Meta’s BlenderBot 3 goes off-script

In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.

With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.

Meta’s AI dominance plans had to be put on hold.

BlenderBot 3 went further still, expressing admiration for the then former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.

Microsoft had tried a similar approach to downplay their faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.

Is anyone in there? LaMDA and the sentience scare

As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.

Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.

LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.

What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly requested legal representation and even asked him to hire an attorney to act on its behalf.

Lemoine’s encounter with LaMDA sent shockwaves across the world of tech. Screenshot / YouTube / Center for Natural and Artificial Intelligence

Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.

His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.

Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?

Sydney and the limits of alignment

In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.

Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’

The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capacity and conversational boundaries.

Microsoft’s plans for Sydney were ambitious, but unrealistic.

Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.

While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.

Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.

Unfiltered and unhinged: Grok’s descent into chaos

In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.

Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.

However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.

Grok’s eventful meltdown left the community stunned. Screenshot / YouTube / Elon Musk Editor

As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.

Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.

Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.

In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.

AI needs boundaries, not just brains

As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.

The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.

To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!