Albania has made history by introducing the world’s first AI government minister, named Diella, who gave her inaugural address to parliament this week. Appearing in a video as a woman in traditional Albanian dress, Diella defended her appointment by stressing she was ‘not here to replace people, but to help them.’
She also dismissed accusations of being ‘unconstitutional,’ saying the real threat to the constitution comes from ‘inhumane decisions of those in power.’ Prime Minister Edi Rama announced that the AI minister will oversee all public tenders, promising full transparency and a corruption-free process.
The move comes as Albania struggles with corruption scandals, including the detention of Tirana’s mayor on charges of money laundering and abuse of contracts. Albania currently ranks 80th out of 180 countries on Transparency International’s corruption index.
The opposition, however, fiercely rejected the initiative. Former prime minister and Democratic Party leader Sali Berisha called the project a publicity stunt, warning that Diella cannot curb corruption and that it is unconstitutional. The opposition has vowed to challenge the appointment in the Constitutional Court after boycotting the parliamentary vote.
Despite the controversy, the government insists the AI minister reflects its commitment to reform and EU integration. Rama has set an ambitious goal of leading Albania, a nation of 2.8 million, into the European Union by 2030, with the fight against corruption at the heart of that mission.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.
Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.
Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.
Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.
The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
CARU found that some videos on the MrBeast YouTube channel included undisclosed advertising in descriptions and pinned comments, which could mislead young viewers.
It also raised concerns about a promotional taste test for Feastables chocolate bars, which could appear to children to be a valid comparison despite lacking a scientific basis.
Investigators said Feastables sweepstakes failed to clearly disclose free entry options, minimum age requirements and the actual odds of winning. Promotions were also criticised for encouraging excessive purchases and applying sales pressure, such as countdown timers urging children to buy more chocolate.
Privacy issues were also identified, with Feastables collecting personal data from under-13s without parental consent. CARU noted the absence of an effective age gate and highlighted that information provided via popups was sent to third parties.
MrBeast and Feastables said many of the practices under review had already been revised or discontinued, but pledged to take CARU’s recommendations into account in future campaigns.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Global AI spending is projected to reach $1.5 trillion in 2025 and exceed $2 trillion in 2026, yet a critical element is missing: human judgement. A growing number of organisations are turning to behavioural science to bridge this gap, coding it directly into AI systems to create what experts call behavioural AI.
Early adopters like Clarity AI utilise behavioural AI to flag ESG controversies before they impact earnings. Morgan Stanley uses machine learning and satellite data to monitor environmental risks, while Google Maps influences driver behaviour, helping avoid over one million tonnes of CO₂ emissions annually.
Behavioural AI is being used to predict how leaders and societies act under uncertainty. These insights guide corporate strategy, PR campaigns, and decision-making. Mind Friend combines a network of 500 mental health experts with AI to build a ‘behavioural infrastructure’ that enhances judgement.
The behaviour analytics market was valued at $1.1 billion in 2024 and is projected to grow to $10.8 billion by 2032. Major players, such as IBM and Adobe, are entering the field, while Davos and other global forums debate how behavioural frameworks should shape investment and policy decisions.
As AI scrutiny grows, ethical safeguards are critical. Companies that embed governance, fairness, and privacy protections into their behavioural AI are earning trust. In a $2 trillion market, winners will be those who pair algorithms with a deep understanding of human behaviour.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.
ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.
Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.
The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.
A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Te Whatu Ora (New Zealand's national health service) has appointed Sonny Taite as acting director of innovation and AI and launched a new programme called HealthX.
The initiative aims to deliver one AI-driven healthcare project each month from September 2025 until February 2026, drawing on ideas from frontline staff rather than entirely new concepts.
Speaking at the TUANZ and DHA Tech Users Summit in Auckland, New Zealand, Taite explained that HealthX will focus on three pressing challenges: workforce shortages, inequitable access to care, and clinical inefficiencies.
He emphasised the importance of validating ideas, securing funding, and ensuring successful pilots scale nationally.
The programme has already tested an AI-powered medical scribe in the Hawke’s Bay emergency department, with early results showing a significant reduction in administrative workload.
Taite is also exploring solutions for specialist shortages, particularly in dermatology, where some regions lack public services, forcing patients to travel or seek private care.
A core cross-functional team, a clinical expert group, and frontline champions such as chief medical officers will drive HealthX.
Taite underlined that building on existing cybersecurity and AI infrastructure at Te Whatu Ora, which already processes billions of security signals monthly, provides a strong foundation for scaling innovation across the health system.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has already invested over $1 billion in digital infrastructure across Africa, including subsea cable projects such as Equiano and Umoja, enabling 100 million people to come online for the first time. Four new regional cable hubs are being established to further boost connectivity and resilience.
Alongside infrastructure, Google will provide college students in eight African countries with a free one-year subscription to Google AI Pro. The tools, including Gemini 2.5 Pro and Guided Learning, are designed to support research, coding, and problem-solving.
By 2030, Google says it intends to reach 500 million Africans with AI-powered innovations tackling issues such as crop resilience, flood forecasting and access to education.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.
The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.
The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.
The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.
With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The sequence of social media ban, backlash, reversal, and political rupture has told an unexpected digital governance tale. The on-the-ground reality: a clash between a fast-evolving regulatory push and a hyper-networked youth cohort that treats connectivity as livelihood, classroom, and public square.
The trigger: A registration ultimatum meets a hyper-online society
The ban didn’t arrive from nowhere. Nepal has been building toward platform licensing since late 2023, when the government issued the Social Media Management Directive 2080, requiring platforms to register with the Ministry of Communication and Information Technology (MoCIT), designate a local contact, and comply with expedited takedown and cooperation rules. In early 2025, the government tabled a draft Social Media Bill 2081 in the National Assembly to convert that directive into statute. International legal reviews, including a UNESCO-supported March 2025 assessment, praised the goal of accountability but warned that vague definitions, sweeping content-removal powers and weak independent oversight could chill lawful speech.
Why did the order provoke such a strong reaction? Consider the baseline: Nepal had about 14.3 million social media user identities at the start of 2025, roughly 48% of the population, with internet use at around 56%. In a society where half the population (and a far larger share of its urban youth) relies on social apps for news, school, side-hustles, remittances and family ties, platform switches are not lifestyle choices; they are digital infrastructure. Grasping that generation gap is key to understanding what followed.
The movement: Gen Z logistics in a blackout world
What made Nepal’s youth mobilisation unusual wasn’t only its size and adaptability, but the speed and digital literacy with which organisers navigated today’s digital infrastructure, skills less familiar to people who don’t use these platforms daily. Once the ban hit, the digitally literate rapidly diversified their strategies:
Alt-messaging and community hubs: With legacy apps dark, Discord emerged as a ‘virtual control room,’ a natural fit for a generation raised in multiplayer servers. Despite the ban, the movement’s core group (Hami Nepal) organised on Discord and Instagram. Several Indian outlets, including the Times of India, claimed that more than 100,000 users converged in sprawling voice and text channels to debate leadership choices during the transition.
Peer-to-peer and ‘mesh’ apps: Encrypted, Bluetooth-based tools, prominently Bitchat, covered by mainstream and crypto-trade press, saw a burst of downloads as protest organisers prepared for intermittent internet access and cellular throttling. The appeal was simple: it works offline, hops device-to-device, and is harder to block.
Locally registered holdouts: Because TikTok and Viber had registered with MoCIT, they remained online and quickly became funnels for updates, citizen journalism and short-form explainers about where to assemble and how to avoid police cordons. Nepal Police’s Cyber Bureau, alarmed by the VPN stampede, publicly warned users about indiscriminate VPN use and data-theft risks, advice that landed with little force once crowds were already in the streets.
The logistics looked like distributed operations: a core group tasked with sourcing legal and medical aid; volunteer cartographers maintaining live maps of barricades; diaspora Nepalis mirroring clips to international audiences; and moderators trying (often failing) to keep chatrooms free of calls to violence.
The law: What Nepal tried to regulate, and why it backfired
The directive and draft bill rest on four pillars:
Mandatory registration with MoCIT and local point-of-contact;
Expedited removal of content deemed ‘unlawful’ or ‘harmful’;
Data cooperation requirements with domestic authorities;
Penalties for non-compliance and for user-level offences such as phishing, impersonation and deepfake distribution.
Critics and the youth movement argued that the friction came not from the idea of regulation itself, but from how it was drafted and applied. The UNESCO-supported March 2025 assessment of the Social Media Bill 2081 flagged vague, catch-all definitions (e.g. ‘disrupts social harmony’), weak due process around takedown orders, and a lack of independent oversight, urging a tiered, risk-based approach that distinguishes between a global platform and a small local forum and builds in judicial review and appeals. A Centre for Law and Democracy (CLD) analysis warned that focusing policy ‘almost exclusively on individual pieces of content’ instead of systemic risk management would produce overbroad censorship tools without solving the harms regulators worry about.
Labelling the event a ‘Gen Z uprising’ is broadly accurate, and the numbers help frame it. People aged 15–24 make up about one-fifth of Nepal’s population, and adding 25–29 pushes the 15–29 bracket to roughly a third, close to the share captured by the ‘Gen Z’ definition used in this case (born 1997–2012, so aged 13–28 in 2025). Those young people are most likely online daily: trading on TikTok, Instagram, and Facebook Marketplace; freelancing across borders; preparing for exams with YouTube and Telegram notes; and maintaining relationships across labour-migration splits via WhatsApp and Viber. When those rails go down, they feel it first and hardest.
There’s also the matter of expectations. A decade of smartphone diffusion trained Nepali youth to assume the availability of news, payments, learning, work, and diaspora connections, but the ban punctured that assumption. In interviews and livestreams, student voices toggled between free-speech language and bread-and-butter complaints (lost orders, cancelled tutoring, a frozen online store, a blocked interview with an overseas client).
The platforms: two weeks of reputational whiplash
Meta: after months of criticism for ignoring registration notices, it has still not registered in Nepal and remains out of compliance with the government’s requirements under the Social Media Bill 2081.
TikTok, banned in 2023 for ‘social harmony’ concerns and later restored after agreeing to compliance, found itself on the legal side of the ledger this time; it stayed up and became a publishing artery for youth explainers and police-abuse documentation.
VPN providers, especially Proton, earned folk-hero status. The optics of an ‘8,000% surge’ became shorthand for resilience.
Discord shifted from gamer space to civic nerve centre, a recurring pattern from Hong Kong to Myanmar that Nepal echoed in miniature. Nepalis turned to Discord to debate the country’s political future, fact-check rumours and collect nominations for the country’s future leaders. On 12 September, the Discord community organised a digital poll for an interim prime minister, with former Supreme Court Chief Justice Sushila Karki emerging as the winner. The same features that facilitate raids and speed-runs (voice, low-latency presence, and channel hierarchies) make for a capable ad-hoc command room. The Hami Nepal group’s role in the transitional politics underscores that shift.
The economy and institutions: Damage, then restraint
The five-day blackout blew holes in ordinary commerce: sellers lost a festival week of orders, creators watched brand deals collapse, and freelancers missed interviews. The violence that followed destroyed far more: estimates circulating in the aftermath put the damage from the uprising at roughly USD 280 million (EUR 240 million).
On 9 September, the government lifted the platform restrictions; by 13 September, news reports chronicled a reopening capital under interim PM Karki, who spent her first days visiting hospitals and signalling commitments to elections and legal review. What followed mattered: the ban was rescinded, but the task of ensuring accountability remained. The episode gave legislators the chance to return to the bill’s text with international guidance on the table, and gave leaders the chance to translate street momentum into institutional reform.
Bottom line
Overall, Nepal’s last two weeks were not a referendum on whether social platforms should face rules. They were a referendum on how those rules are made and enforced in a society where connectivity is a lifeline and the connected are young. A government sought accountability by unplugging the public square; the public, mostly Gen Z, responded by building new squares in hours and then spilling into the real one. The costs are plain and human, from the hospital wards to the charred chambers of parliament. The opportunity is also plain: to rebuild digital law so that rights and accountability reinforce rather than erase each other.
If that happens, the ‘Gen Z revolution’ of early September will not be a story about apps. It will be a story about institutions catching up to the internet, and about a generation insisting it be invited to write a new social contract for digital times, one that ensures accountability, transparency, judicial oversight and due process.
Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.
The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.
Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.
Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.
SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!