Techno(demo)cracy in action: How a five-day app blackout lit a Gen Z online movement in Nepal

Over the past two weeks, Nepal’s government has sought to regulate its online space, and the decision it reached prompted a large, youth-led response. A government order issued on 4 September blocked access to 26 social platforms, from Facebook, Instagram and YouTube to X and WhatsApp, after the companies failed to register locally under Nepal’s rules for digital services. Within five days, authorities lifted the ban, but it was too late: tens of thousands of mostly young Nepalis, organized with VPNs, alternative chat apps and gaming-era coordination tools, forced a political reckoning that culminated in the burning of parts of the parliament complex, the resignation of Prime Minister K.P. Sharma Oli on 9 September, and the appointment of former chief justice Sushila Karki to lead an interim administration.

The sequence of social media ban, backlash, reversal and political break tells an unexpected tale of digital governance. The on-the-ground reality: a clash between a fast-evolving regulatory push and a hyper-networked youth cohort that treats connectivity as livelihood, classroom, and public square.

The trigger: A registration ultimatum meets a hyper-online society

The ban didn’t arrive from nowhere. Nepal has been building toward platform licensing since late 2023, when the government issued the Social Media Management Directive 2080 requiring platforms to register with the Ministry of Communication and Information Technology (MoCIT), designate a local contact, and comply with expedited takedown and cooperation rules. In early 2025, the government tabled a draft Social Media Bill 2081 in the National Assembly to convert that directive into statute. International legal reviews, including a UNESCO-supported assessment in March 2025, praised the goal of accountability but warned that vague definitions, sweeping content-removal powers and weak independence could chill lawful speech.

Against that backdrop, the cabinet and the courts put the draft bill’s requirements into practice. On 28 August 2025, authorities gave major platforms seven days to register with MoCIT; on 4 September, the telecom regulator moved to block unregistered services. Nepal’s government listed the 26 services covered by the order (including Facebook, Instagram, X, WhatsApp, YouTube, Reddit, Snapchat and others), while TikTok, Viber, Witk, Nimbuzz and Popo Live had already registered and were allowed to operate. Two more (Telegram and Global Diary) were under review.

Why did the order provoke such a strong reaction? Consider the baseline: Nepal had about 14.3 million social-media user identities at the start of 2025, roughly 48% of the population, with internet use at around 56%. In a society where half the people (and a significantly larger share of urban youth) rely on social apps for news, school, side-hustles, remittances and family ties, platform switch-offs are not merely lifestyle choices; they are outages of digital infrastructure. Stressing this generation gap is essential to understanding the reaction.

The movement: Gen Z logistics in a blackout world

What made Nepal’s youth mobilisation unusual wasn’t only its size and adaptability; it was the speed and digital literacy with which organisers navigated today’s digital infrastructure, skills that may be less familiar to people who don’t use these platforms daily. Once the ban hit, the digitally literate rapidly diversified their strategies:

The logistics looked like distributed operations: a core group tasked with sourcing legal and medical aid; volunteer cartographers maintaining live maps of barricades; diaspora Nepalis mirroring clips to international audiences; and moderators trying (often failing) to keep chatrooms free of calls to violence.


The law: What Nepal is trying to regulate, and why it backfired

The draft Social Media Bill 2081 and the 2023 Directive share a broad structure:

  • Mandatory registration with MoCIT and local point-of-contact;
  • Expedited removal of content deemed ‘unlawful’ or ‘harmful’;
  • Data cooperation requirements with domestic authorities;
  • Penalties for non-compliance, plus user-level offences including phishing, impersonation and deepfake distribution.

Critics and the youth movement found that friction was caused not by the idea of regulation itself, but by how it was drafted and applied. The UNESCO-supported March 2025 assessment of the Social Media Bill 2081 flagged vague, catch-all definitions (e.g. ‘disrupts social harmony’), weak due process around takedown orders, and a lack of independent oversight, urging a tiered, risk-based approach that distinguishes between a global platform and a small local forum, and builds in judicial review and appeals. The Centre for Law and Democracy (CLD) analysis warned that focusing policy ‘almost exclusively on individual pieces of content’ instead of systemic risk management would produce overbroad censorship tools without solving the harms regulators worry about.

Regarding penalties, public discussion compared platform fines with user-level sanctions and general cybercrime provisions. News reports suggest proposed platform-side fines of up to roughly USD 17,000 (EUR 15,000) for operating without authorisation, while user-level offences (e.g. phishing, deepfakes, certain categories of misinformation) carry fines of up to USD 2,000–3,500 and potential jail terms depending on the offence.

The demographics: Who showed up, and why them?

Labelling the event a ‘Gen Z uprising’ is broadly accurate, and numbers help frame it. People aged 15–24 make up about one-fifth of Nepal’s population, and adding those aged 25–29 pushes the 15–29 bracket to roughly a third, close to the share captured by the ‘Gen Z’ definition used in this case (born 1997–2012, so aged 13–28 in 2025). They are the most likely to be online daily: trading on TikTok, Instagram, and Facebook Marketplace, freelancing across borders, preparing for exams with YouTube and Telegram notes, and maintaining relationships across labour-migration splits via WhatsApp and Viber. When those rails go down, they feel it first and hardest.

There’s also the matter of expectations. A decade of smartphone diffusion trained Nepali youth to assume the availability of news, payments, learning, work, and diaspora connections, but the ban punctured that assumption. In interviews and livestreams, student voices toggled between free-speech language and bread-and-butter complaints (lost orders, cancelled tutoring, a frozen online store, a blocked interview with an overseas client).

The platforms: two weeks of reputational whiplash


The economy and institutions: Damage, then restraint

The five-day blackout blew holes in ordinary commerce: sellers lost a festival week of orders, creators watched brand deals collapse, and freelancers missed interviews. The violence that followed destroyed far more: estimates circulating in the aftermath put the damage from the Gen Z uprising at roughly USD 280 million (EUR 240 million).

On 9 September, the government lifted the platform restrictions; by 13 September, news reports chronicled a reopening capital under interim PM Karki, who spent her first days visiting hospitals and signalling commitments to elections and legal review. What followed mattered: the ban had been acknowledged as a mistake, but the task of ensuring accountability remained. The events gave legislators the chance to return to the bill’s text with international guidance on the table, and gave leaders the chance to translate street momentum into institutional questions.

Bottom line

Overall, Nepal’s last two weeks were not a referendum on whether social platforms should face rules. They were a referendum on how those rules are made and enforced in a society where connectivity is a lifeline and the connected are young. A government sought accountability by unplugging the public square; the public, mostly Gen Z, responded by building new squares within hours and then spilling into the real one. The costs are plain and human, from the hospital wards to the charred chambers of parliament. The opportunity is also plain: to rebuild digital law so that rights and accountability reinforce rather than erase each other.

If that happens, the ‘Gen Z revolution’ of early September will not be a story about apps. It will be a story about institutions catching up to the internet, and about a generation insisting on being invited to write a new social contract for digital times, one that ensures accountability, transparency, judicial oversight and due process.

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado, US, known as T.S., was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Amazon and Mercado Libre criticised for limiting seller mobility in Mexico

Mexico’s competition watchdog has accused Amazon and Mercado Libre of erecting barriers that limit the mobility of sellers in the country’s e-commerce market. The two platforms reportedly account for 85% of the seller market.

The Federal Economic Competition Commission (COFECE) stated that the companies provide preferential treatment to sellers who utilise their logistics services and fail to disclose how featured offers are selected, thereby restricting fair competition.

Despite finding evidence of these practices, COFECE stopped short of imposing corrective measures, citing a lack of consensus among stakeholders. Amazon welcomed the decision, saying it demonstrates the competitiveness of the retail market in Mexico.

The watchdog aims to promote a more dynamic e-commerce sector, benefiting buyers and sellers. Its February report had recommended measures to improve transparency, separate loyalty programme services, and allow fairer access to third-party delivery options.

Trade associations praised COFECE for avoiding sanctions, warning that penalties could harm consumers and shield traditional retailers. Mercado Libre has not yet commented on the findings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act enforcement gears up with 15 authorities named in Ireland

Ireland has designated 15 authorities to monitor compliance with the EU’s AI Act, making it one of the first EU countries fully ready to enforce the new rules. The AI Act regulates AI systems according to their risk to society and began phasing in last year.

Governments had until 2 August to notify the European Commission of their appointed market surveillance authorities. In Ireland, these include the Central Bank, Coimisiún na Meán, the Data Protection Commission, the Competition and Consumer Protection Commission, and the Health and Safety Authority.

The country will also establish a National AI Office as the central coordinator for AI Act enforcement and liaise with EU institutions. A single point of contact must be designated where multiple authorities are involved to ensure clear communication.

Ireland joins Cyprus, Latvia, Lithuania, Luxembourg, Slovenia, and Spain as countries that have appointed their contact points. The Commission has not yet published the complete list of authorities notified by member states.

Former Italian Prime Minister Mario Draghi has called for a pause in the rollout of the AI Act, citing risks and a lack of technical standards. The Commission has launched a consultation as part of its digital simplification package, which will be implemented in December.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan investigates X for non-compliance with the harmful content law

Japanese regulators are reviewing whether the social media platform X is failing to comply with new content removal rules.

The law, which took effect in April, requires designated platforms to allow victims of harmful online posts to request deletion without facing unnecessary obstacles.

X currently obliges non-users to register an account before they can file such requests. Officials say this could represent an excessive burden for victims and may violate the law.

The company has also been criticised for not providing clear public guidance on submitting removal requests, prompting questions over its commitment to combating online harassment and defamation.

Other platforms, including YouTube and messaging service Line, have already introduced mechanisms that meet the requirements.

The Ministry of Internal Affairs and Communications has urged all operators to treat non-users like registered users when responding to deletion demands. Still, X and the bulletin board site bakusai.com have yet to comply.

The ministry says it will continue to assess whether X’s practices breach the law. Experts on a government panel have called for more public information on the process, arguing that awareness could help deter online abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West London borough approves AI facial recognition CCTV rollout

Hammersmith and Fulham Council has approved a £3m upgrade to its CCTV system that will see facial recognition and AI integrated across the west London borough.

With over 2,000 cameras, the council intends to install live facial recognition technology at crime hotspots and link it with police databases for real-time identification.

Alongside the new cameras, 500 units will be equipped with AI tools to speed up video analysis, track vehicles, and provide retrospective searches. The plans also include the possible use of drones, pending approval from the Civil Aviation Authority.

Council leader Stephen Cowan said the technology will provide more substantial evidence in a criminal justice system he described as broken, arguing it will help secure convictions instead of leaving cases unresolved.

Civil liberties group Big Brother Watch condemned the project as mass surveillance without safeguards, warning of constant identity checks and retrospective monitoring of residents’ movements.

Some locals also voiced concern, saying the cameras address crime after it happens instead of preventing it. Others welcomed the move, believing it would deter offenders and reassure those who feel unsafe on the streets.

The Metropolitan Police currently operates one pilot site in Croydon, with findings expected later in the year, and the council says its rollout depends on continued police cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood studios take legal action against MiniMax for AI copyright infringement

Disney, Warner Bros. Discovery and NBCUniversal have filed a lawsuit in California against Chinese AI company MiniMax, accusing it of large-scale copyright infringement.

The studios allege that MiniMax’s Hailuo AI service generates unauthorised images and videos featuring well-known characters such as Darth Vader, marketing itself as a ‘Hollywood studio in your pocket’ instead of respecting copyright laws.

According to the complaint, MiniMax, reportedly worth $4 billion, ignored cease-and-desist requests and continues to profit from copyrighted works. The studios argue that the company could easily implement safeguards, pointing to existing controls that already block violent or explicit content.

The studios claim MiniMax’s approach represents a serious threat to both creators and the broader film industry, which contributes hundreds of billions of dollars to the US economy.

Plaintiffs, including Disney’s Marvel and Lucasfilm units, Universal’s DreamWorks Animation and Warner Bros.’ DC Comics, are seeking statutory damages of up to $150,000 per infringed work or unspecified compensation.

They are also asking for an injunction to prevent MiniMax from continuing its alleged violations instead of simply paying damages.

The Motion Picture Association has backed the lawsuit, with its chairman Charles Rivkin warning that unchecked copyright infringement could undermine millions of jobs and the cultural value created by the American film industry.

MiniMax, based in Shanghai, has not responded publicly to the claims but has previously described itself as a global AI foundation model company with over 157 million users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ghana launches national privacy campaign

Ghana has launched the National Privacy Awareness Campaign, a year-long initiative to strengthen citizens’ privacy rights and build public trust in the country’s expanding digital ecosystem.

Unveiled by Deputy Minister Mohammed Adams Sukparu, the campaign emphasises that data protection is not just a legal requirement but essential to innovation, digital participation, and Ghana’s goal of becoming Africa’s AI hub.

The campaign will run from September 2025 to September 2026 across all 16 regions, using English and key local languages to promote widespread awareness.

The initiative includes the inauguration of the Ghana Association of Privacy Professionals (GAPP) and recognition of new Certified Data Protection Officers, many trained through the One Million Coders Programme.

Officials stressed that effective data governance requires government, private sector, civil society, and media collaboration. The Data Protection Commission reaffirmed its role in protecting privacy while noting ongoing challenges such as limited awareness and skills gaps.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI moves to for-profit with Microsoft deal

Microsoft and OpenAI have agreed to new non-binding terms that will allow OpenAI to restructure into a for-profit company, marking a significant shift in their long-standing partnership.

The agreement sets the stage for OpenAI to raise capital, pursue additional cloud partnerships, and eventually go public, while Microsoft retains access to its technology.

The previous deal gave Microsoft exclusive rights to sell OpenAI tools via Azure and made it the primary provider of compute power. OpenAI has since expanded its options, including a $300 billion cloud deal with Oracle and an agreement with Google, allowing it to develop its own data centre project, Stargate.

OpenAI aims to maintain its nonprofit arm, which will receive more than $100 billion from the projected $500 billion private market valuation.

Regulatory approval from the attorneys general of California and Delaware is required for the new structure, with OpenAI targeting completion by the end of the year to secure key funding.

Both companies continue to compete across AI products, from consumer chatbots to business tools, while Microsoft works on building its own AI models to reduce reliance on OpenAI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches CAF 4.0 for cybersecurity

The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.

The updated CAF provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.

Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.

AI-related cyber risks are also now covered more thoroughly throughout the framework.

The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.

Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.

Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!