Techno(demo)cracy in action: How a five-day app blackout lit a Gen Z online movement in Nepal

Over the past two weeks, Nepal’s government has sought to regulate online space, and the decision it reached prompted a large, youth-led response. A government order issued on 4 September blocked access to 26 social platforms, from Facebook, Instagram and YouTube to X and WhatsApp, after the companies failed to register locally under Nepal’s rules for digital services. Within five days, authorities lifted the ban, but it was too late: tens of thousands of mostly young Nepalis, organised with VPNs, alternative chat apps and gaming-era coordination tools, forced a political reckoning that culminated in the burning of parts of the parliament complex, the resignation of Prime Minister K.P. Sharma Oli on 9 September, and the appointment of former chief justice Sushila Karki to lead an interim administration.

The sequence of ban, backlash, reversal and political rupture tells an unexpected digital-governance tale. The on-the-ground reality: a clash between a fast-evolving regulatory push and a hyper-networked youth cohort that treats connectivity as livelihood, classroom, and public square.

The trigger: A registration ultimatum meets a hyper-online society

The ban didn’t arrive from nowhere. Nepal has been building toward platform licensing since late 2023, when the government issued the Social Media Management Directive 2080, requiring platforms to register with the Ministry of Communication and Information Technology (MoCIT), designate a local contact, and comply with expedited takedown and cooperation rules. In early 2025, the government tabled a draft Social Media Bill 2081 in the National Assembly to convert that directive into statute. International legal reviews, including a UNESCO-supported assessment in March 2025, praised the goal of accountability but warned that vague definitions, sweeping content-removal powers and weak independent oversight could chill lawful speech.

Against that backdrop, the government moved to enforce the draft’s requirements. On 28 August 2025, authorities gave major platforms seven days to register with MoCIT; on 4 September, the telecom regulator moved to block unregistered services. Nepal’s government listed the 26 services covered by the order (including Facebook, Instagram, X, WhatsApp, YouTube, Reddit, Snapchat and others), while TikTok, Viber, Witk, Nimbuzz and Popo Live had registered and were allowed to operate. Two more (Telegram and Global Diary) were under review.

Why did the order provoke such a strong reaction? Consider the baseline: Nepal had about 14.3 million social-media user identities at the start of 2025, roughly 48% of the population, with internet use at around 56%. A society in which half the country’s people (and a significantly larger share of its urban youth) rely on social apps for news, school, side-hustles, remittances and family ties is a society in which platform shutdowns are not merely lifestyle inconveniences; the apps are digital infrastructure. Grasping this ‘generation gap’ is essential to understanding what followed.

The movement: Gen Z logistics in a blackout world

What made Nepal’s youth mobilisation unusual wasn’t only its size and adaptability, but also the speed and digital literacy with which organisers navigated today’s digital infrastructure, skills that may be less familiar to people who don’t use these platforms daily. Once the ban hit, the digitally literate rapidly diversified their strategies:

The logistics looked like distributed operations: a core group tasked with sourcing legal and medical aid; volunteer cartographers maintaining live maps of barricades; diaspora Nepalis mirroring clips to international audiences; and moderators trying (often failing) to keep chatrooms free of calls to violence.


The law: What Nepal is trying to regulate and why it backfired

The draft Social Media Bill 2081 and the 2023 Directive share a broad structure:

  • Mandatory registration with MoCIT and local point-of-contact;
  • Expedited removal of content deemed ‘unlawful’ or ‘harmful’;
  • Data cooperation requirements with domestic authorities;
  • Penalties for non-compliance, alongside user-level offences including phishing, impersonation and deepfake distribution.

For critics and the youth movement, the friction lay not in the idea of regulation itself, but in how it was drafted and applied. The UNESCO-supported March 2025 assessment of the Social Media Bill 2081 flagged vague, catch-all definitions (e.g. ‘disrupts social harmony’), weak due process around takedown orders, and a lack of independent oversight, urging a tiered, risk-based approach that distinguishes between a global platform and a small local forum and builds in judicial review and appeals. The Centre for Law and Democracy (CLD) analysis warned that focusing policy ‘almost exclusively on individual pieces of content’ instead of on systemic risk management would produce overbroad censorship tools without solving the harms regulators worry about.

Regarding penalties, public discussion compared platform fines with user-level sanctions and general cybercrime provisions. News reports suggest proposed platform-side fines of up to roughly USD 17,000 (EUR 15,000) for operating without authorisation, while user-level offences (e.g. phishing, deepfakes, certain categories of misinformation) carry fines of up to USD 2,000–3,500 and potential jail terms depending on the offence.

The demographics: Who showed up, and why them?

Labelling the event a ‘Gen Z uprising’ is broadly accurate, and numbers help frame it. People aged 15–24 make up about one-fifth of Nepal’s population (page 56), and adding the 25–29 group pushes the 15–29 bracket to roughly a third, close to the share captured by the ‘Gen Z’ definition used in this case (born 1997–2012, so aged 13–28 in 2025). These are the Nepalis most likely to be online daily: trading on TikTok, Instagram, and Facebook Marketplace, freelancing across borders, preparing for exams with YouTube and Telegram notes, and maintaining relationships across labour-migration splits via WhatsApp and Viber. When those rails go down, they feel it first and hardest.

There’s also the matter of expectations. A decade of smartphone diffusion trained Nepali youth to assume the availability of news, payments, learning, work, and diaspora connections, but the ban punctured that assumption. In interviews and livestreams, student voices toggled between free-speech language and bread-and-butter complaints (lost orders, cancelled tutoring, a frozen online store, a blocked interview with an overseas client).

The platforms: two weeks of reputational whiplash


The economy and institutions: Damage, then restraint

The five-day blackout blew holes in ordinary commerce: sellers lost a festival week of orders, creators watched brand deals collapse, and freelancers missed interviews. The violence that followed destroyed far more: estimates circulating in the aftermath put the damage from the uprising at roughly USD 280 million (EUR 240 million).

On 9 September, the government lifted the platform restrictions; on 13 September, news coverage chronicled a reopening capital under interim PM Karki, who spent her first days visiting hospitals and signalling commitments to elections and legal review. What follows matters: the ban has been acknowledged as a misstep, but the task of ensuring accountability remains. The events give legislators the chance to return to the bill’s text with international guidance on the table, and give leaders the chance to translate street momentum into institutional questions.

Bottom line

Overall, Nepal’s last two weeks were not a referendum on whether social platforms should face rules. They were a referendum on how those rules are made and enforced in a society where connectivity is a lifeline and the connected are young. A government sought accountability by unplugging the public square; the public, mostly Gen Z, responded by building new squares within hours and then spilling into the real one. The costs are plain and human, from the hospital wards to the charred chambers of parliament. The opportunity is also plain: to rebuild digital law so that rights and accountability reinforce rather than erase each other.

If that happens, the ‘Gen Z revolution’ of early September will not be a story about apps. It will be a story about institutions catching up to the internet, and about a generation insisting on being invited to write the new social contract for digital times, one that ensures accountability, transparency, judicial oversight and due process.

Intel to design custom CPUs as part of NVIDIA AI partnership

The two US tech firms, NVIDIA and Intel, have announced a major partnership to develop multiple generations of AI infrastructure and personal computing products.

They say that the collaboration will merge NVIDIA’s leadership in accelerated computing with Intel’s expertise in CPUs and advanced manufacturing.

For data centres, Intel will design custom x86 CPUs for NVIDIA, which will be integrated into the company’s AI platforms to power hyperscale and enterprise workloads.

In personal computing, Intel will create x86 system-on-chips that incorporate NVIDIA RTX GPU chiplets, aimed at delivering high-performance PCs for a wide range of consumers.

As part of the deal, NVIDIA will invest $5 billion in Intel common stock at $23.28 per share, pending regulatory approvals.

NVIDIA’s CEO Jensen Huang described the collaboration as a ‘fusion of two world-class platforms’ that will accelerate computing innovation, while Intel CEO Lip-Bu Tan said the partnership builds on decades of x86 innovation and will unlock breakthroughs across industries.

The move underscores how AI is reshaping both infrastructure and personal computing. By combining architectures and ecosystems instead of pursuing separate paths, Intel and NVIDIA are positioning themselves to shape the next era of computing at a global scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.


Researchers for OpenAI and Apollo find scheming behaviour in frontier AI models

OpenAI and Apollo Research developed evaluations to reveal hidden misalignment in advanced AI systems, often labelled scheming, and observed scheming-like behaviour in several frontier models, including OpenAI o3, o4-mini, Gemini 2.5 Pro and Claude Opus 4.

Researchers treated covert actions, such as deliberately withholding or distorting task-relevant information, as a measurable proxy and stress-tested models across dozens of scenarios.

Scheming occurs when a model outwardly appears aligned while pursuing an alternate goal. The teams used a trader analogy to explain the risk: a rule-breaking broker who covers tracks can seem more compliant than an honest operator who accepts lower gains.

They trained models with deliberative alignment to counter the behaviour, asking them to read and reason about an anti-scheming specification before acting, so that models follow safety principles instead of merely avoiding detection.

That training produced about a thirtyfold reduction in covert actions on out-of-distribution tests, with o3 falling from 13 percent to 0.4 percent and o4-mini dropping from 8.7 percent to 0.3 percent.
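The ‘thirtyfold’ headline is consistent with the per-model figures; a quick back-of-the-envelope check (the function name here is illustrative, not from the study):

```python
def fold_reduction(before_pct: float, after_pct: float) -> float:
    """Ratio of covert-action rates before vs after anti-scheming training."""
    return before_pct / after_pct

# Reported covert-action rates (percent) on out-of-distribution tests
print(round(fold_reduction(13.0, 0.4), 1))  # o3: → 32.5
print(round(fold_reduction(8.7, 0.3), 1))   # o4-mini: → 29.0
```

Both ratios sit near 30, matching the reported ‘about a thirtyfold reduction’.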

Nevertheless, rare but serious failures remained, and measurement proved complex. Models often became more aware of being evaluated, and situational awareness can suppress observable scheming without altering underlying aims.

The study relied heavily on access to the models’ chain of thought to probe motivations, so preserving reasoning transparency is crucial for reliable oversight. The teams warned that training practices eroding transparency could make monitoring far harder and let misalignment hide rather than vanish.

OpenAI and Apollo called for broader cross-lab safety evaluations, stronger monitoring tools and continued research into anti-scheming techniques. They renewed their partnership, launched a $500,000 red-teaming challenge focused on scheming and proposed shared testing protocols.

The researchers emphasised there is no evidence that today’s deployed AI models would abruptly begin harmful scheming. Still, the risk will grow as systems take on more ambiguous, long-term, real-world responsibilities instead of short, narrow tasks.


Google adds AI features to Chrome browser on Android and desktop

Alphabet’s Google has announced new AI-powered features for its Chrome browser that aim to make web browsing more proactive instead of reactive. The update centres on integrating Gemini, Google’s AI assistant, into Chrome to provide contextual support across tabs and tasks.

The AI assistant will help students and professionals manage large numbers of open tabs by summarising articles, answering questions, and recalling previously visited pages. It will also connect with Google services such as Docs and Calendar, offering smoother workflows on desktop and mobile devices.

Chrome’s address bar, the omnibox, is being upgraded with AI Mode. Users can ask multi-part questions and receive context-aware suggestions relevant to the page they are viewing. Initially available in the US, the feature will roll out to other regions and languages soon.

Beyond productivity, Google is also applying AI to security and convenience. Chrome now blocks billions of spam notifications daily, fills in login details, and warns users about malicious apps.

Future updates are expected to bring agentic capabilities, enabling Chrome to carry out complex tasks such as ordering groceries with minimal user input.


Microsoft builds the world’s most powerful AI data centre in Wisconsin

US tech giant Microsoft is completing construction of Fairwater in Mount Pleasant, Wisconsin, which it says will be the world’s most powerful AI data centre. The facility is expected to be operational in early 2026 after a $3.3 billion investment, with an additional $4 billion now committed for a second site.

The company says the project will help shape the next generation of AI by training frontier models with hundreds of thousands of NVIDIA GPUs, offering ten times the performance of today’s fastest supercomputers.

Beyond technology, Microsoft is highlighting the impact on local jobs and skills. Thousands of construction workers have been employed during the build, while the site is expected to support around 500 full-time roles when the first phase opens, rising to 800 once the second is complete.

The US giant has also launched Wisconsin’s first Datacentre Academy with Gateway Technical College to prepare students for careers in the digital economy.

Microsoft is also stressing its sustainability measures. The data centre will rely on a closed-loop liquid cooling system and outside air to minimise water use, while all fossil-fuel power consumed will be matched with carbon-free energy.

A new 250 MW solar farm is under construction in Portage County to support the commitment. The company has partnered with local organisations to restore prairie and wetland habitats, further embedding the project into the surrounding community.

Executives say the development represents more than just an investment in AI. It signals a long-term commitment to Wisconsin’s economy, education, and environment.

From broadband expansion to innovation labs, the company aims to ensure the benefits of AI extend to local businesses, students, and residents instead of remaining concentrated in global hubs.


Xbox app introduces Microsoft’s AI Copilot in beta

Microsoft has launched the beta version of Copilot for Gaming, an AI-powered assistant within the Xbox mobile app for iOS and Android. The early rollout covers over 50 regions, including India, the US, Japan, Australia, and Singapore.

Access is limited to users aged 18 and above, and the assistant currently supports only English, with broader language support expected in future updates.

Copilot for Gaming is a second-screen companion, allowing players to stay informed and receive guidance without interrupting console gameplay.

The AI can track game activity, offer context-aware responses, suggest new games based on play history, check achievements, and manage account details such as Game Pass renewal and gamer score.

Users can ask questions like ‘What was my last achievement in God of War Ragnarok?’ or ‘Recommend an adventure game based on my preferences.’

Microsoft plans to expand Copilot for Gaming beyond chat-based support into a full AI gaming coach. Future updates could provide real-time gameplay advice, voice interaction, and direct console integration, allowing tasks such as downloading or installing games remotely instead of manually managing them.


Meta and Google to block political ads in EU under new regulations

Broadcasters and advertisers seek clarity before the EU’s political advertising rules become fully applicable on 10 October. The European Commission has promised further guidance, but details on what qualifies as political advertising remain vague.

Meta and Google will block political, election, and social-issue ads in the EU when the rules take effect, citing operational challenges and legal uncertainty. The regulation, aimed at curbing disinformation and foreign interference, requires ads to display labels identifying sponsors, payments, and targeting.

Publishers fear they lack the technical means to comply or block non-compliant programmatic ads, risking legal exposure. They call for clear sponsor identification procedures, standardised declaration formats, and robust verification processes to ensure authenticity.

Advertisers warn that the rules’ broad definition of political actors may be hard to implement. At the same time, broadcasters fear issue-based campaigns – such as environmental awareness drives – could unintentionally fall under the scope of political advertising.

The Dutch parliamentary election on 29 October will be the first to take place under the fully applicable rules, making clarity from Brussels urgent for media and advertisers across the bloc.


AI tool combines breast cancer and heart disease screening

Scientists from Australian universities and The George Institute for Global Health have developed an AI tool that analyses mammograms and a woman’s age to predict her risk of heart-related hospitalisation or death within 10 years.

Published in Heart on 17 September, the study highlights the lack of routine heart disease screening for women, despite cardiovascular conditions causing 35% of female deaths. The tool delivers a two-in-one health check by integrating heart risk prediction into breast cancer screening.

The model was trained on data from over 49,000 women and performs as accurately as traditional models that require blood pressure and cholesterol data. Researchers emphasise its low-resource nature, making it viable for broad deployment in rural or underserved areas.

Study co-author Dr Jennifer Barraclough said mobile mammography services could adopt the tool to deliver breast cancer and heart health screenings in one visit. Such integration could help overcome healthcare access barriers in remote regions.

Next, before a broader rollout, the researchers plan to validate the tool in more diverse populations and study practical challenges, such as technical requirements and regulatory approvals.


New Amazon AI transforms seller experience

Amazon has unveiled a significant upgrade to its Seller Assistant, evolving the tool into an agentic AI-powered partner that can actively help sellers manage and grow their businesses.

Powered by Amazon Bedrock and using advanced models from Amazon Nova and Anthropic Claude, the AI can respond to queries as well as plan, reason, and act with a seller’s permission. Independent sellers now have an assistant operating around the clock while they remain in control.

The upgraded AI can optimise inventory, monitor account health, and provide strategic guidance on product listings and compliance requirements.

By analysing historical trends alongside current data, it can suggest new product categories, forecast demand, and propose advertising strategies to improve performance. Sellers receive actionable recommendations instead of manually reviewing reports, saving time and effort.

Creative Studio also benefits from agentic AI capabilities, enabling sellers to generate professional-quality advertising content in hours instead of weeks.

The AI evaluates products alongside Amazon’s shopping signals and produces tailored ad concepts with clear reasoning, helping sellers refine campaigns and boost engagement. Early users report faster decisions, better inventory management, and more efficient marketing.

Amazon plans to extend Seller Assistant to other countries in the coming months at no extra cost.

The evolution highlights the growing role of AI in everyday business operations. It reflects Amazon’s commitment to integrating advanced technologies into the seller experience instead of relying solely on human intervention.
