UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe prepares formal call for AI Gigafactory projects

The European Commission is collaborating with EU capitals to narrow the list of proposals for large AI training hubs, known as AI Gigafactories. The €20 billion plan will be funded by the Commission (17%), EU member states (17%), and industry (66%) to boost computing capacity for European developers.
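As a quick back-of-the-envelope illustration, the split cited above implies roughly the following contributions; the euro amounts below are derived from the stated percentages, not separately announced figures.

```python
# Approximate breakdown of the EUR 20 billion AI Gigafactory plan,
# derived from the percentages cited above; amounts are illustrative.

total_eur_bn = 20.0
shares = {
    "European Commission": 0.17,
    "EU member states": 0.17,
    "Industry": 0.66,
}

for contributor, share in shares.items():
    print(f"{contributor}: ~EUR {total_eur_bn * share:.1f} billion ({share:.0%})")

# European Commission: ~EUR 3.4 billion (17%)
# EU member states: ~EUR 3.4 billion (17%)
# Industry: ~EUR 13.2 billion (66%)
```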

The first call drew 76 proposals from 16 countries, far exceeding the initially planned four or five facilities. Most submissions must be merged or dropped, with Poland already seeking a joint bid with the Baltic states as talks continue.

Some EU members will inevitably lose out, with Ursula von der Leyen, the President of the European Commission, hinting that priority could be given to countries already hosting AI Factories. That could benefit Finland, whose Lumi supercomputer is part of a Nokia-led bid to scale up into a Gigafactory.

The plan has raised concerns that Europe’s efforts come too late, as US tech giants invest heavily in larger AI hubs. Still, Brussels hopes its initiative will allow EU developers to compete globally while maintaining control over critical AI infrastructure.

A formal call for proposals is expected by the end of the year, once the legal framework is finalised. Selection criteria and funding conditions will be set to launch construction as early as 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Oracle to oversee TikTok algorithm in US deal

The White House has confirmed that TikTok’s prized algorithm will be managed in the US under Oracle’s supervision as part of a deal to place the app’s US operations under majority American ownership. The agreement would transfer control of TikTok’s US business, along with a copy of the algorithm, to a new joint venture run by a board dominated by American investors.

The confirmed participants are Oracle and private equity firm Silver Lake, with Fox Corp. also expected to join the group. President Donald Trump has suggested that high-profile figures such as Michael Dell and Rupert and Lachlan Murdoch could be involved, though CNN sources say the Murdochs will not invest personally. ByteDance will keep a stake of less than 20% in the new US entity.

The deal follows years of negotiations over concerns that TikTok’s Chinese parent company could be pressured to manipulate the platform for political influence. By law, ByteDance is barred from cooperating on the algorithm with any new American owners. To address these fears, the code will be reviewed, retrained on US user data, and monitored by Oracle to ensure its independence.

President Trump is expected to sign an executive order later this week certifying that the deal meets national security requirements under last year’s ‘ban-or-sale’ law. He will also extend the pause on enforcement by 120 days, giving Washington and Beijing time to finalise regulatory approvals. The White House said the deal could be signed within days, with completion likely early next year.

The arrangement deepens Oracle’s role in managing TikTok’s American presence, building on its existing partnership to store US user data. The development coincided with Oracle announcing a leadership shake-up, with CEO Safra Catz stepping down to become vice chair and two co-CEOs taking over. It is unclear if the timing is connected, but Catz, a close Trump ally, could take a role in the TikTok venture.

While financial details remain uncertain, the White House has ruled out taking a direct stake in the company. The deal, valued in the billions, would conclude a years-long effort to bring TikTok under US oversight and resolve national security concerns tied to its Chinese ownership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China cracks down on Kuaishou and Weibo over alleged online content violations

China’s internet watchdog, the Cyberspace Administration of China (CAC), has warned online platforms Kuaishou Technology and Weibo for failing to curb celebrity gossip and harmful content on their platforms.

The CAC issued formal warnings, citing damage to the ‘online ecosystem’ and demanding corrective action. Both firms pledged compliance, with Kuaishou forming a task force and Weibo promising self-reflection.

The move follows similar disciplinary action against lifestyle app RedNote and is part of a broader two-month campaign targeting content that ‘viciously stimulates negative emotions.’

Separately, Kuaishou is under investigation by the State Administration for Market Regulation for alleged malpractice in live-streaming e-commerce.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok nears US takeover deal as Washington secures control

The White House has revealed that US companies will take control of TikTok’s algorithm, with Americans occupying six of seven board seats overseeing the platform’s operations in the country. A final deal, which would reshape the app’s US presence, is expected soon, though Beijing has yet to respond publicly.

Washington has long pushed to separate TikTok’s American operations from its Chinese parent company, ByteDance, citing national security risks. The app faced repeated threats of a ban unless sold to US investors, with deadlines extended several times under President Donald Trump. The Supreme Court also upheld legislation requiring ByteDance to divest, though enforcement was delayed earlier this year.

According to the White House, data protection and privacy for American users will be managed by Oracle, chaired by Larry Ellison, a close Trump ally. Oracle will also oversee control of TikTok’s algorithm, the key technology that drives what users see on the app. Ellison’s influence in tech and media has grown, especially after his son acquired Paramount, which owns CBS News.

Trump claimed he had secured an understanding on the deal in a recent call with Chinese President Xi Jinping, describing the exchange as ‘productive.’ However, Beijing’s official response has been less explicit. The Commerce Ministry said discussions should proceed according to market rules and Chinese law, while state media suggested China welcomed continued negotiations.

Trump has avoided clarifying whether US investors need to develop a new system or continue using the existing one. His stance on TikTok has shifted since his first term, when he pushed for a ban, to now embracing the platform as a political tool to engage younger voters during his 2024 campaign.

Concerns over TikTok’s handling of user data remain at the heart of US objections. Officials at the Justice Department have warned that the app’s access to US data posed a security threat of ‘immense depth and scale,’ underscoring why Washington is pressing to lock down control of its operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

JFTC study and MSCA shape Japan’s AI oversight strategy

Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.

The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.

The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.

The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.

With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EDPS calls for strong safeguards in EU-US border data-sharing agreement

On 17 September 2025, the European Data Protection Supervisor (EDPS) issued an Opinion on the EU-US negotiating mandate for a framework agreement on exchanging information for security screenings and identity verifications. The European Commission’s Recommendation aims to establish legal conditions for sharing data between the EU member states and the USA, enabling bilateral agreements tied to the US Visa Waiver Program’s Enhanced Border Security Partnership.

EDPS Wojciech Wiewiórowski emphasised the need to balance border security with fundamental rights, warning that sharing personal and biometric data could interfere with privacy. The agreement, a first for large-scale data sharing with a third country, must strictly limit data processing to what is necessary and proportionate.

The EDPS recommended narrowing the scope of shared data, excluding transfers from sensitive EU systems related to migration and asylum, and called for robust accountability, transparency, and judicial redress mechanisms accessible to all individuals, regardless of nationality.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Landmark tech deal secures record UK-US AI and energy investment

The UK and US have signed a landmark Tech Prosperity Deal, securing a £250 billion investment package across the technology and energy sectors. The agreement includes major commitments from leading AI companies to expand data centres and supercomputing capacity and to create 15,000 jobs in Britain.

Energy security forms a core part of the deal, with plans for 12 advanced nuclear reactors in northeast England. These facilities are expected to generate power for millions of homes and businesses, lower bills, and strengthen bilateral energy resilience.

The package includes $30 billion from Microsoft and $6.8 billion from Google, alongside other AI investments aimed at boosting UK research. It also funds the country’s largest supercomputer project with Nscale, establishing a foundation for AI leadership in Europe.

American firms have pledged £150 billion for UK projects, while British companies will invest heavily in the US. Pharmaceutical giant GSK has committed nearly $30 billion to American operations, underlining the cross-Atlantic nature of the partnership.

The Tech Prosperity Deal follows a recent UK-US trade agreement that removes tariffs on steel and aluminium and opens markets for key exports. The new accord builds on that momentum, tying economic growth to innovation, deregulation, and frontier technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Techno(demo)cracy in action: How a five-day app blackout lit a Gen Z online movement in Nepal

Over the past two weeks, Nepal’s government has sought to regulate the online space, and its decision prompted a large, youth-led response. A government order issued on 4 September blocked access to 26 social platforms, from Facebook, Instagram and YouTube to X and WhatsApp, after the companies failed to register locally under Nepal’s rules for digital services. Within five days, authorities lifted the ban, but it was too late: tens of thousands of mostly young Nepalis, organised with VPNs, alternative chat apps and gaming-era coordination tools, forced a political reckoning that culminated in the burning of parts of the parliament complex, the resignation of Prime Minister K.P. Sharma Oli on 9 September, and the appointment of former chief justice Sushila Karki to lead an interim administration.

The sequence of social media ban, backlash, reversal, and political rupture tells an unexpected digital governance tale. The on-the-ground reality: a clash between a fast-evolving regulatory push and a hyper-networked youth cohort that treats connectivity as livelihood, classroom, and public square.

The trigger: A registration ultimatum meets a hyper-online society

The ban didn’t arrive from nowhere. Nepal has been building toward platform licensing since late 2023, when the government issued the Social Media Management Directive 2080, requiring platforms to register with the Ministry of Communication and Information Technology (MoCIT), designate a local contact, and comply with expedited takedown and cooperation rules. In early 2025, the government tabled a draft Social Media Bill 2081 in the National Assembly to convert that directive into statute. International legal reviews, including a UNESCO-supported assessment from March 2025 and a Centre for Law and Democracy analysis, praised the goal of accountability but warned that vague definitions, sweeping content-removal powers and weak independent oversight could chill lawful speech.

Against that backdrop, the cabinet and the courts moved to put the draft’s requirements into practice. On 28 August 2025, the authorities gave major platforms seven days to register with MoCIT; on 4 September, the telecom regulator moved to block unregistered services. Nepal’s government listed the 26 services covered by the order (including Facebook, Instagram, X, WhatsApp, YouTube, Reddit, Snapchat and others), while TikTok, Viber, Witk, Nimbuzz and Popo Live had registered and were allowed to operate. Two more (Telegram and Global Diary) were under review.

Why did the order provoke such a strong reaction? Consider the baseline: Nepal had about 14.3 million social-media user identities at the start of 2025, roughly 48% of the population, and internet use stood at around 56%. In a society where half the population (and a significantly larger share of its urban youth) relies on social apps for news, school, side-hustles, remittances and family ties, switching platforms off is not merely a lifestyle inconvenience; it is a hit to digital infrastructure, and the ‘generation gap’ is central to understanding why.
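A quick arithmetic check, using only the figures cited above (14.3 million user identities at roughly 48% of the population, and 56% internet use), gives a sense of the absolute numbers involved; the derived totals below are rough estimates rather than reported statistics.

```python
# Rough scale estimate derived from the figures cited in this article.
# The implied population and internet-user totals are back-of-the-envelope
# derivations, not official statistics.

social_identities = 14_300_000   # social-media user identities, early 2025
social_share = 0.48              # ~48% of the population
internet_share = 0.56            # ~56% internet use

implied_population = social_identities / social_share
implied_internet_users = implied_population * internet_share

print(f"Implied population:     ~{implied_population / 1e6:.1f} million")
print(f"Implied internet users: ~{implied_internet_users / 1e6:.1f} million")

# Implied population:     ~29.8 million
# Implied internet users: ~16.7 million
```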

The movement: Gen Z logistics in a blackout world

What made Nepal’s youth mobilisation unusual wasn’t only its size and adaptability, but also the speed and digital literacy with which organisers navigated today’s digital infrastructure, skills that may be less familiar to people who don’t use these platforms daily. Once the ban hit, the digitally literate rapidly diversified their strategies.

The logistics looked like distributed operations: a core group tasked with sourcing legal and medical aid; volunteer cartographers maintaining live maps of barricades; diaspora Nepalis mirroring clips to international audiences; and moderators trying (often failing) to keep chatrooms free of calls to violence.


The law: What is Nepal trying to regulate, and why did it backfire?

The draft Social Media Bill 2081 and the 2023 Directive share a broad structure:

  • Mandatory registration with MoCIT and local point-of-contact;
  • Expedited removal of content deemed ‘unlawful’ or ‘harmful’;
  • Data cooperation requirements with domestic authorities;
  • Penalties for non-compliance and for user-level offences such as phishing, impersonation and deepfake distribution.

Critics and the youth movement argued that the friction was caused not by the idea of regulation itself, but by how it was drafted and applied. The UNESCO-supported March 2025 assessment of the Social Media Bill 2081 flagged vague, catch-all definitions (e.g. ‘disrupts social harmony’), weak due process around takedown orders, and a lack of independent oversight, urging a tiered, risk-based approach that distinguishes between a global platform and a small local forum and builds in judicial review and appeals. The Centre for Law and Democracy (CLD) analysis warned that focusing policy ‘almost exclusively on individual pieces of content’ instead of systemic risk management would produce overbroad censorship tools without solving the harms regulators worry about.
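To make the ‘tiered, risk-based approach’ recommendation more concrete, here is a purely hypothetical sketch of how obligations might scale with a service’s reach; the tier names, thresholds and obligation lists are invented for illustration and do not come from the bill, the directive or the UNESCO review.

```python
# Hypothetical sketch of a tiered, risk-based classification.
# Tier names, thresholds and obligations are invented for illustration;
# they do not appear in Nepal's directive, the draft bill or the UNESCO review.

def classify_service(monthly_active_users: int) -> dict:
    """Assign compliance obligations proportionate to a service's reach."""
    if monthly_active_users >= 5_000_000:   # large global platform
        return {
            "tier": "large",
            "obligations": ["registration", "local contact",
                            "systemic risk assessments", "independent appeals"],
        }
    if monthly_active_users >= 100_000:     # mid-sized service
        return {
            "tier": "medium",
            "obligations": ["registration", "local contact", "appeals process"],
        }
    # small local forum: minimal burden
    return {"tier": "small", "obligations": ["registration"]}

print(classify_service(12_000_000)["tier"])  # -> large
print(classify_service(20_000)["tier"])      # -> small
```

The point of such a design is proportionality: a small local forum would not face the same compliance burden as a platform with millions of users, and escalation to heavier obligations would track measurable risk rather than individual pieces of content.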

Regarding penalties, public discussion compared platform fines with user-level sanctions and general cybercrime provisions. Available news reports suggest proposed platform-side fines of up to roughly USD 17,000 (EUR 15,000) for operating without authorisation, while user-level offences (e.g. phishing, deepfakes, certain categories of misinformation) carry fines of up to USD 2,000–3,500 and potential jail terms depending on the offence.

The demographics: Who showed up, and why them?

Labelling the event a ‘Gen Z uprising’ is broadly accurate, and the numbers help frame it. People aged 15–24 make up about one-fifth of Nepal’s population, and adding the 25–29 group pushes the 15–29 bracket to roughly a third, close to the share captured by the ‘Gen Z’ definition used in this case (born 1997–2012, so aged 13–28 in 2025). This cohort is the most likely to be online daily: trading on TikTok, Instagram, and Facebook Marketplace, freelancing across borders, preparing for exams with YouTube and Telegram notes, and maintaining relationships across labour migration splits via WhatsApp and Viber. When those rails go down, they feel it first and hardest.

There’s also the matter of expectations. A decade of smartphone diffusion trained Nepali youth to assume the availability of news, payments, learning, work, and diaspora connections, but the ban punctured that assumption. In interviews and livestreams, student voices toggled between free-speech language and bread-and-butter complaints (lost orders, cancelled tutoring, a frozen online store, a blocked interview with an overseas client).

The platforms: Two weeks of reputational whiplash


The economy and institutions: Damage, then restraint

The five-day blackout blew holes in ordinary commerce: sellers lost a festival week of orders, creators watched brand deals collapse, and freelancers missed interviews. The violence that followed destroyed far more: estimates circulating in the aftermath put the damage from the uprising at roughly USD 280 million (EUR 240 million).

On 9 September, the government lifted the platform restrictions; by 13 September, the news chronicled a reopening capital under interim PM Karki, who spent her first days visiting hospitals and signalling commitments to elections and legal review. What followed mattered: the ban was acknowledged as a misstep, but the task of ensuring accountability remained. The episode gives legislators the chance to return to the bill’s text with international guidance on the table, and gives leaders the chance to translate street momentum into institutional reform.

Bottom line

Overall, Nepal’s last two weeks were not a referendum on whether social platforms should face rules. They were a referendum on how those rules are made and enforced in a society where connectivity is a lifeline and the connected are young. A government sought accountability by unplugging the public square, and the public, mostly Gen Z, responded by building new squares within hours and then spilling into the real one. The costs are plain and human, from the hospital wards to the charred chambers of parliament. The opportunity is also plain: to rebuild digital law so that rights and accountability reinforce rather than erase each other.

If that happens, the ‘Gen Z revolution’ of early September will not be a story about apps. It will be a story about institutions catching up to the internet and its policies, and about a generation insisting on being invited to write a new social contract for digital times, one that ensures accountability, transparency, judicial oversight and due process.

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!