Google boosts AI and connectivity in Africa

Google has announced new investments to expand connectivity, AI access and skills training across Africa, aiming to accelerate youth-led innovation.

The company has already invested over $1 billion in digital infrastructure, including subsea cable projects such as Equiano and Umoja, enabling 100 million people to come online for the first time. Four new regional cable hubs are being established to boost connectivity and resilience further.

Alongside infrastructure, Google will provide college students in eight African countries with a free one-year subscription to Google AI Pro. The tools, including Gemini 2.5 Pro and Guided Learning, are designed to support research, coding, and problem-solving.

By 2030, Google says it intends to reach 500 million Africans with AI-powered innovations tackling issues such as crop resilience, flood forecasting and access to education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI forecasts help millions of Indian farmers

More than 38 million farmers in India have received AI-powered forecasts predicting the start of the monsoon season, helping them plan when to sow crops.

The forecasts, powered by NeuralGCM, a Google Research model, blend physics-based simulations with machine learning trained on decades of climate data.

Unlike traditional models requiring supercomputers, NeuralGCM can run on a laptop, making advanced AI weather predictions more accessible.
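
To make the hybrid design concrete, the sketch below shows, in a purely illustrative way, how a physics time step and a learned correction can be combined in a single forecast loop. The function names, state dimensions and parameters here are invented for illustration and are not taken from the actual NeuralGCM code or API.

```python
# Minimal conceptual sketch of a hybrid physics + ML forecast step.
# All names are illustrative; this is not the NeuralGCM API.
import numpy as np

rng = np.random.default_rng(0)

def physics_step(state, dt=1.0):
    # Stand-in for a dynamical core: simple damped advection of the state.
    return state + dt * (-0.1 * state + 0.01 * np.roll(state, 1))

def learned_correction(state, weights):
    # Stand-in for a neural network trained on decades of climate data;
    # here just a small bounded linear map with random "weights".
    return np.tanh(state @ weights) * 0.05

state = rng.standard_normal(16)          # toy atmospheric state vector
weights = rng.standard_normal((16, 16))  # parameters that would normally be learned

for _ in range(30):                      # roll the hybrid model forward in time
    state = physics_step(state) + learned_correction(state, weights)

print("forecast state after 30 steps:", np.round(state[:4], 3))
```

In NeuralGCM itself, the correction is a neural network and the dynamical core is differentiable, which is what allows the two components to be trained jointly on historical climate data.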

Research shows that accurate early forecasts can nearly double Indian farmers’ annual income by helping them decide when to plant, switch crops or hold back.

The initiative demonstrates how AI research can directly support communities vulnerable to climate shifts and improve resilience in agriculture.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Startups gain new tools on Google Cloud

Google Cloud says AI startups are increasingly turning to its technology stack, with more than 60% of global generative AI startups building on its infrastructure. Nine of the world’s top ten AI labs also rely on its cloud services.

To support this momentum, Google Cloud hosted its first AI Builders Forum in Silicon Valley, where hundreds of founders gathered to hear about new tools, infrastructure and programmes designed to accelerate innovation.

Google Cloud has also released a technical guide to help startups build and scale AI agents, covering retrieval-augmented generation (RAG) and multimodal approaches. The guide highlights how to leverage Google's Agent Development Kit (ADK) and Agent2Agent (A2A) tools.
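
For readers unfamiliar with the pattern, the snippet below is a minimal, framework-free sketch of RAG: retrieve the most relevant snippets from a small document store and prepend them to the prompt that would be sent to a model. It is not based on Google's guide or ADK; the documents, similarity scoring and function names are invented for illustration.

```python
# Minimal, illustrative retrieval-augmented generation (RAG) sketch.
from collections import Counter
import math

DOCS = [
    "Agents can call external tools to fetch live data.",
    "Retrieval grounds model answers in a private document store.",
    "Multimodal agents combine text, images and structured data.",
]

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    # Return the k documents most similar to the query.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(query):
    # Assemble a grounded prompt; a real agent would send this to an LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(answer("How does retrieval help an agent?"))
```

A production agent would swap the bag-of-words scoring for real embeddings and pass the assembled prompt to a model, but the retrieve-then-ground structure stays the same.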

The support is bolstered by the Google for Startups Cloud Program, which offers credits worth up to $350,000, mentorship and access to partner AI models from Anthropic and Meta. Google says its goal is to give startups the technology and resources to launch, scale and compete globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Landmark tech deal secures record UK-US AI and energy investment

The UK and US have signed a landmark Tech Prosperity Deal, securing a £250 billion investment package across technology and energy sectors. The agreement includes major commitments from leading AI companies to expand data centres and supercomputing capacity and to create 15,000 jobs in Britain.

Energy security forms a core part of the deal, with plans for 12 advanced nuclear reactors in northeast England. These facilities are expected to generate power for millions of homes and businesses, lower bills, and strengthen bilateral energy resilience.

The package includes $30 billion from Microsoft and $6.8 billion from Google, alongside other AI investments aimed at boosting UK research. It also funds the country’s largest supercomputer project with Nscale, establishing a foundation for AI leadership in Europe.

American firms have pledged £150 billion for UK projects, while British companies will invest heavily in the US. Pharmaceutical giant GSK has committed nearly $30 billion to American operations, underlining the cross-Atlantic nature of the partnership.

The Tech Prosperity Deal follows a recent UK-US trade agreement that removes tariffs on steel and aluminium and opens markets for key exports. The new accord builds on that momentum, tying economic growth to innovation, deregulation, and frontier technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Intel to design custom CPUs as part of NVIDIA AI partnership

US tech firms NVIDIA and Intel have announced a major partnership to develop multiple generations of AI infrastructure and personal computing products.

They say that the collaboration will merge NVIDIA’s leadership in accelerated computing with Intel’s expertise in CPUs and advanced manufacturing.

For data centres, Intel will design custom x86 CPUs for NVIDIA, which NVIDIA will integrate into its AI platforms to power hyperscale and enterprise workloads.

In personal computing, Intel will create x86 system-on-chips that incorporate NVIDIA RTX GPU chiplets, aimed at delivering high-performance PCs for a wide range of consumers.

As part of the deal, NVIDIA will invest $5 billion in Intel common stock at $23.28 per share, pending regulatory approvals.

NVIDIA’s CEO Jensen Huang described the collaboration as a ‘fusion of two world-class platforms’ that will accelerate computing innovation, while Intel CEO Lip-Bu Tan said the partnership builds on decades of x86 innovation and will unlock breakthroughs across industries.

The move underscores how AI is reshaping both infrastructure and personal computing. By combining architectures and ecosystems instead of pursuing separate paths, Intel and NVIDIA are positioning themselves to shape the next era of computing at a global scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers from OpenAI and Apollo find scheming behaviour in frontier AI models

OpenAI and Apollo Research developed evaluations to reveal hidden misalignment in advanced AI systems, often labelled scheming, and observed scheming-like behaviour in several frontier models, including OpenAI o3, o4-mini, Gemini 2.5 Pro and Claude Opus 4.

Researchers treated covert actions, such as deliberately withholding or distorting task-relevant information, as a measurable proxy and stress-tested models across dozens of scenarios.

Scheming occurs when a model outwardly appears aligned while pursuing an alternate goal. The teams used a trader analogy to explain the risk: a rule-breaking broker who covers tracks can seem more compliant than an honest operator who accepts lower gains.

They trained models with deliberative alignment to counter the behaviour, asking them to read and reason about an anti-scheming specification before acting, so that models follow safety principles instead of merely avoiding detection.

That training produced about a thirtyfold reduction in covert actions on out-of-distribution tests, with o3 falling from 13 percent to 0.4 percent and o4-mini dropping from 8.7 percent to 0.3 percent.
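
As a quick sanity check, the fold reductions implied by those percentages can be computed directly; the figures below are simply the ones quoted in the paragraph above.

```python
# Fold reduction in covert-action rates implied by the reported figures.
before = {"o3": 13.0, "o4-mini": 8.7}   # percent, before anti-scheming training
after = {"o3": 0.4, "o4-mini": 0.3}     # percent, after training

for model in before:
    fold = before[model] / after[model]
    print(f"{model}: {before[model]}% -> {after[model]}%, ~{fold:.0f}x fewer covert actions")
# Roughly 32x for o3 and 29x for o4-mini, consistent with "about a thirtyfold reduction".
```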

Nevertheless, rare but serious failures remained, and measurement proved complex. Models often became more aware of being evaluated, and situational awareness can suppress observable scheming without altering underlying aims.

The study relied heavily on access to the models’ chain of thought to probe motivations, so preserving reasoning transparency is crucial for reliable oversight. The teams warned that training practices eroding transparency could make monitoring far harder and let misalignment hide rather than vanish.

OpenAI and Apollo called for broader cross-lab safety evaluations, stronger monitoring tools and continued research into anti-scheming techniques. They renewed their partnership, launched a $500,000 red-teaming challenge focused on scheming and proposed shared testing protocols.

The researchers emphasised there is no evidence that today’s deployed AI models would abruptly begin harmful scheming. Still, the risk will grow as systems take on more ambiguous, long-term, real-world responsibilities instead of short, narrow tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google adds AI features to Chrome browser on Android and desktop

Alphabet’s Google has announced new AI-powered features for its Chrome browser that aim to make web browsing more proactive instead of reactive. The update centres on integrating Gemini, Google’s AI assistant, into Chrome to provide contextual support across tabs and tasks.

The AI assistant will help students and professionals manage large numbers of open tabs by summarising articles, answering questions, and recalling previously visited pages. It will also connect with Google services such as Docs and Calendar, offering smoother workflows on desktop and mobile devices.

Chrome’s address bar, the omnibox, is being upgraded with AI Mode. Users can ask multi-part questions and receive context-aware suggestions relevant to the page they are viewing. Initially available in the US, the feature will roll out to other regions and languages soon.

Beyond productivity, Google is also applying AI to security and convenience. Chrome now blocks billions of spam notifications daily, fills in login details, and warns users about malicious apps.

Future updates are expected to bring agentic capabilities, enabling Chrome to carry out complex tasks such as ordering groceries with minimal user input.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft builds the world’s most powerful AI data centre in Wisconsin

US tech giant Microsoft is completing construction of Fairwater in Mount Pleasant, Wisconsin, which it says will be the world's most powerful AI data centre. The facility is expected to be operational in early 2026 after a $3.3 billion investment, with an additional $4 billion now committed for a second site.

The company says the project will help shape the next generation of AI by training frontier models with hundreds of thousands of NVIDIA GPUs, offering ten times the performance of today’s fastest supercomputers.

Beyond technology, Microsoft is highlighting the impact on local jobs and skills. Thousands of construction workers have been employed during the build, while the site is expected to support around 500 full-time roles when the first phase opens, rising to 800 once the second is complete.

The US giant has also launched Wisconsin’s first Datacentre Academy with Gateway Technical College to prepare students for careers in the digital economy.

Microsoft is also stressing its sustainability measures. The data centre will rely on a closed-loop liquid cooling system and outside air to minimise water use, while all fossil-fuel power consumed will be matched with carbon-free energy.

A new 250 MW solar farm is under construction in Portage County to support the commitment. The company has partnered with local organisations to restore prairie and wetland habitats, further embedding the project into the surrounding community.

Executives say the development represents more than just an investment in AI. It signals a long-term commitment to Wisconsin’s economy, education, and environment.

From broadband expansion to innovation labs, the company aims to ensure the benefits of AI extend to local businesses, students, and residents instead of remaining concentrated in global hubs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Xbox app introduces Microsoft’s AI Copilot in beta

Microsoft has launched the beta version of Copilot for Gaming, an AI-powered assistant within the Xbox mobile app for iOS and Android. The early rollout covers over 50 regions, including India, the US, Japan, Australia, and Singapore.

Access is limited to users aged 18 and above, and the assistant currently supports only English, with broader language support expected in future updates.

Copilot for Gaming is a second-screen companion, allowing players to stay informed and receive guidance without interrupting console gameplay.

The AI can track game activity, offer context-aware responses, suggest new games based on play history, check achievements, and manage account details such as Game Pass renewal and gamer score.

Users can ask questions like ‘What was my last achievement in God of War Ragnarok?’ or ‘Recommend an adventure game based on my preferences.’

Microsoft plans to expand Copilot for Gaming beyond chat-based support into a full AI gaming coach. Future updates could provide real-time gameplay advice, voice interaction, and direct console integration, allowing tasks such as downloading or installing games to be handled remotely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #230 Nepal’s Discord democracy: How a banned platform became a ballot box


12 – 19 September 2025


HIGHLIGHT OF THE WEEK

In a historic first for democracy, a country has chosen its interim prime minister via a messaging app. 

In early September, Nepal was thrown into turmoil after the government abruptly banned 26 social media platforms, including Facebook, YouTube, X, and Discord, citing failure to comply with registration rules. The move sparked outrage, particularly among the country’s Gen Z, who poured into the streets, accusing officials of corruption. The protests quickly turned deadly.

Within days, the ban was lifted. Nepalis turned to Discord to debate the country's political future, fact-check rumours and collect nominations for future leaders. On 12 September, the Discord community organised a digital poll for an interim prime minister, with former Supreme Court Chief Justice Sushila Karki emerging as the winner.

Karki was sworn in the same evening. On her recommendation, the President has dissolved parliament, and new elections have been scheduled for 5 March 2026, after which Karki will step down.


However temporary or symbolic, the episode underscored how digital platforms can become political arenas when traditional ones falter. When official institutions lose legitimacy, people will instinctively repurpose the tools at their disposal to build new ones. 


IN OTHER NEWS THIS WEEK

TikTok ban deadline extended to December 2025 as sale negotiations continue

The TikTok saga entered what many see as yet another act in a long-running drama. In early 2024, the US Congress, citing national security risks, passed a law demanding that ByteDance, TikTok’s Chinese parent company, divest control of the app or face a ban in the USA. The law, which had bipartisan support in Congress, was later upheld by the Supreme Court.

A refresher. The US government has long argued that the app's access to US user data poses significant risks. Why? TikTok is a subsidiary of ByteDance, a private Chinese company possibly subject to China's 2017 National Intelligence Law, which requires Chinese entities to support, assist, and cooperate with state intelligence work – including, possibly, the transfer of US citizens' TikTok data to China. TikTok and ByteDance, for their part, maintain that TikTok operates independently and respects user privacy.

However, the Trump administration has repeatedly postponed enforcement via executive orders.

Economic and trade negotiations with China have been central to the delay. As the fourth round of talks in Madrid coincided with the latest deadline, Trump opted to extend the deadline again — this time until 16 December 2025 — giving TikTok more breathing room. 

The talks in Madrid have revolved around a potential 'framework deal' under which TikTok would be sold or restructured in a way that appeases US concerns while retaining certain 'Chinese characteristics.'

What do officials say is in the deal? 

  • TikTok’s algorithm: According to Wang Jingtao, deputy director of China’s Central Cyberspace Affairs Commission, there was consensus on authorisation of ‘the use of intellectual property rights such as (TikTok’s) algorithm’ — a main sticking point in the deal.
  • US user data: According to Wang Jingtao, the sides also agreed on entrusting a partner with handling US user data and content security.

What else is reported to be in the deal?

  • A new recommendation algorithm licensed from TikTok parent ByteDance
  • Creating a new company to run TikTok’s US operations and/or creating a new app for US users to move to
  • A consortium of US investors, including Oracle, Silver Lake, and Andreessen Horowitz, would own 80% of the business, with 20% held by Chinese shareholders.
  • The new company’s board would be mostly American, including one member appointed by the US government.

Trump himself said he would speak with Chinese President Xi Jinping on Friday to possibly finalise the deal.

If finalised, this deal could establish a new template for how nations manage foreign technology platforms deemed critical to national security.


China’s counterpunch in the chip war

While TikTok grabs headlines as the most visible symbol of the USA–China digital rivalry, the more consequential battle may be unfolding in the semiconductor sector. Just as Washington extends the deadline for TikTok’s divestiture, Beijing has opened a new line of attack: an anti-dumping probe into US analogue chips.  

Announced by China’s Ministry of Commerce, the probe accuses US firms of ‘lowering and suppressing’ prices in ways that hurt domestic producers. It covers legacy chips built on older 40nm-plus process nodes — not the cutting-edge AI accelerators that dominate geopolitical debates, but the everyday workhorse components that power smart appliances, industrial equipment, and automobiles. These mature nodes account for a massive share of China’s consumption, with US firms supplying more than 40% of the market in recent years.

For China’s domestic industry, the probe is an opportunity. Analysts say it could force foreign suppliers to cede market share to local firms concentrated in Jiangsu and other industrial provinces. At the same time, there are reports that China is asking tech companies to stop purchasing Nvidia’s most powerful processors. And speaking of Nvidia, the company is in the crosshairs again, as China’s State Administration for Market Regulation (SAMR) issued a preliminary finding that Nvidia violated antitrust law linked to its 2020 acquisition of Mellanox Technologies. Depending on the outcome of the investigation, Nvidia could face penalties.

Meanwhile, Washington is tightening its own grip. The USA will require annual license renewals for South Korean firms Samsung and SK Hynix to supply advanced chips to Chinese factories — a reminder that even America’s allies are caught in the middle. 

Last month, the US government acquired a 10% stake in Intel. This week, Nvidia announced a $5 billion investment in Intel to co-develop custom chips with the company. Together, these moves reflect Washington’s broader push to reinforce semiconductor leadership amid competition from China.


UK and USA sign Tech Prosperity Deal

The USA and the UK have signed a Technology Prosperity Deal to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation.

On AI, the deal expands joint research programs, compute access, and datasets in areas like biotechnology, precision medicine, fusion, and space. It also aligns policies, strengthens standards, and deepens ties between the UK AI Security Institute and the US Center for AI Standards and Innovation to promote secure adoption.

On quantum, the countries will establish a benchmarking task force, launch a Quantum Code Challenge to mobilise researchers, and harness AI and high-performance computing to accelerate algorithm development and system readiness. A US-UK Quantum Industry Exchange Program will spur adoption across defence, health, finance, and energy.

The agreement also reinforces foundations for innovation, including research security, 6G development, resilient telecoms and navigation systems, and mobilising private capital for critical technologies.

The deal was signed during a state visit by President Trump to the UK. Also present: OpenAI’s Sam Altman, Nvidia’s Jensen Huang, Microsoft’s Satya Nadella, and Apple’s Tim Cook. 

Microsoft pledged $30bn over four years in the UK, its largest-ever UK commitment. Half will go into capital expenditure for AI and cloud datacentres, the rest into operations like research and sales. 

Nscale, OpenAI and Nvidia will develop a platform that will deploy OpenAI’s technology in the UK. Nvidia will channel £11bn in value into UK AI projects by supplying up to 120,000 Blackwell GPUs, data centre builds, and supercomputers. It is also directly investing £500m in Nscale. 

'This is the week that I declare the UK will be an AI superpower', Jensen Huang told BBC News.

Missing from the deal? The UK’s Digital Services Tax (DST), which remains set at 2% and was previously reported to be part of the negotiations, along with copyright issues linked to AI training.


The digital playground gets a fence and a curfew

In response to rising concerns over the impact of AI and social media on teenagers, governments and tech companies are implementing new measures to enhance online safety for young users.

Australia has released its regulatory guidance for the incoming nationwide ban on social media access for children under 16, effective 10 December 2025. The legislation requires platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms must detect and remove underage accounts, communicating clearly with affected users. Platforms are also expected to block attempts to re-register. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

French lawmakers are proposing stricter regulations on teen social media use, including mandatory nighttime curfews. A parliamentary report suggests that social media accounts for 15- to 18-year-olds should be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues. This proposal follows concerns about the psychological impact of platforms like TikTok on minors. 

In the USA, the Federal Trade Commission (FTC) has launched an investigation into the safety of AI chatbots, focusing on their impact on children and teenagers. Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships. Not long after, grieving parents testified before the US Congress, urging lawmakers to regulate AI chatbots after their children died by suicide or self-harmed following interactions with these tools.

OpenAI has introduced a specialised version of ChatGPT tailored for teenagers, incorporating age-prediction technology to restrict access to the standard version for users under 18. Where uncertainty exists, it will assume the user is a teenager. If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities. This initiative aims to address growing concerns about the mental health risks associated with AI chatbots, while also raising concerns about privacy and freedom of expression.

The intentions are largely good, but a patchwork of bans, curfews, and algorithmic surveillance just underscores that the path forward is unclear. Meanwhile, the kids are almost certainly already finding the loopholes.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow.

The Human Rights Council

The Human Rights Council discussed a report on the human rights implications of new and emerging technologies in the military domain on 18 September. Prepared by the Human Rights Council Advisory Committee, the report recommends, among other measures, that ‘states and international organizations should consider adopting binding or other effective measures to ensure that new and emerging technologies in the military domain whose design, development or use pose significant risks of misuse, abuse or irreversible harm – particularly where such risks may result in human rights violations – are not developed, deployed or used’.

WTO Public Forum 2025

WTO's largest outreach event, the WTO Public Forum, took place from 17 to 18 September under the theme 'Enhance, Create and Preserve'. Digital issues were high on the agenda this year, with sessions dedicated to AI and trade, digital resilience, the moratorium on customs duties on electronic transmissions, and e-commerce negotiations. Other issues were also salient, such as the uncertainty created by rising tariffs and the need for WTO reform.

During the Forum, the WTO launched the 2025 World Trade Report, titled 'Making trade and AI work together to the benefit of all'. The report explores AI's potential to boost global trade, particularly through digitally deliverable services. It argues that AI can lower trade costs, improve supply-chain efficiency, and create opportunities for small firms and developing countries, but warns that without deliberate action, AI could deepen global inequalities and widen the gap between advanced and developing economies.

CSTD WG on data governance

The third meeting of the UN CSTD Working Group on Data Governance (WGDG) took place on 15–16 September. The focus of this meeting was on the work being carried out in the four working tracks of the WGDG: (1) principles of data governance at all levels; (2) interoperability between national, regional and international data systems; (3) considerations of sharing the benefits of data; and (4) facilitation of safe, secure and trusted data flows, including cross-border data flows.

WGDG members reviewed the synthesis reports produced by the CSTD Secretariat, based on responses to the questionnaires proposed by the co-facilitators of the working tracks. The WGDG decided to postpone the deadline for contributions to 7 October. More information can be found in the 'call for contributions' on the WGDG website.


LOOKING AHEAD

The next two weeks at the UN will be packed with high-level discussions on advancing digital cooperation and AI governance. 

The general debate, from 23 to 29 September, will gather heads of state, ministers, and global leaders to tackle pressing challenges—climate change, sustainable development, and international peace—under the theme ‘Better together: 80 years and more for peace, development and human rights.’ Diplo and the Geneva Internet Platform will track digital and AI-related discussions using a hybrid of expert analysis and AI tools, so be sure to bookmark our dedicated web page.

On 22 September, the UN Office for Digital and Emerging Technologies (ODET) will host Digital Cooperation Day, marking the first anniversary of the Global Digital Compact. Leaders from government, the private sector, civil society, and academia will explore inclusive digital economies, AI governance, and digital public infrastructure through panels, roundtables, and launches.

On 23 September, ITU and UNDP will host Digital@UNGA 2025: Digital for Good – For People and Prosperity at UN Headquarters. The anchor event will feature high-level discussions on digital inclusion, trust, rights, and equity, alongside showcases of initiatives such as the AI Hub for Sustainable Development. Complementing this gathering, affiliate sessions throughout the week will explore future internet governance, AI for the SDGs, digital identity, green infrastructure in Africa, online trust in the age of AI, climate early-warning systems, digital trade, and space-based connectivity. 

A major highlight will be the launch of the Global Dialogue on AI Governance on 25 September. The dialogue is set to hold its first meeting in 2026, alongside the AI for Good Summit in Geneva, and its main task – as decided by the UN General Assembly – is to facilitate open, transparent and inclusive discussions on AI governance.



READING CORNER
Origins of AI

Ever wonder how AI really works? Discover its journey from biological neurons to deep learning and the breakthrough paper that transformed modern artificial intelligence.

AI hallucinations

Hallucinations in AI can look like facts. Learn how flawed incentives and vague prompts create dangerous illusions.