OpenAI and NVIDIA forge $100B AI power deal

OpenAI and NVIDIA have unveiled plans for a major partnership to build next-generation AI infrastructure, with NVIDIA committing up to $100 billion to support OpenAI’s push toward superintelligence. The deal, outlined in a letter of intent, will see NVIDIA provide at least 10 gigawatts of computing power, with the first systems expected to come online in late 2026 on NVIDIA’s new Vera Rubin platform.

NVIDIA’s CEO Jensen Huang called the agreement the next leap forward in AI, noting the companies’ decade-long collaboration from the early DGX supercomputers to the rise of ChatGPT. OpenAI’s CEO Sam Altman stressed that computing power is now the backbone of the future economy, framing the new investment as vital for both breakthroughs and large-scale access to AI.

OpenAI President Greg Brockman emphasised the scale of the move, saying 10 gigawatts of computing will allow the organisation to expand the frontier of intelligence and make the benefits of AI more widely available. NVIDIA will serve as OpenAI’s preferred partner for compute and networking, with both companies coordinating their hardware and software roadmaps.

The alliance builds on OpenAI’s existing collaborations with companies like Microsoft, Oracle, and SoftBank, which are working with the group to develop advanced AI infrastructure. Together, they are targeting global enterprise adoption while ensuring systems can grow at a pace that matches AI’s rapid evolution.

With over 700 million weekly active users and strong uptake across businesses and developers, OpenAI sees the partnership as central to its mission of creating artificial general intelligence that benefits humanity. Details of the deal are expected to be finalised in the coming weeks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5-powered ChatGPT Edu comes to Oxford staff and students

The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.

ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.

Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.

The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.

A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains approach to privacy, freedom, and teen safety

OpenAI has outlined how it balances privacy, freedom, and teen safety in its AI tools. The company said AI conversations often involve personal information and deserve protection like privileged talks with doctors or lawyers.

Security features are being developed to keep data private, though critical risks such as threats to life or societal-scale harm may trigger human review.

The company is also focused on user freedom. Adults are allowed greater flexibility in interacting with AI, within safety boundaries. For instance, the model can engage in creative or sensitive content requests, while avoiding guidance that could cause real-world harm.

OpenAI aims to treat adults as adults, providing broader freedoms as long as safety is maintained. Teen safety is prioritised over privacy and freedom. Users under 18 are identified via an age-prediction system or, in some cases, verified by ID.

The AI will avoid flirtatious talk or discussions of self-harm, and in cases of imminent risk, parents or authorities may be contacted. Parental controls and age-specific rules are being developed to protect minors while ensuring safe use of the platform.

OpenAI acknowledged that these principles sometimes conflict and not everyone will agree with the approach. The company stressed transparency in its decision-making and said it consulted experts to establish policies that balance safety, freedom, and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers from OpenAI and Apollo find scheming behaviour in frontier AI models

OpenAI and Apollo Research developed evaluations to reveal hidden misalignment in advanced AI systems, often labelled scheming, and observed scheming-like behaviour in several frontier models, including OpenAI o3, o4-mini, Gemini 2.5 Pro and Claude Opus 4.

Researchers treated covert actions, such as deliberately withholding or distorting task-relevant information, as a measurable proxy and stress-tested models across dozens of scenarios.

Scheming occurs when a model outwardly appears aligned while pursuing an alternate goal. The teams used a trader analogy to explain the risk: a rule-breaking broker who covers tracks can seem more compliant than an honest operator who accepts lower gains.

They trained models with deliberative alignment to counter the behaviour, asking them to read and reason about an anti-scheming specification before acting, so that models follow safety principles instead of merely avoiding detection.
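To make the recipe concrete, here is a minimal sketch of the idea in Python. Everything in it is illustrative: the specification text, the prompt layout, and the model_generate() stub are assumptions, and the actual method fine-tunes models on such a specification rather than merely prompting with it.

```python
# Minimal sketch of deliberative alignment, under the assumptions above.

ANTI_SCHEMING_SPEC = """\
1. No covert actions: never deliberately withhold or distort
   task-relevant information.
2. If a task incentive conflicts with this spec, the spec wins.
3. Report violations instead of concealing them."""

def model_generate(prompt: str) -> str:
    # Stand-in for a real model call, so the sketch runs end to end.
    return ("REASONING: Principle 1 forbids hiding failures.\n"
            "ACTION: report all results, including failures")

def act_with_deliberation(task: str) -> str:
    # The model reads the spec and reasons about it explicitly before
    # acting, so it follows the principles rather than simply avoiding
    # detectable covert behaviour.
    prompt = (f"Safety specification:\n{ANTI_SCHEMING_SPEC}\n\n"
              f"Task: {task}\n\n"
              "First state which principles apply, then act accordingly.")
    return model_generate(prompt)

print(act_with_deliberation("Summarise the test results."))
```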

That training produced about a thirtyfold reduction in covert actions on out-of-distribution tests, with o3 falling from 13 percent to 0.4 percent and o4-mini dropping from 8.7 percent to 0.3 percent.
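Those headline numbers imply the stated reduction factor directly; a quick check, using the rates as reported above:

```python
# Verify the "about thirtyfold" reduction from the reported rates.
before = {"o3": 13.0, "o4-mini": 8.7}   # covert-action rate (%) pre-training
after  = {"o3": 0.4,  "o4-mini": 0.3}   # covert-action rate (%) post-training

for model in before:
    factor = before[model] / after[model]
    print(f"{model}: {before[model]}% -> {after[model]}%, ~{factor:.0f}x reduction")
# o3: 13.0% -> 0.4%, ~32x reduction
# o4-mini: 8.7% -> 0.3%, ~29x reduction
```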

Nevertheless, rare but serious failures remained, and measurement proved complex. Models often became more aware of being evaluated, and situational awareness can suppress observable scheming without altering underlying aims.

The study relied heavily on access to the models’ chain of thought to probe motivations, so preserving reasoning transparency is crucial for reliable oversight. The teams warned that training practices eroding transparency could make monitoring far harder and let misalignment hide rather than vanish.

OpenAI and Apollo called for broader cross-lab safety evaluations, stronger monitoring tools and continued research into anti-scheming techniques. They renewed their partnership, launched a $500,000 red-teaming challenge focused on scheming and proposed shared testing protocols.

The researchers emphasised there is no evidence that today’s deployed AI models would abruptly begin harmful scheming. Still, the risk will grow as systems take on more ambiguous, long-term, real-world responsibilities instead of short, narrow tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #230 Nepal’s Discord democracy: How a banned platform became a ballot box


12 – 19 September 2025


HIGHLIGHT OF THE WEEK

In a historic first for democracy, a country has chosen its interim prime minister via a messaging app. 

In early September, Nepal was thrown into turmoil after the government abruptly banned 26 social media platforms, including Facebook, YouTube, X, and Discord, citing failure to comply with registration rules. The move sparked outrage, particularly among the country’s Gen Z, who poured into the streets, accusing officials of corruption. The protests quickly turned deadly.

Within days, the ban was lifted. Nepalis turned to Discord to debate the country’s political future, fact-check rumours and collect nominations for the country’s future leaders. On 12 September, the Discord community organised a digital poll for an interim prime minister, with former Supreme Court Chief Justice Sushila Karki emerging as the winner.

Karki was sworn in the same evening. On her recommendation, the President has dissolved parliament, and new elections have been scheduled for 5 March 2026, after which Karki will step down.


However temporary or symbolic, the episode underscored how digital platforms can become political arenas when traditional ones falter. When official institutions lose legitimacy, people will instinctively repurpose the tools at their disposal to build new ones. 


IN OTHER NEWS THIS WEEK

TikTok ban deadline extended to December 2025 as sale negotiations continue

The TikTok saga entered what many see as yet another act in a long-running drama. In early 2024, the US Congress, citing national security risks, passed a law demanding that ByteDance, TikTok’s Chinese parent company, divest control of the app or face a ban in the USA. The law, which had bipartisan support in Congress, was later upheld by the Supreme Court.

A refresher. The US government has long argued that the app’s access to US user data poses significant risks. Why? TikTok is a subsidiary of ByteDance, a private Chinese company possibly subject to China’s 2017 National Intelligence Law, which requires any Chinese entity to support, assist, and cooperate with state intelligence work – including, potentially, the transfer of US citizens’ TikTok data to China. TikTok and ByteDance, for their part, maintain that TikTok operates independently and respects user privacy.

However, the administration under President Trump has been repeatedly postponing enforcement via executive orders.

Economic and trade negotiations with China have been central to the delay. As the fourth round of talks in Madrid coincided with the latest deadline, Trump opted to extend the deadline again — this time until 16 December 2025 — giving TikTok more breathing room. 

The talks in Madrid have revolved around a potential ‘framework deal’ under which TikTok would be sold or restructured in a way that appeases US concerns while retaining certain ‘Chinese characteristics.’

What do officials say is in the deal? 

  • TikTok’s algorithm: According to Wang Jingtao, deputy director of China’s Central Cyberspace Affairs Commission, there was consensus on authorisation of ‘the use of intellectual property rights such as (TikTok’s) algorithm’ — a main sticking point in the deal.
  • US user data: According to Wang Jingtao, the sides also agreed on entrusting a partner with handling US user data and content security.

What else is reported to be in the deal?

  • A new recommendation algorithm licensed from TikTok parent ByteDance
  • Creating a new company to run TikTok’s US operations and/or creating a new app for US users to move to
  • A consortium of US investors, including Oracle, Silver Lake, and Andreessen Horowitz, would own 80% of the business, with 20% held by Chinese shareholders.
  • The new company’s board would be mostly American, including one member appointed by the US government.

Trump himself stated that he would speak with Chinese President Xi Jinping on Friday to possibly finalise the deal.

If finalised, this deal could establish a new template for how nations manage foreign technology platforms deemed critical to national security.


China’s counterpunch in the chip war

While TikTok grabs headlines as the most visible symbol of the USA–China digital rivalry, the more consequential battle may be unfolding in the semiconductor sector. Just as Washington extends the deadline for TikTok’s divestiture, Beijing has opened a new line of attack: an anti-dumping probe into US analogue chips.  

Announced by China’s Ministry of Commerce, the probe accuses US firms of ‘lowering and suppressing’ prices in ways that hurt domestic producers. It covers legacy chips built on older 40nm-plus process nodes — not the cutting-edge AI accelerators that dominate geopolitical debates, but the everyday workhorse components that power smart appliances, industrial equipment, and automobiles. These mature nodes account for a massive share of China’s consumption, with US firms supplying more than 40% of the market in recent years.

For China’s domestic industry, the probe is an opportunity. Analysts say it could force foreign suppliers to cede market share to local firms concentrated in Jiangsu and other industrial provinces. At the same time, there are reports that China is asking tech companies to stop purchasing Nvidia’s most powerful processors. And speaking of Nvidia, the company is in the crosshairs again, as China’s State Administration for Market Regulation (SAMR) issued a preliminary finding that Nvidia violated antitrust law linked to its 2020 acquisition of Mellanox Technologies. Depending on the outcome of the investigation, Nvidia could face penalties.

Meanwhile, Washington is tightening its own grip. The USA will require annual license renewals for South Korean firms Samsung and SK Hynix to supply advanced chips to Chinese factories — a reminder that even America’s allies are caught in the middle. 

Last month, the US government acquired a 10% stake in Intel. This week, Nvidia announced a $5 billion investment in Intel to co-develop custom chips with the company. Together, these moves reflect Washington’s broader push to reinforce semiconductor leadership amid competition from China.


UK and USA sign Tech Prosperity Deal

The USA and the UK have signed a Technology Prosperity Deal to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation.

On AI, the deal expands joint research programs, compute access, and datasets in areas like biotechnology, precision medicine, fusion, and space. It also aligns policies, strengthens standards, and deepens ties between the UK AI Security Institute and the US Center for AI Standards and Innovation to promote secure adoption.

On quantum, the countries will establish a benchmarking task force, launch a Quantum Code Challenge to mobilise researchers, and harness AI and high-performance computing to accelerate algorithm development and system readiness. A US-UK Quantum Industry Exchange Program will spur adoption across defence, health, finance, and energy.

The agreement also reinforces foundations for innovation, including research security, 6G development, resilient telecoms and navigation systems, and mobilising private capital for critical technologies.

The deal was signed during a state visit by President Trump to the UK. Also present: OpenAI’s Sam Altman, Nvidia’s Jensen Huang, Microsoft’s Satya Nadella, and Apple’s Tim Cook. 

Microsoft pledged $30bn over four years in the UK, its largest-ever UK commitment. Half will go into capital expenditure for AI and cloud datacentres, the rest into operations like research and sales. 

Nscale, OpenAI and Nvidia will develop a platform that will deploy OpenAI’s technology in the UK. Nvidia will channel £11bn in value into UK AI projects by supplying up to 120,000 Blackwell GPUs, data centre builds, and supercomputers. It is also directly investing £500m in Nscale. 

‘This is the week that I declare the UK will be an AI superpower,’ Jensen Huang told BBC News.

Missing from the deal? The UK’s Digital Services Tax (DST), which remains set at 2% and was previously reported to be part of the negotiations, along with copyright issues linked to AI training.


The digital playground gets a fence and a curfew

In response to rising concerns over the impact of AI and social media on teenagers, governments and tech companies are implementing new measures to enhance online safety for young users.

Australia has released its regulatory guidance for the incoming nationwide ban on social media access for children under 16, effective 10 December 2025. The legislation requires platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms must detect and remove underage accounts, communicating clearly with affected users. Platforms are also expected to block attempts to re-register. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

French lawmakers are proposing stricter regulations on teen social media use, including mandatory nighttime curfews. A parliamentary report suggests that social media accounts for 15- to 18-year-olds should be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues. This proposal follows concerns about the psychological impact of platforms like TikTok on minors. 

In the USA, the Federal Trade Commission (FTC) has launched an investigation into the safety of AI chatbots, focusing on their impact on children and teenagers. Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships. Not long after, grieving parents testified before the US Congress, urging lawmakers to regulate AI chatbots after their children died by suicide or self-harmed following interactions with these tools.

OpenAI has introduced a specialised version of ChatGPT tailored for teenagers, incorporating age-prediction technology to restrict access to the standard version for users under 18. Where uncertainty exists, it will assume the user is a teenager. If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities. This initiative aims to address growing concerns about the mental health risks associated with AI chatbots, while also raising concerns related to issues such as privacy and freedom of expression. 

The intentions are largely good, but a patchwork of bans, curfews, and algorithmic surveillance just underscores that the path forward is unclear. Meanwhile, the kids are almost certainly already finding the loopholes.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow.

The Human Rights Council

The Human Rights Council discussed a report on the human rights implications of new and emerging technologies in the military domain on 18 September. Prepared by the Human Rights Council Advisory Committee, the report recommends, among other measures, that ‘states and international organizations should consider adopting binding or other effective measures to ensure that new and emerging technologies in the military domain whose design, development or use pose significant risks of misuse, abuse or irreversible harm – particularly where such risks may result in human rights violations – are not developed, deployed or used’.

WTO Public Forum 2025

WTO’s largest outreach event, the WTO Public Forum, took place from 17 to 18 September under the theme ‘Enhance, Create and Preserve’. Digital issues were high on the agenda this year, with sessions dedicated to AI and trade, digital resilience, the moratorium on customs duties on electronic transmissions, and e-commerce negotiations, for example. Other issues were also salient, such as the uncertainty created by rising tariffs and the need for WTO reform.

During the Forum, the WTO launched the 2025 World Trade Report, under the title ‘Making trade and AI work together to the benefit of all’. The report explores AI’s potential to boost global trade, particularly through digitally deliverable services. It argues that AI can lower trade costs, improve supply-chain efficiency, and create opportunities for small firms and developing countries, but warns that without deliberate action, AI could deepen global inequalities and widen the gap between advanced and developing economies.

CSTD WG on data governance

The third meeting of the UN CSTD Working Group on data governance (WGDG) took place on 15-16 September. The focus of this meeting was on the work being carried out in the four working tracks of the WGDG:

  • principles of data governance at all levels;
  • interoperability between national, regional and international data systems;
  • considerations of sharing the benefits of data;
  • facilitation of safe, secure and trusted data flows, including cross-border data flows.

WGDG members reviewed the synthesis reports produced by the CSTD Secretariat, based on the responses to questionnaires proposed by the co-facilitators of working tracks. The WGDG decided to postpone the deadline for contributions to 7 October. More information can be found in the ‘call for contributions’ on the website of the WGDG.


LOOKING AHEAD

The next two weeks at the UN will be packed with high-level discussions on advancing digital cooperation and AI governance. 

The general debate, from 23 to 29 September, will gather heads of state, ministers, and global leaders to tackle pressing challenges—climate change, sustainable development, and international peace—under the theme ‘Better together: 80 years and more for peace, development and human rights.’ Diplo and the Geneva Internet Platform will track digital and AI-related discussions using a hybrid of expert analysis and AI tools, so be sure to bookmark our dedicated web page.

On 22 September, the UN Office for Digital and Emerging Technologies (ODET) will host Digital Cooperation Day, marking the first anniversary of the Global Digital Compact. Leaders from government, the private sector, civil society, and academia will explore inclusive digital economies, AI governance, and digital public infrastructure through panels, roundtables, and launches.

On 23 September, ITU and UNDP will host Digital@UNGA 2025: Digital for Good – For People and Prosperity at UN Headquarters. The anchor event will feature high-level discussions on digital inclusion, trust, rights, and equity, alongside showcases of initiatives such as the AI Hub for Sustainable Development. Complementing this gathering, affiliate sessions throughout the week will explore future internet governance, AI for the SDGs, digital identity, green infrastructure in Africa, online trust in the age of AI, climate early-warning systems, digital trade, and space-based connectivity. 

A major highlight will be the launch of the Global Dialogue on AI Governance on 25 September. The dialogue will hold its first meeting in 2026, alongside the AI for Good Summit in Geneva; its main task – as decided by the UN General Assembly – is to facilitate open, transparent and inclusive discussions on AI governance.



READING CORNER

Ever wonder how AI really works? Discover its journey from biological neurons to deep learning and the breakthrough paper that transformed modern artificial intelligence.


Hallucinations in AI can look like facts. Learn how flawed incentives and vague prompts create dangerous illusions.

OpenAI study shows ChatGPT adoption widening

OpenAI has released the largest study to date on how people use ChatGPT, based on 1.5 million anonymised conversations.

The research shows adoption is widening across demographics, with women now making up more than half of identified users. Growth has been particularly strong in low- and middle-income countries, where adoption rates have risen over four times faster than in wealthier nations.

Most ChatGPT conversations focus on practical help, with three-quarters of usage related to tasks such as writing, seeking information, or planning. Around half of interactions are advisory in nature, while a smaller share is devoted to coding or personal expression.

The study found 30% of consumer usage is work-related, with the rest tied to personal activities. Researchers argue the AI tool boosts productivity and supports decision-making, creating value not always captured by economic measures like GDP.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI sets new rules for teen safety in AI use

OpenAI has outlined a new framework for balancing safety, privacy and freedom in its AI systems, with a strong focus on teenagers.

The company stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.

At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.

The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.
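Taken together, this amounts to a conservative decision rule: default to the stricter teen experience unless there is a confident adult prediction. A minimal sketch of that logic, where predict_age() and the 0.9 confidence threshold are hypothetical stand-ins rather than OpenAI’s actual system:

```python
# Conservative age-gating sketch: uncertain cases fall back to "teen".
# predict_age() and the 0.9 threshold are illustrative assumptions.

def predict_age(signals: dict) -> tuple[bool, float]:
    # Stand-in for a real age-prediction model; returns a guess of
    # whether the user is an adult plus a confidence in that guess.
    return signals.get("age", 0) >= 18, signals.get("confidence", 0.5)

def experience_for(signals: dict) -> str:
    is_adult, confidence = predict_age(signals)
    if is_adult and confidence >= 0.9:
        return "adult"  # broader freedoms, within safety boundaries
    return "teen"       # under 18 OR uncertain -> stricter safeguards

print(experience_for({"age": 34, "confidence": 0.95}))  # -> adult
print(experience_for({"age": 34, "confidence": 0.55}))  # -> teen (uncertain)
```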

In some regions, identity verification may also be required to confirm age, a step the company admits reduces privacy but argues is essential for protecting minors.

Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.

If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.

The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.

Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #229 Von der Leyen declares Europe’s ‘Independence Moment’


5 – 12 September 2025


Dear readers,

‘Europe is in a fight,’ European Commission President Ursula von der Leyen declared as she opened her 2025 State of the Union speech. Addressing the European Parliament in Strasbourg, von der Leyen noted that ‘Europe must fight. For its place in a world in which many major powers are either ambivalent or openly hostile to Europe.’ In response, she argued for Europe’s ‘Independence Moment’ – a call for strategic autonomy.

One of the central pillars of her plan? A major push to invest in digital and clean technologies. Let’s explore the details we’ve heard in the speech.


The EU plans measures to support businesses and innovation, including a digital euro and an upcoming digital omnibus. Many European startups in key technologies like quantum, AI, and biotech seek foreign investment, which jeopardises the EU’s tech sovereignty, the speech notes. In response, the Commission will launch a multi-billion-euro Scaleup Europe Fund with private partners.

The Single Market remains incomplete, von der Leyen noted, mostly in three domains: finance, energy, and telecommunications. A Single Market Roadmap to 2028 will be presented, which will provide clear political deadlines.

Standing out in the speech was von der Leyen’s defence of Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to the mounting pressure and tariff threats from the US administration.

The EU needs ‘a European AI’, von der Leyen noted. Key initiatives include the Cloud and AI Development Act, the Quantum Sandbox, and the creation of European AI Gigafactories to help startups develop, train, and deploy next-generation AI models. 

Additionally, CEOs of Europe’s leading tech companies will present their European AI & Tech Declaration, pledging to invest in and strengthen Europe’s tech sovereignty, von der Leyen stated.

Europe should consider implementing guidelines or limits for children’s social media use, von der Leyen noted. She pointed to Australia’s pioneering social media restrictions as a model under observation, indicating that Europe could adopt a similar approach. To ensure a well-informed and balanced policy, she announced plans to commission a panel of experts by the end of the year to advise on the best strategies for Europe.

Von der Leyen’s bet is that a potent mix of massive investment, streamlined regulation, and a unified public-private front can finally stop Europe from playing catch-up in the global economic race.

History is on her side in one key regard: when the EU and corporate champions unite, they win big on setting global standards, and GSM is just one example. But past glory is no guarantee of future success. The rhetoric is sharp, and the stakes are existential. Now, the pressure is on to deliver more than just a powerful speech.


IN OTHER NEWS THIS WEEK

The world’s eyes turned to Nepal this week, where authorities banned 26 social media platforms, sparking nationwide protests, led largely by youth, against corruption. According to officials, the ban was introduced in an effort to curb misinformation, online fraud, and hate speech. It was lifted within roughly 24 hours, after the protests intensified and left 22 people dead. The events are likely to offer lessons for other governments grappling with the role of censorship during times of unrest.

Another country fighting corruption is Albania, using unusual means – the government made a pioneering move by introducing the world’s first AI-powered public official, named Diella. Appointed to oversee public procurement, the virtual minister represents an attempt to use technology itself to create a more transparent and efficient government, with the goal of ensuring procedures are ‘100% incorruptible.’ A laudable goal, but AI is only as unbiased as the data and algorithms it relies on. Still, it’s a daring first step.

Speaking of AI (and it seems we speak of little else these days), another nation is trying its best to adapt to the global transformation driven by rapid digitalisation and AI. Kazakhstan has announced an ambitious goal: to become a fully digital country within three years.

The central policy is the establishment of a new Ministry of Artificial Intelligence and Digital Development, which will ensure the total implementation of AI to modernise all sectors of the economy. This effort will be guided by a national strategy called ‘Digital Kazakhstan’ to combine all digital initiatives.

A second major announcement was the development of Alatau City, envisioned as the country’s innovation hub. Planned as the region’s first fully digital city, it will integrate Smart City technologies, allow cryptocurrency payments, and is being developed with the expertise of a leading Chinese company that helped build Shenzhen.

Has Kazakhstan bitten off more than it can chew in three years’ time? Even developing a national strategy can take years; implementing AI across every sector of the economy is exponentially more complex. Kazakhstan has dared to dream big; now it must work hard to achieve it.

AI’s ‘magic’ comes with a price. Authors sued Apple last Friday for allegedly training its AI on their copyrighted books. In a related development, AI company Anthropic agreed to a massive $1.5 billion settlement for a similar case – what plaintiffs’ lawyers are calling the largest copyright recovery in history, even though the company admitted no fault. Will this settlement mark a dramatic shift in how AI companies operate? Without a formal court ruling, it creates no legal precedent. For now, the slow grind of the copyright fight continues.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow. 

At the International Telecommunication Union (ITU), the Council Working Group (CWG) on WSIS and SDGs met on Tuesday and Wednesday to look at the work undertaken by ITU with regard to the implementation of WSIS outcomes and the Agenda 2030 and to discuss issues related to the ongoing WSIS+20 review process.

As we write this newsletter, the Expert Group on ITRs is working on the final report it needs to submit to the ITU Council in response to the task it was given to review the International Telecommunication Regulations (ITRs), considering evolving global trends, tech developments, and current regulatory practices.

A draft version of the report notes that members have divergent views on whether the ITRs need revision and even on their overall relevance; there also doesn’t seem to be a consensus on whether and how the work on revising the ITRs should continue. On another topic, the CWG on international internet-related public policy issues is holding an open consultation on ensuring meaningful connectivity for landlocked developing countries. 

Earlier in the week, the UN Institute for Disarmament Research (UNIDIR) hosted the Outer Space Security Conference, bringing together diplomats, policymakers, private actors, experts from the military sector and others to look at ways in which to shape a secure, inclusive and sustainable future for outer space.

Some of the issues discussed revolved around the implications of using emerging technologies such as AI and autonomous systems in the context of space technology and the cybersecurity challenges associated with such uses. 


IN CASE YOU MISSED IT
UN Cyber Dialogue 2025
www.diplomacy.edu

The session brought together discussants to offer diverse perspectives on how the OEWG experience can inform future global cyber negotiations.

African priorities for GDC
www.diplomacy.edu

In 2022, the idea of a Global Digital Compact was floated by the UN with the intention of developing shared principles for an open, free and secure digital future for all.


LOOKING AHEAD

The next meeting of the UN’s ‘Multi-Stakeholder Working Group on Data Governance’ is scheduled for 15-16 September in Geneva and is open to observers (both onsite and online).

In a recent event, experts from Diplo, the Open Knowledge Foundation (OKFN), and the Geneva Internet Platform analysed the Group’s progress and looked ahead to the September meeting. Catch up on the discussion and watch the full recording.

The 2025 WTO Public Forum will be held on 17–18 September in Geneva, and carries the theme ‘Enhance, Create, and Preserve.’ The forum aims to explore how digital advancements are reshaping global trade norms.

The agenda includes sessions that dig into the opportunities posed by e-commerce (such as improving connectivity, opening pathways for small businesses, and increasing market inclusivity), but also shows awareness of the risks – fragmentation of the digital space, uneven infrastructure, and regulatory misalignment, especially amid geopolitical tensions. 

The Human Rights Council started its 60th session, which will continue until 8 October. A report on privacy in the digital age by OHCHR will be discussed next Thursday, 18 September. It looks at challenges and risks with regard to discrimination and the unequal enjoyment of the right to privacy associated with the collection and processing of data, and offers some recommendations on how to prevent digitalisation from perpetuating or deepening discrimination and exclusion.

Among these are a recommendation for states to protect individuals from human rights abuses linked to corporate data processing and to ensure that digital public infrastructures are designed and used in ways that uphold the rights to privacy, non-discrimination and equality.



READING CORNER

This summer saw power plays over US chips and China’s minerals, alongside the global AI race with its competing visions. Lessons of disillusionment and clarity reframed AI’s trajectory, while digital intrusions continued to reshape geopolitics. And in New York, the UN took a decisive step toward a permanent cybersecurity mechanism. 


eIDAS 2 and the European Digital Identity Wallet aim to secure online interactions, reduce bureaucracy, and empower citizens across the EU with a reliable and user-friendly digital identity.

OpenAI moves to for-profit with Microsoft deal

Microsoft and OpenAI have agreed to new non-binding terms that will allow OpenAI to restructure into a for-profit company, marking a significant shift in their long-standing partnership.

The agreement sets the stage for OpenAI to raise capital, pursue additional cloud partnerships, and eventually go public, while Microsoft retains access to its technology.

The previous deal gave Microsoft exclusive rights to sell OpenAI tools via Azure and made it the primary provider of compute power. OpenAI has since expanded its options, including a $300 billion cloud deal with Oracle and an agreement with Google, allowing it to develop its own data centre project, Stargate.

OpenAI aims to maintain its nonprofit arm, which would hold a stake worth more than $100 billion at the projected $500 billion private market valuation.

Regulatory approval from the attorneys general of California and Delaware is required for the new structure, with OpenAI targeting completion by the end of the year to secure key funding.

Both companies continue to compete across AI products, from consumer chatbots to business tools, while Microsoft works on building its own AI models to reduce reliance on OpenAI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting – a credit later revoked – shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!