Weekly #230 Nepal’s Discord democracy: How a banned platform became a ballot box


12 – 19 September 2025


HIGHLIGHT OF THE WEEK

In a historic first for democracy, a country has chosen its interim prime minister via a messaging app. 

In early September, Nepal was thrown into turmoil after the government abruptly banned 26 social media platforms, including Facebook, YouTube, X, and Discord, citing failure to comply with registration rules. The move sparked outrage, particularly among the country’s Gen Z, who poured into the streets, accusing officials of corruption. The protests quickly turned deadly.

Within days, the ban was lifted. Nepalis turned to Discord to debate the country’s political future, fact-check rumours and collect nominations for its future leaders. On 12 September, the Discord community organised a digital poll for an interim prime minister, with former Supreme Court Chief Justice Sushila Karki emerging as the winner.

Karki was sworn in the same evening. On her recommendation, the President has dissolved parliament, and new elections have been scheduled for 5 March 2026, after which Karki will step down.


However temporary or symbolic, the episode underscored how digital platforms can become political arenas when traditional ones falter. When official institutions lose legitimacy, people will instinctively repurpose the tools at their disposal to build new ones. 


IN OTHER NEWS THIS WEEK

TikTok ban deadline extended to December 2025 as sale negotiations continue

The TikTok saga entered what many see as yet another act in a long-running drama. In early 2024, the US Congress, citing national security risks, passed a law demanding that ByteDance, TikTok’s Chinese parent company, divest control of the app or face a ban in the USA. The law, which had bipartisan support in Congress, was later upheld by the Supreme Court.

A refresher. The US government has long argued that the app’s access to US user data poses significant risks. Why? TikTok is a subsidiary of ByteDance, a private Chinese company possibly subject to China’s 2017 National Intelligence Law, which requires any Chinese entity to support, assist, and cooperate with state intelligence work – including, possibly, the transfer of US citizens’ TikTok data to China. TikTok and ByteDance, for their part, maintain that TikTok operates independently and respects user privacy.

However, the Trump administration has repeatedly postponed enforcement via executive orders.

Economic and trade negotiations with China have been central to the delay. As the fourth round of talks in Madrid coincided with the latest deadline, Trump opted to extend the deadline again — this time until 16 December 2025 — giving TikTok more breathing room. 

The talks in Madrid have revolved around a potential ‘framework deal’ under which TikTok would be sold or restructured in a way that appeases US concerns, but retain certain ‘Chinese characteristics.’

What do officials say is in the deal? 

  • TikTok’s algorithm: According to Wang Jingtao, deputy director of China’s Central Cyberspace Affairs Commission, there was consensus on authorisation of ‘the use of intellectual property rights such as (TikTok’s) algorithm’ — a main sticking point in the deal.
  • US user data: According to Wang Jingtao, the sides also agreed on entrusting a partner with handling US user data and content security.

What else is reported to be in the deal?

  • A new recommendation algorithm licensed from TikTok parent ByteDance.
  • A new company to run TikTok’s US operations and/or a new app for US users to move to.
  • A consortium of US investors, including Oracle, Silver Lake, and Andreessen Horowitz, owning 80% of the business, with 20% held by Chinese shareholders.
  • A mostly American board for the new company, including one member appointed by the US government.

Trump himself stated that he would speak with Chinese President Xi Jinping on Friday to possibly finalise the deal.

If finalised, this deal could establish a new template for how nations manage foreign technology platforms deemed critical to national security.


China’s counterpunch in the chip war

While TikTok grabs headlines as the most visible symbol of the USA–China digital rivalry, the more consequential battle may be unfolding in the semiconductor sector. Just as Washington extends the deadline for TikTok’s divestiture, Beijing has opened a new line of attack: an anti-dumping probe into US analogue chips.  

Announced by China’s Ministry of Commerce, the probe accuses US firms of ‘lowering and suppressing’ prices in ways that hurt domestic producers. It covers legacy chips built on older 40nm-plus process nodes — not the cutting-edge AI accelerators that dominate geopolitical debates, but the everyday workhorse components that power smart appliances, industrial equipment, and automobiles. These mature nodes account for a massive share of China’s consumption, with US firms supplying more than 40% of the market in recent years.

For China’s domestic industry, the probe is an opportunity. Analysts say it could force foreign suppliers to cede market share to local firms concentrated in Jiangsu and other industrial provinces. At the same time, there are reports that China is asking tech companies to stop purchasing Nvidia’s most powerful processors. And speaking of Nvidia, the company is in the crosshairs again, as China’s State Administration for Market Regulation (SAMR) issued a preliminary finding that Nvidia violated antitrust law in connection with its 2020 acquisition of Mellanox Technologies. Depending on the outcome of the investigation, Nvidia could face penalties.

Meanwhile, Washington is tightening its own grip. The USA will require annual license renewals for South Korean firms Samsung and SK Hynix to supply advanced chips to Chinese factories — a reminder that even America’s allies are caught in the middle. 

Last month, the US government acquired a 10% stake in Intel. This week, Nvidia announced a $5 billion investment in Intel to co-develop custom chips with the company. Together, these moves reflect Washington’s broader push to reinforce semiconductor leadership amid competition from China.


UK and USA sign Tech Prosperity Deal

The USA and the UK have signed a Technology Prosperity Deal to strengthen collaboration in frontier technologies, with a strong emphasis on AI, quantum, and the secure foundations needed for future innovation.

On AI, the deal expands joint research programs, compute access, and datasets in areas like biotechnology, precision medicine, fusion, and space. It also aligns policies, strengthens standards, and deepens ties between the UK AI Security Institute and the US Center for AI Standards and Innovation to promote secure adoption.

On quantum, the countries will establish a benchmarking task force, launch a Quantum Code Challenge to mobilise researchers, and harness AI and high-performance computing to accelerate algorithm development and system readiness. A US-UK Quantum Industry Exchange Program will spur adoption across defence, health, finance, and energy.

The agreement also reinforces foundations for innovation, including research security, 6G development, resilient telecoms and navigation systems, and mobilising private capital for critical technologies.

The deal was signed during a state visit by President Trump to the UK. Also present: OpenAI’s Sam Altman, Nvidia’s Jensen Huang, Microsoft’s Satya Nadella, and Apple’s Tim Cook. 

Microsoft pledged $30bn over four years in the UK, its largest-ever UK commitment. Half will go into capital expenditure for AI and cloud datacentres, the rest into operations like research and sales. 

Nscale, OpenAI and Nvidia will develop a platform that will deploy OpenAI’s technology in the UK. Nvidia will channel £11bn in value into UK AI projects by supplying up to 120,000 Blackwell GPUs, data centre builds, and supercomputers. It is also directly investing £500m in Nscale. 

‘This is the week that I declare the UK will be an AI superpower,’ Jensen Huang told BBC News.

Missing from the deal? The UK’s Digital Services Tax (DST), which remains set at 2% and was previously reported to be part of the negotiations, along with copyright issues linked to AI training.


The digital playground gets a fence and a curfew

In response to rising concerns over the impact of AI and social media on teenagers, governments and tech companies are implementing new measures to enhance online safety for young users.

Australia has released its regulatory guidance for the incoming nationwide ban on social media access for children under 16, effective 10 December 2025. The legislation requires platforms to verify users’ ages and ensure that minors are not accessing their services. Platforms must detect and remove underage accounts, communicating clearly with affected users. Platforms are also expected to block attempts to re-register. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

French lawmakers are proposing stricter regulations on teen social media use, including mandatory nighttime curfews. A parliamentary report suggests that social media accounts for 15- to 18-year-olds should be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues. This proposal follows concerns about the psychological impact of platforms like TikTok on minors. 

In the USA, the Federal Trade Commission (FTC) has launched an investigation into the safety of AI chatbots, focusing on their impact on children and teenagers. Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships. Not long after, grieving parents testified before the US Congress, urging lawmakers to regulate AI chatbots after their children died by suicide or self-harmed following interactions with these tools.

OpenAI has introduced a specialised version of ChatGPT tailored for teenagers, incorporating age-prediction technology to restrict access to the standard version for users under 18. Where uncertainty exists, it will assume the user is a teenager. If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities. This initiative aims to address growing concerns about the mental health risks associated with AI chatbots, while also raising concerns related to issues such as privacy and freedom of expression. 

The intentions are largely good, but a patchwork of bans, curfews, and algorithmic surveillance just underscores that the path forward is unclear. Meanwhile, the kids are almost certainly already finding the loopholes.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow.

The Human Rights Council

The Human Rights Council discussed a report on the human rights implications of new and emerging technologies in the military domain on 18 September. Prepared by the Human Rights Council Advisory Committee, the report recommends, among other measures, that ‘states and international organizations should consider adopting binding or other effective measures to ensure that new and emerging technologies in the military domain whose design, development or use pose significant risks of misuse, abuse or irreversible harm – particularly where such risks may result in human rights violations – are not developed, deployed or used’.

WTO Public Forum 2025

WTO’s largest outreach event, the WTO Public Forum, took place from 17 to 18 September under the theme ‘Enhance, Create and Preserve’. Digital issues were high on the agenda this year, with sessions dedicated to AI and trade, digital resilience, the moratorium on customs duties on electronic transmissions, and e-commerce negotiations, among other topics. Other issues were also salient, such as the uncertainty created by rising tariffs and the need for WTO reform. During the Forum, the WTO launched the 2025 World Trade Report, under the title ‘Making trade and AI work together to the benefit of all’. The report explores AI’s potential to boost global trade, particularly through digitally deliverable services. It argues that AI can lower trade costs, improve supply-chain efficiency, and create opportunities for small firms and developing countries, but warns that without deliberate action, AI could deepen global inequalities and widen the gap between advanced and developing economies.

CSTD WG on data governance

The third meeting of the UN CSTD Working Group on Data Governance (WGDG) took place on 15–16 September. The focus of this meeting was on the work being carried out in the four working tracks of the WGDG: 1. principles of data governance at all levels; 2. interoperability between national, regional and international data systems; 3. considerations of sharing the benefits of data; 4. facilitation of safe, secure and trusted data flows, including cross-border data flows.

WGDG members reviewed the synthesis reports produced by the CSTD Secretariat, based on the responses to questionnaires proposed by the co-facilitators of working tracks. The WGDG decided to postpone the deadline for contributions to 7 October. More information can be found in the ‘call for contributions’ on the website of the WGDG.


LOOKING AHEAD

The next two weeks at the UN will be packed with high-level discussions on advancing digital cooperation and AI governance. 

The general debate, from 23 to 29 September, will gather heads of state, ministers, and global leaders to tackle pressing challenges—climate change, sustainable development, and international peace—under the theme ‘Better together: 80 years and more for peace, development and human rights.’ Diplo and the Geneva Internet Platform will track digital and AI-related discussions using a hybrid of expert analysis and AI tools, so be sure to bookmark our dedicated web page.

On 22 September, the UN Office for Digital and Emerging Technologies (ODET) will host Digital Cooperation Day, marking the first anniversary of the Global Digital Compact. Leaders from government, the private sector, civil society, and academia will explore inclusive digital economies, AI governance, and digital public infrastructure through panels, roundtables, and launches.

On 23 September, ITU and UNDP will host Digital@UNGA 2025: Digital for Good – For People and Prosperity at UN Headquarters. The anchor event will feature high-level discussions on digital inclusion, trust, rights, and equity, alongside showcases of initiatives such as the AI Hub for Sustainable Development. Complementing this gathering, affiliate sessions throughout the week will explore future internet governance, AI for the SDGs, digital identity, green infrastructure in Africa, online trust in the age of AI, climate early-warning systems, digital trade, and space-based connectivity. 

A major highlight will be the launch of the Global Dialogue on AI Governance on 25 September. The dialogue will hold its first meeting in 2026 alongside the AI for Good Summit in Geneva; its main task – as decided by the UN General Assembly – is to facilitate open, transparent and inclusive discussions on AI governance.



READING CORNER

Ever wonder how AI really works? Discover its journey from biological neurons to deep learning and the breakthrough paper that transformed modern artificial intelligence.


Hallucinations in AI can look like facts. Learn how flawed incentives and vague prompts create dangerous illusions.

OpenAI study shows ChatGPT adoption widening

OpenAI has released the largest study to date on how people use ChatGPT, based on 1.5 million anonymised conversations.

The research shows adoption is widening across demographics, with women now making up more than half of identified users. Growth has been particularly strong in low- and middle-income countries, where adoption rates have risen over four times faster than in wealthier nations.

Most ChatGPT conversations focus on practical help, with three-quarters of usage related to tasks such as writing, seeking information, or planning. Around half of interactions are advisory in nature, while a smaller share is devoted to coding or personal expression.

The study found 30% of consumer usage is work-related, with the rest tied to personal activities. Researchers argue the AI tool boosts productivity and supports decision-making, creating value not always captured by economic measures like GDP.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI sets new rules for teen safety in AI use

OpenAI has outlined a new framework for balancing safety, privacy and freedom in its AI systems, with a strong focus on teenagers.

The company stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.

At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.

The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.

In some regions, identity verification may also be required to confirm age, a step the company admits reduces privacy but argues is essential for protecting minors.

Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.

If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.

The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.

Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #229 Von der Leyen declares Europe’s ‘Independence Moment’


5 – 12 September 2025


Dear readers,

‘Europe is in a fight,’ European Commission President Ursula von der Leyen declared as she opened her 2025 State of the Union speech. Addressing the European Parliament in Strasbourg, von der Leyen noted that ‘Europe must fight. For its place in a world in which many major powers are either ambivalent or openly hostile to Europe.’ In response, she argued for Europe’s ‘Independence Moment’ – a call for strategic autonomy.

One of the central pillars of her plan? A major push to invest in digital and clean technologies. Let’s explore the details we’ve heard in the speech.


The EU plans measures to support businesses and innovation, including a digital euro and an upcoming digital omnibus. Many European startups in key technologies like quantum, AI, and biotech seek foreign investment, which jeopardises the EU’s tech sovereignty, the speech notes. In response, the Commission will launch a multi-billion-euro Scaleup Europe Fund with private partners.

The Single Market remains incomplete, von der Leyen noted, mostly in three domains: finance, energy, and telecommunications. A Single Market Roadmap to 2028 will be presented, which will provide clear political deadlines.

Standing out in the speech was von der Leyen’s defence of Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to the mounting pressure and tariff threats from the US administration.

The EU needs ‘a European AI’, von der Leyen noted. Key initiatives include the Cloud and AI Development Act, the Quantum Sandbox, and the creation of European AI Gigafactories to help startups develop, train, and deploy next-generation AI models. 

Additionally, CEOs of Europe’s leading tech companies will present their European AI & Tech Declaration, pledging to invest in and strengthen Europe’s tech sovereignty, von der Leyen stated.

Europe should consider implementing guidelines or limits for children’s social media use, von der Leyen noted. She pointed to Australia’s pioneering social media restrictions as a model under observation, indicating that Europe could adopt a similar approach. To ensure a well-informed and balanced policy, she announced plans to commission a panel of experts by the end of the year to advise on the best strategies for Europe.

Von der Leyen’s bet is that a potent mix of massive investment, streamlined regulation, and a unified public-private front can finally stop Europe from playing catch-up in the global economic race.

History is on her side in one key regard: when the EU and corporate champions unite, they win big on setting global standards, and GSM is just one example. But past glory is no guarantee of future success. The rhetoric is sharp, and the stakes are existential. Now, the pressure is on to deliver more than just a powerful speech.


IN OTHER NEWS THIS WEEK

The world’s eyes turned to Nepal this week, where authorities banned 26 social media platforms for 24 hours after nationwide protests, led largely by youth, against corruption. According to officials, the ban was introduced in an effort to curb misinformation, online fraud, and hate speech. The ban has been lifted after the protests intensified and left 22 people dead. The events are likely to offer lessons for other governments grappling with the role of censorship during times of unrest.

Another country fighting corruption is Albania, using unusual means – the government made a pioneering move by introducing the world’s first AI-powered public official, named Diella. Appointed to oversee public procurement, the virtual minister represents an attempt to use technology itself to create a more transparent and efficient government, with the goal of ensuring procedures are ‘100% incorruptible.’ A laudable goal, but AI is only as unbiased as the data and algorithms it relies on. Still, it’s a daring first step.

Speaking of AI (and it seems we speak of little else these days), another nation is trying its best to adapt to the global transformation driven by rapid digitalisation and AI. Kazakhstan has announced an ambitious goal: to become a fully digital country within three years.

The central policy is the establishment of a new Ministry of Artificial Intelligence and Digital Development, which will ensure the total implementation of AI to modernise all sectors of the economy. This effort will be guided by a national strategy called ‘Digital Kazakhstan’ to combine all digital initiatives.

A second major announcement was the development of Alatau City, envisioned as the country’s innovation hub. Planned as the region’s first fully digital city, it will integrate Smart City technologies, allow cryptocurrency payments, and is being developed with the expertise of a leading Chinese company that helped build Shenzhen.

Has Kazakhstan bitten off more than it can chew in three years’ time? Even developing a national strategy can take years; implementing AI across every sector of the economy is exponentially more complex. Kazakhstan has dared to dream big; now it must work hard to achieve it.

AI’s ‘magic’ comes with a price. Authors sued Apple last Friday for allegedly training its AI on their copyrighted books. In a related development, AI company Anthropic agreed to a massive $1.5 billion settlement for a similar case – what plaintiffs’ lawyers are calling the largest copyright recovery in history, even though the company admitted no fault. Will this settlement mark a dramatic shift in how AI companies operate? Without a formal court ruling, it creates no legal precedent. For now, the slow grind of the copyright fight continues.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow. 

At the International Telecommunication Union (ITU), the Council Working Group (CWG) on WSIS and SDGs met on Tuesday and Wednesday to look at the work undertaken by ITU with regard to the implementation of WSIS outcomes and the 2030 Agenda, and to discuss issues related to the ongoing WSIS+20 review process.

As we write this newsletter, the Expert Group on ITRs is working on the final report it needs to submit to the ITU Council in response to the task it was given to review the International Telecommunication Regulations (ITRs), considering evolving global trends, tech developments, and current regulatory practices.

A draft version of the report notes that members have divergent views on whether the ITRs need revision and even on their overall relevance; there also doesn’t seem to be a consensus on whether and how the work on revising the ITRs should continue. On another topic, the CWG on international internet-related public policy issues is holding an open consultation on ensuring meaningful connectivity for landlocked developing countries. 

Earlier in the week, the UN Institute for Disarmament Research (UNIDIR) hosted the Outer Space Security Conference, bringing together diplomats, policymakers, private actors, experts from the military sector and others to look at ways in which to shape a secure, inclusive and sustainable future for outer space.

Some of the issues discussed revolved around the implications of using emerging technologies such as AI and autonomous systems in the context of space technology and the cybersecurity challenges associated with such uses. 


IN CASE YOU MISSED IT
UN Cyber Dialogue 2025
www.diplomacy.edu

The session brought together discussants to offer diverse perspectives on how the OEWG experience can inform future global cyber negotiations.

African priorities for GDC
www.diplomacy.edu

In 2022, the idea of a Global Digital Compact was floated by the UN with the intention of developing shared principles for an open, free and secure digital future for all.


LOOKING AHEAD

The next meeting of the UN’s ‘Multi-Stakeholder Working Group on Data Governance’ is scheduled for 15-16 September in Geneva and is open to observers (both onsite and online).

In a recent event, experts from Diplo, the Open Knowledge Foundation (OKFN), and the Geneva Internet Platform analysed the Group’s progress and looked ahead to the September meeting. Catch up on the discussion and watch the full recording.

The 2025 WTO Public Forum will be held on 17–18 September in Geneva, and carries the theme ‘Enhance, Create, and Preserve.’ The forum aims to explore how digital advancements are reshaping global trade norms.

The agenda includes sessions that dig into the opportunities posed by e-commerce (such as improving connectivity, opening pathways for small businesses, and increasing market inclusivity), but also shows awareness of the risks – fragmentation of the digital space, uneven infrastructure, and regulatory misalignment, especially amid geopolitical tensions. 

The Human Rights Council started its 60th session, which will continue until 8 October. A report on privacy in the digital age by OHCHR will be discussed next Thursday, 18 September. It looks at challenges and risks with regard to discrimination and the unequal enjoyment of the right to privacy associated with the collection and processing of data, and offers some recommendations on how to prevent digitalisation from perpetuating or deepening discrimination and exclusion.

Among these are a recommendation for states to protect individuals from human rights abuses linked to corporate data processing and to ensure that digital public infrastructures are designed and used in ways that uphold the rights to privacy, non-discrimination and equality.



READING CORNER

This summer saw power plays over US chips and China’s minerals, alongside the global AI race with its competing visions. Lessons of disillusionment and clarity reframed AI’s trajectory, while digital intrusions continued to reshape geopolitics. And in New York, the UN took a decisive step toward a permanent cybersecurity mechanism. 


eIDAS 2 and the European Digital Identity Wallet aim to secure online interactions, reduce bureaucracy, and empower citizens across the EU with a reliable and user-friendly digital identity.

OpenAI moves to for-profit with Microsoft deal

Microsoft and OpenAI have agreed to new non-binding terms that will allow OpenAI to restructure into a for-profit company, marking a significant shift in their long-standing partnership.

The agreement sets the stage for OpenAI to raise capital, pursue additional cloud partnerships, and eventually go public, while Microsoft retains access to its technology.

The previous deal gave Microsoft exclusive rights to sell OpenAI tools via Azure and made it the primary provider of compute power. OpenAI has since expanded its options, including a $300 billion cloud deal with Oracle and an agreement with Google, allowing it to develop its own data centre project, Stargate.

OpenAI aims to maintain its nonprofit arm, which will hold a stake worth more than $100 billion, based on the projected $500 billion private market valuation.

Regulatory approval from the attorneys general of California and Delaware is required for the new structure, with OpenAI targeting completion by the end of the year to secure key funding.

Both companies continue to compete across AI products, from consumer chatbots to business tools, while Microsoft works on building its own AI models to reduce reliance on OpenAI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case involving an AI tool credited as a co-author of a painting, later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and OpenAI drive record $300B investment in cloud for AI

OpenAI has finalised a record $300 billion deal with Oracle to secure vast computing infrastructure over five years, marking one of the most significant cloud contracts in history. The agreement is part of Project Stargate, OpenAI’s plan to build massive data centre capacity in the US and abroad.

The two companies will develop 4.5 gigawatts of computing capacity, equivalent to the power consumed by millions of homes.

Backed by SoftBank and other partners, the Stargate initiative aims to surpass $500 billion in investment, with construction already underway in Texas. Additional plans include a large-scale data centre project in the United Arab Emirates, supported by Emirati firm G42.

The scale of the deal highlights the fierce race among tech giants to dominate AI infrastructure. Amazon, Microsoft, Google and Meta are also pledging hundreds of billions of dollars towards data centres, while OpenAI faces mounting financial pressure.

The company currently generates around $10 billion in revenue but is expected to spend far more than that annually to support its expansion.

Oracle is betting heavily on OpenAI as a future growth driver, although the risk is high given OpenAI’s lack of profitability and Oracle’s growing debt burden.

The gamble rests on the assumption that ChatGPT and related AI technologies will continue to grow at an unprecedented pace, despite intense competition from Google, Anthropic and others.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pressure mounts as Apple prepares AI search push with Google ties

Apple’s struggles in the AI race have been hard to miss. Its Apple Intelligence launch was disappointing, and its reliance on ChatGPT appeared to be a concession to rivals.

Bloomberg’s Mark Gurman now reports that Apple plans to introduce its AI-powered web search tool in spring 2026. The move would position it against OpenAI and Perplexity, while renewing pressure on Google.

The speculation comes after news that Google may integrate its Gemini AI into Apple devices. During an antitrust trial in April, Google CEO Sundar Pichai confirmed plans to roll out updates later this year.

According to Gurman, Apple and Google finalised an agreement for Apple to test a Google-developed AI model to boost its voice assistant. The partnership reflects Apple’s mixed strategy of dependence and rivalry with Google.

With a strong record for accurate Apple forecasts, Gurman suggests the company hopes the move will narrow its competitive gap. Whether it can outpace Google, especially given Pixel’s strong AI features, remains an open question.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Altman questions if social media is dominated by bots

OpenAI CEO Sam Altman has sparked debate after admitting he increasingly struggles to distinguish between genuine online conversations and content generated by bots or AI models.

Altman described having the ‘strangest experience’ while reading about OpenAI’s Codex model, saying the comments instinctively felt fake even though he knew the growth trend was real. He said social media reward structures, ‘LLM-speak’ and astroturfing make communities feel less genuine.

His comments follow an earlier admission that he had never taken the so-called dead internet theory seriously until now, with so many accounts on X apparently run by large language models. The theory claims bots and artificial content dominate online activity, though evidence of coordinated control is lacking.

Reactions were divided, with some users agreeing that online communities have become increasingly bot-like. Others argued the change reflects shifting dynamics in niche groups rather than fake accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!