OpenAI sets new rules for teen safety in AI use

OpenAI has outlined a new framework for balancing safety, privacy and freedom in its AI systems, with a strong focus on teenagers.

The company stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.

At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.

The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.
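How such a default might work can be pictured as a simple decision rule. The sketch below is purely illustrative: the class, function names, confidence threshold and labels are our own assumptions, not OpenAI’s actual age-prediction system.

```python
# Illustrative sketch of an "assume minor when uncertain" age-gating policy.
# All names and thresholds here are hypothetical, not OpenAI's real system.
from dataclasses import dataclass

@dataclass
class AgePrediction:
    estimated_age: int   # model's best guess of the user's age
    confidence: float    # 0.0-1.0 confidence in that guess

def select_experience(pred: AgePrediction, confidence_threshold: float = 0.9) -> str:
    """Return which product experience to serve for this user."""
    # If the model is not confident the user is an adult, err on the side
    # of caution and serve the restricted teen experience.
    if pred.confidence < confidence_threshold or pred.estimated_age < 18:
        return "teen_experience"    # stricter content rules, parental alerts
    return "adult_experience"       # broader freedom within safety bounds

# Example: an uncertain prediction falls back to the teen experience.
print(select_experience(AgePrediction(estimated_age=21, confidence=0.6)))  # teen_experience
```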

In some regions, identity verification may also be required to confirm age, a step the company admits reduces privacy but argues is essential for protecting minors.

Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.

If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.

The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.

Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #229 Von der Leyen declares Europe’s ‘Independence Moment’


5–12 September 2025


Dear readers,

‘Europe is in a fight,’ European Commission President Ursula von der Leyen declared as she opened her 2025 State of the Union speech. Addressing the European Parliament in Strasbourg, von der Leyen noted that ‘Europe must fight. For its place in a world in which many major powers are either ambivalent or openly hostile to Europe.’ In response, she argued for Europe’s ‘Independence Moment’ – a call for strategic autonomy.

One of the central pillars of her plan? A major push to invest in digital and clean technologies. Let’s explore the details from the speech.


The EU plans measures to support businesses and innovation, including a digital euro and an upcoming digital omnibus. Many European startups in key technologies like quantum, AI, and biotech seek foreign investment, which jeopardises the EU’s tech sovereignty, the speech notes. In response, the Commission will launch a multi-billion-euro Scaleup Europe Fund with private partners.

The Single Market remains incomplete, von der Leyen noted, mostly in three domains: finance, energy, and telecommunications. A Single Market Roadmap to 2028 will be presented, which will provide clear political deadlines.

Standing out in the speech was von der Leyen’s defence of Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to the mounting pressure and tariff threats from the US administration.

The EU needs ‘a European AI’, von der Leyen noted. Key initiatives include the Cloud and AI Development Act, the Quantum Sandbox, and the creation of European AI Gigafactories to help startups develop, train, and deploy next-generation AI models. 

Additionally, CEOs of Europe’s leading tech companies presented their European AI & Tech Declaration, pledging to invest in and strengthen Europe’s tech sovereignty.

Europe should consider implementing guidelines or limits for children’s social media use, von der Leyen noted. She pointed to Australia’s pioneering social media restrictions as a model under observation, indicating that Europe could adopt a similar approach. To ensure a well-informed and balanced policy, she announced plans to commission a panel of experts by the end of the year to advise on the best strategies for Europe.

Von der Leyen’s bet is that a potent mix of massive investment, streamlined regulation, and a unified public-private front can finally stop Europe from playing catch-up in the global economic race.

History is on her side in one key regard: when the EU and corporate champions unite, they win big on setting global standards, and GSM is just one example. But past glory is no guarantee of future success. The rhetoric is sharp, and the stakes are existential. Now, the pressure is on to deliver more than just a powerful speech.


IN OTHER NEWS THIS WEEK

The world’s eyes turned to Nepal this week, where authorities banned 26 social media platforms for 24 hours after nationwide anti-corruption protests led largely by young people. According to officials, the ban was introduced in an effort to curb misinformation, online fraud, and hate speech. The ban was lifted after the protests intensified and left 22 people dead. The events are likely to offer lessons for other governments grappling with the role of censorship during times of unrest.

Another country fighting corruption is Albania, using unusual means – the government made a pioneering move by introducing the world’s first AI-powered public official, named Diella. Appointed to oversee public procurement, the virtual minister represents an attempt to use technology itself to create a more transparent and efficient government, with the goal of ensuring procedures are ‘100% incorruptible.’ A laudable goal, but AI is only as unbiased as the data and algorithms it relies on. Still, it’s a daring first step.

Speaking of AI (and it seems we speak of little else these days), another nation is trying its best to adapt to the global transformation driven by rapid digitalisation and AI. Kazakhstan has announced an ambitious goal: to become a fully digital country within three years.

The central policy is the establishment of a new Ministry of Artificial Intelligence and Digital Development, which will oversee the implementation of AI across all sectors of the economy. The effort will be guided by a national strategy called ‘Digital Qazaqstan’ that brings together all digital initiatives.

A second major announcement was the development of Alatau City, envisioned as the country’s innovation hub. Planned as the region’s first fully digital city, it will integrate Smart City technologies, allow cryptocurrency payments, and is being developed with the expertise of a leading Chinese company that helped build Shenzhen.

Has Kazakhstan bitten off more than it can chew in three years’ time? Even developing a national strategy can take years; implementing AI across every sector of the economy is exponentially more complex. Kazakhstan has dared to dream big; now it must work hard to achieve it.

AI’s ‘magic’ comes with a price. Authors sued Apple last Friday for allegedly training its AI on their copyrighted books. In a related development, AI company Anthropic agreed to a massive $1.5 billion settlement in a similar case – what plaintiffs’ lawyers are calling the largest copyright recovery in history, even though the company admitted no fault. Will this settlement mark a dramatic shift in how AI companies operate? Without a formal court ruling, it creates no legal precedent. For now, the slow grind of the copyright fight continues.


THIS WEEK IN GENEVA

The digital governance scene has been busy in Geneva this week. Here’s what we’ve been following.

At the International Telecommunication Union (ITU), the Council Working Group (CWG) on WSIS and SDGs met on Tuesday and Wednesday to review the ITU’s work on implementing the WSIS outcomes and the 2030 Agenda, and to discuss issues related to the ongoing WSIS+20 review process.

As we write this newsletter, the Expert Group on ITRs is working on the final report it needs to submit to the ITU Council in response to the task it was given to review the International Telecommunication Regulations (ITRs), considering evolving global trends, tech developments, and current regulatory practices.

A draft version of the report notes that members have divergent views on whether the ITRs need revision and even on their overall relevance; there also doesn’t seem to be a consensus on whether and how the work on revising the ITRs should continue. On another topic, the CWG on international internet-related public policy issues is holding an open consultation on ensuring meaningful connectivity for landlocked developing countries. 

Earlier in the week, the UN Institute for Disarmament Research (UNIDIR) hosted the Outer Space Security Conference, bringing together diplomats, policymakers, private actors, military experts and others to look at ways to shape a secure, inclusive and sustainable future for outer space.

Some of the issues discussed revolved around the implications of using emerging technologies such as AI and autonomous systems in the context of space technology and the cybersecurity challenges associated with such uses. 


IN CASE YOU MISSED IT
UN Cyber Dialogue 2025
www.diplomacy.edu

The session brought together discussants to offer diverse perspectives on how the OEWG experience can inform future global cyber negotiations.

African priorities for GDC
www.diplomacy.edu

In 2022, the idea of a Global Digital Compact was floated by the UN with the intention of developing shared principles for an open, free and secure digital future for all.


LOOKING AHEAD

The next meeting of the UN’s ‘Multi-Stakeholder Working Group on Data Governance’ is scheduled for 15–16 September in Geneva and is open to observers (both onsite and online).

In a recent event, experts from Diplo, the Open Knowledge Foundation (OKFN), and the Geneva Internet Platform analysed the Group’s progress and looked ahead to the September meeting. Catch up on the discussion and watch the full recording.

The 2025 WTO Public Forum will be held on 17–18 September in Geneva, and carries the theme ‘Enhance, Create, and Preserve.’ The forum aims to explore how digital advancements are reshaping global trade norms.

The agenda includes sessions that dig into the opportunities offered by e-commerce (such as improving connectivity, opening pathways for small businesses, and increasing market inclusivity), but also shows awareness of the risks – fragmentation of the digital space, uneven infrastructure, and regulatory misalignment, especially amid geopolitical tensions.

The Human Rights Council started its 60th session, which will continue until 8 October. A report on privacy in the digital age by OHCHR will be discussed next Thursday, 18 September. It looks at challenges and risks with regard to discrimination and the unequal enjoyment of the right to privacy associated with the collection and processing of data, and offers some recommendations on how to prevent digitalisation from perpetuating or deepening discrimination and exclusion.

Among them are recommendations for states to protect individuals from human rights abuses linked to corporate data processing and to ensure that digital public infrastructures are designed and used in ways that uphold the rights to privacy, non-discrimination and equality.



READING CORNER

This summer saw power plays over US chips and China’s minerals, alongside the global AI race with its competing visions. Lessons of disillusionment and clarity reframed AI’s trajectory, while digital intrusions continued to reshape geopolitics. And in New York, the UN took a decisive step toward a permanent cybersecurity mechanism. 


eIDAS 2 and the European Digital Identity Wallet aim to secure online interactions, reduce bureaucracy, and empower citizens across the EU with a reliable and user-friendly digital identity.

OpenAI moves to for-profit with Microsoft deal

Microsoft and OpenAI have agreed to new non-binding terms that will allow OpenAI to restructure into a for-profit company, marking a significant shift in their long-standing partnership.

The agreement sets the stage for OpenAI to raise capital, pursue additional cloud partnerships, and eventually go public, while Microsoft retains access to its technology.

The previous deal gave Microsoft exclusive rights to sell OpenAI tools via Azure and made it the primary provider of compute power. OpenAI has since expanded its options, including a $300 billion cloud deal with Oracle and an agreement with Google, allowing it to develop its own data centre project, Stargate.

OpenAI aims to maintain its nonprofit arm, which will receive an equity stake worth more than $100 billion, based on the projected $500 billion private market valuation.

Regulatory approval from the attorneys general of California and Delaware is required for the new structure, with OpenAI targeting completion by the end of the year to secure key funding.

Both companies continue to compete across AI products, from consumer chatbots to business tools, while Microsoft works on building its own AI models to reduce reliance on OpenAI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only works of human authorship are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting, a credit that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and OpenAI drive record $300B investment in cloud for AI

OpenAI has finalised a record $300 billion deal with Oracle to secure vast computing infrastructure over five years, marking one of the most significant cloud contracts in history. The agreement is part of Project Stargate, OpenAI’s plan to build massive data centre capacity in the US and abroad.

The two companies will develop 4.5 gigawatts of computing capacity, roughly equivalent to the electricity demand of millions of homes.
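As a rough sanity check on the ‘millions of homes’ comparison (assuming an average household draws on the order of 1.2 kW of continuous power, a figure not given in the article), 4.5 GW works out to a few million homes:

```python
# Back-of-the-envelope check of the "millions of homes" comparison.
# The ~1.2 kW average continuous household demand is an assumed ballpark
# figure, not a number from the article.
capacity_watts = 4.5e9      # 4.5 GW of planned computing capacity
avg_home_watts = 1.2e3      # assumed average household draw (~1.2 kW)

homes_equivalent = capacity_watts / avg_home_watts
print(f"Roughly {homes_equivalent / 1e6:.1f} million homes")  # ~3.8 million
```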

Backed by SoftBank and other partners, the Stargate initiative aims to surpass $500 billion in investment, with construction already underway in Texas. Additional plans include a large-scale data centre project in the United Arab Emirates, supported by Emirati firm G42.

The scale of the deal highlights the fierce race among tech giants to dominate AI infrastructure. Amazon, Microsoft, Google and Meta are also pledging hundreds of billions of dollars towards data centres, while OpenAI faces mounting financial pressure.

The company currently generates around $10 billion in revenue but is expected to spend far more than that annually to support its expansion.

Oracle is betting heavily on OpenAI as a future growth driver, although the risk is high given OpenAI’s lack of profitability and Oracle’s growing debt burden.

It is a gamble that rests on the assumption that ChatGPT and related AI technologies will continue to grow at an unprecedented pace, despite intense competition from Google, Anthropic and others.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pressure mounts as Apple prepares AI search push with Google ties

Apple’s struggles in the AI race have been hard to miss. Its Apple Intelligence launch was disappointing, and its reliance on ChatGPT appeared to be a concession to rivals.

Bloomberg’s Mark Gurman now reports that Apple plans to introduce its AI-powered web search tool in spring 2026. The move would position it against OpenAI and Perplexity, while renewing pressure on Google.

The speculation comes after news that Google may integrate its Gemini AI into Apple devices. During an antitrust trial in April, Google CEO Sundar Pichai confirmed plans to roll out updates later this year.

According to Gurman, Apple and Google finalised an agreement for Apple to test a Google-developed AI model to boost its voice assistant. The partnership reflects Apple’s mixed strategy of dependence and rivalry with Google.

With a strong record for accurate Apple forecasts, Gurman suggests the company hopes the move will narrow its competitive gap. Whether it can outpace Google, especially given Pixel’s strong AI features, remains an open question.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Altman questions if social media is dominated by bots

OpenAI CEO Sam Altman has sparked debate after admitting he increasingly struggles to distinguish between genuine online conversations and content generated by bots or AI models.

Altman described the ‘strangest experience’ while reading about OpenAI’s Codex model, saying the comments instinctively felt fake even though he knew the growth trend was real. He said that social media engagement rewards, ‘LLM-speak’ and astroturfing make communities feel less genuine.

His comments follow an earlier admission that he had never taken the so-called dead internet theory seriously until now, when so many accounts on X appear to be run by large language models. The theory claims bots and artificial content dominate online activity, though evidence of coordinated control is lacking.

Reactions were divided, with some users agreeing that online communities have become increasingly bot-like. Others argued the change reflects shifting dynamics in niche groups rather than fake accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft brings Anthropic AI into Office 365 as OpenAI tensions rise

The US tech giant Microsoft is expanding its AI strategy by integrating Anthropic’s Claude models into Office 365, adding them to apps like Word, Excel and Outlook instead of relying solely on OpenAI.

Internal tests reportedly showed Anthropic’s systems outperforming OpenAI in specific reasoning and data-processing tasks, prompting Microsoft to adopt a hybrid approach while maintaining OpenAI as a frontier partner.

The shift reflects growing strain between Microsoft and OpenAI, with disputes over intellectual property and cloud infrastructure as well as OpenAI’s plans for greater independence.

By diversifying suppliers, Microsoft reduces risks, lowers costs and positions itself to stay competitive while OpenAI prepares for a potential public offering and develops its own data centres.

Anthropic, backed by Amazon and Google, has built its reputation on safety-focused AI, appealing to Microsoft’s enterprise customers wary of regulatory pressures.

Analysts believe the move could accelerate innovation, spark a ‘multi-model era’ of AI integration, and pressure OpenAI to enhance its technology faster.

The decision comes amid Microsoft’s push to broaden its AI ecosystem, including its in-house MAI-1 model and partnerships with firms like DeepSeek.

Regulators are closely monitoring these developments, given Microsoft’s dominant role in AI investment and the potential antitrust implications of its expanding influence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and lower enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot has acknowledged that hundreds of millions of bots may be active on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!