OpenAI and AMD strike 6GW GPU deal to power next-generation AI infrastructure

AMD and OpenAI have announced a strategic partnership to deploy up to six gigawatts of AMD GPUs, marking one of the largest AI compute collaborations to date.

The multi-year agreement will begin with the rollout of one gigawatt of AMD Instinct MI450 GPUs in the second half of 2026, with further deployments planned across future AMD generations.

The deal deepens a long-standing relationship between the two companies that began with AMD’s MI300X and MI350X series.

OpenAI will adopt AMD as a core strategic compute partner, integrating its technology into large-scale AI systems and jointly optimising product roadmaps to support next-generation AI workloads.

To strengthen alignment, AMD has issued OpenAI a warrant for up to 160 million shares, with tranches vesting as the partnership achieves deployment and share-price milestones. AMD expects the collaboration to deliver tens of billions in revenue and boost its non-GAAP earnings per share.

AMD CEO Dr Lisa Su called the deal ‘a true win-win’ for both companies, while OpenAI’s Sam Altman said the partnership will ‘accelerate progress and bring advanced AI benefits to everyone faster’.

The collaboration positions AMD as a leading hardware supplier in the race to build global-scale AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI backs policy push for Europe’s AI uptake

OpenAI and Allied for Startups have released Hacktivate AI, a set of 20 ideas to speed up AI adoption across Europe ahead of the Commission’s Apply AI Strategy.

The report emerged from a Brussels policy hackathon with 65 participants from EU bodies, governments, enterprises and startups, proposing measures such as an Individual AI Learning Account, an AI Champions Network for SMEs, a European GovAI Hub and relentless harmonisation.

OpenAI highlights strong European demand and uneven workplace uptake, citing sector gaps and the need for targeted support, while pointing to initiatives like OpenAI Academy to widen skills.

Broader policy momentum is building, with the EU preparing an Apply AI Strategy to boost homegrown tools and cut dependencies, reinforcing the push for practical deployment across public services and industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI industry faces recalibration as Altman delays AGI

OpenAI CEO Sam Altman has again adjusted his timeline for achieving artificial general intelligence (AGI). After earlier forecasts for 2023 and 2025, Altman suggests 2030 as a more realistic milestone. The move reflects mounting pressure and shifting expectations in the AI sector.

OpenAI’s public projections come amid challenging financials. Despite a valuation near $500 billion, the company reportedly lost $5 billion last year on $3.7 billion in revenue. Investors remain drawn to ambitious claims of AGI, despite widespread scepticism. Predictions now span from 2026 to 2060.

Experts question whether AGI is feasible under current large language model (LLM) architectures. They point out that LLMs rely on probabilistic patterns in text, lack lived experience, and cannot develop human judgement or intuition from data alone.

Another point of critique is that text-based models cannot fully capture embodied expertise. Fields like law, medicine, or skilled trades depend on hands-on training, tacit knowledge, and real-world context, where AI remains fundamentally limited.

As investors and commentators calibrate expectations, the AI industry may face a reckoning. Altman’s shifting forecasts underscore how hype and uncertainty continue to shape the race toward perceived machine-level intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #232 The rise of AI slop: When social media turns more artificial


26 September – 3 October 2025


HIGHLIGHT OF THE WEEK

The rise of AI slop: When social media turns more artificial

Last Thursday, Meta quietly introduced Vibes, a new short-form video feed in the Meta AI app that is wholly powered by AI. Rather than spotlighting real creators or grassroots material, the feed is built around synthetic content.

This Tuesday, OpenAI revealed Sora, a companion app centred on AI-created short videos, complete with ‘cameo’ features letting people insert their own faces (with permission) into generative scenes.

From the outside, both Vibes and Sora look like competitive copies of TikTok or Reels — only their entire content pipeline is synthetic. 

They are the first dedicated firehoses of what has been officially termed ‘AI slop.’ This phrase, added to the Cambridge Dictionary in July 2025 and defined as ‘content on the internet that is of very low quality, especially when it is created by AI,’ perfectly captures the core concern.

Across the tech world, reactions ranged from bemused to alarmed. Because while launching a new social media product is hardly radical, creating a platform whose entire video ecosystem is synthetic — devoid of human spark — is something else entirely. 

Why is it concerning? Because it blurs the line between real and fake, making it hard to trust what you see. It can copy creators’ work without permission and flood feeds with shallow, meaningless videos that grab attention but add little value. Algorithms exploit user preferences, while features like synthetic cameos can be misused for bullying or identity abuse. And then there’s also the fact that AI clips typically lack human stories and emotion, eroding authenticity.

What’s next? Ultimately, this shift to AI-generated content raises a philosophical question: What is the purpose of our shared digital spaces?

As we move forward, perhaps we need to approach this new landscape more thoughtfully — embracing innovation where it serves us, but always making space for the authentic, the original, and the human.

For now, Vibes and Sora have not yet been rolled out worldwide. Given the tepid response from early adopters, their success is far from guaranteed. Ultimately, their fate hinges on whether people actually use them.


IN OTHER NEWS THIS WEEK

UNGA80 turns spotlight on digital issues and AI governance

In our previous newsletter, published on Friday, we covered all developments at the UNGA80 up to that day. In this edition, we bring you everything that unfolded from Friday through Monday.

On Friday, AI governance, digital cooperation, and the critical issue of child safety in the digital space stood out in the statements. Member states underlined that the transformative potential of AI for development – from the green energy transition to improved public services – is inextricably linked to the urgent need for global governance. Several leaders welcomed the new AI mechanisms established by UNGA, while others called for new frameworks to manage risks, particularly those related to cybercrime, disinformation, and the mental health of youth. A recurring theme was the need to actively address the digital divide through investments in digital infrastructure, skills, and technology transfer, stressing that the benefits of this new era must be shared fairly with all. The discussions reinforced the message that tackling these complex, interconnected challenges requires mature multilateralism and reinforced international cooperation.

On Saturday, several statements highlighted the importance of harnessing AI and digital technologies for development, security, and inclusive growth. Delegates emphasised responsible AI governance, ethical frameworks, and international norms to manage risks, including in military applications. The need for equitable access to AI, digital literacy, and capacity building for developing countries was stressed as a way to bridge technological and social divides. Participants also addressed cybersecurity, disinformation, and the influence of global tech corporations, emphasising the importance of multilateral cooperation and human-centric approaches. One recurring message was that leveraging AI and digital innovation responsibly can drive sustainable development, economic autonomy, and long-term prosperity for all.

On Monday, the transformative potential and urgent risks associated with AI continued to be highlighted. While AI can boost development, health, education, and productivity – especially in least developed countries – it must be governed responsibly to prevent inequality, bias, and insecurity. Calls for a global AI framework were echoed in various statements, alongside broader appeals for inclusive digital cooperation, accelerated technology transfer, and investment in infrastructure, literacy, and talent development. Speakers warned that digital disruption is deepening geopolitical divides, with smaller and developing nations demanding a voice in shaping emerging governance regimes. Bridging the digital divide, advancing secure and rights-based technologies, and protecting against cybercrime were framed as essential.

The bigger picture: Comprehensive coverage of UNGA80 can be found on our dedicated web page.


Chips and sovereignty: From globalisation to guarded autonomy

The global race for semiconductor dominance is accelerating, with both the EU and Taiwan asserting tighter control over their technological assets in response to growing US pressure.

EU member states have called for a revised and more assertive EU Chips Act, arguing that Europe must treat semiconductors as a strategic industry on par with aerospace and defence. The signatories — representing all 27 EU economies — warn that while competitors in the US and Asia are rapidly scaling public investment, Europe risks falling behind unless it strengthens its domestic ecosystem across R&D, design, manufacturing, and workforce development.

The proposed ‘second-phase Chips Act’ is built around three strategic objectives:

  • Prosperity, through a competitive and innovation-led semiconductor economy
  • Indispensability, by securing key control points in the value chain
  • Resilience, to guarantee supply for critical sectors during geopolitical shocks.

The EU’s message is clear: Europe intends not just to participate in the semiconductor industry, but to shape it on its own terms, backed by coordinated investment, industrial alliances, and international partnerships that reinforce — rather than dilute — strategic autonomy.

That same theme of sovereignty defines Taiwan’s position.

Amid negotiations with Taiwan, US Commerce Secretary Howard Lutnick floated a proposal that only half of the chips America consumes should be produced in Taiwan, with the other half manufactured in the USA to reduce dependence on a single foreign supplier. But Taiwan’s Vice Premier Cheng Li-chiun dismissed the idea outright, stating that such terms were never part of formal talks and would not be accepted.

While Taiwan is willing to deepen commercial ties with the US, it refuses to relinquish control over the advanced semiconductor capabilities that underpin its geopolitical leverage.

The bottom line: The age of supplier nations is over; the age of semiconductor sovereignty has begun. The message is the same in Brussels and Taipei: chips are too critical to entrust to someone else.


From code to court: xAI vs OpenAI and Apple

In the high-stakes arena of AI, a bitter rivalry is now unfolding in courtrooms.

Elon Musk’s AI venture xAI has launched an aggressive new lawsuit against OpenAI, accusing it of orchestrating a coordinated ‘poaching’ campaign to steal proprietary technology. xAI claims that OpenAI recruiters targeted engineers who then illicitly transferred source code, data-centre playbooks, and training methodologies to further OpenAI’s competitive edge. 

According to xAI, key incidents included employees uploading confidential files to personal devices, and repeated AirDrop transfers — behaviour that Musk’s company says amounts to trade secret misappropriation. Their remedy: damages, injunctions, and orders compelling OpenAI to purge models built on the contested materials. 

OpenAI, however, fired back. In court filings earlier this week, it asked a judge to dismiss xAI’s claims, calling them part of Musk’s ‘ongoing harassment’ of the company. OpenAI contends that xAI employees are free to leave and be hired elsewhere, and that xAI’s allegations are unsubstantiated. 

But the conflict doesn’t stop there.

In August, Musk accused Apple of colluding with OpenAI to block competition — alleging that Apple disadvantaged the Grok chatbot (developed by xAI) in its App Store rankings precisely to favour OpenAI’s ChatGPT.

Apple and OpenAI have responded together this week in court, asking a federal judge to dismiss this separate antitrust-style claim. Their defence is blunt: the agreement between Apple and OpenAI is explicitly non-exclusive, and Apple retains the freedom to work with other AI providers. Further, they argue, xAI has failed to plausibly show how embedding ChatGPT into Apple devices has harmed competition.

What’s behind all this? The ferocious race for talent, technological leadership, and market dominance in AI. We’ll see how it pans out.

LOOKING AHEAD
ITU Space Sustainability Forum

On 7–8 October 2025, the ITU will host the Space Sustainability Forum (SSF-25), gathering experts, regulators, industry leaders, and policymakers to address the long-term health, security, and governance of outer space.

Swiss IGF

On 9 October 2025, policymakers, researchers, industry representatives, and civil society actors will gather at Welle 7 in Bern, in situ and online, for the Swiss Internet Governance Forum 2025.

SEEDIG 10

With the theme ‘A Decade of Dialogue and Cooperation: What’s Next?’, the anniversary edition of the South Eastern European Dialogue on Internet Governance (SEEDIG) is designed as both a stocktaking exercise and a forward-looking consultation.



READING CORNER

Quantum internet is emerging not only as a scientific milestone but as a transformative force that could redefine how governments, healthcare systems, and citizens interact in the digital age.

UNGA

The annual General Debate at UNGA is the stage where countries outline their strategic priorities, concerns, and proposals. Overall, the sentiment of the General Debate can be distilled into three key words: echo, gloom, and hope.

Japan and OpenAI team up for public sector AI innovation

Japan’s Digital Agency partners with OpenAI to integrate AI into public services, enhancing efficiency and innovation. Gennai, an OpenAI-powered tool, will enable government employees to explore innovative public sector applications, supporting Japan’s modern governance vision.

The collaboration supports Japan’s leadership in the Hiroshima AI Process, backed by the OECD and G7. The framework sets global AI guidelines, ensuring safety, security, and trust while promoting inclusive governance across governments, industry, academia, and civil society in Asia and beyond.

OpenAI is committed to meeting Japan’s rigorous standards and pursuing ISMAP certification to ensure secure and reliable AI use in government operations. The partnership strengthens trust and transparency in AI deployment, aligning with Japan’s national policies.

OpenAI plans to strengthen ties with Japanese authorities, educational institutions, and industry stakeholders. The collaboration seeks to integrate AI into society responsibly, prioritising safety, transparency, and global cooperation for sustainable benefits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, some praising its technical achievement while others worry it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How OpenAI designs Sora’s recommendation feed for creativity and safety

OpenAI outlines the core principles behind Sora’s content feed in its Sora Feed Philosophy document. The company states that the feed is designed to spark creativity, foster connections, and maintain a safe user environment.

To achieve these goals, OpenAI says it prioritises creativity over passive consumption. The ranking is steered not simply for engagement, but to encourage active participation. Users can also influence what they see via steerable ranking controls.

Another guiding principle is putting users in control. For instance, parental settings let caretakers turn off feed personalisation or continuous scroll for teen accounts.

OpenAI also emphasises connection. The feed is biased toward content from people you know or connect with, rather than purely global content, so the experience feels more communal.

In terms of safety and expression, OpenAI embeds guardrails at the content creation level. Because every post is generated within Sora, the system can block disallowed content before it appears.

The feed layers additional filtering, removing or deprioritising harmful or unsafe material (e.g. violent, sexual, hate, self-harm content). At the same time, the design aims not to over-censor, allowing space for genuine expression and experimentation.

On how the feed works, OpenAI says it considers signals like user activity (likes, comments, remixes), location data, ChatGPT history (unless turned off), engagement metrics, and author-level data (e.g. follower counts). Safety signals also weigh in to suppress or filter content flagged as inappropriate.

OpenAI describes the feed as a ‘living, breathing’ system. It expects to update and refine algorithms based on user behaviour and feedback while staying aligned with its founding principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches Instant Checkout to enable in-chat purchases

OpenAI has launched Instant Checkout, a feature that lets users make direct purchases within ChatGPT. The initial rollout applies to US Etsy sellers, with Shopify merchants to follow.

The system is powered by the Agentic Commerce Protocol, which OpenAI co-developed with Stripe, and currently supports single-item purchases. Future updates will add multi-item carts and expand to more regions.

According to OpenAI, product results in ChatGPT are organic and ranked for relevance. The e-commerce framework will be open-sourced to accelerate integrations for merchants and developers. Users can pay using cards already on file, and transactions involve explicit confirmation steps, scoped payment tokens, and limited data sharing to build trust.

Michelle Fradin, OpenAI’s product lead for ChatGPT commerce, said the goal is to move beyond information retrieval and support real-world actions. Stripe’s president for technology and business, Will Gaybrick, described the partnership as laying economic infrastructure for AI.

Merchants will pay a small fee on completed purchases, while users are not charged extra and product prices remain unchanged.

Reuters reported that Etsy and Shopify shares rose significantly following the announcement, with Etsy closing up nearly 16 percent and Shopify more than 6 percent. OpenAI plans to extend the system to more merchants and payment types over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!