OpenAI backs policy push for Europe’s AI uptake

OpenAI and Allied for Startups have released Hacktivate AI, a set of 20 ideas to speed up AI adoption across Europe ahead of the Commission’s Apply AI Strategy.

The report emerged from a Brussels policy hackathon with 65 participants from EU bodies, governments, enterprises and startups, proposing measures such as an Individual AI Learning Account, an AI Champions Network for SMEs, a European GovAI Hub and relentless harmonisation.

OpenAI highlights strong European demand and uneven workplace uptake, citing sector gaps and the need for targeted support, while pointing to initiatives like OpenAI Academy to widen skills.

Broader policy momentum is building, with the EU preparing an Apply AI Strategy to boost homegrown tools and cut dependencies, reinforcing the push for practical deployment across public services and industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A new AI strategy by the EU to cut reliance on the US and China

The EU is preparing to unveil a new strategy to reduce reliance on American and Chinese technology by accelerating the growth of homegrown AI.

The ‘Apply AI strategy’, set to be presented by the EU tech chief Henna Virkkunen, positions AI as a strategic asset essential for the bloc’s competitiveness, security and resilience.

According to draft documents, the plan will prioritise adopting European-made AI tools across healthcare, defence and manufacturing.

Public administrations are expected to play a central role by integrating open-source EU AI systems, providing a market for local start-ups and reducing dependence on foreign platforms. The Commission has pledged €1bn from existing financing programmes to support the initiative.

Brussels has warned that foreign control of the ‘AI stack’ (the hardware and software that underpin advanced systems) could be ‘weaponised’ by state and non-state actors.

These concerns have intensified following Europe’s continued dependence on American tech infrastructure. Meanwhile, China’s rapid progress in AI has further raised fears that the Union risks losing influence in shaping the technology’s future.

The EU already hosts several high-potential AI firms, including France's Mistral and Germany's Helsing. However, these companies rely heavily on overseas suppliers for software, hardware, and critical minerals.

The Commission wants to accelerate the deployment of European AI-enabled defence tools, such as command-and-control systems, which remain dependent on NATO and US providers. The strategy also outlines investment in sovereign frontier models for areas like space defence.

President Ursula von der Leyen said the bloc aims to ‘speed up AI adoption across the board’ to ensure it does not miss the transformative wave.

Brussels hopes to carve out a more substantial global role in the next phase of technological competition by reframing AI as an industrial sovereignty and security instrument.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour market remains stable despite rapid AI adoption

Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indicate that these fears have not materialised. Researchers examined shifts in the US occupational mix since late 2022, comparing them to earlier technological transitions.

Their analysis found that shifts in job composition have been modest, resembling the gradual changes seen during the rise of computers and the internet. The overall pace of occupational change has not accelerated substantially, suggesting that widespread job losses due to AI have not yet occurred.

Industry-level data show limited impact. High-exposure sectors, such as Information and Professional Services, have seen shifts, but many predate the introduction of ChatGPT. Overall, labour market volatility remains below the levels of historical periods of major change.

To better gauge AI’s impact, the study compared OpenAI’s exposure data with Anthropic’s usage data from Claude. The two show limited correlation, indicating that high exposure does not always imply widespread use, especially outside of software and quantitative roles.

Researchers caution that significant labour effects may take longer to emerge, as seen with past technologies. They argue that transparent, comprehensive usage data from major AI providers will be essential to monitor real impacts over time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bezos predicts gigantic gains from the current AI investment bubble

Jeff Bezos has acknowledged that an ‘AI bubble’ is underway but believes its long-term impact will be overwhelmingly positive.

Speaking at Italian Tech Week in Turin, the Amazon founder described it as an ‘industrial bubble’ rather than a purely financial one.

He argued that the intense competition and heavy investment will ultimately leave society better off, even if many projects fail. ‘When the dust settles and you see who the winners are, societies benefit from those investors,’ he said, adding that the benefits of AI will be ‘gigantic’.

Bezos’s comments come amid surging spending by Big Tech on AI chips and data centres. Citigroup forecasts that investment will exceed $2.8 trillion by 2029.

OpenAI, Meta, Microsoft, Google and others are pouring billions into infrastructure, with projects like OpenAI’s $500 billion Stargate initiative and Meta’s $29 billion capital raise for AI data centres.

Industry leaders, including Sam Altman of OpenAI, have warned of an AI bubble. Yet many argue that, unlike the dot-com era, today’s market is anchored by Nvidia and OpenAI, whose products form the backbone of AI development.

The challenge for tech giants will be finding ways to recover vast investments while sustaining rapid growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Future of work shaped by AI, flexible ecosystems and soft retirement

As technology reshapes workplaces, how we work is set for significant change in the decade’s second half. Seven key trends are expected to drive this transformation, shaped by technological shifts, evolving employee expectations, and new organisational realities.

AI will continue to play a growing role in 2026. Beyond simply automating tasks, companies will increasingly design AI-native workflows built from the ground up to automate, predict, and support decision-making.

Hybrid and remote work will solidify flexible ecosystems of tools, networks, and spaces to support employees wherever they are. The trend emphasises seamless experiences, global talent access, and stronger links between remote workers and company culture.

The job landscape will continue to change as AI affects hiring in clerical, administrative, and managerial roles, while sectors such as healthcare, education, and construction grow. Human skills, such as empathy, communication, and leadership, will become increasingly valuable.

Data-driven people management will replace intuition-based approaches, with AI used to find patterns and support evidence-based decisions. Employee experience will also become a key differentiator, reflecting customer-focused strategies to attract and retain talent.

An emerging ‘soft retirement’ trend will see healthier older workers reduce hours rather than stop altogether, offering businesses valuable expertise. Those who adapt early to these trends will be better positioned to thrive in the future of work.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nintendo denies lobbying the Japanese government over generative AI

The video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with authorities on the matter.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in the creative industries of Japan, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes misinformation and protects its brand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older graduates, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Exposure, automation, and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest stability in the share of workers most exposed to AI, including among the unemployed.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What a Hollywood AI actor can teach CEOs about the future of work

Tilly Norwood, a fully AI-created actor, has become the centre of a heated debate in Hollywood after her creator revealed that talent agents were interested in representing her.

The actors’ union responded swiftly, warning that Tilly was trained on the work of countless performers without their consent or compensation. It also reminded producers that hiring her would involve dealing with the union.

The episode highlights two key lessons for business leaders in any industry. First, never assume a technology’s current limitations will remain its inherent limitations. Some commentators, including Whoopi Goldberg, have argued that AI actors pose little threat because their physical movements still appear noticeably artificial.

Yet history shows that early limitations often disappear over time. Once-dismissed technologies such as chess software have since far surpassed human abilities, and machine translation has improved dramatically. Similarly, AI-generated performers may eventually become indistinguishable from human actors.

The second lesson concerns human behaviour. People are often irrational; their preferences can upend even the most carefully planned strategies. Producers avoided publicising actors’ names in Hollywood’s early years to maintain control.

Audiences, however, demanded to know everything about the stars they admired, forcing studios to adapt. This human attachment created the star system that shaped the industry. Whether audiences will embrace AI performers like Tilly remains uncertain, but cultural and emotional factors will play a decisive role.

Hollywood offers a high-profile glimpse of the challenges and opportunities of advanced AI. As other sectors face similar disruptions, business leaders may find that technology alone does not determine outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weekly #232 The rise of AI slop: When social media turns more artificial


26 September – 3 October 2025


HIGHLIGHT OF THE WEEK

The rise of AI slop: When social media turns more artificial

Last Thursday, Meta quietly introduced Vibes, a new short-form video feed in the Meta AI app, wholly powered by AI. Rather than spotlighting real creators or grassroots content, the feed is built around synthetic content. 

This Tuesday, OpenAI revealed Sora, a companion app centred on AI-created short videos, complete with ‘cameo’ features letting people insert their own faces (with permission) into generative scenes.

From the outside, both Vibes and Sora look like competitive copies of TikTok or Reels — only their entire content pipeline is synthetic. 

They are the first dedicated firehoses of what has been officially termed ‘AI slop.’ This phrase, added to the Cambridge Dictionary in July 2025 and defined as ‘content on the internet that is of very low quality, especially when it is created by AI,’ perfectly captures the core concern.

Across the tech world, reactions ranged from bemused to alarmed. Because while launching a new social media product is hardly radical, creating a platform whose entire video ecosystem is synthetic — devoid of human spark — is something else entirely. 

Why is it concerning? Because it blurs the line between real and fake, making it hard to trust what you see. It can copy creators’ work without permission and flood feeds with shallow, meaningless videos that grab attention but add little value. Algorithms exploit user preferences, while features like synthetic cameos can be misused for bullying or identity abuse. And then there’s also the fact that AI clips typically lack human stories and emotion, eroding authenticity.

What’s next? Ultimately, this shift to AI-generated content raises a philosophical question: What is the purpose of our shared digital spaces?

As we move forward, perhaps we need to approach this new landscape more thoughtfully — embracing innovation where it serves us, but always making space for the authentic, the original, and the human.

For now, Vibes and Sora have not yet been rolled out worldwide. Given the tepid response from early adopters, their success is far from guaranteed. Ultimately, their fate hinges on whether people actually use them.


IN OTHER NEWS THIS WEEK

UNGA80 turns spotlight on digital issues and AI governance

In our previous newsletter, published on Friday, we covered all developments at the UNGA80 up to that day. In this edition, we bring you everything that unfolded from Friday through Monday.

On Friday, AI governance, digital cooperation, and the critical issue of child safety in the digital space stood out in the statements. Member states underlined that the transformative potential of AI for development – from the green energy transition to improved public services – is inextricably linked to the urgent need for global governance. Several leaders welcomed the new AI mechanisms established by UNGA, while others called for new frameworks to manage risks, particularly those related to cybercrime, disinformation, and the mental health of youth. A recurring theme was the need to actively address the digital divide through investments in digital infrastructure, skills, and technology transfer, stressing that the benefits of this new era must be shared fairly with all. The discussions reinforced the message that tackling these complex, interconnected challenges requires mature multilateralism and reinforced international cooperation.

On Saturday, several statements highlighted the importance of harnessing AI and digital technologies for development, security, and inclusive growth. Delegates emphasised responsible AI governance, ethical frameworks, and international norms to manage risks, including in military applications. The need for equitable access to AI, digital literacy, and capacity building for developing countries was underlined as a way to bridge technological and social divides. Participants also addressed cybersecurity, disinformation, and the influence of global tech corporations, emphasising the importance of multilateral cooperation and human-centric approaches. A recurring message was that leveraging AI and digital innovation responsibly can drive sustainable development, economic autonomy, and long-term prosperity for all.

On Monday, the transformative potential and urgent risks associated with AI continued to be highlighted. While AI can boost development, health, education, and productivity – especially in least developed countries – it must be governed responsibly to prevent inequality, bias, and insecurity. Calls for a global AI framework were echoed in various statements, alongside broader appeals for inclusive digital cooperation, accelerated technology transfer, and investment in infrastructure, literacy, and talent development. Speakers warned that digital disruption is deepening geopolitical divides, with smaller and developing nations demanding a voice in shaping emerging governance regimes. Bridging the digital divide, advancing secure and rights-based technologies, and protecting against cybercrime were framed as essential.

The bigger picture: Comprehensive coverage of UNGA80 can be found on our dedicated web page.


Chips and sovereignty: From globalisation to guarded autonomy

The global race for semiconductor dominance is accelerating, with both the EU and Taiwan asserting tighter control over their technological assets in response to growing US pressure.

EU member states have called for a revised and more assertive EU Chips Act, arguing that Europe must treat semiconductors as a strategic industry on par with aerospace and defence. The signatories — representing all 27 EU economies — warn that while competitors like the US and Asia are rapidly scaling public investment, Europe risks falling behind unless it strengthens its domestic ecosystem across R&D, design, manufacturing, and workforce development.

The proposed ‘second-phase Chips Act’ is built around three strategic objectives:

  • Prosperity, through a competitive and innovation-led semiconductor economy
  • Indispensability, by securing key control points in the value chain
  • Resilience, to guarantee supply for critical sectors during geopolitical shocks.

The EU’s message is clear: Europe intends not just to participate in the semiconductor industry, but to shape it on its own terms, backed by coordinated investment, industrial alliances, and international partnerships that reinforce — rather than dilute — strategic autonomy.

That same theme of sovereignty defines Taiwan’s position.

Amid negotiations with Taiwan, US Commerce Secretary Howard Lutnick floated a proposal that only half of the chips America consumes should be produced in Taiwan, with the other half made in the USA, to reduce dependence on a single foreign supplier. But Taiwan’s Vice Premier Cheng Li-chiun dismissed the idea outright, stating that such terms were never part of formal talks and would not be accepted.

While Taiwan is willing to deepen commercial ties with the US, it refuses to relinquish control over the advanced semiconductor capabilities that underpin its geopolitical leverage.

The bottom line: The age of supplier nations is over; the age of semiconductor sovereignty has begun. The message is the same on both sides of the Atlantic: chips are too critical to entrust to anyone else.


From code to court: xAI vs OpenAI and Apple

In the high-stakes arena of AI, a bitter rivalry is now unfolding in courtrooms.

Elon Musk’s AI venture xAI has launched an aggressive new lawsuit against OpenAI, accusing it of orchestrating a coordinated ‘poaching’ campaign to steal proprietary technology. xAI claims that OpenAI recruiters targeted engineers who then illicitly transferred source code, data-centre playbooks, and training methodologies to further OpenAI’s competitive edge. 

According to xAI, key incidents included employees uploading confidential files to personal devices, and repeated AirDrop transfers — behaviour that Musk’s company says amounts to trade secret misappropriation. Their remedy: damages, injunctions, and orders compelling OpenAI to purge models built on the contested materials. 

OpenAI, however, fired back. In court filings earlier this week, it asked a judge to dismiss xAI’s claims, calling them part of Musk’s ‘ongoing harassment’ of the company. OpenAI contends that xAI employees are free to leave and be hired elsewhere, and that xAI’s allegations are unsubstantiated. 

But the conflict doesn’t stop there.

This August, Musk had accused Apple of colluding with OpenAI to block competition — alleging that Apple disadvantaged the Grok chatbot (developed by xAI) in its App Store rankings precisely to favour OpenAI’s ChatGPT. 

Apple and OpenAI have responded together this week in court, asking a federal judge to dismiss this separate antitrust-style claim. Their defence is blunt: the agreement between Apple and OpenAI is explicitly non-exclusive, and Apple retains the freedom to work with other AI providers. Further, they argue, xAI has failed to plausibly show how embedding ChatGPT into Apple devices has harmed competition.

What’s behind all this? The ferocious race for talent, technological leadership, and market dominance in AI. We’ll see how it pans out.

LOOKING AHEAD
ITU Space Sustainability Forum

On 7–8 October 2025, the ITU will host the Space Sustainability Forum (SSF-25), gathering experts, regulators, industry leaders, and policymakers to address the long-term health, security and governance of outer space.

Swiss IGF

On 9 October 2025, policymakers, researchers, industry representatives, and civil society actors will gather at Welle 7 in Bern, in situ and online, for the Swiss Internet Governance Forum 2025.

SEEDIG 10

With the theme ‘A Decade of Dialogue and Cooperation: What’s Next?’, the anniversary edition of the South Eastern European Dialogue on Internet Governance (SEEDIG) is designed as both a stocktaking exercise and a forward-looking consultation.



READING CORNER

Quantum internet is emerging not only as a scientific milestone but as a transformative force that could redefine how governments, healthcare systems, and citizens interact in the digital age.

UNGA

The annual General Debate at UNGA is the stage where countries outline their strategic priorities, concerns, and proposals. Overall, the sentiment of the General Debate can be distilled into three key words: echo, gloom, and hope.