Weekly #232 The rise of AI slop: When social media turns more artificial


26 September – 3 October 2025


HIGHLIGHT OF THE WEEK

The rise of AI slop: When social media turns more artificial

Last Thursday, Meta quietly introduced Vibes, a new short-form video feed in the Meta AI app, wholly powered by AI. Rather than spotlighting real creators or grassroots content, the feed is built around synthetic content. 

This Tuesday, OpenAI revealed Sora, a companion app centred on AI-created short videos, complete with ‘cameo’ features letting people insert their own faces (with permission) into generative scenes.

From the outside, both Vibes and Sora look like competitive copies of TikTok or Reels — only their entire content pipeline is synthetic. 

They are the first dedicated firehoses of what has been officially termed ‘AI slop.’ This phrase, added to the Cambridge Dictionary in July 2025 and defined as ‘content on the internet that is of very low quality, especially when it is created by AI,’ perfectly captures the core concern.

Across the tech world, reactions ranged from bemused to alarmed. Because while launching a new social media product is hardly radical, creating a platform whose entire video ecosystem is synthetic — devoid of human spark — is something else entirely. 

Why is it concerning? Because it blurs the line between real and fake, making it hard to trust what you see. It can copy creators’ work without permission and flood feeds with shallow, meaningless videos that grab attention but add little value. Algorithms exploit user preferences, while features like synthetic cameos can be misused for bullying or identity abuse. And then there’s also the fact that AI clips typically lack human stories and emotion, eroding authenticity.

What’s next? Ultimately, this shift to AI-generated content raises a philosophical question: What is the purpose of our shared digital spaces?

As we move forward, perhaps we need to approach this new landscape more thoughtfully — embracing innovation where it serves us, but always making space for the authentic, the original, and the human.

For now, Vibes and Sora have not yet been rolled out worldwide. Given the tepid response from early adopters, their success is far from guaranteed; their fate hinges on whether people actually use them.


IN OTHER NEWS THIS WEEK

UNGA80 turns spotlight on digital issues and AI governance

In our previous newsletter, published on Friday, we covered all developments at the UNGA80 up to that day. In this edition, we bring you everything that unfolded from Friday through Monday.

On Friday, AI governance, digital cooperation, and the critical issue of child safety in the digital space stood out in the statements. Member states underlined that the transformative potential of AI for development – from the green energy transition to improved public services – is inextricably linked to the urgent need for global governance. Several leaders welcomed the new AI mechanisms established by UNGA, while others called for new frameworks to manage risks, particularly those related to cybercrime, disinformation, and the mental health of youth. A recurring theme was the need to actively address the digital divide through investments in digital infrastructure, skills, and technology transfer, stressing that the benefits of this new era must be shared fairly with all. The discussions reinforced the message that tackling these complex, interconnected challenges requires mature multilateralism and reinforced international cooperation.

On Saturday, several statements highlighted the importance of harnessing AI and digital technologies for development, security, and inclusive growth. Delegates emphasised responsible AI governance, ethical frameworks, and international norms to manage risks, including in military applications. The need for equitable access to AI, digital literacy, and capacity building for developing countries was highlighted to bridge technological and social divides. Participants also addressed cybersecurity, disinformation, and the influence of global tech corporations, emphasising the importance of multilateral cooperation and human-centric approaches. A recurring message was that leveraging AI and digital innovation responsibly can drive sustainable development, economic autonomy, and long-term prosperity for all.

On Monday, the transformative potential and urgent risks associated with AI continued to be highlighted. While AI can boost development, health, education, and productivity – especially in least developed countries – it must be governed responsibly to prevent inequality, bias, and insecurity. Calls for a global AI framework were echoed in various statements, alongside broader appeals for inclusive digital cooperation, accelerated technology transfer, and investment in infrastructure, literacy, and talent development. Speakers warned that digital disruption is deepening geopolitical divides, with smaller and developing nations demanding a voice in shaping emerging governance regimes. Bridging the digital divide, advancing secure and rights-based technologies, and protecting against cybercrime were framed as essential.

The bigger picture: Comprehensive coverage of UNGA80 can be found on our dedicated web page.


Chips and sovereignty: From globalisation to guarded autonomy

The global race for semiconductor dominance is accelerating, with both the EU and Taiwan asserting tighter control over their technological assets in response to growing US pressure.

EU member states have called for a revised and more assertive EU Chips Act, arguing that Europe must treat semiconductors as a strategic industry on par with aerospace and defence. The signatories — representing all 27 EU economies — warn that while competitors like the US and Asia are rapidly scaling public investment, Europe risks falling behind unless it strengthens its domestic ecosystem across R&D, design, manufacturing, and workforce development.

The proposed ‘second-phase Chips Act’ is built around three strategic objectives:

  • Prosperity, through a competitive and innovation-led semiconductor economy
  • Indispensability, by securing key control points in the value chain
  • Resilience, to guarantee supply for critical sectors during geopolitical shocks.

The EU’s message is clear: Europe intends not just to participate in the semiconductor industry, but to shape it on its own terms, backed by coordinated investment, industrial alliances, and international partnerships that reinforce — rather than dilute — strategic autonomy.

That same theme of sovereignty defines Taiwan’s position.

Amid negotiations with Taiwan, US Commerce Secretary Howard Lutnick floated a proposal that only half of America’s chips should be produced in Taiwan, with the other half relocated to the US, to reduce dependence on a single foreign supplier. But Taiwan’s Vice Premier Cheng Li-chiun dismissed the idea outright, stating that such terms were never part of formal talks and would not be accepted.

While Taiwan is willing to deepen commercial ties with the US, it refuses to relinquish control over the advanced semiconductor capabilities that underpin its geopolitical leverage.

The bottom line: The age of supplier nations is over; the age of semiconductor sovereignty has begun. The message is the same in Brussels and Taipei: chips are too critical to entrust to someone else.


From code to court: xAI vs OpenAI and Apple

In the high-stakes arena of AI, a bitter rivalry is now unfolding in courtrooms.

Elon Musk’s AI venture xAI has launched an aggressive new lawsuit against OpenAI, accusing it of orchestrating a coordinated ‘poaching’ campaign to steal proprietary technology. xAI claims that OpenAI recruiters targeted engineers who then illicitly transferred source code, data-centre playbooks, and training methodologies to further OpenAI’s competitive edge. 

According to xAI, key incidents included employees uploading confidential files to personal devices, and repeated AirDrop transfers — behaviour that Musk’s company says amounts to trade secret misappropriation. Their remedy: damages, injunctions, and orders compelling OpenAI to purge models built on the contested materials. 

OpenAI, however, fired back. In court filings earlier this week, it asked a judge to dismiss xAI’s claims, calling them part of Musk’s ‘ongoing harassment’ of the company. OpenAI contends that xAI employees are free to leave and be hired elsewhere, and that xAI’s allegations are unsubstantiated. 

But the conflict doesn’t stop there.

This August, Musk had accused Apple of colluding with OpenAI to block competition — alleging that Apple disadvantaged the Grok chatbot (developed by xAI) in its App Store rankings precisely to favour OpenAI’s ChatGPT. 

Apple and OpenAI have responded together this week in court, asking a federal judge to dismiss this separate antitrust-style claim. Their defence is blunt: the agreement between Apple and OpenAI is explicitly non-exclusive, and Apple retains the freedom to work with other AI providers. Further, they argue, xAI has failed to plausibly show how embedding ChatGPT into Apple devices has harmed competition.

What’s behind all this? The ferocious race for talent, technological leadership, and market dominance in AI. We’ll see how it pans out.

LOOKING AHEAD
ITU Space Sustainability Forum

On 7–8 October 2025, the ITU will host the Space Sustainability Forum (SSF-25), gathering experts, regulators, industry leaders, and policymakers to address the long-term health, security, and governance of outer space.

Swiss IGF

On 9 October 2025, policymakers, researchers, industry representatives, and civil society actors will gather at Welle 7 in Bern, in situ and online, for the Swiss Internet Governance Forum 2025.

SEEDIG 10

With the theme ‘A Decade of Dialogue and Cooperation: What’s Next?’, the anniversary edition of the South Eastern European Dialogue on Internet Governance (SEEDIG) is designed as both a stocktaking exercise and a forward-looking consultation.



READING CORNER
Quantum AI, data science, and cybersecurity

Quantum internet is emerging not only as a scientific milestone but as a transformative force that could redefine how governments, healthcare systems, and citizens interact in the digital age.

UNGA

The annual General Debate at UNGA is the stage where countries outline their strategic priorities, concerns, and proposals. Overall, the sentiment of the General Debate can be distilled into three key words: echo, gloom, and hope.

FRA presents rights framework at EU Innovation Hub AI Cluster workshop in Tallinn

The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.

The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.

The workshop also provided an opportunity for FRA to give an update on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.

AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.

In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.

It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.


Atlantic Quantum joins Google Quantum AI to advance scalable quantum hardware

Google Quantum AI has taken a major step in its pursuit of error-corrected quantum computing by integrating Atlantic Quantum, an MIT spin-out focused on superconducting hardware.

The move, while not formally labelled an acquisition, effectively brings the startup’s technology and talent into Google’s programme, strengthening its roadmap toward scalable quantum systems.

Atlantic Quantum, founded in 2021, has worked on integrating qubits with superconducting control electronics in the same cold stage.

The result is a modular chip stack that promises to simplify design, reduce noise, and make scaling more efficient, all of which is essential for building machines capable of solving problems beyond the reach of classical computers.

Google’s Hartmut Neven highlighted the approach as a way to accelerate progress toward large, fault-tolerant devices.

The startup’s journey, from MIT research labs to Google integration, has been rapid and marked by what CEO Bharath Kannan called ‘managed chaos’.

The founding team and investors were credited with pushing superconducting design forward despite the immense challenges of commercialising such cutting-edge technology.

Beyond hardware, Google gains a strong pool of engineers and researchers, enhancing its competitive edge in a field where rivals include IBM and several well-funded scale-ups.

The move reflects a broader industry trend in which research-heavy startups are increasingly folded into major technology firms to advance long-term quantum ambitions. With governments and corporations pouring resources into the race, consolidation is becoming common.

For Atlantic Quantum, joining Google ensures both technological momentum and access to resources needed for the next phase. As co-founder Simon Gustavsson put it, the work ‘does not stop here’ but continues within Google Quantum AI’s effort to deliver real-world quantum applications.


Comet AI browser is now free as Perplexity launches Comet Plus service

Perplexity has made its Comet AI browser available to everyone for free, widening access beyond its paid user base. The browser, launched three months ago for Max subscribers, introduces new tools designed to turn web browsing into an AI-driven task assistant.

The company describes Comet as a ‘browser for agentic search’, referring to autonomous software agents capable of handling multi-step tasks for users.

Free users can access the sidecar assistant alongside tools for shopping comparisons, travel planning, budgeting, sports updates, project management, and personalised recommendations.

Max subscribers gain early access to more advanced features, including a background assistant likened to a personal mission control dashboard. The tool can draft emails, book tickets, find flights, and integrate with apps on a user’s computer, running tasks in the background with minimal intervention.

Pro users also retain access to advanced AI models and media generation tools.

Perplexity is further introducing Comet Plus, a $5-per-month standalone subscription service that acts as an AI-powered alternative to Apple News. Current Pro and Max subscribers will receive the service automatically.

The move signals Perplexity’s ambition to expand its ecosystem while balancing free accessibility with premium AI features.


Global survey reveals slow AI adoption across the construction industry

RICS has published its 2025 report on AI in Construction, offering a global snapshot of how the built-environment sector views AI integration. The findings draw on over 2,200 survey responses from professionals across geographies and disciplines.

The report finds that AI adoption remains limited: 45 percent of organisations report no AI use, and just under 12 percent say AI is used regularly in specific workflows. Fewer than 1 percent have AI embedded across multiple processes.

Preparedness is also low. While some firms are exploring AI, most have yet to move beyond early discussions. Only about 20 percent are engaged in strategic planning or proof-of-concept pilots, and very few have budgeted implementation roadmaps.

Despite this, confidence in AI is strong. Professionals see the most significant potential in progress monitoring, scheduling, resource optimisation, contract review and risk management. Over the next five years, many expect the most critical impact in design optioneering, where AI could help evaluate multiple alternatives in early project phases.

The survey also flags key barriers: lack of skilled personnel (46 percent), integration with existing systems (37 percent), data quality and availability (30 percent), and high implementation costs (29 percent).

To overcome these challenges, RICS recommends a coordinated roadmap with leadership from industry, government support, ethical guardrails, workforce upskilling, shared data standards and transparent pilot projects.


Few Americans rely on AI chatbots for news

A recent Pew Research survey shows that relatively few Americans use AI chatbots like ChatGPT to get news. About 2 percent say they often get news this way, and 7 percent say they do so sometimes.

The majority of US adults thus do not turn to AI chatbots as a regular news source, signalling a limited role for chatbots in news dissemination, at least for now.

However, this finding is part of a broader pattern: despite the growing use of chatbots, news consumption via these tools remains niche. Pew’s data also shows that 34 percent of US adults report using ChatGPT, a share that has roughly doubled since 2023.

While AI chatbots are not yet mainstream for news, their limited uptake raises questions about trust, accuracy and the user motivation behind news consumption.


Meta to use AI interactions for content and ad recommendations

Meta has announced that beginning 16 December 2025, it will start personalising content and ad recommendations on Facebook, Instagram and other apps using users’ interactions with its generative AI features.

The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.

Meta will begin notifying users on 7 October via in-app messages and emails. Users retain access to settings such as Ads Preferences and feed controls to adjust what they see, and Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.

AI interactions on a given account will be used for cross-account personalisation only if users have linked those accounts in Meta’s Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, AI interactions there won’t influence the experience in Meta’s other apps.


Breakthrough platform gives warning of painful sickle cell attacks

A London-based health tech firm has developed an AI platform that can predict painful sickle cell crises before they occur. Sanius Health says its system forecasts vaso-occlusive crises with up to 92% sensitivity, offering patients and clinicians valuable lead time.

The technology combines biometric data from wearables with patient-reported outcomes and clinical records to generate daily risk scores. Patients and care teams receive alerts when thresholds are met, enabling early action to prevent hospitalisation.

In real-world studies involving nearly 400 patients, the AI system identified measurable changes in activity and sleep days before emergencies. Patients using the platform reported fewer admissions, shorter stays, and improved quality of life.

The World Health Organization says sickle cell disease affects almost eight million people worldwide. Sanius Health is scaling its registry-driven model globally to ensure predictive care reaches patients from London to Lagos and beyond.


Dutch AI actress ignites Hollywood backlash

An AI ‘actress’ created in the Netherlands has sparked controversy across the global film industry. Tilly Norwood, designed by Dutch actress Eline van der Velde, is capable of talking, waving, and crying, and is reportedly being pitched to talent agencies.

Hollywood unions and stars have voiced strong objections. US-based SAG-AFTRA said Norwood was trained on the work of professional actors yet has no life experience or human emotion, warning that its use could undermine existing contracts.

Actresses Natasha Lyonne and Emily Blunt also criticised the Dutch project, with Lyonne calling for a boycott of agencies working with Norwood, and Blunt describing it as ‘really scary’.

Van der Velde defended her AI creation, describing Norwood as a piece of art rather than a replacement for performers. She argued the project should be judged as a new genre rather than compared directly to human actors.
