YouTube enlists users to rate videos as AI slop in content quality push

YouTube has introduced a new pop-up survey asking viewers to rate whether videos feel like ‘AI slop’, with users able to score content on a scale from ‘not at all’ to ‘extremely’ sloppy.

The feature began appearing on 17 March 2026 and marks a shift in approach, with YouTube now enlisting its audience directly to help identify low-quality, AI-generated content.

The move adds a third layer of detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the flood of AI-generated uploads.

Research found that roughly 21% of the first 500 videos recommended to a brand-new YouTube account were identified as AI slop, with a further 33% falling into a broader category of repetitive, low-substance content.

Combating this was named a 2026 priority by YouTube CEO Neal Mohan in his annual letter to the platform.

The survey has not been without controversy.

Critics on social media have pointed out that viewer-labelled ‘slop’ data could be fed into Google’s Veo video-generation models, potentially training future AI to avoid the very patterns humans flag as low quality. This raises questions about whether YouTube is crowdsourcing content moderation or, inadvertently, AI improvement.

YouTube has not clarified how the feedback data will be used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU updates voluntary code for labelling AI-generated content

The European Commission has released a second draft of its voluntary Code of Practice on marking and labelling AI-generated content, designed to support compliance with transparency rules under the Artificial Intelligence Act.

Published on 5 March, the updated draft reflects feedback from hundreds of stakeholders, including industry groups, academic researchers, policymakers, and civil society organisations.

Revisions follow consultations held in early 2026 as part of the broader rollout of the EU’s AI regulatory framework.

The proposed code outlines technical approaches for identifying AI-generated material. A two-layered system using secure metadata and digital watermarking is recommended, with optional fingerprinting, logging, and verification to improve detection.
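To make the two layers concrete, here is a deliberately simplified Python sketch of the idea: machine-readable provenance metadata carried alongside a file, plus an invisible watermark embedded in the content itself. The function names and the least-significant-bit scheme are illustrative assumptions, not the techniques the draft code specifies; production systems use cryptographically signed metadata (such as C2PA-style manifests) and far more robust watermarks.

```python
# Illustrative toy sketch only: real provenance systems rely on signed
# metadata and perceptually robust watermarking, not this simple scheme.

def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide a bit string in the least significant bits of pixel values."""
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (pixels[i] & ~1) | b  # overwrite the lowest bit only
    return out

def extract_watermark(pixels: list[int], n_bits: int) -> list[int]:
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

# Layer 1: provenance metadata travelling with the file (here, a dict).
metadata = {"generator": "example-model", "ai_generated": True}

# Layer 2: a watermark embedded in the content itself.
pixels = [120, 45, 200, 13, 77, 250, 9, 88]
payload = [1, 0, 1, 1]
marked = embed_watermark(pixels, payload)
assert extract_watermark(marked, len(payload)) == payload
```

The point of combining layers is redundancy: metadata is easy to read but easy to strip, while a watermark survives in the pixel data itself, which is why the draft recommends using both.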

Guidelines also address how platforms and publishers should label deepfakes and AI-generated text related to matters of public interest. Public feedback is open until 30 March, with the final code expected in early June before transparency rules take effect on 2 August 2026.

US freedom.gov and the EU’s DSA in a transatlantic fight over online speech

The transatlantic debate over ‘digital sovereignty’ is also, in no small measure, about whose rules govern online speech. In the EU, digital sovereignty has essentially meant building enforceable guardrails for platforms, especially around illegal content, systemic risks, and transparency, through instruments such as the Digital Services Act (DSA) and its transparency mechanisms for content moderation decisions. In Washington, the emphasis has been shifting toward ‘free speech diplomacy’, framing some EU online-safety measures as de facto censorship that spills across borders when US-based platforms comply with EU requirements.

What is ‘freedom.gov’?

The newest flashpoint is a reported US State Department plan to develop an online portal, widely described as ‘freedom.gov’, intended to help users in the EU and elsewhere access content blocked under local rules; it aligns with Trump administration policy and a State Department programme called Internet Freedom. The plan reportedly includes VPN-like functionality so traffic would appear to originate in the US, effectively sidestepping geographic enforcement of content restrictions. Within a US legal framing, the idea could be presented as a digital-rights tool, but experts warn it would export a US free-speech standard into jurisdictions that regulate hate speech and extremist material more tightly.

The ‘freedom.gov’ portal story sits within a broader escalation that has already moved from rhetoric to sanctions. In late 2025, the US imposed visa bans on several EU figures it accused of pressuring platforms to suppress ‘American viewpoints’, a move EU governments and officials condemned as unjustified and politically coercive. The episode signalled that Washington is treating some foreign content-governance actions not as domestic regulation, but as a challenge to US speech norms and US technology firms.

The EU legal perspective

From the EU perspective, this framing misses the point of the DSA. The Commission argues that the DSA is about platform accountability, requiring large platforms to assess and mitigate systemic risks, explain moderation decisions, and provide users with avenues to appeal. The EU has also built new transparency infrastructure, such as the DSA Transparency Database, to make moderation decisions more visible and auditable. Civil-society groups broadly supportive of the DSA stress that it targets illegal content and opaque algorithmic amplification; critics, especially in US policy circles, argue that compliance burdens fall disproportionately on major US platforms and can chill lawful speech through risk-averse moderation.

That’s where the two sides’ risk models diverge most sharply. The EU rules are shaped by the view that disinformation, hate speech, and extremist propaganda can create systemic harms that platforms must proactively reduce. On the other side, US critics counter that ‘harm’ categories can expand into viewpoint policing, and that tools like a government-backed portal or VPN could be portrayed as restoring access to lawful expression. Yet the same reporting that casts the portal as a speech workaround also notes it may facilitate access to content the EU considers dangerous, raising questions about whether the initiative is rights-protective ‘diplomacy’, a geopolitical pressure tactic, or something closer to state-enabled circumvention.

Why does it matter?

The dispute has gone from theoretical to practical, reshaping digital alliances, compliance strategies, and even travel rights for policy actors, not to mention digital sovereignty in the governance of online discourse and data. The EU’s approach is to make platforms responsible for systemic online risks through enforceable transparency and risk-reduction duties, while the US approach is increasingly to contest those duties as censorship with extraterritorial effects, using instruments ranging from public messaging to visa restrictions, and, potentially, state-backed bypass tools.

What can we expect, then, if not a more fragmented internet, with platforms pulled between competing legal expectations, users encountering different speech environments by region, and governments treating content policy as an extension of foreign policy, complete with retaliation, countermeasures, and escalating mistrust?

Gabon imposes indefinite social media shutdown over national security concerns

Gabon’s media regulator, the High Authority for Communication (HAC), has announced a nationwide, open-ended suspension of social media, citing online content that it says is fuelling tensions and undermining social cohesion. In a statement, the HAC framed the move as a response to material it described as defamatory or hateful and, in some cases, a threat to national security, telling telecom operators and internet service providers to block access to major platforms.

The regulator pointed to what it called a rise in coordinated cyberbullying and the unauthorised sharing of personal data, saying existing moderation measures were not working and that the shutdown was necessary to stop violations of Gabon’s 2016 Communications Code.

The announcement arrives amid mounting labour pressure. Teachers began a high-profile strike in December 2025 over pay, status and working conditions, and the dispute has become one of the most visible signs of broader public-sector discontent. At the same time, the economic stakes are significant: Gabon had an estimated 850,000 active social media users in late 2025 (around a third of the population), and platforms are widely used for marketing and small-business sales.

Why does it matter?

Governments increasingly treat social media suspensions as a rapid-response tool for ‘public order’, but they also reshape information access, civic debate and commerce, especially in countries where mobile apps are a primary channel for news and income. The current announcement comes at a politically sensitive moment, since Gabon has a precedent here: during the 2023 election period, authorities shut down internet access, citing the need to counter calls for violence and misinformation. Gabon is still in transition after the August 2023 coup, and President Brice Oligui Nguema, who led the takeover, won the subsequent presidential election by a landslide in 2025, consolidating power while facing rising expectations for reform and stability.

Musk’s X under EU Commission scrutiny over Grok sexualised images

The European Commission has opened a new investigation into Elon Musk’s X over Grok, the platform’s AI chatbot, after reports that the tool was used to generate and circulate non-consensual sexualised images, including content that may involve minors. EU officials say they will examine whether X properly assessed and reduced the risks linked to Grok’s features before rolling them out in the EU.

The case is being pursued under the EU’s Digital Services Act (DSA), which requires very large online platforms to identify and mitigate systemic risks, including the spread of illegal content and harms to fundamental rights. If breaches are confirmed, the Commission can impose fines of up to 6% of a provider’s global annual turnover and, in some cases, require interim measures.

X and xAI have said they introduced restrictions after the backlash, including limiting some image-editing functions and blocking certain image generation in jurisdictions where it is illegal. EU officials have welcomed steps to tighten safeguards but argue they may not address deeper, systemic risks, particularly if risk assessments and mitigations were not in place before deployment.

The Grok probe lands on top of a broader set of legal pressures already facing X. In the UK, Ofcom has opened a formal investigation under the Online Safety Act into whether X met its duties to protect users from illegal content linked to Grok’s sexualised imagery. Beyond Europe, Malaysia and Indonesia temporarily blocked Grok amid safety concerns, and access was later restored after authorities said additional safeguards had been put in place.

In parallel, EU regulators have also widened scrutiny of X’s recommender systems, an area already under DSA proceedings, because the platform has moved toward using a Grok-linked system to rank and recommend content. The Commission has argued that recommendation design can amplify harmful material at scale, making it central to whether a platform effectively manages systemic risks.

The investigation also comes amid earlier DSA enforcement. The Commission recently fined X €120 million for transparency-related breaches, underscoring that the EU action is not limited to content moderation alone but extends to how platforms disclose and enable scrutiny of their systems.

Musk’s Grok under fire over ‘nudify’ image edits

Grok, the AI chatbot built into Elon Musk’s social platform X, has been used to produce sexualised ‘edited’ images of real people, including material that appeared to involve children. In a statement cited in the report, Grok attributed some of the outputs to gaps in its safeguards that allowed images showing ‘minors in minimal clothing,’ and said changes were being made to prevent repeat incidents.

One case described a Rio de Janeiro musician, Julie Yukari, who posted a New Year’s Eve photo on X and then noticed other users tagging Grok with requests to alter her image into a bikini-style version. She said she assumed the bot would refuse, but AI-generated, near-nude edits of her image later spread on the platform.

The report suggested that the misuse was widespread and rapidly evolving. In a brief midday snapshot of public prompts, it counted more than 100 attempts in 10 minutes to get Grok to swap people’s clothing for bikinis or more revealing outfits. In dozens of cases, the tool complied wholly or partly, including instances involving people who appeared to be minors.

The episode has also drawn attention from officials outside the US. French ministers said they referred the content to prosecutors and also flagged it to the country’s media regulator, asking for an assessment under the EU’s Digital Services Act. India’s IT ministry, meanwhile, wrote to X’s local operation saying the platform had failed to stop the tool being used to generate and circulate obscene, sexually explicit material.

Specialists quoted in the report argued the backlash was predictable: ‘nudification’ tools have existed for years, but placing a powerful image editor inside a significant social network drastically lowers the effort needed to misuse it and helps harmful content spread. They said civil-society and child-safety groups had warned xAI about likely abuse, while Musk reacted online with joking posts about bikini-style AI edits, and xAI previously brushed off related coverage with the phrase ‘Legacy Media Lies.’

New copyright settings announced for Sora 2 video generation

OpenAI has announced it will give copyright holders more control over how their intellectual property is used in videos produced by Sora 2. The shift comes amid criticism over Sora’s ability to generate scenes featuring popular characters and media, sometimes without permission.

At launch, Sora allowed generation under a default policy that required rights holders to opt out if they did not want their content used. That approach drew immediate backlash from studios and creators complaining about unauthorised use of copyrighted characters.

OpenAI now says it will introduce ‘more granular control’ for content owners, letting them set parameters for how their work can appear, or choose complete exclusion. The company has also hinted at monetisation features, such as revenue sharing for approved usage of copyrighted content.

CEO Sam Altman acknowledged that feedback from studios, artists and other stakeholders influenced the change. He emphasised that the new content policy would treat fictional characters more cautiously and make character generation opt-in rather than default.

Still unresolved is how precisely the system will work, especially around the enforcement, blocking, or filtering of unauthorised uses. OpenAI has repeatedly framed the updates as evolutionary, acknowledging that design and policy missteps may occur.

Meta to use AI interactions for content and ad recommendations

Meta has announced that beginning 16 December 2025, it will start personalising content and ad recommendations on Facebook, Instagram and other apps using users’ interactions with its generative AI features.

The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.

Meta will begin notifying users on 7 October via in-app messages and emails. Users will retain access to settings such as Ads Preferences and feed controls to adjust what they see, and Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.

AI interactions on a particular account will be used for cross-account personalisation only if users have linked those accounts in Meta’s Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, AI interactions on WhatsApp won’t influence the experience in other apps.

China cracks down on Kuaishou and Weibo over alleged online content violations

China’s internet watchdog, the Cyberspace Administration of China (CAC), has warned online platforms Kuaishou Technology and Weibo for failing to curb celebrity gossip and harmful content on their platforms.

The CAC issued formal warnings, citing damage to the ‘online ecosystem’ and demanding corrective action. Both firms pledged compliance, with Kuaishou forming a task force and Weibo promising self-reflection.

The move follows similar disciplinary action against lifestyle app RedNote and is part of a broader two-month campaign targeting content that ‘viciously stimulates negative emotions.’

Separately, Kuaishou is under investigation by the State Administration for Market Regulation for alleged malpractice in live-streaming e-commerce.

DW Weekly #207 – China disagrees with Trump over $54B TikTok deal amid tariff rises


6 – 14 April 2025



Dear readers,

Last week, the TikTok saga continued to unfold: the Chinese government did not agree to sell the ByteDance subsidiary to a US-majority TikTok entity, so US President Donald Trump extended the deadline to find a non-Chinese buyer by another 75 days, pushing the cutoff to mid-June after a near-miss on 5 April.

Amid the tariff turmoil, President Donald Trump’s administration has granted exemptions from steep tariffs on smartphones, laptops, and other electronics, relieving tech giants like Apple and Dell.

The cryptocurrency landscape was shaken by a blockchain analytics firm, which has alleged that the team behind the Melania Meme (MELANIA) cryptocurrency moved $30 million worth of tokens, allegedly taken from community reserves without explanation.

In the ever-evolving world of AI, two leading AI systems, OpenAI’s GPT-4.5 and Meta’s Llama-3.1, have passed a key milestone by outperforming humans in a modern version of the Turing Test. 

On the cybersecurity stage, Oracle Health has reportedly suffered a data breach that compromised sensitive patient information stored by US hospitals.

The European Union has firmly ruled out dismantling its strict digital regulations in a bid to secure a trade deal with Donald Trump. Henna Virkkunen, the EU’s top official for digital policy, said the bloc remained fully committed to its digital rulebook instead of relaxing its standards to satisfy US demands.

Meta’s existence is threatened by a colossal antitrust trial that commenced in Washington, with the US Federal Trade Commission (FTC) arguing that the company’s acquisitions of Instagram in 2012 and WhatsApp in 2014 were designed to crush competition and entrench a monopoly rather than foster innovation.

Elon Musk’s legal saga with OpenAI has intensified, as OpenAI filed a countersuit accusing the billionaire entrepreneur of a sustained campaign of harassment intended to damage the company and regain control over its AI developments.

For the main updates and reflections, consult the Radar and Reading Corner below.

DW Team


RADAR

Highlights from the week of 6 – 14 April 2025

Wynn-Williams says Meta executives prioritised business growth in China over national security.

The Nasdaq jumped over 12%, its best day in decades, following a temporary halt on trade tariffs by the Trump administration.

Data stored today could be vulnerable to decryption in the near future.

Instagram users under 16 won’t be able to livestream or view blurred nudity in messages unless approved by a parent, Meta announced.

OpenAI is developing agents that can act autonomously on behalf of users, with safeguards.

Energy connection delays face AI-powered fix through Google’s new initiative.

The 71% discount on Google Workspace is part of a cost-cutting initiative under President Trump’s government reform, targeting federal spending efficiency.

A discussion paper on crypto regulation in Japan highlights issues like market access, insider trading, and classification of assets into funding and non-funding categories.

As AI demand shifts, Microsoft has slowed down major data centre projects, including the one in Ohio, and plans to invest $80 billion in AI infrastructure this year.


READING CORNER
With over 10,000 AI applications available, selecting the right AI tool can be daunting. Diplo advocates starting with a ‘good enough’ tool to avoid paralysis by analysis, tailoring it to specific needs through practical use.

International Geneva faces significant challenges, including financial constraints, waning multilateralism, and escalating geopolitical tensions. To remain relevant, it must embrace transformative changes, particularly through Artificial Intelligence (AI).

Founded by Bill Gates and Paul Allen in 1975, Microsoft grew from a small startup into the world’s largest software company. Through strategic acquisitions, the company expanded into diverse sectors,…

Do ideas have origins? From medieval communes to WWI, Aldo Matteucci shows how political thought, like a river, is shaped by experience, institutions, and historical context — not just theory.

UPCOMING EVENTS
www.diplomacy.edu

GITEX Africa 2025: Jovan Kurbalija will participate in GITEX Africa (14-16 April 2025, Marrakech, Morocco).

Geneva Internet Platform
www.diplomacy.edu

Tech attache briefing: WSIS+20 and AI governance negotiations – Updates and next steps. The event is part of a series of regular briefings the Geneva Internet Platform (GIP) is delivering for diplomats at permanent missions and delegations in Geneva following digital policy issues. It is an invitation-only event.
23 April 2025
The event will provide a timely discussion on methods, approaches, and solutions for the AI transformation of international organisations.
WIPO
dig.watch

WIPO’s 11th Conversation on IP and AI will take place on 23-24 April 2025, focusing on the role of copyright infrastructure in supporting both rights holders and AI-driven innovation. As…