YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.
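To illustrate the kind of perceptual-matching workflow described above, the sketch below shows a minimal likeness-detection pipeline. All names, data structures, and the similarity threshold are hypothetical assumptions for illustration; YouTube has not published how its system actually represents or matches faces.

```python
# Illustrative sketch of a likeness-detection workflow.
# All names and thresholds here are hypothetical, not YouTube's implementation.

from dataclasses import dataclass


@dataclass
class Enrollment:
    person_id: str
    face_embedding: tuple  # reference embedding from a verified selfie/video


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def scan_upload(frame_embeddings, enrollments, threshold=0.9):
    """Flag enrolled people whose likeness appears in an uploaded video.

    Each match would then feed a human review and, if warranted,
    a privacy removal request.
    """
    matches = []
    for e in enrollments:
        score = max(cosine_similarity(f, e.face_embedding)
                    for f in frame_embeddings)
        if score >= threshold:
            matches.append((e.person_id, round(score, 3)))
    return matches
```

In this toy version a match only triggers a flag; as the article notes, the actual removal still goes through YouTube's existing privacy complaint process rather than happening automatically.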

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children, sending a letter and accompanying petition to Alphabet CEO Sundar Pichai and YouTube CEO Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after popular children’s shows such as Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material, and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

Smart TV viewing upgraded with YouTube AI feature

YouTube has expanded its conversational AI tool to smart TVs, marking a significant step in making home viewing more interactive. Viewers can now engage with content directly from their television screens using voice-enabled queries.

Access to the feature is simple. While watching a video, users can select the ‘Ask’ option and activate their remote’s microphone button to interact with the AI. Users can ask about similar content or a creator’s catalogue in real time, with prompts available to guide new users.

Initial rollout of the tool took place last year across mobile and web platforms, where it quickly became a practical companion for deeper content engagement. Users already use it to analyse podcasts, explore destinations, and understand content without pausing videos.

Expansion to smart TVs strengthens YouTube’s push to transform passive viewing into an interactive experience. Living room entertainment is increasingly shaped by AI-driven features, with real-time assistance now integrated directly into the home’s largest screen.

YouTube enlists users to rate videos as AI slop in content quality push

YouTube has introduced a new pop-up survey asking viewers to rate whether videos feel like ‘AI slop’, with users able to score content on a scale from ‘not at all’ to ‘extremely’ sloppy.

The feature began appearing on 17 March 2026 and marks a shift in approach, with YouTube now enlisting its audience directly to help identify low-quality, AI-generated content.

The move adds a third layer of detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the flood of AI-generated uploads.

Research found that roughly 21% of the first 500 videos recommended to a brand-new YouTube account were identified as AI slop, with a further 33% falling into a broader category of repetitive, low-substance content.

Combating this was named a 2026 priority by YouTube CEO Neal Mohan in his annual letter to the platform.

The survey has not been without controversy.

Critics on social media have pointed out that viewer-labelled ‘slop’ data could be fed into Google’s Veo video generation models, potentially training future AI to avoid the very patterns humans flag as low quality. That prospect raises questions about whether YouTube is crowdsourcing content moderation or, inadvertently, AI improvement.

YouTube has not clarified how the feedback data will be used.

AI deepfake detection expands on YouTube for politicians and journalists

YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.

The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.

Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.

According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.

‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’

Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.

YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.

‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.

Conversational AI comes to YouTube TV

YouTube is testing its conversational AI feature on smart TVs, gaming consoles, and streaming devices. The tool, previously available on mobile and desktop, appears as an Ask button marked with a Gemini sparkle icon.

The feature allows viewers to ask questions about videos, request summaries, receive related content suggestions, and select from prompts displayed on screen. Users can press the microphone button on their remote to interact with the AI while watching.

Currently, the tool is available to a limited group of users, on select videos, and supports English, Hindi, Spanish, Portuguese, and Korean. YouTube has not revealed when it will expand access to more users or regions.

By bringing conversational AI to TVs, YouTube aims to make viewing more interactive. Fans can now get answers or clarifications directly on the big screen without needing a phone or computer.

AI playlist creator comes to YouTube for Premium subscribers

YouTube has introduced a new AI Playlist feature for YouTube Premium and YouTube Music Premium subscribers on Android and iOS, enabling users to generate customised music playlists by describing a mood, genre, activity or vibe in natural language.

From the Library tab, users can tap ‘New’, select ‘AI playlist’, and enter text or voice prompts such as ‘sad post-rock’ or ‘90s classic hits’ to instantly build a curated list of tracks.
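The prompt-to-playlist flow above can be sketched as a simple matching exercise. Everything in this snippet is a hypothetical toy (the catalogue, the tag-matching logic, the default playlist length); YouTube has not disclosed how its feature interprets prompts or ranks tracks.

```python
# Toy sketch of prompt-driven playlist generation.
# The catalogue and matching logic are hypothetical illustrations only.

CATALOGUE = [
    {"title": "Track A", "tags": {"sad", "post-rock"}},
    {"title": "Track B", "tags": {"90s", "pop"}},
    {"title": "Track C", "tags": {"post-rock", "instrumental"}},
]


def ai_playlist(prompt: str, limit: int = 25):
    """Rank catalogue tracks by how many prompt words match their tags."""
    words = set(prompt.lower().split())
    scored = [(len(words & t["tags"]), t["title"]) for t in CATALOGUE]
    ranked = [title for score, title in sorted(scored, reverse=True) if score > 0]
    return ranked[:limit]
```

A real system would use an AI model to interpret mood and vibe rather than literal word overlap, but the shape is the same: a free-text description goes in, a ranked, capped list of tracks comes out.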

The rollout builds on YouTube’s earlier AI experiments in music discovery and positions the company alongside other streaming services like Spotify, Amazon Music and Deezer, which have launched similar generative playlist tools.

The feature reflects a broader trend of streaming platforms embedding generative AI to personalise discovery and enhance user engagement for paying subscribers.

Details such as the degree of user control over generated playlists and support for iterative refinement remain limited, and YouTube has not clarified how often playlists can be refreshed or edited after creation.

Darren Aronofsky and Google DeepMind reimagine the American Revolution with AI

Director Darren Aronofsky’s creative studio, Primordial Soup, has released the first episodes of On This Day… 1776, a short-form animated series that uses generative AI technology from Google DeepMind to visualise pivotal events from the American Revolution ahead of the 250th anniversary of the Declaration of Independence.

Episodes are published weekly on TIME’s YouTube channel throughout 2026, with each one focusing on a specific date in 1776.

The project combines AI-generated visuals with traditional post-production elements, including colour grading and voice performances by SAG-AFTRA actors, to expand narrative possibilities while retaining human creative input.

Aronofsky and collaborators describe the series as an example of how thoughtful, artist-led AI use can enhance storytelling rather than replace artistic craft.

The initiative is part of a broader trend in entertainment where AI tools are being explored as creative accelerators, though reactions have been mixed on social media, with some viewers questioning the quality and artistic decisions in early episodes.

AI Overviews leans heavily on YouTube for health information

Google’s health-related search results increasingly draw on YouTube rather than hospitals, government agencies, or academic institutions, as new research reveals how AI Overviews select citation sources in automated results.

An analysis by SEO platform SE Ranking reviewed more than 50,000 German-language health queries and found AI Overviews appeared on over 82% of searches, making healthcare one of the most AI-influenced information categories on Google.

Across all cited sources, YouTube ranked first by a wide margin, accounting for more than 20,000 references and surpassing medical publishers, hospital websites, and public health authorities.

Academic journals and research institutions accounted for less than 1% of citations, while national and international government health bodies accounted for under 0.5%, highlighting a sharp imbalance in source authority.

Researchers warn that when platform-scale content outweighs evidence-based medical sources, the risk extends beyond misinformation to long-term erosion of trust in AI-powered search systems.

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos where their face appears altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.
