YouTube introduces self-labeling feature for AI-generated content

The platform will rely on an honour system for creators to honestly disclose AI-generated content.


YouTube has introduced a new feature requiring creators to mark videos containing AI-generated or synthetic content during upload. The disclosure requirement is aimed at transparency, covering realistic alterations such as fabricated events or deepfake voices. Beauty filters and animations, however, are exempt from disclosure.

The new feature operates on an honour system, relying on creators to disclose AI-generated content honestly. However, the platform reserves the right to label videos itself if creators fail to do so, ‘especially if the altered or synthetic content has the potential to confuse or mislead people.’

Following the November policy announcement aimed at protecting music labels and artists, the procedure for an ordinary person featured in a deepfake video on the platform still lacks clarity. Removing such content can prove challenging, as the affected individual must submit a privacy request form for YouTube to review.

The company stated that it is currently exploring tools to detect AI-generated content. However, the accuracy of such detection software remains limited, adding a layer of uncertainty to the problem.

Why does it matter? 

This new feature is a step towards helping viewers understand the nature of the content they’re consuming. Other companies, such as Meta, have been introducing similar policies. Meta and YouTube’s parent company, Alphabet, joined a coalition of tech firms last month committed to tackling the spread of AI-generated misinformation ahead of the 2024 presidential election. These initiatives reflect an industry-wide effort to enhance trust and accountability within the digital environment.