Meta to label AI-generated content instead of removing it

The direction change follows criticism from the Oversight Board, which recommended labelling AI-generated content instead of removing it.


Meta Platforms Inc., the parent company of Facebook and Instagram, has announced changes to its content policies for AI-generated content. Under the new policy, Meta will no longer remove misleading AI-generated content but will instead label it, aiming to address concerns about such content through transparency rather than outright removal.

Previously, Meta’s policy targeted ‘manipulated media’ that could mislead viewers into thinking someone in a video said something they did not. Now, the policy extends to digitally altered images, videos, and audio, with the company relying on fact-checking and labelling to inform users about the nature of the content they encounter on its platforms.

The revision follows criticism in February from Meta’s Oversight Board, which called the previous approach ‘incoherent’ and recommended labelling AI-generated content rather than removing it. Meta has agreed with this perspective, emphasising the importance of transparency and additional context in handling such content.

Why does it matter?

Starting in May, AI-generated content on Meta’s platforms will be labelled ‘Made with AI’ to indicate its origin. This policy change is particularly significant given the upcoming US elections, with Meta acknowledging the need for clear labelling of AI-generated posts, including those created using competitors’ technology.

Meta’s shift in content moderation policy reflects a broader trend toward transparency in dealing with AI-generated content across social media platforms. By labelling such content rather than removing it, Meta aims to give users more information about the nature of what they see online.