Meta to implement measures to detect and label AI-generated images

Meta Platforms, the parent company of Facebook, Instagram, and Threads, will begin detecting and labeling images generated by other companies’ artificial intelligence services in the coming months.

[Image: Meta's logo]

Meta Platforms has revealed a new approach to detecting and labeling images produced by external AI services. To enhance transparency, the company will add AI-generated labels to images created with tools from Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. The labels will roll out across Facebook, Instagram, and Threads in multiple languages.

In a blog post, Nick Clegg, Meta's president of global affairs, said the labels will be applied to any content carrying the industry-standard markers these tools embed in image files, signaling to users that images which often resemble authentic photos are in fact digitally generated. This extends Meta's existing practice of labeling content produced with its own AI tools.
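As a rough illustration of what such a marker check can involve, the sketch below scans a file's bytes for the IPTC digital-source-type value "trainedAlgorithmicMedia", one standard provenance marker that image generators embed in metadata. This is a simplified, assumption-laden example, not Meta's actual detection pipeline, which also parses structured metadata and checks invisible watermarks; the function and file names are hypothetical.

```python
from pathlib import Path

# IPTC digital-source-type value used to flag AI-generated media.
IPTC_AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_provenance_marker(image_path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI-generated marker.

    Naive whole-file byte scan for illustration only: real detectors parse
    the XMP/C2PA metadata structures properly and also look for invisible
    watermarks, neither of which this sketch attempts.
    """
    return IPTC_AI_MARKER in Path(image_path).read_bytes()

if __name__ == "__main__":
    # "example.png" is a hypothetical placeholder file.
    print(has_ai_provenance_marker("example.png"))
```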

The announcement offers a glimpse of an emerging framework of standards that technology firms are developing to mitigate the risks of generative AI, which can produce fake but convincingly realistic content from simple text prompts.

The approach builds on a template companies have established over the past decade to coordinate the removal of prohibited content across platforms, previously applied to material depicting mass violence and child exploitation.

Relatedly, Google said last November that it was developing a policy to guide creators on the responsible use of synthetic content, particularly deepfakes, on YouTube. The policy centers on creator disclosures and labels for AI-generated content, with disclaimers planned for video descriptions and within videos themselves. Google did not detail penalties for non-compliance, but its existing policies allow it to suspend accounts and remove violating content.