Google requires disclosure for election ads with altered content

The measure aims to combat misinformation, especially with the rise of generative AI and deepfakes.


Google announced that it will require advertisers to disclose election ads containing digitally altered content that depicts real or realistic-looking people or events, a measure intended to combat misinformation during elections. This latest update to Google’s political content policy directs advertisers to select a checkbox for ‘altered or synthetic content’ within their campaign settings.

The proliferation of generative AI, capable of rapidly creating text, images, and video, has sparked concerns over potential misuse. Deepfakes, which convincingly manipulate content to misrepresent individuals, have further blurred the distinction between fact and fiction in digital media.

To implement these changes, Google will automatically generate an in-ad disclosure for feeds and Shorts on mobile devices and for in-stream ads on computers and televisions. For other ad formats, advertisers must themselves provide a prominent disclosure that is clearly visible to users. According to Google, the exact wording of these disclosures will vary with the context of each advertisement.

Why does it matter?

Earlier this year, during India’s general election, fake videos surfaced online showing Bollywood actors criticising Prime Minister Narendra Modi and urging viewers to support the opposition Congress party. The incident highlighted the growing challenge of combating deceptive content amplified by AI-generated media.

In a related development, OpenAI, led by Sam Altman, reported in May that it had disrupted five covert influence operations that sought to manipulate public opinion using its AI models across various online platforms. Meta Platforms had previously committed to similar transparency measures, requiring advertisers on Facebook and Instagram to disclose the use of AI or digital tools in creating political, social, or election-related ads.