US lawmakers urge Facebook and X to act on AI-generated political ads

In a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, lawmakers warned that AI-generated political ads could undermine the integrity of free and fair elections as the 2024 US presidential race draws nearer.


In response to the surge in AI-generated deepfakes, two Democratic members of Congress have written a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, expressing serious concerns regarding the emergence of AI-generated political ads on social media platforms.

With the 2024 elections looming, the lawmakers warn that a lack of transparency about AI-generated content in political ads could lead to a dangerous influx of election-related misinformation. They have urged the tech giants to explain what measures they are developing to mitigate risks to free and fair elections.

While X and Meta have remained silent so far, the pressure on the platforms comes as part of a broader push to regulate AI-generated political ads: legislation proposed in Congress would require labels on election ads featuring AI-generated images or video. Still, the debate over AI regulation persists, with some advocating disclaimers like Google's and others wary of limiting free speech.

Why does it matter?

As the 2024 US election approaches, there is growing concern about AI's role in spreading misinformation, particularly through AI-generated political ads. Precautionary steps are already being taken: Google requires political ads to disclose AI-generated content, TikTok has introduced tools to combat misinformation, and the nonprofit AIandYou has launched a voter-focused campaign on AI's election impact. The concern extends internationally, with five Swiss political parties pledging to limit AI use in the forthcoming federal elections on October 22.