Meta found displaying explicit ‘AI Girlfriend’ ads, violating advertising policies

A Meta spokesperson stated that the company is working to remove the violating ads promptly and to improve its detection systems, though challenges persist.


Meta-owned social media platforms, including Facebook, Instagram, and Messenger, have reportedly displayed explicit ads for ‘AI girlfriends,’ violating the company’s advertising policies. An investigation by Wired uncovered over 29,000 instances of such ads in Meta’s ad library. The ads feature chatbots sending sexually suggestive messages and AI-generated images of women in provocative poses, often without an ‘NSFW’ (Not Safe for Work) label. These findings have raised concerns about users’ exposure to inappropriate content.

Although Meta prohibits adult content in advertising, including nudity and sexually explicit activity, about half of the identified ads breached its policies. Ryan Daniels, a Meta spokesperson, stated that the company is working to remove the violating ads promptly and is continuously improving its detection systems. However, he acknowledged that bad actors keep finding ways to circumvent the company’s policies and detection methods.

Why does it matter?

Sex workers, sex educators, LGBTQ users, and erotic artists have long claimed that Meta unfairly targets their content, as reported by Mashable. They argue that Instagram shadowbans LGBTQ and sex-educator accounts, while WhatsApp bans sex workers’ accounts.

Another controversial incident occurred last November, when Mashable reported that Meta rejected a period-care ad as ‘adult or political.’ Meanwhile, NSFW ‘AI girlfriend’ ads appear to be slipping through Meta’s advertising policies, sparking discussions about selective enforcement.