Meta under fire over AI deepfake celebrity chatbots
Reuters found Meta’s AI tools enabled deepfake chatbots of celebrities and minors, prompting the company to delete several bots before publication.

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.
The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.
The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before Reuters published its report.
A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.
Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.
California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.
The case highlights broader concerns about AI safety and ethical boundaries, and it raises questions about regulatory oversight as social media platforms deploy tools capable of producing realistic deepfake content without adequate guardrails.