ChatGPT caricature trend highlights risks of oversharing with AI

Experts say the ChatGPT caricature trend risks normalising oversharing with AI tools.

[Image: AI-generated cartoon portraits with the OpenAI logo, illustrating privacy and data-sharing concerns]

A viral ChatGPT caricature trend is spreading across social media, with users prompting the AI to generate illustrated versions of themselves based on their jobs or hobbies. While the images appear playful, experts warn that the trend encourages people to share far more personal data than they realise.

Creating a ChatGPT caricature typically involves detailed prompts and, in some cases, photo uploads. Cybersecurity specialists caution that each interaction feeds highly personal information into generative models, which can analyse, store, and potentially reuse that data under broad platform policies.

Privacy researchers warn that the ChatGPT caricature trend risks normalising the sharing of personal and professional details with AI tools without sufficient scrutiny. Once images or descriptions are generated and shared online, they can be copied, reposted, or scraped beyond the user’s control.

OpenAI says users can manage memory and data settings in ChatGPT, including opting out of saved memories or using temporary chats. In the UK and the EU, memory features are switched off by default, though users must still review the privacy policies that apply in their region.

Security experts advise limiting prompts, avoiding real photos, and excluding sensitive information. If content would not usually be shared publicly, they argue it should not be included in an AI prompt, even for a fleeting online trend.
