ChatGPT caricature trend highlights risks of oversharing with AI
Experts say the ChatGPT caricature trend risks normalising oversharing with AI tools.
Creating a ChatGPT caricature typically involves detailed prompts and, in some cases, photo uploads. Cybersecurity specialists caution that each interaction feeds highly personal information into generative models, which can analyse, store, and potentially reuse that data under broad platform policies.
Privacy researchers warn that the trend encourages users to hand over personal and professional details to AI tools without sufficient scrutiny. Once images or descriptions are generated and shared online, they can be copied, reposted, or scraped beyond the user's control.
OpenAI says users can manage memory and data settings in ChatGPT, including opting out of saved memories or using temporary chats. In the UK and the EU, memory features are switched off by default, though users must still actively review regional privacy policies.
Security experts advise keeping prompts minimal, avoiding real photos, and excluding sensitive information. If content would not usually be shared publicly, they argue, it should not be included in an AI prompt, even for a fleeting online trend.
