Hidden privacy risk: Meta AI app may make sensitive chats public
Users of Meta’s AI app are unknowingly making private chats public due to hidden settings and vague warnings.

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.
The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.
Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.
Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often taken to mean posting to a user’s private chat history rather than a publicly visible feed.
To make chats private, users must dig deep into the app’s settings. There they can restrict who sees their AI prompts, stop sharing to Facebook and Instagram, and delete previous interactions.
Critics argue the app’s lack of clarity shifts the burden onto users, leaving many at risk of oversharing without realising it.
While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.
Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.