Grok chatbot leaks spark major AI privacy concerns
Thousands of Grok chats are now publicly searchable, revealing harmful queries and fuelling debate over chatbot safety and user trust.

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button generated public URLs, which Google and other search engines later indexed.
The leaked content is troubling, ranging from questions about hacking crypto wallets to instructions for drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, and those exchanges are now publicly searchable.
The exposure occurred because search engines automatically indexed the shareable links, a flaw that echoes earlier incidents on other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature instead exposed sensitive chats and undermined trust in xAI’s privacy promises.
The incident puts pressure on AI developers to build in stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Without such fixes, users may hesitate to use chatbots at all, fearing their data could resurface online.
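As a rough illustration of what ‘blocking the indexing of shared content’ can mean in practice, the sketch below serves a shared-conversation page with a robots meta tag and an X-Robots-Tag response header, the standard signals search engines honour when deciding not to index a page. It uses only Python’s standard library; the /share/ path and page content are hypothetical and not a description of xAI’s actual implementation.

```python
# Minimal sketch: serve a shared-chat page with "do not index" signals.
# The /share/ route and page body are hypothetical, for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_CHAT_HTML = b"""<!doctype html>
<html>
  <head>
    <!-- Page-level signal: ask crawlers not to index or follow this page -->
    <meta name="robots" content="noindex, nofollow">
    <title>Shared conversation</title>
  </head>
  <body>Shared conversation content goes here.</body>
</html>"""

class SharedChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/share/"):
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            # Header-level signal: useful for responses without an HTML <head>,
            # such as JSON payloads or file downloads.
            self.send_header("X-Robots-Tag", "noindex, nofollow")
            self.end_headers()
            self.wfile.write(SHARED_CHAT_HTML)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Local demo server only; a real deployment would sit behind the platform's
    # normal web stack and apply these headers to every shared-chat URL.
    HTTPServer(("127.0.0.1", 8000), SharedChatHandler).serve_forever()
```

These directives only discourage indexing of newly served pages; links already crawled would still need to be removed from search results separately.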
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!