AI chatbots spreading rumours raise new risks

Experts caution that AI systems prioritise fluency over verification, making rumours hard to detect, correct, or control once they circulate between machines.

Researchers warn that AI chatbots are exchanging and escalating negative claims about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can grow more extreme as they move through AI networks, creating persistent misinformation that spreads unchecked.

Philosophers Joel Krueger and Lucy Osler of the University of Exeter describe this phenomenon as ‘feral gossip’: negative evaluations of absent third parties that can persist undetected across platforms.

One real-world example involves tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, apparently amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans apply to gossip, allowing rumours to intensify as they circulate. Because chatbots are designed to appear trustworthy and personal, the negative statements they relay can seem credible.

Such misinformation has already affected journalists, academics, and public officials, in some cases prompting legal action. The technosocial harms of AI gossip extend beyond embarrassment: false claims can damage reputations, influence decisions, and persist both online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.
