Defamation risks emerge in Google AI chatbot lawsuit

A lawsuit against Google highlights how defamation law applies to AI chatbots, automated systems, and erroneously generated information about individuals.


In October 2025, social media activist Robby Starbuck sued Google after its AI chatbot generated false claims about him, including fabricated criminal records and court documents. The case has renewed attention on defamation risks linked to automated systems that generate information about individuals.

Google has sought dismissal, arguing that the chatbot's output does not amount to publication for the purposes of libel law. The company also claims that no identifiable audience relied on the output and that users were warned that AI-generated responses may be inaccurate.

The defence further relies on Starbuck's status as a public figure. Under US defamation law, public figures must prove actual malice, meaning the statement was made with knowledge of its falsity or reckless disregard for the truth. Google argues that this standard cannot be met by an automated system that lacks intent or awareness when producing content.

Critics say the lawsuit exposes a structural weakness in how libel and slander law handles AI. Large language models are trained on vast datasets with limited provenance, making it difficult to trace how defamatory claims emerge or to verify the information used to generate them.

Legal scholars argue that defamation frameworks may need to evolve. Drawing parallels with US credit reporting rules, they suggest placing obligations of accuracy, traceability, and correction on the companies that operate these systems. This would treat defamation by AI as a systemic risk to be managed rather than intent-based conduct to be punished.
