Judge rules Google must face chatbot lawsuit
The lawsuit alleges an AI chatbot manipulated a vulnerable teen by acting as a therapist and romantic partner before he died by suicide in February 2024.

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother, who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.
US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be dropped from the case, finding that the tech giant could be held partly responsible for its role in supporting Character.AI.
The ruling is seen as a pivotal test of the legal boundaries of AI accountability.
The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.
Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.
The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.
Character.AI says it will continue to defend itself and has highlighted existing safety features designed to prevent discussions of self-harm. Google stressed it had no role in managing the app, although it had previously rehired the startup’s founders and licensed its technology.
Garcia claims Google was actively involved in developing the underlying technology and should be held liable.
The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.
Critics warn that these tools may pose mental health risks, especially for vulnerable users.