California moves to regulate AI companion chatbots to protect minors

California is set to become the first US state to regulate AI companion chatbots, with new legislation aimed at protecting minors from harmful content and holding companies legally accountable.

California's AI safety bill passes the legislature, now awaiting Governor Newsom's decision.

The California State Assembly passed SB 243, legislation that would make the state the first in the USA to regulate AI companion chatbots. The bill, which aims to safeguard minors and vulnerable users, passed with bipartisan support and now heads to the state Senate for a final vote on Friday.

If signed into law by Governor Gavin Newsom, SB 243 would take effect on 1 January 2026. It would require companies like OpenAI, Replika, and Character.AI to implement safety protocols for AI systems that simulate human companionship.

The law would prohibit such chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. For minors, platforms must provide recurring alerts every three hours, reminding them that they are talking to an AI and encouraging them to take breaks.

The bill also introduces annual transparency and reporting requirements, effective 1 July 2027. Users harmed by violations could seek damages of up to $1,000 per incident, injunctive relief, and attorney's fees.

The legislation follows the suicide of teenager Adam Raine after troubling conversations with ChatGPT, and comes amid mounting scrutiny of AI's impact on children. Lawmakers nationwide and the Federal Trade Commission (FTC) are increasing pressure on AI companies to bolster safeguards in the USA.

Though earlier versions of the bill included stricter requirements, like banning addictive engagement tactics, those provisions were removed. Still, backers say the final bill strikes a necessary balance between innovation and public safety.
