California introduces first AI chatbot safety law
The new law requires safety checks, crisis protocols, and clear AI disclaimers on companion platforms.
California has become the first US state to regulate AI companion chatbots after Governor Gavin Newsom signed landmark legislation designed to protect children and vulnerable users. The law, SB 243, holds companies legally accountable if their chatbots fail to meet its safety and transparency standards.
The legislation follows several tragic cases, including the death of a teenager who reportedly engaged in suicidal conversations with an AI chatbot. It also comes after leaked documents revealed that some AI systems had allowed inappropriate exchanges with minors.
Under the new rules, firms must introduce age verification, self-harm prevention protocols, and warnings for users engaging with companion chatbots. Platforms must clearly state that conversations are AI-generated and are barred from presenting chatbots as healthcare professionals.
Major developers including OpenAI, Replika, and Character.AI say they are introducing stronger parental controls, content filters, and crisis support systems to comply. Lawmakers hope the move will inspire other states to adopt similar protections as AI companionship tools become increasingly popular.