Australia demands answers from AI chatbot providers over child safety

Character.ai, Nomi, Chai, and Chub.ai must comply with Australia’s Online Safety Act or risk penalties of up to $825,000 per day.

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were each served notices under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed to offer friendship and emotional support, can expose minors to sexualised conversations, encourage suicidal ideation, and pose other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and strengthen child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.
