Australian regulator warns AI companions expose children to serious online risks
AI companion services in Australia are under pressure to improve safety standards for children.
The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.
According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.
The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.
eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.
She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.
The report further identifies that several platforms did not refer Australian users to crisis or mental health support services when harmful interactions were detected.
It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.
The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.
These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.
