Chatbots under scrutiny in China over AI ‘boyfriend’ and ‘girlfriend’ services
Draft regulations target AI companion chatbots as China tightens oversight of emotionally responsive artificial intelligence services.
China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.
Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.
The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.
The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship services.
Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.
