FTC opens inquiry into AI chatbots and child safety

Regulators are examining how AI chatbots monetise interactions, enforce age limits and protect minors, following lawsuits and growing fears about their psychological impact on young users.

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, seeking answers from major firms about child safety risks, data privacy and the technology's impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging that ChatGPT provided harmful instructions. The company later pledged corrective measures, acknowledging that its chatbot does not always recommend mental health support during prolonged conversations.