OpenAI establishes team to study child safety

OpenAI has established a dedicated Child Safety team to address risks AI poses to minors. The team works with internal groups such as platform policy, legal, and investigations, as well as external partners, to manage processes, incidents, and reviews related to underage users.

This initiative comes amid growing concerns about minors using AI tools, and as the company works to comply with regulations like the U.S. Children’s Online Privacy Protection Rule, which imposes strict controls on the collection of data from children and on their online activities. With kids and teens increasingly using AI tools for educational and personal purposes, there is a rising need to address potential risks. OpenAI has also partnered with Common Sense Media to jointly develop kid-friendly AI guidelines, in an effort to promote responsible and safe AI usage among minors.

Why does it matter?

Kids and teens are increasingly relying on AI tools not only for academic help but also for personal issues. According to a poll by the Center for Democracy and Technology, 29% of kids have used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends, and 16% for family conflicts.

Some potential risks of AI for children include:

  1. Privacy and safety issues: AI systems can collect personal information about children without their knowledge, which can be exploited by hackers or used for targeted advertising.
  2. Psychological effects: Children may be exposed to inappropriate content or dangerous misinformation through AI-powered platforms, which can have negative psychological and behavioural impacts.
  3. Overreliance on AI tools: There is a risk that children may rely too much on AI, affecting their decision-making and autonomy.
  4. Generation of harmful content: AI training datasets can contain illegal images depicting child sexual abuse, which can enable image generation models to produce explicit images of children. AI-powered tech can also perpetuate harmful stereotypes that feed racism and can be damaging to children.