OpenAI explains approach to privacy, freedom, and teen safety
The company is prioritising the protection of sensitive AI conversations, ensuring privacy while monitoring serious risks to keep users safe.

OpenAI has outlined how it balances privacy, freedom, and teen safety in its AI tools. The company said AI conversations often involve personal information and deserve protection akin to privileged conversations with doctors or lawyers.
Security features are being developed to keep data private, though critical risks, such as threats to life or societal-scale harm, may trigger human review.
The company is also focused on user freedom. Adults are allowed greater flexibility in interacting with AI, within safety boundaries. For instance, the model can respond to requests for creative or sensitive content while avoiding guidance that could cause real-world harm. OpenAI aims to treat adults as adults, granting broader freedoms as long as safety is maintained.
Teen safety, by contrast, is prioritised over both privacy and freedom. Users under 18 are identified via an age-prediction system or, in some cases, verified by ID.
For these users, the AI will avoid flirtatious talk and discussions of self-harm, and in cases of imminent risk, parents or authorities may be contacted. Parental controls and age-specific rules are being developed to protect minors and ensure safe use of the platform.
OpenAI acknowledged that these principles sometimes conflict and that not everyone will agree with its approach. The company stressed transparency in its decision-making and said it consulted experts to establish policies that balance safety, freedom, and privacy.