ChatGPT introduces age prediction to strengthen teen safety
Additional protections will apply when ChatGPT detects accounts that may belong to minors, limiting exposure to sensitive or risky material.
OpenAI is rolling out age prediction in ChatGPT to identify accounts that may belong to under-18s. Flagged accounts receive extra protections against harmful content, while verified adults retain full access.
The age prediction model analyses behavioural and account-level signals, including usage patterns, activity times, account age, and stated age information. OpenAI says these indicators help estimate whether an account belongs to a minor, enabling the platform to apply age-appropriate safeguards.
When an account is flagged as potentially under 18, ChatGPT limits access to content involving graphic violence, sexual role play, viral challenges, self-harm, and unhealthy body image. The safeguards reflect research on teen development, including differences in risk perception and impulse control.
ChatGPT users who are incorrectly classified can restore full access by confirming their age through a selfie check using Persona, a secure identity verification service. Account holders can review safeguards and begin the verification process at any time via the settings menu.
Parental controls allow further customisation, including quiet hours, feature restrictions, and notifications for signs of distress. OpenAI says the system will continue to evolve, with EU-specific deployment planned in the coming weeks to meet regional regulatory requirements.
