- ChatGPT now uses behavioural and account data to estimate user age and enforce age-appropriate content restrictions.
- Misclassified adults can confirm their age via a selfie through a third-party service, and the feature paves the way for a forthcoming “adult mode.”
What happened: Age estimation is being built into ChatGPT.
OpenAI has started a global rollout of an age-prediction model in ChatGPT designed to determine whether a user’s account likely belongs to someone under 18. The deployment is beginning in the United States and other core English-language markets, where regulatory constraints are lighter and safety systems can be tested at scale, before gradually expanding to additional regions.
When the system estimates an account is operated by a minor, ChatGPT will automatically apply additional protections to limit exposure to sensitive content, such as graphic violence, self-harm, and other topics considered age-inappropriate.
This mechanism builds on existing safety measures for self-reported under-18 users but adds an automatic assessment based on behavioural and account-level signals—including how long an account has existed, patterns of usage and login times, and any age declared in the profile.
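OpenAI has not disclosed how its model weighs these signals, but the general idea can be illustrated with a toy heuristic. The sketch below is purely hypothetical: the signal names, weights, and threshold are invented for illustration, and a production system would rely on a trained classifier rather than hand-set rules.

```python
# Illustrative sketch only: OpenAI has not published how its age-prediction
# model works. The signals mirror those described in the article (account age,
# usage/login patterns, declared age); all weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int           # how long the account has existed
    late_night_login_ratio: float   # share of logins during late-night hours
    declared_age: int | None        # age declared in the profile, if any

def estimate_is_minor(signals: AccountSignals) -> bool:
    """Toy heuristic combining weak signals into a single under-18 estimate."""
    score = 0.0
    if signals.declared_age is not None and signals.declared_age < 18:
        score += 0.6                # self-reported age is the strongest signal
    if signals.account_age_days < 90:
        score += 0.2                # very new accounts weighted slightly younger
    if signals.late_night_login_ratio > 0.4:
        score += 0.2                # usage-time pattern, weakly weighted
    return score >= 0.5             # apply teen protections above the threshold

# Example: a new account with no declared age and heavy late-night use
print(estimate_is_minor(AccountSignals(30, 0.5, None)))  # -> False (score 0.4)
```

In practice the interesting engineering questions are exactly the ones a toy rule set hides: how the signals are weighted, where the decision threshold sits, and how false positives are routed to the Persona verification flow described below.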
Adults who are mistakenly flagged as minors can regain full access by verifying their age through a selfie and identity check carried out by Persona, a third-party verification service. The European Union rollout is scheduled for a later phase, reflecting stricter data protection and digital services regulations that require additional compliance work before launch.
This change comes as OpenAI gears up to introduce an “adult mode” in the first quarter of 2026, which is expected to allow age-verified users access to mature content that is otherwise restricted.
Why it’s important
The age-prediction feature marks a significant shift in AI platforms from self-reported age checkboxes to classification based on observed behaviour, reflecting heightened concern about children's exposure to potentially harmful material.
For OpenAI, this staged rollout strategy—starting in the US, expanding globally, and delaying the EU—serves both technical and regulatory goals. It allows the company to fine-tune accuracy, error rates, and appeal mechanisms before deploying the system in jurisdictions with tighter privacy enforcement, while also laying the groundwork for differentiated user experiences tied to age verification.
However, the approach raises questions around privacy and algorithmic trust. Inferring age from behavioural data could misclassify users, potentially restricting legitimate adult interactions or exposing patterns of use to internal profiling. Although users can verify their age to correct the classification, the reliance on third-party identity checks and behavioural signals underscores ongoing tensions in tech between safety, accuracy, and personal data use.
The rollout also highlights regulatory and ethical challenges as generative AI becomes increasingly embedded in everyday life—from classrooms to homes—and underscores how companies like OpenAI balance user safety with freedom of expression and commercial objectives.
Also read: Teenagers’ AI Startup Receives Major Backing from OpenAI’s CEO
