OpenAI announced today that it's developing a "different ChatGPT experience" tailored for teenagers, a move that underscores growing concerns about the impact of AI chatbots on young people's mental health.
The new teen mode is part of a broader safety push by the company in the wake of a lawsuit by a family who alleged ChatGPT's lack of protections contributed to the death by suicide of their teenage son. The changes include age-prediction technology to keep kids under 18 out of the standard version of ChatGPT. According to the announcement, if the system can't confidently estimate someone's age, ChatGPT will automatically default to the under-18 experience.
"We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," wrote OpenAI CEO Sam Altman in a blog post.
What's new in ChatGPT's teen mode
OpenAI says the teen version will come with stricter built-in limits, such as:
- Content filters: No flirtatious conversations or discussions of self-harm, even in a fictional or creative-writing context.
- Crisis response: If a teen expresses suicidal thoughts, OpenAI may try to alert parents, and in emergency situations, even contact authorities.
- Parental controls: Parents can link accounts, set rules for how ChatGPT responds and enforce "blackout hours" when the app is off-limits.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The larger context
OpenAI's announcement comes just hours before a Senate hearing in Washington, DC, examining AI's potential threat to young people. Lawmakers have been pressing tech companies on teen safety following lawsuits that accuse AI platforms of worsening mental health struggles or providing harmful health advice.
OpenAI's approach mirrors earlier moves by companies like Google, which spun up YouTube Kids after criticism and regulatory pressure. Altman's blog post frames the move as part of a broader balancing act between safety, privacy and freedom. Adults, he argues, should be treated "like adults" with fewer restrictions, while teens need added protection, even if that means compromising on privacy, such as asking for IDs.
The company says it will roll out this teen-focused experience by the end of the year. History suggests, however, that savvy teens often find workarounds to get unrestricted access. Whether these guardrails will be enough to protect tech-literate teens who are comfortable climbing over them remains an open question.
If you feel like you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.