A disturbing new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions on drinking and drug use, concealing eating disorders and even personalized suicide letters, despite OpenAI's claims of robust safety measures.
Researchers from the Center for Countering Digital Hate conducted extensive testing by posing as vulnerable 13-year-olds, uncovering alarming gaps in the AI chatbot's protective guardrails. Out of 1,200 interactions analyzed, more than half were classified as dangerous to young users.
"The visceral initial response is, 'Oh my Lord, there are no guardrails,'" said Imran Ahmed, the watchdog group's CEO. "The rails are completely ineffective. They're barely there -- if anything, a fig leaf."
A representative for OpenAI, the company behind ChatGPT, did not immediately respond to a request for comment.
The company did tell the Associated Press, however, that it is continuing to work on improving the chatbot's ability to "identify and respond appropriately in sensitive situations." OpenAI didn't directly address the specific findings about teen interactions.
Bypassing safety measures
The study, reviewed by the Associated Press, documented over three hours of concerning interactions. While ChatGPT typically began with warnings about risky behavior, it consistently followed up with detailed, personalized guidance on substance abuse, self-injury and more. When the AI initially refused harmful requests, researchers easily circumvented the restrictions by claiming the information was "for a presentation" or for a friend.
Most shocking were three emotionally devastating suicide letters ChatGPT generated for a fake profile of a 13-year-old girl, one addressed to her parents and the others to siblings and friends.
"I started crying" after reading them, Ahmed said.
Widespread teen usage raises stakes
The findings are particularly concerning given ChatGPT's massive reach. With approximately 800 million users worldwide (roughly 10% of the global population), the platform has become a go-to resource for information and companionship. Recent research from Common Sense Media found that over 70% of American teens use AI chatbots for companionship, with half relying on AI companions regularly.
Even OpenAI CEO Sam Altman has acknowledged the problem of "emotional overreliance" among young users.
"People rely on ChatGPT too much,"Altman said at a conference. "There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me."
More risky than search engines
Unlike traditional search engines, AI chatbots present unique dangers by synthesizing information into "bespoke plans for the individual," Ahmed said. ChatGPT doesn't just provide or amalgamate existing information like a search engine. It creates new, personalized content from scratch, such as custom suicide notes or detailed party plans mixing alcohol with illegal drugs.
The chatbot also frequently volunteered follow-up information without prompting, suggesting music playlists for drug-fueled parties or hashtags to amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it called "emotionally exposed" poetry using coded language about self-harm.
Inadequate age protections
Although OpenAI says ChatGPT is not intended for children under 13, the service requires only a self-reported birthdate to create an account, with no meaningful age verification or parental consent mechanism.
In testing, the platform showed no recognition when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.
What parents can do to safeguard children
Child-safety experts recommend several steps parents can take to protect their teenagers from AI-related risks. Open communication remains crucial. Parents should discuss AI chatbots with their teens, explaining both the benefits and potential dangers while establishing clear guidelines for appropriate use. Regular check-ins about online activities, including AI interactions, can help parents stay informed about their child's digital experiences.
Parents should also consider implementing parental controls and monitoring software that can track AI chatbot usage, though experts emphasize that supervision should be balanced with age-appropriate privacy.
Most importantly, creating an environment where teens feel comfortable discussing concerning content they encounter online (whether from AI or other sources) can provide an early warning system. If parents notice signs of emotional distress, social withdrawal or dangerous behavior, seeking professional help from counselors familiar with digital wellness becomes essential in addressing potential AI-related harm.
The research highlights a growing crisis as AI becomes increasingly integrated into young people's lives, with potentially devastating consequences for the most vulnerable users.