An Alarming Number of Teens Say They Turn To AI For Company, Study Finds


We have a whole new generation growing up in the dawn of artificial intelligence. The early signs of its impact are alarming.

A British youth charity called OnSide surveyed 5,035 young people between the ages of 11 and 18 for the “Generation Isolation Report,” its annual study of how young people spend their free time. The results paint a rather bleak picture.

The survey found that two in five teens turn to AI for advice, company or support, with 20% of those who do saying that talking to AI is easier than talking to a real person.

“AI support is instant, but no substitute for the trust, empathy and understanding of a human conversation,” OnSide chief executive Jamie Masraff said in the report.

Over half of the young respondents said that they turned to AI specifically for advice on things like clothes, friendships and mental health, or to help them through emotions like sadness and stress. One in ten said that they chose AI because they just wanted someone to talk to.

The study’s findings show a generation that is lonely and has unrestricted access to technology that is addictive by design. According to the study, 76% of young people spend most of their free time on screens, and 34% report high or very high levels of loneliness.

AI, which is still in its under-regulated Wild West era, is one such technology, and it’s no surprise that lonely young people turn to it for quick companionship and advice.

“It’s clear that the interlinked issues of loneliness, digital dependence and isolation have become entrenched in young people’s lives, raising deeper questions about what it’s like to grow up this way,” Masraff said.

As AI burrows deeper into the everyday lives of teens, alarm bells are sounding. AI chatbots have turned out to be dangerously addictive for some adults, whose brains are fully developed. Now imagine how much worse it could get for kids whose prefrontal cortices are far from fully formed.

The American Psychological Association has been pushing the FTC to address the use of AI chatbots as unlicensed therapists. The Association wrote in a blog post from March that chatbots used for mental health advice could endanger users, especially “vulnerable groups [that] include children and teens, who lack the experience to accurately assess risks.”

In some instances, the results have allegedly been fatal. Two separate families have filed lawsuits against the artificial intelligence companies Character.AI and OpenAI, claiming that the companies’ chatbots influenced and aided their sons’ suicides. In one case, OpenAI’s ChatGPT allegedly helped a 16-year-old plan his suicide and even discouraged him from telling his parents about his suicidal ideation.

Several AI chatbots are also being investigated over sexualized conversations with children. Meta was lambasted earlier this year after a leaked internal document showed that the tech giant had okayed its AI tools to engage in “sensual” chats with children.

Last month, Congress introduced a bipartisan bill called the GUARD Act, which aims to force AI companies to institute age verification on their sites and block users under 18 years of age.

“AI chatbots pose a serious threat to our kids,” Sen. Josh Hawley, who introduced the bill along with Sen. Richard Blumenthal, told NBC News. “More than seventy percent of American children are now using these AI products.”

But even if the bill becomes law, it’s uncertain how effective it will be at keeping children away from AI chatbots. Age verification and usage limits on social media platforms haven’t proven especially effective at shielding children from the adverse effects of the internet.
