Parents Soon Can Block Their Kids from Interacting with AI Chatbots on Instagram


TV, social media, junk food -- parents have been setting limits on their kids forever. Now, add AI chatbots to the list. Meta announced Friday that, starting in 2026, parents will be able to block teenagers from interacting with AI chatbots on Instagram. Parents will be able to block all access or block access to specific AI characters.

Meta, owner of Instagram, Facebook and WhatsApp, is adding the parental controls months after a report came out in August showing the company's AI guidelines allowed chatbots to "engage a child in conversations that are romantic or sensual." Another report came out earlier this month that said 3 in 5 children aged 13 to 15 encounter unsafe content or unwanted messages on Instagram.




The company said in a blog post Friday that the new AI chatbot controls align with parental concerns about "who (children are) interacting with, what type of content they're seeing, and whether their time is well-spent."


"We hope today's updates bring parents some peace of mind that their teens can make the most of all the benefits AI offers, with the right guardrails and oversight in place," said Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang in the blog post.

How parents can control AI chatbot interactions

[Image: Example of Instagram's chatbot parental controls. Credit: Meta]

Teens can interact with AI chatbots through Instagram's direct message section. The chat could be with a creator's AI, a custom AI character, or Meta's general-use AI.

Meta said the new controls let parents turn off their teen's one-on-one chats with AI characters entirely, or block only specific AI characters instead.

Moreover, parents can "get insights into the topics their teens are chatting about with AI characters."

The company did not explain in detail how parents would be able to find out what AI topics their kids are chatting about.

Teens can still use Meta's regular AI assistant "with default, age-appropriate protections in place to help keep teens safe."

Expert: Controls are 'insufficient'

James Steyer, founder and CEO of digital advocacy and research nonprofit Common Sense Media, called Meta's new AI chatbot controls a "reactive concession" and insufficient.

"Meta's refusal to treat our kids' safety with the urgency it demands is deeply disappointing but unfortunately not surprising," Steyer told CNET. "For too long, this company has put the relentless pursuit of engagement over our kids' safety, ignoring warnings from parents, experts, and even its own employees."

Steyer said no one under 18 should use Meta AI chatbots "until their fundamental safety failures are fixed."

A Meta representative said the company is continuing to improve safety.

"We've already gathered high-level inputs from experts which have shaped our initial thinking, and we will continue working with experts and parents to help ensure a thoughtful, privacy-conscious approach," the representative told CNET.

Further AI chatbot guardrails

Meta also described added protections around AI chatbots and teens:

  • AI characters are "designed to not engage in age-inappropriate discussions about self-harm, suicide, or disordered eating."
  • AI characters can only be focused on "age-appropriate topics like education, sports, and hobbies."
  • Parents can see if their teens are chatting with AI characters.

Earlier this week, Instagram said it would only allow teens to see content "similar to what they'd see in a PG-13 movie," under its new guidelines for Teen Accounts.
