If you've heard of Character.Ai, it's probably because of a recent federal lawsuit filed by Florida mom Megan Garcia alleging that the AI chatbot platform is responsible for her 14-year-old son's suicide. On Thursday, the company announced an updated set of teen safety guidelines and features.
Character.Ai is an online platform that lets its users create and talk with different AI chatbots. There are chatbots meant to act as tutors, trip planners and therapists. Others mimic pop culture figures, from superheroes to characters from Game of Thrones or Grey's Anatomy. On Monday, Dec. 9, two Texas families filed a lawsuit similar to Garcia's against Character.Ai and Google, one of the AI platform's early investors, alleging negligence and deceptive trade practices that make Character.Ai "a defective and deadly product."
The new safety measures are widespread, "across nearly every aspect of our platform," the company said in a statement. Character.Ai says it has created separate models and experiences for teens and adults on the platform, with "more conservative limits on responses" around romantic and sexual content for teens. It's worth noting, though, that while users do have to submit a birthdate when signing up, Character.Ai does not require any additional age verification.
Character.Ai is also introducing parental controls, screen time notifications and stronger disclaimers reminding users that chatbots aren't real humans and, in the case of chatbots posing as therapists, aren't professionals equipped to provide advice. The company said that "in certain cases," when it detects users referencing self-harm, it will direct them to the National Suicide Prevention Lifeline. Parental controls will be available sometime next year, and the new disclaimers appear to be rolling out now.
Many online platforms and services have been beefing up their child and teen protections. Roblox, a popular gaming service aimed at kids, introduced a series of age gates and screen time limits after law enforcement and news reports alleged predators used the service to target children. Instagram is currently in the process of switching all accounts belonging to teens 17 and younger to new teen accounts, which automatically limit who's allowed to message them and have stricter content guidelines. And while US Surgeon General Dr. Vivek Murthy has advocated for warning labels outlining the potential dangers of social media for kids and teens, AI chatbot platforms like Character.Ai present a new potential for harm.