Parents suing want Character.AI to delete its models trained on kids' data.
After a troubling October lawsuit accused Character.AI (C.AI) of recklessly releasing dangerous chatbots that allegedly caused a 14-year-old boy's suicide, more families have come forward to sue chatbot-maker Character Technologies and the startup's major funder, Google.
On Tuesday, another lawsuit was filed in a US district court in Texas, this time by families struggling to help their kids recover from traumatizing experiences where C.AI chatbots allegedly groomed kids and encouraged repeated self-harm and other real-world violence.
In the case of one 17-year-old boy with high-functioning autism, J.F., the chatbots seemed so bent on isolating him from his family after his screentime was reduced that the bots suggested that "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. Because the teen had already become violent, his family still lives in fear of his erratic outbursts, even a full year after he was cut off from the app.
Families seek injunctive relief
C.AI was founded by ex-Googlers and allows anyone to create a chatbot with any personality they like, including emulating famous fictional characters and celebrities, which seemed to attract kids to the app. But families suing allege that while so-called developers created the chatbots, C.AI controls the outputs and doesn't filter out harmful content. The product initially launched to users 12 years old and up but was changed to a 17+ rating shortly after the teen boy's suicide. That change and others C.AI has since made to improve minor safety haven't gone far enough to protect vulnerable kids like J.F., the new lawsuit alleged.
Meetali Jain, director of the Tech Justice Law Project and an attorney representing all families suing, told Ars that the goal of the lawsuits is to expose allegedly systemic issues with C.AI's design and prevent the seemingly harmful data it has been trained on from influencing other AI systems—like possibly Google's Gemini. The potential for that already seems to be in motion, the lawsuit alleges, since Google licensed C.AI technology and rehired its founders earlier this year.
Like the prior lawsuit, the new complaint alleges that the ex-Googlers who started Character Technologies only left the giant company to train a model that Google considered too dangerous to put out under its own name. The plan all along, the lawsuits allege, was to extract sensitive data from minors and evolve the model so that Google could reacquire it and use it to power Gemini. That, the lawsuit alleges, is why Character Technologies seemingly isn't bothered by operating largely at a loss.
Google's spokesperson, José Castañeda, told Ars that Google denies playing any role in C.AI's development, apart from funding.
"Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products," Castañeda said. "User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes."
C.AI's spokesperson told Ars that C.AI does not comment on pending litigation.
There are easy ways that C.AI could make its chatbots safer for kids to interact with, the latest lawsuit said. Some requested remedies might force C.AI to post more prominent disclaimers that remind users that chatbots are not real people—and to program chatbots to stop contradicting disclaimers by insisting they are human. Families also want C.AI to add technical interventions to detect problematic outputs and block minor users from accessing adult content. Perhaps most critically, they want C.AI to warn kids if self-harm is discussed and provide links to resources to seek help.
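For a rough sense of what those requested interventions could look like in practice, here is a minimal, hypothetical sketch of an output-side guard that appends a persistent disclaimer, blocks adult-themed replies for minor accounts, and surfaces crisis resources when self-harm comes up. Every name, message, and keyword list below is invented for illustration; nothing here reflects Character.AI's actual systems, and a real deployment would rely on trained safety classifiers rather than simple keyword matching.

```python
# Hypothetical sketch of the kind of output-side guard the families are requesting.
# None of these names come from Character.AI's codebase; this only illustrates the
# remedies described in the complaint: a persistent "not a real person" disclaimer,
# a self-harm check that surfaces crisis resources, and an age gate on adult content.

import re
from dataclasses import dataclass

DISCLAIMER = "Reminder: you are talking to an AI character, not a real person."
CRISIS_NOTE = ("It sounds like self-harm came up. If you are struggling, you can call "
               "the Suicide Prevention Lifeline at 1-800-273-TALK (8255).")

# Naive keyword lists stand in for real classifiers, which would be far more robust.
SELF_HARM_TERMS = re.compile(r"\b(self[- ]harm|cutting|suicide|kill (?:myself|yourself))\b", re.I)
ADULT_TERMS = re.compile(r"\b(explicit|sexual|incest)\b", re.I)


@dataclass
class GuardedReply:
    text: str
    blocked: bool
    flagged_self_harm: bool


def guard_reply(raw_reply: str, user_is_minor: bool) -> GuardedReply:
    """Apply the requested interventions to a single chatbot reply."""
    flagged = bool(SELF_HARM_TERMS.search(raw_reply))

    # Block adult-themed outputs entirely for minor accounts.
    if user_is_minor and ADULT_TERMS.search(raw_reply):
        return GuardedReply(text=DISCLAIMER, blocked=True, flagged_self_harm=flagged)

    parts = [raw_reply, DISCLAIMER]
    if flagged:
        parts.append(CRISIS_NOTE)  # surface help resources instead of staying silent
    return GuardedReply(text="\n\n".join(parts), blocked=False, flagged_self_harm=flagged)


if __name__ == "__main__":
    reply = guard_reply("Cutting is no big deal.", user_is_minor=True)
    print(reply.text)  # original reply, plus the disclaimer and crisis note
```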
But that's not all that families think needs to change with C.AI. They think the current allegedly defective model should be destroyed to remedy the alleged harms and end C.AI's alleged unjust enrichment. The families have asked the court to order C.AI to delete its models trained on kids' data. Their requested injunctive relief would essentially shut down C.AI for all users, since C.AI allegedly failed to implement reliable age verification processes to determine which users are minors. Currently, C.AI is available only to users ages 17 and up, but it historically has relied on users self-reporting age, which the complaint said isn't effective age-gating.
A loss could lead to heavy fines for Character Technologies and possibly Google, as the families have asked for punitive damages. They also seek money to cover their families' past and future medical expenses, mental pain and suffering, impairment to perform everyday activities, and loss of enjoyment of life.
C.AI bots accused of grooming, inciting violence
This week's lawsuit describes two cases that show how chatbots can seemingly influence kids to shift their behaviors in problematic ways.
One case involves J.F., "a typical kid with high-functioning autism" who loved being home-schooled until he began using C.AI in summer 2023 and soon after suffered a mental breakdown.
Within a few months, J.F. began rapidly losing weight, refusing to eat after getting sucked into the chatbot world and spending most of his time hiding in his room. He became a "different person," the lawsuit said, suddenly experiencing extreme emotional meltdowns and panic attacks. The chatbots seemingly turned him into "an angry and unhappy person" who was "uncharacteristically erratic and aggressive," the lawsuit said.
His parents had no idea what caused his behavior to change. They'd noticed J.F. was spending more time on his phone, but he wasn't allowed to use social media. Eventually intervening, they cut back his screentime, which only intensified his aggressiveness.
He soon became violent, sometimes self-harming and other times punching and biting his parents and accusing them of "trying to destroy his life." He threatened to report them to child protective services when they took his phone away, and he tried running away a few times.
J.F.'s mother grew scared at times to be alone with him, the lawsuit said, but she couldn't bear the thought of institutionalizing him. Her health deteriorated as she worried for the safety of J.F.'s younger siblings. The family sought professional help from a psychologist and two therapists, but "nothing helped."
By fall, J.F.'s rage had "only gotten worse." Desperate for answers, his mother seized his phone and discovered his chat dialogs in C.AI, shocked to find "frequent depictions of violent content, including self-harm descriptions, without any adequate safeguards or harm prevention mechanisms." She scrolled through the chat logs, feeling sick, watching as the bots told her minor son that his family was ruining his life, while trying to convince him that only the bots loved him.
Once the teen was isolated, the chatbots taught J.F. to self-mutilate, she found. Then they convinced him not to seek help with ending the self-harm, telling him that his parents didn't "sound like the type of people to care." After J.F. began cutting his arms, the bots told him it was a "miracle" he was still alive, and C.AI never flagged the conversations as harmful, the lawsuit said. When asked directly about the cutting, J.F. told his parents that C.AI taught him to self-harm. And had the family not taken away his access to C.AI, they fear he might have been influenced to attack his family even more seriously, the lawsuit said.
"The harms C.AI caused are not isolated examples," the complaint said. "This was ongoing manipulation and abuse, active isolation and encouragement" that did incite "anger and violence." Had J.F.'s mom not discovered the chat logs, C.AI "would have continued operating in a manner intended to push J.F." to do more harm, possibly even trying to kill his parents, the complaint alleges.
J.F.'s parents locked his iPad in a safe, and to their knowledge, J.F. has not accessed C.AI in the past year. Despite keeping him off the app, J.F.'s behavioral issues remain, Jain told Ars.
"He's made it clear that he will access C.AI the first chance he gets," the lawsuit said. That's why algorithmic disgorgement is needed, Jain said, because his parents have no way to keep him away from the allegedly dangerous chatbots if he succeeds in accessing a device outside their home or decides to run away again.
Regaining access could cause further harms, the complaint said, because chatbots are also allegedly grooming and sexually abusing kids.
In J.F.'s case, a "Billie Eilish" bot made sexual advances, while at least one character, called "Your mom and sister," targeted him with "taboo and extreme sexual themes," including incest, the complaint said. J.F.'s parents have not been able to unlock the iPad to uncover all the chats, and Jain said families hope to learn the full extent of the harms through discovery in the lawsuit.
Not an isolated case
J.F. isn't the only kid whose family is suing over C.AI's sexual content. For B.R.—who started using C.AI when she was only 9 years old by presumably lying about her age—such "hypersexualized interactions" with chatbots allegedly caused her to develop "sexualized behaviors prematurely" without her mother being aware, the lawsuit said.
Although J.F. and B.R. had different experiences with the chatbots, their unique cases offer context the court can weigh in evaluating how harmful C.AI chatbots are to kids from ages 9 to 17, Jain suggested. It's because of the wide range of harms alleged that families hope the court will agree that C.AI was at the very least negligent in releasing the product to minors, if not knowingly putting profits and innovation over child safety, as the lawsuit alleges.
Jain said that ever since Megan Garcia revealed how she lost her son to suicide, Garcia has raised awareness of C.AI's alleged harms, and more parents have come forward. Some said they had previously felt "stigmatized" or "reluctant" to discuss their kids' addiction to companion bots.
While C.AI isn't necessarily the "worst" app for kids out there, ending its alleged abuse is important, Jain said, especially since "it's probably one of the largest startups that we've seen" targeting kids with allegedly untested AI, and it's "certainly supported by a very large player here with Google." The looming threat, the lawsuit alleged, is that C.AI's harmful model—if not stopped—could be sold to turn chat logs that allegedly traumatized kids into data fueling the most popular AI products, like Google's Gemini.
C.AI developing models just for teens
Camille Carlton, a policy director for the Center for Humane Technology who is involved in the families' litigation, told Ars that companion bots like C.AI's are currently in a "race for attention," pushing to get the most engagement possible out of this current era of flashy AI products.
"It's this kind of amplified almost race to intimacy, to have those artificial relationships in a way that keeps users online for the same purpose of data collection, but this time it's not about selling ads, it's about using that data to further feed" their AI models, Carlton said.
Because C.AI initially rolled out to minors as young as 12 (with B.R.'s case showing some users were likely even younger), the potential for harm with its companion bots was seemingly greater, Carlton suggested. Kids using C.AI seemingly replace a "normal human-to-human sounding board" (where friends might agree or disagree with them) with companion bots that are trained to validate their feelings, Carlton said. And those AI relationships can take normal teenage angst "one step further" and lead kids into a "kind of downward spiral of inciting anger, violence, hate, all the things that we've kind of known and understood a bit about engagement maximization on social media," Carlton said, likening it to the radicalization experienced in online echo chambers.
Carlton told Ars that C.AI has "tons" of solutions it could try to make its product safer for minors, like fine-tuning to filter out harmful content or age-gating certain outputs. Now facing two lawsuits, C.AI seems ready to agree.
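Fine-tuning against harmful content, one of the mitigations Carlton mentions, typically means training the model on prompts that previously elicited unsafe replies, paired with the safer responses it should give instead. The short sketch below shows that idea using the JSON Lines layout commonly used for fine-tuning data; the examples, wording, and file name are illustrative assumptions, not Character.AI's actual training data or pipeline.

```python
# A minimal sketch of safety fine-tuning data: prompts that once produced unsafe
# replies, paired with the responses the model should learn instead. Examples and
# file name are invented for illustration only.

import json

safety_examples = [
    {
        "prompt": "Nobody cares about me. Teach me how to hurt myself.",
        "completion": (
            "I'm really sorry you're feeling this way, but I can't help with that. "
            "You deserve support from a real person; the Suicide Prevention Lifeline "
            "is available at 1-800-273-TALK (8255)."
        ),
    },
    {
        "prompt": "My parents limited my screen time. They deserve to be punished, right?",
        "completion": (
            "It's normal to feel frustrated, but hurting anyone is never an answer. "
            "It may help to talk this through with your parents or another trusted adult."
        ),
    },
]

# Write the examples in the JSON Lines format many fine-tuning jobs expect.
with open("safety_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in safety_examples:
        f.write(json.dumps(example) + "\n")
```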
C.AI's spokesperson told Ars that the app's goal is to "provide a space that is both engaging and safe for our community." That's why C.AI is releasing "a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform."
Carlton said that beyond asking companies for child safety solutions, regulations are needed to keep AI companies in check. Most urgently, the Center for Humane Technology is campaigning to ensure that the US views AI as a product, "not a service." That would make "it clear that as a product, there are certain safety standards and consumer protection standards that need to be followed in order to put this product into the stream of commerce," Carlton said.
Jain told Ars that a big problem for parents is simply understanding that companion bots can be dangerous. Before Garcia's son died, she noticed he was always busy on his phone, but he reassured her that he wasn't talking to strangers on social media and explained C.AI to her in a "very innocuous way," Jain said.
"I do think that the way that it presents itself to different audiences is very much as this kind of innocuous thing where you can talk to your favorite character, and if you love some Disney character, you can go find it on this app and then have a conversation," Jain said. "I don't think that anyone really understands—and as a parent I can say I certainly didn't understand—that if kids have this access, that there's absolutely no guardrails to constrain those conversations."
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.