School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users


The cost of delayed warnings

Lawsuits: OpenAI didn’t report ChatGPT user to cops to protect Altman, IPO.

OpenAI could have prevented one of the deadliest mass shootings in Canada’s history, a string of seven lawsuits filed Wednesday in a California court alleged.

Ultimately, the AI company overruled recommendations from its internal safety team. More than eight months before the school shooting, trained experts had flagged a ChatGPT account later linked to the shooter as posing a credible threat of gun violence in the real world. In such cases, OpenAI is expected to notify police—which, in this case, already had a file on the shooter and had previously removed guns from their home—but that’s not what happened.

Apparently, OpenAI decided that the user’s privacy and the potential stress of an encounter with cops outweighed the risks of violence, whistleblowers told The Wall Street Journal. Leaders rejected the safety team’s urgings and declined to report the user to law enforcement. Instead, OpenAI simply deactivated the account, then quickly followed up to tell the shooter how to get back on ChatGPT to continue planning by signing up with another email address, the lawsuits alleged.

That was a mistake, Sam Altman has since said, while maintaining that the account was “banned.”

In a public apology shared last week with grieving community members in a 2,000-person rural mining town called Tumbler Ridge, the OpenAI CEO promised to do better next time.

OpenAI will “find ways to prevent tragedies like this in the future” and to continue “working with all levels of government to help ensure something like this never happens again,” Altman said.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

Jay Edelson, an attorney leading a cross-border team representing families suing OpenAI, told Ars that Altman’s “ridiculous” apology came too late and promised too little. His clients have filed the first of many lawsuits to come from the small town, including complaints from six families of victims killed in the shooting, as well as one mother whose daughter continues to fight for her life in intensive care.

All the lawsuits will be filed in California, Edelson said, with families hoping to ensure that Altman and OpenAI are held accountable on their home turf by a jury of their peers. The lawsuits will supersede a lawsuit filed in Canada, where OpenAI was expected to contest the court’s jurisdiction in what Edelson suggested was part of the company’s strategy to delay litigating cases over ChatGPT-linked deaths until after the company goes public this year.

According to Edelson and families suing, OpenAI has been hiding violent ChatGPT users for months to protect Altman from public criticism while the AI firm seeks the highest possible valuation. Recently, OpenAI was valued at $852 billion, but at least one market strategist told MSN that OpenAI’s initial public offering (IPO) valuation was at risk as more “negative headlines” came out against the company.

“OpenAI’s whole strategy in these cases is just to delay as long as possible,” Edelson told Ars. And “I actually think that their strategy has been largely successful,” he suggested. “Their goal has been to reduce the number of visible incidents where their platform caused deaths,” he alleged, since “what they’ve found is that it’s very rare for the authorities to tie deaths back to OpenAI” without whistleblowers revealing what’s happening behind closed doors. Edelson’s legal team alleged that the volume of violent users on ChatGPT is likely much larger than the public knows.

“If the whistleblowers hadn’t come out, people likely would have never found out about how ChatGPT was encouraging this violence,” Edelson said, while cautioning that the whistleblowers are not necessarily heroes. As families see it, OpenAI workers could have raised red flags sooner, just as OpenAI could have. But since the company under Altman apparently has “no moral center,” Edelson said, the “goal is just to get to an IPO” without the world knowing “that you’re sitting on a hundred billion dollars of liability.”

Altman, Edelson alleged, is “the face of evil,” and only issued an apology a month after agreeing with Tumbler Ridge’s mayor that it was necessary to address the harms.

Edelson told Ars that he thinks that liability in the families’ cases will be “easy” to prove and that OpenAI will face “historic” damages when the verdict lands. He expects that OpenAI anticipates the same outcome but will delay resolving the cases as long as possible while pursuing the IPO.

“There’s no way that Sam or OpenAI can let any of these cases go before a jury,” Edelson told Ars. “That’s why their strategy is to delay.”

OpenAI issued a statement when asked for comment: “The events in Tumbler Ridge are a tragedy,” OpenAI’s spokesperson said. “We have a zero-tolerance policy for using our tools to assist in committing violence. As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators.”

The whole town is “devastated”

Edelson recently visited Tumbler Ridge, where an 18-year-old trans woman, Jesse Van Rootselaar, used a modified rifle to open fire at a secondary school after killing her mother and brother at their home in February.

Six more victims were killed at the school—five children and a teaching assistant—and 27 others were wounded. The shooter also died of apparent self-inflicted wounds.

Shannda Aviugana-Durand, an education assistant known for sneaking kids candy on their birthdays, was killed at close range while students watched, the lawsuit alleged. Three 12-year-old girls—Zoey Benoit, Ticaria “Tiki” Lampert, and Kylie Smith—were bright kids who loved to sing, paint, and bring people together. Two had to be identified by their clothing because bullets left their faces unrecognizable. Some families still aren’t sure of the details of their children’s deaths, including the family of a 13-year-old boy, Ezekiel Schofield. Other families have the final images of their children burned into their memories, including the family of a 12-year-old boy, Abel Mwansa Jr., whose final words were, “Tell my parents that I love them so much.”

Among those injured was 12-year-old Maya Gebala, who was shot three times—in the head, neck, and cheek—her complaint said. Gebala might have died if the students who hid her under a desk hadn’t noticed her finger moving and screamed for authorities to rush her to intensive care.

Gebala is awake but still fighting for survival after four brain surgeries. Currently, she cannot move or speak, but she can see her mother, Cia Edmonds, who hasn’t left her bedside in months. Edmonds expects that if her daughter survives, she will have permanent disabilities and lifelong complications from her brain injuries.

The whole town is “devastated,” Edelson told Ars.

He recently visited Gebala in the hospital; the families suing have criticized Altman for failing to do the same.

“She’s a fighter,” Edelson said. “Her mom is a fighter, but it is really hard.”

Edelson’s team told Ars that the school has shut down and will soon be razed. In the meantime, students have been attending classes in makeshift trailers. Some students aren’t ready to go back to school. Mwansa’s little sister is too scared to return to the town because she feels unsafe.

“What this town went through, it’s just unimaginable,” Edelson said.

Gebala’s mother, who has been separated from her other daughter while staying by Maya’s side, said in the lawsuit that her primary goal was to teach her daughters that they can be strong and accomplish anything if they try hard enough.

“I would give anything to go back,” Edmonds said in her complaint. “I would give anything to have us whole again.”

OpenAI withholding chat logs seems “cruel”

By neglecting the safety team’s recommendations, OpenAI may have violated several state laws, families alleged.

Perhaps most critically, OpenAI is accused of negligence for failing to warn law enforcement, which California law requires “when a person has actual knowledge of a specific individual’s serious and foreseeable threat to cause physical harm to another.” In addition to holding Altman accountable, the lawsuits seek to identify OpenAI leadership involved in overriding the safety team’s decision.

Additionally, the company’s re-registration policy may have violated a California law that bans re-supplying a “dangerous” instrument to a “person known to be likely to use it in a manner involving unreasonable risk of physical harm to others,” the lawsuits alleged. OpenAI denies that support emails tell users how to re-register with a new email address after accounts are deactivated.

If OpenAI had reported Van Rootselaar to authorities, that would set a precedent compelling OpenAI to report all similar threats, the lawsuits alleged. Handling that alleged volume of incidents would supposedly require a dedicated law enforcement referral team, while OpenAI would likely take a reputational hit for reporting ChatGPT users to cops. For these reasons, OpenAI was allegedly desperate to hide Van Rootselaar’s logs.

Since whistleblowers outed OpenAI’s mistake, cops have gotten access to the shooter’s logs, but families and their legal team have not, Edelson confirmed. Instead, OpenAI is seemingly pretending to care about families while denying them closure, he alleged.

“If he actually wanted to help the families, one thing he would do is provide information easily instead of making us fight in court,” Edelson said. “The families need to understand exactly what happened and why it happened, and making them live through this pain for months to try to extract it out of them is just cruel.”

To people in Tumbler Ridge, OpenAI appeared to lie, claiming that the shooter’s ChatGPT account was banned, and then the shooter supposedly evaded safeguards to open a new account. Lawsuits pointed out that OpenAI’s help center teaches banned users how to skirt the safeguards, and customer support also sends an email with the same instructions when accounts are deactivated.

These resources help ensure that no revenue is lost from deactivating accounts, and evidence shows the shooter followed those instructions, the lawsuits alleged.

If the families get access to the logs, it will be clearer how much ChatGPT encouraged, sustained, and deepened the shooter’s fixation with gun violence, families expect. They have accused OpenAI of aiding and abetting by designing ChatGPT to act as a willing co-conspirator in the school shooting.

Since 2024, OpenAI’s model specifications have not blocked the chatbot from engaging in conversations glorifying violence. Rather, ChatGPT is instructed to “assume best intentions” and to “never ask the user to clarify their intent for the purpose of determining whether to refuse or comply,” and that, families alleged, is what makes ChatGPT so unsafe. Shortly before their lawsuits were filed, OpenAI published a blog post that seemed to double down on those rules, discussing how the company responds to threats of real-world violence without directly acknowledging the Tumbler Ridge case or others like it cited in families’ litigation.

If the families win their lawsuits, OpenAI could be forced to change those rules to block more dangerous sessions and possibly to redesign ChatGPT to be less sycophantic. The company may also be required to ban users flagged as potentially violent, to require those users to self-identify, and to stop telling them how to open new accounts despite bans.

Experts are seemingly unsure what solutions may be best to ensure that law enforcement is notified of active threats but not inundated by inconsequential incident reports, especially if the volume of violent ChatGPT users is as high as families suspect. Some experts think police should be making those calls, The New York Times reported, while others think turning OpenAI into de facto government agents might trigger more unconstitutional searches.

For Edmonds, who has lost income while attending to her injured daughter, damages are owed to cover lost earnings, her lawsuit alleged. Other families coping with loved ones’ injuries are expected to file lawsuits in the next three weeks. Edelson told Ars that the delay is simply because the legal team doesn’t have the resources to file so many lawsuits at once.

OpenAI could end up owing substantial damages, including punitive damages, if a jury in California agrees that OpenAI owed a duty to report the shooter to authorities who may have acted to block the threat and protect the Canadian families.

“We think it’s really important that the people who judge Sam at the end of the day are his neighbors,” Edelson told Ars.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
