Over the past decade, as mass shootings have become depressingly common, school districts have increasingly invested in surveillance systems designed to monitor students’ online activity. Recently, one of those systems pinged after a teen in Florida asked ChatGPT for advice about how to kill his friend, local police said.
The episode occurred in DeLand, Florida, where an unnamed 13-year-old student at the city’s Southwestern Middle School is alleged to have asked OpenAI’s chatbot about “how to kill my friend in the middle of class.” The question immediately triggered an alert within a system monitoring school-issued computers. That system is run by Gaggle, a company that provides safety services to school districts across the country. Soon, police were interviewing the teen, reports local NBC affiliate WFLA.
The student told cops that he was “just trolling” a friend who had “annoyed him,” the local outlet reports. Cops, of course, were less than enthused by the little troll. “Another ‘joke’ that created an emergency on campus,” the Volusia County Sheriff’s Office said. “Parents, please talk to your kids so they don’t make the same mistake.” The student was ultimately arrested and booked at the county jail, the outlet says. It’s unclear what charges he faces. Gizmodo reached out to the sheriff’s office for more information.
On its website, Gaggle describes itself as a safety solution for K-12 students, and it offers a variety of services. In a blog post, Gaggle describes how it uses web monitoring, which filters for various keywords (presumably “kill” is one of them) to gain “visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms.” The company says that its system is designed to flag “concerning behavior tied to self-harm, violence, bullying, and more, and provides context with screen captures.”
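Gaggle hasn’t published how its filter actually works, but the general technique it describes, scanning text for watchlist terms and attaching surrounding context, is simple to sketch. Below is a minimal, hypothetical Python illustration: the `WATCHLIST` terms, the `Alert` structure, and the `scan_text` function are all invented for this example and have no connection to Gaggle’s real code or keyword list.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical watchlist; a real system's terms and matching logic
# would be far more sophisticated (and are not public).
WATCHLIST = ["kill", "suicide", "hurt myself", "shoot"]

@dataclass
class Alert:
    keyword: str
    excerpt: str
    timestamp: str

def scan_text(text: str, context_chars: int = 40) -> list[Alert]:
    """Flag watchlist terms in text, keeping surrounding characters
    as context (loosely analogous to the screen captures a real
    monitoring system might attach to an alert)."""
    alerts = []
    for term in WATCHLIST:
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            start = max(0, match.start() - context_chars)
            end = min(len(text), match.end() + context_chars)
            alerts.append(Alert(
                keyword=term,
                excerpt=text[start:end],
                timestamp=datetime.now(timezone.utc).isoformat(),
            ))
    return alerts

if __name__ == "__main__":
    for alert in scan_text("how to kill my friend in the middle of class"):
        print(f"[ALERT] '{alert.keyword}' at {alert.timestamp}: ...{alert.excerpt}...")
```

Even this toy version shows why false alarms are endemic to the approach: a keyword match carries no intent, so “kill” in a joke, a song lyric, or a history essay fires the same alert as a genuine threat.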
Gaggle clearly prioritizes student safety over all other considerations. On its website, the company dispenses with the subject of student privacy thusly: “Most educators and attorneys will tell you that when your child is using school-provided technology, there should be no expectation of privacy. In fact, your child’s school is legally required by federal law (Children’s Internet Protection Act) to protect children from accessing obscene or harmful content over the internet.”
Naturally, Gaggle has been criticized by privacy rights activists. “It has routinized law enforcement access and presence in students’ lives, including in their home,” Elizabeth Laird, a director at the Center for Democracy and Technology, recently told the Associated Press. The AP also reports that many of the safety alerts Gaggle issues turn out to be false alarms.
Increasingly, chatbots like ChatGPT are showing up in criminal cases and mental health incidents. Episodes of so-called “AI psychosis,” in which chatbots appear to exacerbate the delusions of people already struggling with mental health problems, have reportedly been on the rise. Several recent suicides have also been blamed on ChatGPT. Gizmodo reached out to OpenAI for comment.