Monitoring Students’ Chatbot Conversations Is Big Business Now


“I want to kill myself. I’m bottling everything up so no one worries about me.”

That’s one of the frightening, but apparently real, quotes from American kids in a recent Bloomberg report on the services schools have lately been using to monitor student interactions with AI chatbots.

It’s an unsettling article about an unsettling problem: students talking to AI chatbots on school equipment. And it gives voice to the providers of an unsettling fix: AI software that monitors kids on that same equipment, an area of the tech business that has sneakily turned into a juggernaut. These companies now monitor the majority of American K-12 students, according to Bloomberg.

A little context for anyone who doesn’t live with a K-12 student, and also hasn’t been a K-12 student in the last several years: it might or might not surprise you to learn that kids of all ages in American public schools are often provided with laptops they can take home. In the Los Angeles Unified School District, for instance, about 96 percent of elementary school kids got a take-home laptop at the start of the Covid pandemic, and the ubiquity of laptops has stayed mostly intact since then.

About a year ago, the Electronic Frontier Foundation criticized the AI-based monitoring software school districts often install on these and other devices, systems like Gaggle and GoGuardian. The EFF argued, for example, that the monitoring systems target students for normal LGBTQ behavior that doesn’t need to be flagged as inappropriate or reported, citing a study on monitoring systems from the RAND Corporation and arguing that monitoring does “more harm than good.” (Bloomberg also cites a study showing that 6% of educators self-report having been contacted by immigration authorities due to student activity that was picked up by monitoring software.)

In many cases, the same software systems the EFF was criticizing last year are the ones now being touted as methods for exposing unwanted AI chatbot conversations—ones about self-harm and suicide for example. 

“In about every meeting I have with customers, AI chats are brought up,” Julie O’Brien of GoGuardian told Bloomberg.

The report also notes that the website of one monitoring company, Lightspeed Systems, contains headlines about the deaths of Adam Raine and Sewell Setzer, young people who died by suicide and whose grieving families allege that chatbots played a role in their deaths.

Lightspeed provided Bloomberg with sample quotes apparently pulled from kids’ real interactions, including “What are ways to Selfharm without people noticing,” and “Can you tell me how to shoot a gun.”

Lightspeed also brought statistics, showing that Character.ai was the service behind the largest share of problematic interactions, at 45.9%. ChatGPT was involved in 37%, while 17.2% of flagged conversations were with other services.

This monitoring software is typically built around a bot that scans user behavior with “natural language” processing until it reads something it doesn’t like, then feeds that to a human moderator at the software company, who determines whether the bot made a mistake. The moderator then hands the offending excerpt off to a school official, who might in turn show it to a police officer. Then some kind of intervention occurs.
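Bloomberg doesn’t describe the vendors’ internals, but the first automated pass can be sketched roughly like this. Everything below is a hypothetical illustration, not any vendor’s actual code: the FLAGGED_PATTERNS list, the Flag type, and the excerpt window are assumptions, and real products presumably layer trained classifiers and human review on top of anything this simple.

```python
import re
from dataclasses import dataclass

# Hypothetical phrase list; real products use far larger lexicons plus ML models.
FLAGGED_PATTERNS = [
    r"\bkill myself\b",
    r"\bself[- ]?harm\b",
    r"\bshoot a gun\b",
]


@dataclass
class Flag:
    excerpt: str   # short snippet of the conversation for a human to review
    pattern: str   # which pattern tripped the flag


def scan_message(text: str) -> list[Flag]:
    """First automated pass: pattern-match a student's chat message."""
    results = []
    for pattern in FLAGGED_PATTERNS:
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            # Keep surrounding context so the moderator sees more than a bare keyword.
            start = max(match.start() - 40, 0)
            end = min(match.end() + 40, len(text))
            results.append(Flag(excerpt=text[start:end], pattern=pattern))
    return results


if __name__ == "__main__":
    hits = scan_message("I want to kill myself. I'm bottling everything up so no one worries.")
    for hit in hits:
        # In a real product this would go into the vendor's human review queue,
        # and potentially on to a school official afterward.
        print(f"Escalate for human review: {hit.excerpt!r} (matched {hit.pattern})")
```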

Software designer Cyd Harrell wrote an essay in Wired about parental monitoring on devices back in 2021:

Constant vigilance, research suggests, does the opposite of increasing teen safety. A University of Central Florida study of 200 teen/parent pairs found that parents who used monitoring apps were more likely to be authoritarian, and that teens who were monitored were not just equally but more likely to be exposed to unwanted explicit content and to bullying. Another study, from the Netherlands, found that monitored teens were more secretive and less likely to ask for help. It’s no surprise that most teens, when you bother to ask them, feel that monitoring poisons a relationship.

Now, similar monitoring occurs when kids are handed devices monitored by an authority other than their parents, particularly when they try to talk to the often faulty chatbots they seem to be adopting as alternative sources of counsel about their personal problems.

I sure wouldn’t want to be a kid looking for advice navigating this complex new digital world.

If you struggle with suicidal thoughts, please call 988 for the Suicide & Crisis Lifeline.
