ChatGPT cheating is endemic in schools, and no one knows what to do



Why it matters: Remember when cheating in school meant sneaking in a crib sheet or glancing at a neighbor's test? Teachers today likely look back on those days with nostalgia. Now, they face a more formidable challenge in maintaining academic honesty: generative AI. Unfortunately for educators, a foolproof solution is nowhere in sight.

Half of teachers report that generative AI has made them more distrustful of students' original work. And they have reason to be suspicious if anecdotal evidence is to be believed. Reports of cheating in colleges and high schools since the advent of generative AI have skyrocketed, causing despair and frustration in faculty rooms.

Quantifying the extent of this issue has not been easy. Turnitin, a plagiarism detection company, found that AI use was detected in only 10% of writing assignments reviewed over the past year, with just 3% being mostly AI-generated.

However, a Stanford University survey suggests that 60-70% of high school students have admitted to cheating since the introduction of AI tools like ChatGPT.

Cheating, of course, is not a novel issue. Studies have long shown that more than half of high school and college students engage in some form of academic dishonesty. The International Center for Academic Integrity reported that nearly one-third of undergraduates admitted to cheating on exams as of early 2020.

Meanwhile, there is an arms race developing between AI-generated content and detection technologies, and for the moment, the former is winning.

For instance, OpenAI has experimented with embedding digital watermarks in its output to identify AI-generated text. However, these watermarks can be tampered with, and a detector can only recognize watermarks produced by the specific AI system it was built for. This may explain why OpenAI has not released its watermarking feature: it could drive users to services without such markers.

Other innovative approaches have been attempted. Researchers at Georgia Tech developed a system to compare students' responses to essay questions before and after the advent of ChatGPT. PowerNotes, a company integrating OpenAI services into Google Docs, allows instructors to track AI-generated changes in documents. So far, however, all of these efforts have proven only marginally effective.

In response to these challenges, there is a growing recognition that educational institutions must adapt their teaching and assessment methods. John Warner, a former college writing instructor and author of the forthcoming book 'More Than Words: How to Think About Writing in the Age of AI,' suggests that faculty should update their teaching approaches.

He argues that the ease with which AI generates credible college papers is partly due to the rigid, algorithmic format of traditional assignments. Warner proposes that teachers reduce the scope of assignments, focusing on shorter, more specific prompts linked to useful writing concepts. For instance, students could be asked to craft a vivid paragraph, make a clear observation about their surroundings, or write a few sentences that turn a personal experience into a broader concept.

Granted, generative AI could also complete these assignments, but by making them more relevant to students' lives, teachers may encourage students to do the work on their own. It's worth a try, at least, because right now, schools are on the losing end of their battle with AI.
