Why AI chatbots make bad teachers - and how teachers can exploit that weakness

SEAN GLADWELL/Getty Images

ZDNET's key takeaways

  • ChatGPT Study Mode offers little to distinguish it from normal ChatGPT.
  • A Study Mode user must work hard to make the lesson interesting and rewarding.
  • Developers and educators should push AI to stimulate students' curiosity.

In the nearly three years since OpenAI's ChatGPT burst onto the scene, the use of artificial intelligence has invaded not only daily work and leisure but also the field of education.

The Pew Research Center reports that a quarter of US adults consult a bot to learn something, up from 8% in 2023 -- almost as many as use them for work. Interestingly, that share rises with the user's level of education. 

That surge in usage has put teachers and professors in a quandary over students' propensity to use bots to get answers to questions rather than think through problems deeply. Pew reports that a large number of teachers fear a crisis in education.

Also: ChatGPT's study mode could be your next tutor - and it's free

Taking a somewhat more nuanced view, the scholarly journal Daedalus concluded last year, in its issue on trends in education, that "We cannot predict how education will be affected in the long term by large language models and other AI-supported tools, but they hold the possibility to both promote and distort current approaches to teaching and learning."

So, what are educators -- and AI developers -- to do if the world has found a way around traditional education?

OpenAI's answer to the question is a new feature introduced to ChatGPT last week, Study Mode, which my colleague Sabrina Ortiz explored. As Sabrina relates, Study Mode will respond to a prompt with a plan of study and ask questions about goals. 

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Also: How ChatGPT actually works (and why it's been so game-changing)

How I used Study Mode to try to learn a language

I tested Study Mode as a way to learn a new language. I chose that activity because I've already spent a year using ChatGPT to try to study languages, which gave me a basis for comparison. 

In my experience, Study Mode adds very little to my efforts to learn a language. The differences from plain old ChatGPT are minor, and I had to work hard to steer Study Mode in the right direction. 

The lesson here is clear. As with all language models, you only get out of it what you put into it. You still have to craft good prompts or else you're left with something very general and not very interesting, in my opinion. That's true of Study Mode just as it is of other large language model-based programs.

In other words, education in Study Mode results from the student's effort rather than the teacher's brilliance.

Also: How to turn AI into your own research assistant with this free Google tool

As a basic comparison, I asked Study Mode to help me learn to read and write in Japanese, a language I do not know. 

My prompt was: "I'd like to learn to read and write Japanese even though I'm an absolute beginner!" 

The results in Study Mode were nearly identical to those of ordinary ChatGPT. Study Mode began with a couple of questions about how I would like to proceed, while regular ChatGPT simply responded with its proposed lesson plan. In the accompanying screenshot, Study Mode is on the left, and ordinary ChatGPT is on the right.

[Screenshot: Study Mode (left) and ordinary ChatGPT (right) responding to the same beginner prompt]
Screenshot from ChatGPT by Tiernan Ray/ZDNET

While it's not a bad idea to ask how I would like to proceed, the question is rather pointless for an absolute beginner who knows nothing about the subject yet. Deciding how to proceed is, after all, sort of the point of the teacher role. 

In both cases, ChatGPT suggested that we proceed by learning the basic characters of the native Japanese phonetic syllabary, the hiragana. We proceeded row by row, with me trying to repeat back the hiragana that ChatGPT gave me.

At one point, it became clear to me that simply memorizing hiragana one by one was not going to work. After about half an hour, I refused ChatGPT's suggestion to continue in the same vein and instead asked the bot to give me many examples of real words using the characters I had already learned. That started to cement my knowledge of the characters. 
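
For readers who like to see the idea concretely, here is a minimal sketch of the kind of drill I was asking the bot to produce: take a small word list and keep only the words spelled entirely with kana already covered. The words and glosses below are real Japanese, but the list itself is my own invented illustration, not anything ChatGPT generated.

# A toy drill: keep only words spelled entirely with hiragana already learned.
# The word list is illustrative, not output from ChatGPT.
learned = set("あいうえおかきくけこ")  # the first two rows of the hiragana chart

words = {
    "あい": "love",
    "いえ": "house",
    "うえ": "above",
    "かお": "face",
    "えき": "station",
    "あき": "autumn",
    "こえ": "voice",
    "さけ": "sake",  # uses さ, which is not yet learned, so it gets filtered out
}

drill = {w: gloss for w, gloss in words.items() if all(ch in learned for ch in w)}
for w, gloss in drill.items():
    print(f"{w} = {gloss}")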

Also: I used ChatGPT's Study Mode to tutor me for free - and you can too

None of these rote exercises stimulated my curiosity very much. To make the session more interesting for myself, I prompted Study Mode, "How many hiragana are there altogether?" 

That question reflected a desire to understand the larger scope of the subject matter, and the diversion happened only because I asked. ChatGPT's response was a nice explanation of the total number of hiragana -- an interesting diversion I would never have gotten if I hadn't asked. 

That's exactly the point. Without a suggestion from me, the bot didn't have great ideas for how to move forward. As Sabrina points out in her introductory article, Study Mode relies a lot on the "Socratic method" of question and answer. However, in the realm of AI, the enterprising user often has more interesting questions than the bot. 

Also: GPT-5 bombed my coding tests, but redeemed itself with code analysis

Why LLMs aren't very imaginative teachers

That shouldn't be surprising. ChatGPT in Study Mode has been shaped to conform to the most common approaches to a subject. All language models tend to stay within what is likely, or highly probable, from moment to moment, which may be appropriate for reviewing material before a test but is not very stimulating for someone trying to learn something new.
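
To make that concrete, here is a toy sketch of why output gravitates toward the most probable continuation. The probabilities below are invented for illustration and have nothing to do with how OpenAI actually configures Study Mode; the point is only that greedy decoding, and even low-temperature sampling, keeps returning the same safe choice.

import math, random

def sample(probs, temperature=1.0):
    # Rescale the distribution by temperature, then draw one continuation at random.
    weights = {tok: math.exp(math.log(p) / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok

# Invented next-step distribution for a tutoring bot; not real model output.
probs = {
    "Let's practice the next row of characters.": 0.70,
    "Let's review what you just learned.": 0.25,
    "Why do you think the script is built this way?": 0.05,
}

print(max(probs, key=probs.get))                            # greedy: always the safest step
print([sample(probs, temperature=0.7) for _ in range(5)])   # low temperature: nearly always the same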

That is one of the reasons that ChatGPT and other bots are regularly acing the kinds of standardized tests where US students increasingly fail: The program has mastered the routine, the regurgitation of rote information.

It's pretty clear that the bot lacks a higher-level understanding of what it means to teach someone -- namely, what educators call a curriculum.

A curriculum is a high-level understanding of how students learn and how to move through material -- documents, examples, etc. -- in a way that will not simply give answers but rather evoke the student's own ability to ask questions. 

Also: Has ChatGPT rendered the US's education report card irrelevant?

In a good education, after all, one ends up with more questions than answers. 

Of course, we bot users aren't experts in curriculum, either. That's why we mostly offer the same prompts: "Explain to me…", "Tell me the reason why…", and "Help me learn X." 

As users, we're stuck when we don't know what to ask next. 

In the past year of on-and-off study, I haven't stuck with ChatGPT as regularly as I would need to if I really wanted to learn a language. As the novelty wore off, my resolve waned.

Let's change how we think of bots for education

There's a clear message for OpenAI and other AI developers here. Both Study Mode and normal ChatGPT are shaped too heavily toward producing a kind of common ground in the typical exchange, with no real sense of how to bring a student to the point of asking questions that open up their desire to learn more. 

Also: How AI-enabled autonomous business will change the way you work forever

There's little innovation here, just a lot of rote lesson plans. 

There's also a message for stressed-out teachers and professors. It's natural for people to reach out for answers, and students are certainly going to go to bots for them. So the right approach is probably to help students find ways to draw more questions out of the bot, rather than playing cop and trying to prevent them from using bots at all. 

Why not flip things around? Why not help the students push the bot to the point where an entire topic has become sufficiently complex that the bot returns with more and more questions rather than simply providing answers like an authority? 

It might even be a group activity, where the teacher recedes from the leadership role and gets students to take the lead, to hash out how they are going to push the bot into all the regions of uncertainty. 

Also: College students can get Google's AI Pro plan for free now. Here's how

Think of that as The New Curriculum, or, with a nod to today's programming methods, "Vibe Pedagogy," a way to hack the bot to get to something more interesting, more stimulating. 

Education is only at risk from AI if the teacher is assumed to be the final authority. If, instead, education is viewed as finding out just how much there is to know, how many open questions there are in a field of study, then there is no danger in students using technology to open up more and more questions. 

Let's learn about the machine along the way

It's also a good way to integrate the study of a topic, such as the American Revolution, with the study of the machine itself, the bot. Students are likely to spend the rest of their adult lives interacting with the bot in some fashion. Get to know the bot -- its strengths and limitations.

As mentioned above, the bots have aced the standardized exams. There is no point in forcing humans to endure the dull regurgitation of facts. Much better would be to stimulate curiosity and question-asking, which humans still do better than bots. 

Also: Anthropic launches Claude for Education, an AI to help students think critically

Since I had already asked how many hiragana characters there were, I decided to follow up with a "meta" question, like a pesky child: Why are there only 76 hiragana? Study Mode gave me a nice answer that didn't really answer the question but was more of a description:

Japanese phonology is syllable-based and limited. It doesn't have all the consonant clusters or vowel changes found in English or many other languages, so it doesn't need hundreds of symbols -- just a neat, consistent set.

It is a sort of answer, but it doesn't really answer, leaving the question open and intriguing. 
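
For the curious, here is a small sketch of the "neat, consistent set" the bot is describing: the basic gojūon grid, built as consonant rows crossed with five vowels. It counts only the 46 basic characters; the voiced and combined forms are what push the total higher. The code is my own illustration, not something the bot produced.

# The basic hiragana chart (gojūon), in romanization: consonant rows crossed with vowels.
# The gaps in the y- and w-rows are real gaps in the modern script.
rows = [
    ["a", "i", "u", "e", "o"],
    ["ka", "ki", "ku", "ke", "ko"],
    ["sa", "shi", "su", "se", "so"],
    ["ta", "chi", "tsu", "te", "to"],
    ["na", "ni", "nu", "ne", "no"],
    ["ha", "hi", "fu", "he", "ho"],
    ["ma", "mi", "mu", "me", "mo"],
    ["ya", "yu", "yo"],
    ["ra", "ri", "ru", "re", "ro"],
    ["wa", "wo"],
]
basic = [kana for row in rows for kana in row] + ["n"]  # ん stands alone
print(len(basic))  # 46 basic characters; voiced marks and small combination kana raise the total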
