Librarians Aren’t Hiding Secret Books From You That Only AI Knows About


Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini often hallucinate sources. But for the folks tasked with helping the public find books and journal articles, the fake AI bullshit is really taking its toll. Librarians sound absolutely exhausted by requests for titles that don’t exist, according to a new report from Scientific American.

The magazine spoke with Sarah Falls, chief of researcher engagement at the Library of Virginia, who estimates that about 15% of the emailed reference questions the library receives are generated by AI chatbots like ChatGPT. Those requests often ask about citations that don’t exist.

What’s more, Falls suggests that people often don’t believe librarians when they explain that a given record doesn’t exist, a trend that has been reported elsewhere, including by 404 Media. Many people really do believe their stupid chatbot over a human who specializes in finding reliable information day in and day out.

A recent notice from the International Committee of the Red Cross (ICRC), titled “Important notice: AI generated archival reference,” provides more evidence that librarians and archivists are just exhausted with it all.

“If a reference cannot be found, this does not mean that the ICRC is withholding information. Various situations may explain this, including incomplete citations, documents preserved in other institutions, or—increasingly—AI-generated hallucinations,” the organization said. “In such cases, you may need to look into the administrative history of the reference to determine whether it corresponds to a genuine archival source.”

This year has been filled with examples of fake books and journal articles created with AI. A freelance writer for the Chicago Sun-Times used AI to generate a summer reading list of 15 recommended books for the newspaper; ten of them didn’t exist. And when the first report from Health Secretary Robert F. Kennedy Jr.’s so-called Make America Healthy Again commission was released in May, reporters at NOTUS went through all of the citations and found that at least seven didn’t exist.

You can’t blame everything on AI, though. Papers have been retracted for fake citations since long before ChatGPT or any other chatbot came on the scene. Back in 2017, a professor at Middlesex University found at least 400 papers citing a non-existent research paper, a reference that was essentially filler text.

The citation:

Van der Geer, J., Hanraads, J.A.J., Lupton, R.A., 2010. The art of writing a scientific article. J. Sci. Commun. 163 (2), 51–59.

It’s gibberish, of course. The citation seems to have ended up in many lower-quality papers, likely out of laziness and sloppiness rather than an intent to deceive. But it’s a safe bet that the authors of those pre-AI papers would have been embarrassed to have it pointed out. The difference with AI tools is that too many people have come to believe chatbots are more trustworthy than humans.

As someone who gets lots of local history queries, can confirm there’s been a big increase in people starting their history research with GenAI/LLM (which just spews out fake facts and hallucinated rubbish) who then wonder why they can’t find anything at all to corroborate it.


— Huddersfield Exposed (@huddersfield.exposed) December 9, 2025 at 2:28 AM

Why might users trust their AI over humans? For one thing, part of the magic trick that AI pulls is speaking in an authoritative voice. Who are you going to believe, the chatbot you’re talking to all day or some random librarian on the phone? The other problem may be that people develop what they believe are reliable tricks for coaxing better answers out of AI.

Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompts will make sure their AI only gives the highest-quality output. If that actually worked, we imagine companies like Google and OpenAI would have already baked it into every prompt for you. And if it somehow does work, boy, have we got a lifehack for all the tech companies currently terrified of the AI bubble bursting.
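That’s part of the joke: baking such an instruction into every single request would be trivial for a provider. Here’s a minimal sketch, in Python with the OpenAI client library, of what that would look like; the model name and the “magic” instruction are illustrative assumptions, and nothing in it actually stops a model from hallucinating citations.

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment. The model name and the "magic"
# instruction below are illustrative; none of this actually prevents a
# model from making up sources.
from openai import OpenAI

client = OpenAI()

MAGIC_INSTRUCTION = "Don't hallucinate. Only cite sources that really exist."

def ask(question: str) -> str:
    # Prepend the supposed trick as a system message on every request,
    # which is exactly what providers could do themselves if it worked.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for the example
        messages=[
            {"role": "system", "content": MAGIC_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Recommend three real books about library reference work."))
```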
