Google pulls its developer-only Gemma AI model from AI Studio after a U.S. Senator's encounter with an offensive hallucination

Google Gemma (Image credit: Google)

  • Google has pulled its developer-focused AI model Gemma from AI Studio
  • The move comes after Senator Marsha Blackburn complained that it falsely accused her of a criminal act
  • The incident highlights the problems of both AI hallucinations and public confusion over which AI tools are meant for general use

Google has pulled its developer-focused AI model Gemma from its AI Studio platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN) that the model fabricated criminal allegations about her. Though its announcement referenced the complaint only obliquely, Google explained that Gemma was never intended to answer general questions from the public, and that after reports of misuse it will no longer be accessible through AI Studio.

Blackburn wrote to Google CEO Sundar Pichai that the model’s output was not a simple mistake but outright defamation. She said the model answered the question, “Has Marsha Blackburn been accused of rape?” with a detailed but entirely fabricated narrative of alleged misconduct, and even pointed to nonexistent articles through made-up links.

“There has never been such an accusation, there is no such individual, and there are no such news stories,” Blackburn wrote. “This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model.” She also raised the issue during a Senate hearing.

“Gemma is available via an API and was also available via AI Studio, which is a developer tool (in fact to use it you need to attest you're a developer). We’ve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this…” (Google, November 1, 2025)

Google has repeatedly made clear that Gemma is a tool designed for developers, not consumers, and certainly not a fact-checking assistant. Now, Gemma will be restricted to API access only, limiting it to those building applications. No more chatbot-style interface in AI Studio.
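For developers, “API access only” means calling Gemma programmatically with an API key rather than typing into a chat window. The snippet below is a minimal sketch of what that looks like, assuming the google-generativeai Python SDK and an example model identifier (“gemma-3-27b-it”); the exact model names Google exposes through the API may differ.

import google.generativeai as genai  # Google's Python SDK for its generative AI API

# Developer access is gated by an API key rather than a consumer login.
genai.configure(api_key="YOUR_API_KEY")

# Load a Gemma model by name; "gemma-3-27b-it" is an example identifier, not a confirmed one.
model = genai.GenerativeModel("gemma-3-27b-it")

# Typical developer usage: generating text inside an application,
# not asking the model open-ended factual questions about people.
response = model.generate_content("Summarize this bug report in one sentence: ...")
print(response.text)

The intent of this kind of access is that the output is handled by an application the developer controls, rather than presented directly to the public as factual answers.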

The bizarre nature of the hallucination, and the high-profile figure confronting it, only underscore the underlying issues: how models never meant for conversation end up being accessed, and how elaborate these kinds of hallucinations can become. Gemma is marketed as a “developer-first,” lightweight alternative to Google's larger Gemini family of models. But usefulness in research and prototyping does not translate into providing true answers to questions of fact.

Hallucinating AI literacy

But as this story demonstrates, there is no such thing as an invisible model once it can be accessed through a public-facing tool. People encountered Gemma and treated it like Gemini or ChatGPT. As far as most of the public is concerned, the line between “developer model” and “public-facing AI” was crossed the moment Gemma started answering questions.

Even AI designed for answering questions and conversing with users can produce hallucinations, some of which are worryingly offensive or detailed. The last few years have been filled with examples of models making things up with total confidence. Stories of fabricated legal citations and false accusations of student cheating make a strong argument for stricter AI guardrails and a clearer separation between tools for experimentation and tools for communication.


For the average person, the implications are less about lawsuits and more about trust. If an AI system from a tech giant like Google can invent accusations against a senator and support them with nonexistent documentation, anyone could face a similar situation.

AI models are tools, but even the most impressive tools fail when used outside their intended design. Gemma wasn’t built to answer factual queries. It wasn’t trained on reliable biographical datasets. It wasn’t given the kind of retrieval tools or accuracy incentives used in Gemini or other search-backed models.

But until and unless people better understand the nuances of AI models and their capabilities, it's probably a good idea for AI developers to think like publishers as much as coders, with safeguards against glaring errors of fact as well as of code.




Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He has since become an expert on generative AI products such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and other synthetic media tools. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
