OpenAI is secretly fast-tracking 'Garlic' to fix ChatGPT's biggest flaws: What we know

Garlic illustration: Getty Images/Lazarevic photoworkshop



ZDNET's key takeaways

  • OpenAI felt the squeeze of recent Google and Anthropic releases.
  • CEO Sam Altman has reportedly initiated a "code red."
  • As a result, OpenAI is working on a new "Garlic" model. 

Following Google's release of Gemini 3, which quickly rose to the top of the LMArena AI leaderboard, OpenAI CEO Sam Altman informed employees that he was declaring a "code red." The aim was to further improve ChatGPT to better compete, according to a report by The Information. Now, a follow-up report from the publication reveals that the company is developing a new model in response, codenamed Garlic. 

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Also: Is DeepSeek's new model the latest blow to proprietary AI?

OpenAI's Chief Research Officer Mark Chen informed colleagues that Garlic has performed well in company evaluations compared to Gemini 3 and Anthropic's Opus 4.5 on coding and reasoning tasks, according to the report. This is important because both Gemini 3 and Opus 4.5, released last month, set new industry standards, with the former leading in reasoning and the latter leading in coding. 

OpenAI did not immediately respond to a request for comment.

Chen added that when developing Garlic, OpenAI addressed issues with pretraining, the initial phase of training in which the model learns from a massive dataset. The company focused the model on broader connections before training it for more specific tasks. These changes in pretraining let OpenAI infuse a smaller model with the same amount of knowledge previously reserved for larger models, according to Chen's remarks cited in the report. 
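The report describes this only at a high level. As a rough illustration of the general pretrain-then-fine-tune pattern the paragraph refers to, and not OpenAI's actual pipeline, whose details are not public, the sketch below trains a tiny model on a broad corpus first and then adapts the same weights to a narrower task. The model, corpora, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch only: a generic "pretrain on broad data, then fine-tune on
# narrow data" loop. Model size, data, and hyperparameters are made up for this
# example and say nothing about how Garlic is actually trained.
import torch
import torch.nn as nn

# Toy corpora: a "broad" pretraining corpus and a narrower task-specific one.
broad_corpus = "the cat sat on the mat. the dog ran in the park. rain falls on the plain. "
narrow_corpus = "def add(a, b): return a + b. def mul(a, b): return a * b. "

vocab = sorted(set(broad_corpus + narrow_corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}

def encode(text):
    return torch.tensor([stoi[ch] for ch in text], dtype=torch.long)

class TinyCharLM(nn.Module):
    """A deliberately small next-character model: embedding -> GRU -> logits."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        h, _ = self.rnn(self.embed(idx))
        return self.head(h)

def train(model, data, steps, lr):
    """Minimize next-character cross-entropy over the given corpus."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        loss = loss_fn(logits.view(-1, logits.size(-1)), y.view(-1))
        loss.backward()
        opt.step()
    return loss.item()

model = TinyCharLM(len(vocab))
# Stage 1: pretraining on the broad corpus to learn general structure.
pretrain_loss = train(model, encode(broad_corpus), steps=200, lr=3e-3)
# Stage 2: fine-tuning the same weights on the narrower, task-specific corpus.
finetune_loss = train(model, encode(narrow_corpus), steps=100, lr=1e-3)
print(f"pretrain loss: {pretrain_loss:.3f}, fine-tune loss: {finetune_loss:.3f}")
```

The point of the two-stage split is that the bulk of the model's general knowledge is absorbed in the first stage, so the second stage only has to adapt it, which is the part of the process the report says OpenAI reworked.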

Smaller models can be beneficial for developers because they are typically cheaper and easier to deploy, a point French AI lab Mistral emphasized with its latest release this week. For the company building one, a smaller model is also cheaper to train and serve.

Garlic is not to be confused with Shallotpeat, a model that likewise aimed to fix bugs in the pretraining process and that Altman announced to staff in October, according to an earlier report from The Information. 

Also: Amazon says new DevOps agents need no babysitting - you can try them here

As for when to expect the model, Chen kept the details vague, saying only "as soon as possible," according to the report. However, given the context and OpenAI's urgent need to stay ahead, it is reasonable to expect a release early next year. The advances made while building Garlic have already allowed the company to start work on its next, larger model, Chen said. 

A battle for users

This fierce race between Google and OpenAI can be partially attributed to the fact that both are vying for the same market: consumers. 

As Anthropic CEO Dario Amodei noted in conversation with Andrew Ross Sorkin at The New York Times' DealBook Summit on Wednesday, Anthropic isn't running the same race, or facing the same "code red" panic, as its competitors because it is focused on serving enterprises rather than consumers. The company just announced that its agentic coding tool, Claude Code, reached $1 billion in run-rate revenue only six months after becoming available to the public. 
