Gemini 2.0 doubles the speed of the AI assistant, and could supercharge search

Gemini 2.0 (Image credit: Google)

Google has unveiled the next iteration of the Gemini AI model family, starting with the smallest version, Gemini 2.0 Flash. Gemini 2.0 promises faster performance, sharper reasoning, and enhanced multimodal capabilities, among other upgrades, as it is integrated into Google's various AI and AI-adjacent services.

The timing of the news may have something to do with Google wanting to step on OpenAI's and Apple's toes during their respective 12 Days of OpenAI and Apple Intelligence announcements this week, especially since Gemini 2.0 is mostly built around experiments for developers. Still, there are some immediate opportunities on the non-enterprise side of things. Specifically, Gemini assistant users and those who see AI Overviews when using Google Search will be able to engage with Gemini 2.0 Flash.

If you interact with the Gemini AI through the website on a desktop or mobile browser, you can now try Gemini 2.0 Flash by picking it from the model drop-down menu. The new model is also on its way to the mobile app at some point. It may not be life-changing, but Gemini 2.0 Flash's speed at processing and generating content is notable. It's far faster than Gemini 1.5 Flash, and Google claims it responds at twice the speed of the more powerful Gemini 1.5 Pro model while still outperforming it.

Overviews and future news

Google is infusing Gemini 2.0 into its AI Overviews feature as well. AI Overviews already write summaries that answer search queries on Google without requiring users to click through to websites. The company boasts that more than a billion people have seen at least one AI Overview since the feature debuted, and that the summaries have led to engagement with a wider array of sources than usual.

Incorporating Gemini 2.0 Flash has made AI Overviews even better at tackling complicated, multi-step questions, Google claims. For example, say you're stuck on a calculus problem. You can upload a photo of the equation, and AI Overviews will not only understand it but walk you through the solution step-by-step. The same goes for debugging code. If you describe the issue in your search query, the AI Overview might explain the problem or even write a corrected version of the code for you. It essentially bakes Gemini's assistant abilities into Google Search.

Most of the Gemini 2.0 news centers around developers, who can access Gemini 2.0 Flash through the Gemini API in Google AI Studio and Vertex AI; there’s also a new Multimodal Live API for those who want to create interactive, real-time experiences, like virtual tutors or AI-driven customer service bots.
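For developers wondering what that access looks like in practice, here is a minimal sketch of calling Gemini 2.0 Flash through the Gemini API using Google's google-genai Python SDK. The client setup, the generate_content call, and the model identifier come from Google's public documentation rather than anything stated in this article, and the exact model name (for example, gemini-2.0-flash versus an experimental variant) may differ depending on when you read this, so treat it as illustrative.

# Minimal sketch, assuming the google-genai Python SDK (pip install google-genai)
# and an API key generated in Google AI Studio. The model identifier is an
# assumption based on Google's docs and may need updating.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain in two sentences what a multimodal model is.",
)
print(response.text)  # the model's text reply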

Ongoing experiments for developers that may lead to changes for consumers are also getting a boost from Gemini 2.0, including universal AI assistant Project Astra, browser AI task aide Project Mariner, and partnerships with game developers to improve how in-game characters interact with players.


It's all part of Google's ongoing effort to put Gemini in everything. But, for now, that just means a faster, better AI assistant that can keep up with, if not outright beat, ChatGPT and other rivals.


Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
