Follow ZDNET: Add us as a preferred source on Google.
ZDNET's key takeaways
- Researchers found a high-severity bug in Chrome's Gemini feature.
- It grants extensions the ability to spy on you or steal your data.
- Update now.
A new vulnerability impacting Google Chrome's Gemini agentic AI feature has been disclosed -- patch now to stay protected.
Also: AI agents are fast, loose, and out of control, MIT study finds
Disclosed by senior principal security researcher Gal Weizman from Palo Alto Networks' Unit 42 team, the browser vulnerability affects Google Chrome's Gemini AI feature, an artificial intelligence (AI) agentic browser assistant.
The vulnerability, explained
Tracked as CVE-2026-0628 and deemed high severity, the vulnerability is described as an "insufficient policy enforcement in WebView tag in Google Chrome" issue that, prior to version 143.0.7499.192 of the browser, "allowed an attacker who convinced a user to install a malicious extension to inject scripts or HTML into a privileged page via a crafted Chrome Extension."
Also: Why scammers say nothing when they call - and how to respond safely
The team found that an extension with access to a basic permission set, via the declarativeNetRequests API, could grant permissions that an attacker could exploit to inject JavaScript code into the new Gemini panel browser component.
According to the researchers, this vulnerability can be used as part of a broader attack chain targeting Google Chrome users.
If, for example, an attacker can convince a target to download and install an innocent-looking browser extension, a malicious extension could exploit the policy problem to hijack Gemini. The AI assistant may then take action without permission, including granting a cybercriminal access to webcams and microphones, taking screenshots, and accessing local files and directories. The panel could also be hijacked for phishing purposes.
"Since the Gemini app relies on performing actions for legitimate purposes, hijacking the Gemini panel allows privileged access to system resources that an extension would not normally have," the researchers said.
How to stay safe
Following Palo Alto Networks' private disclosure to Google in October last year, it's been up to the team to triage, replicate, and patch the bug.
Included in Google Chrome's January patch notes, the Chrome security team developed a fix and included it in the 143.0.7499.192/.193 Windows and macOS stable channels (143.0.7499.192 for Linux). There have been additional security patches rolled out since, including fixes for vulnerabilities such as out-of-bounds bugs.
Also: Destroyed servers and DoS attacks: What can happen when OpenClaw AI agents interact
The best advice is simple: as soon as you see an alert that a new version of Chrome is available -- usually to the right of your address bar on desktop -- accept the update. Not only can you benefit from performance improvements and new features, but the patches included in these releases will mitigate the risk of your browser and data becoming compromised.
Agentic browser security - what's the big deal?
Agentic browsers may very well be the benchmark for browser experiences in the future. However, while they are still being developed, we are encountering new cybersecurity challenges that are increasing the attack surface and placing our privacy and data at risk.
Agentic AI features, typically framed through messaging and chat windows, aim to provide value by answering our queries, sourcing information on our behalf, filling out forms for us, and helping us manage our workflows. But the security ramifications of giving often untried, untested, and insecure AI-driven tools the keys to online accounts and services -- not to mention the power to act on our behalf -- have caused a security nightmare for defenders.
Why are they worried? Aside from the usual vulnerabilities, disclosures, and patch management that are necessary for software today, AI browsers and agents can be susceptible to prompt-injection attacks. Malicious instructions can be hidden in source material and websites, which then hijack these tools, forcing them to hand over a user's sensitive information, conduct surveillance, and perform all manner of illicit activities.
There's also the question of trust. Just how much should you trust an AI system with personal data, and if it were exposed or leaked, what would the ramifications be for you?
A recent MIT study found serious gaps in the "fast and loose" agentic AI development race regarding security testing, and so such technologies must be treated with caution.
Also: This new phone scam has 'carriers' calling to exchange your device - don't fall for it
We've yet to see the full security risks that agentic AI will pose, but we also have yet to see its true potential. Managing its benefits while balancing risk will be the true challenge -- and this applies to both consumers and businesses.
"Innovation can't come at the expense of security," commented Anupam Upadhyaya, SVP, Product Management, Prisma SASE, Palo Alto Networks. "If organizations choose to deploy agentic browsers, they must treat them as high-risk infrastructure, with runtime visibility, enforced policy controls, and hardened guardrails built in from day one. Anything less invites compromise."










English (US) ·