Open Microsoft Edge today and a small Copilot icon waits in the corner. Click it and the browser can summarize a page, translate a paragraph, or draft an email.
Google is adding similar capabilities to Chrome with Gemini, while the less well-known Arc and Dia are developing models that can read, reason, and act for users. This marks a new chapter for the browser, powered by agentic AI.
Security Engineering Manager at Kocho.
These tools are turning the browser into an intelligent assistant. Yet while we read the neat paragraph the assistant returns, something else may be happening unseen. Hidden text, an image tag, or an advert could contain instructions the AI follows, quietly sending credentials or downloading malicious files.
Convenience before control
As with many advances in user-friendly technology, convenience arrives before control. The browser that makes our daily routines so much easier could just as easily be working against us.
It is worth considering how agentic browsers are becoming mainstream. These are browsers that are augmented with large-language model assistants that can interpret and act on web content. They offer sophisticated summarization and translation, enhanced research and the automation of workflows directly inside the browser.
Microsoft Edge Copilot is already familiar. Its Copilot Vision feature looks at the user’s screen and scans and analyses its content, offering suggestions.
Google has integrated Gemini in Chrome, while ARC has introduced a Browse for Me feature that scours the web, reads multiple pages and “builds the perfect tab”. It’s a feature of ARC Search, which the company says is still in its early stages.
Earlier this year, Brave announced its Brave Search API with ‘AI Grounding’, a feature it says reduces hallucinations.
As these tools move into the mainstream, new features are appearing at speed. Microsoft Edge’s October 2025 beta update – which added tab search and desktop visual – is just one recent example.
Why AI-enabled browsers can leave organizations exposed
Many features in agentic AI-enabled browsers are attractive to senior executives who have visions of greater efficiency, reduced headcount and faster research capabilities.
The problem is that there will be a rush to implement agentic AI browsers with security concerns left for later – repeating a familiar pattern from IoT, cloud and past technology cycles where adoption outran security.
This approach leaves organizations exposed. Agentic models give browsers a heavy degree of agency – the ability to make decisions and take actions on behalf of the user.
These browsers have cross-domain reach across email, cloud storage, SaaS apps and local files. With each new capability comes the potential for misuse.
How attackers can use agentic browsers
How is this likely to occur? It can begin when, for example, a user visits a perfectly normal website. The page may include adverts or third-party content. The user activates the browser’s in-built AI assistant to summarize the article or explain what is on the page.
This interaction triggers the large language model sitting behind the browser to read and interpret all available content.
What the user is unaware of is that an attacker has planted invisible text or metadata in the page. This could be white-on-white text, hidden HTML headers, cookies, ad code or code embedded in an image. Invisible to the human eye, it is just more data as far as the AI model is concerned.
This hidden text or code may instruct the model to log into the user’s email, compose an email and send the session token or password to a specific address.
Credential and data theft without a trail
Acting on these instructions, the model will accomplish credential theft, data exfiltration or file execution on behalf of the attacker.
If a user has admin rights, the injected command can go further by for example, downloading a file, renaming it and then executing it. This instantly pulls the endpoint into a botnet or opens it to remote control.
This is a significant threat because it generates no obvious indicators of compromise – no PowerShell, malware binary or exploit chain.
Endpoint detection and response technology (EDR) or antivirus will see everything as legitimate and even the website owner may be unaware that malicious code has been served through their ad network.
Ad platforms are often equally unaware, as there is no obfuscated code and no signature to match. This kind of attack is worrying and far from hypothetical. Brave Software, for example, claims to have found similar prompt injection vulnerabilities in Perplexity AI and Fellou.
Behavioral analytics picks up the clues
Although there may appear to be a major detection-gap with these threats, the good news is there is behavioral fallout that when correlated, reveals compromise.
The signs include users sending messages they have never sent before, the uploading of large or unlabeled files, the appearance of new mailbox rules and the appearance of plaintext passwords in outbound emails.
Behavioral analytics solutions will pick up these indicators within minutes. Proofs-of-concept can be replicated in lab condition. Many security operations centers (SOCs) are still playing catch-up, however.
They need to get on top of these threats quickly because of their potential seriousness. AI-assistants can act maliciously across multiple identities and systems simultaneously and it is difficult for teams to work out who is responsible for an AI-initiated action.
There is the constant danger of employees using unvetted AI plug-ins and personal copilots in their work. Developers may use agentic CLIs in their work too, increasing the risk of importing compromised packages.
Managed SOC support for smaller organizations
Smaller organisations will almost certainly need managed SOC support to counter these threats. Detection and governance need to move from signature-based detection to behavior-based detection.
Teams need the tools to intercept exfiltration, and they must be able to correlate the anomalies. Planning is required to automate containment. Given the human element in this, it is important for organisations to lay down an AI-use policy and to define approved browsers and extensions.
Developers, whether they like it or not, must be under supervision to enforce signed packages and private registries.
We are now in the era of agentic browsers, and they will prove hugely valuable tools, but given these emerging threats, adoption must be disciplined and accompanied by a significant change in security posture.
Control, enhanced monitoring and behavioral insight are necessary to maximize security as well as productivity and creativity.
We've featured the best online cybersecurity course.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro






English (US) ·