Perplexity's Comet AI browser could expose your data to attackers - here's how

4 hours ago 5
Perplexity Comet in agentic AI mode
Screenshot by Lance Whitney/ZDNET

ZDNET's key takeaways

  • Perplexity's Comet browser could expose your private data.
  • An attacker could add commands to the prompt via a malicious website.
  • The AI should treat user data and website data separately.

Get more in-depth ZDNET AI coverage: Add us as a preferred Google source on Chrome and Chromium browsers.


Agentic AI browsers are a hot new trend in the world of AI. Instead of you having to browse the web yourself to complete specific tasks, you tell the browser to send its agent to carry out your mission. But depending on which browser you use, you may be opening yourself up to security risks.

In a blog post published Wednesday, the folks behind the Brave browser (which offers its own AI-powered assistant dubbed Leo) pointed their collective fingers at Perplexity's new Comet browser. Currently available for public download, Comet is built on the premise of agentic AI, promising that your wish is its command.

Also: Why Perplexity is going after Google Chrome - and yes, it's serious

Do you need to pick up a new supply of your favorite protein drink at Amazon? Instead of doing it yourself, just tell Comet to do it for you.

OK, so what's the beef? First, there's certainly an opportunity for mistakes. With AI being so prone to errors, the agent could misinterpret your instructions, take the wrong step along the way, or perform actions you didn't specify. The challenges multiply if you entrust the AI to handle personal details, such as your password or payment information.

But the biggest risk lies in how the browser processes the prompt's contents, and this is where Brave finds fault with Comet. In its own demonstration, Brave showed how attackers could inject commands into the prompt through malicious websites of their own creation. By failing to distinguish between your own request and the commands from the attacker, the browser could expose your personal data to compromise.

Also: How to get rid of AI Overviews in Google Search: 4 easy ways

"The vulnerability we're discussing in this post lies in how Comet processes web page content," Brave said. "When users ask it to 'Summarize this web page,' Comet feeds a part of the web page directly to its LLM without distinguishing between the user's instructions and untrusted content from the web page. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user's emails from a prepared piece of text in a page in another tab."

To date, there are no known examples of such attacks in the wild.   

Brave said the attack demonstrated in Comet shows that traditional web security isn't enough to protect people when using agentic AI. Instead, such agents need new types of security and privacy. With that goal in mind, Brave recommended that several measures be implemented.

The browser should distinguish between user instructions and website content. The browser should separate the requests submitted by a user at the prompt from the content delivered at a website. With a malicious site always a possibility, this content should always be treated as untrusted.

The AI model should ensure that tasks align with the user's request. Any actions submitted to the prompt should be checked against those submitted by the user to ensure alignment.

Also: Scammers have infiltrated Google's AI responses - how to spot them

Sensitive security and privacy tasks should require user permission. The AI should always require a response from the user before running any tasks that affect security or privacy. For example, if the agent is told to send an email, complete a purchase, or log in to a site, it should first ask the user for confirmation.

The browser should isolate agentic browsing from regular browsing. Agentic browsing mode carries some risks, as the browser can read and send emails or view sensitive and confidential data on a website. For that reason, agentic browsing mode should be a clear choice, not something the user can access accidentally or without knowledge.

With Brave finding fault with Comet, how has Perplexity responded? Here, I'm just going to share the timeline of events as described by Brave.

  • July 25, 2025: Vulnerability discovered and reported to Perplexity.
  • July 27, 2025: Perplexity acknowledged the vulnerability and implemented an initial fix.
  • July 28, 2025: Retesting revealed the fix was incomplete; additional details and comments were provided to Perplexity.
  • August 11, 2025: One-week public disclosure notice sent to Perplexity.
  • August 13, 2025: Final testing confirmed the vulnerability appears to be patched.
  • August 20, 2025: Public disclosure of vulnerability details (Update: on further testing after this blog post was released, we learned that Perplexity still hasn't fully mitigated the kind of attack described here. We've re-reported this to them.)

Now, the ball is back in Perplexity's court. I contacted the company for comment and will update the story with any response.

Also: The best secure browsers for privacy: Expert tested

"This vulnerability in Perplexity Comet highlights a fundamental challenge with agentic AI browsers: ensuring that the agent only takes actions that are aligned with what the user wants," Brave said. "As AI assistants gain more powerful capabilities, indirect prompt injection attacks pose serious risks to web security."

Read Entire Article