AI-powered browsers like ChatGPT Atlas aren’t just browsers with little ChatGPT picture-in-picture boxes off to the side answering questions. They also have “agentic capabilities,” meaning they can theoretically carry out tasks like buying airline tickets and making hotel reservations (Atlas hasn’t exactly gotten rave reviews as a travel agent). But what happens when the little web-crawling bot that does these tasks senses danger?
The danger we’re talking about is not to the user, but to the browser’s parent company. According to an investigation by Aisvarya Chandrasekar and Klaudia Jaźwińska of the Columbia Journalism Review, when Atlas is in agent mode, running all over the internet gobbling up information for you, it will take great pains to avoid certain sources of information. Some of that shyness appears to be connected to the fact that those sources of information belong to companies that are suing OpenAI.
These bots have more freedom than normal web crawlers, Chandrasekar and Jaźwińska found. Web crawlers are ancient internet technology, and in ordinary, uncontroversial circumstances, when a crawler encounters instructions not to crawl a page, it simply won't. If you're using the ChatGPT app and ask it to fish specific nuggets of information out of articles that block crawlers, it will most likely respect the block and report back that it can't do it, because that task relies on crawlers.
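Those "instructions not to crawl" typically live in a site's robots.txt file. As a rough sketch of how a well-behaved crawler honors them, here's Python's built-in robots.txt parser checking two paths against a hypothetical policy (the bot name and URLs below are made up for illustration):

```python
from urllib import robotparser

# A well-behaved crawler checks robots.txt before fetching a page.
# This policy, the bot name, and the URLs are all hypothetical.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: ExampleBot",
    "Disallow: /articles/",
])

# can_fetch() returns False for paths the site has disallowed.
print(rp.can_fetch("ExampleBot", "https://example.com/articles/story"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/about"))           # True
```

A crawler that plays by the rules simply skips any URL for which `can_fetch` comes back `False`; nothing technically stops it from fetching the page anyway, which is the loophole agentic browsing exploits.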
Agentic browser modes, however, use the internet under the pretense of being you, the user, and they "appear in site logs as normal Chrome sessions," according to Chandrasekar and Jaźwińska (because Atlas is built atop the Google-designed open source Chromium browser). This means they generally can crawl pages that otherwise block automated behavior. Skirting the rules and norms of the internet in this way actually makes some sense: blocking those sessions would risk preventing you from manually accessing a given site in the Atlas browser, which would be overkill.
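To see why passing as a normal Chrome session matters, consider how a simple server-side filter works. Sites commonly block declared crawlers by matching the User-Agent header; OpenAI's dedicated crawler announces itself as GPTBot, but a session presenting a stock Chrome user-agent string sails through. This sketch is illustrative only; the filter logic and user-agent strings are simplified assumptions, not any site's actual implementation:

```python
# Hypothetical server-side user-agent filter. Declared bots get blocked;
# an agentic browser presenting a normal Chrome user-agent does not.
BLOCKED_AGENTS = ("GPTBot", "CCBot")  # GPTBot is OpenAI's declared crawler

def is_blocked(user_agent: str) -> bool:
    """Return True if the request's user-agent matches a known bot."""
    return any(bot in user_agent for bot in BLOCKED_AGENTS)

crawler_ua = "Mozilla/5.0 (compatible; GPTBot/1.0)"
chrome_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")

print(is_blocked(crawler_ua))  # True: the self-identified bot is filtered
print(is_blocked(chrome_ua))   # False: indistinguishable from a human visitor
```

From the server's perspective, the second request is just another person browsing, which is exactly what the site logs in the CJR investigation showed.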
But Chandrasekar and Jaźwińska asked Atlas to summarize articles from PCMag and the New York Times, whose parent companies are in active litigation with OpenAI over alleged copyright violations, and it went way out of its way to accomplish this, carving labyrinthine paths around the internet to deliver some version of the requested information. It was like a rat hunting food pellets in a maze after learning that certain pellet locations are electrified.
In the case of PCMag, it went to social media and other news sites, finding citations of the article and tweets containing some of its contents. In the case of the New York Times, it "generated a summary based on reporting from four alternative outlets—the Guardian, the Washington Post, Reuters, and the Associated Press." All of those except Reuters have content or search-related agreements with OpenAI.
In both cases, Atlas appears to have journeyed far from litigious publications, favoring a safer, more AI-friendly path to the end of its little rat maze.