In May 2025 alone, DataDome detected 976 million requests from OpenAI-identified crawlers, and earlier this year saw request volume surge by 48% in just 48 hours following the launch of OpenAI’s Operator agent. Far from being anomalies, these figures are clear signs of the ‘new normal’ of web traffic for businesses today.
While bots and crawlers have been part of the internet for years, AI-powered autonomous agents are a relatively new development. These agents - ranging from LLM crawlers to more sophisticated programs performing online tasks autonomously - are more persistent and much harder to classify than their simple bot ancestors. This introduces new challenges for fraud and security teams.
As AI agents make up a growing proportion of web traffic, organizations' security teams must take a different approach. Because faking identity is much easier than disguising intent, they must not only identify whether a user is a human or a bot, but also ask why that user is interacting with their platform. This is where intent-based cybersecurity strategies come in.
The tip of the bot iceberg
While our research spotted a huge surge in OpenAI crawler activity, this is only one data point in a much broader trend. Across our network, 36.7% of traffic today comes from non-browser sources, like APIs, SDKs, mobile apps, and a growing population of autonomous agents.
These AI-driven agents scrape, synthesize and simulate activity in ways that bypass traditional security defenses. For instance, many ignore the industry-standard robots.txt protocol, tripping up businesses that rely on this check to manage crawler access. Other agents mimic real user behavior to slip under the radar - not necessarily because they have malicious intent, but often just to avoid access restrictions.
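To make the robots.txt point concrete, here is a minimal Python sketch of what a well-behaved crawler does before fetching a page. The key point is that the check is entirely voluntary: an agent that simply skips it hits no technical barrier, which is why the protocol alone can’t govern AI traffic. The domain and user-agent string below are illustrative.

```python
# Minimal sketch of how a *well-behaved* crawler consults robots.txt before
# fetching. Nothing on the server side enforces this -- an agent that skips the
# check simply proceeds anyway. Domain and user-agent string are illustrative.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

url = "https://www.example.com/products/limited-edition"
if rp.can_fetch("ExampleAIBot", url):   # hypothetical crawler user agent
    print("robots.txt permits crawling this URL")
else:
    print("robots.txt disallows this URL -- a compliant agent stops here")
```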
Herein lies the challenge for security teams: not all AI agents are malicious, but many are ungoverned, and old-school defense systems have no way of distinguishing between the two. The distinction is critical, not just for blocking harmful activity like scraping or account abuse, but also for facilitating beneficial use cases like LLM-powered search, content summarization, and API-driven integrations.
Moving past a binary approach
Traditional strategies rely on binary logic: allow or block. These methods depend on predefined rules, IP reputation lists, and static thresholds for rate limiting. While these approaches might work against rudimentary spam bots or simplistic crawlers, they aren’t effective against intelligent, dynamic agents that adapt in real time.
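As a concrete illustration of that binary model (a generic sketch, not any particular vendor’s implementation), the logic boils down to a static blocklist plus a fixed requests-per-window threshold. An adaptive agent defeats both by rotating IPs and pacing itself just under the limit. All names and numbers below are invented.

```python
# Sketch of the binary allow/block model: a static IP blocklist plus a fixed
# rate-limit threshold. An adaptive agent rotates IPs and stays just under the
# threshold, which is the limitation described above. Values are illustrative.
import time
from collections import defaultdict

BLOCKLIST = {"203.0.113.7", "198.51.100.23"}   # example (documentation-range) IPs
RATE_LIMIT = 100                                # max requests per window
WINDOW_SECONDS = 60

request_log = defaultdict(list)                 # ip -> recent request timestamps

def allow_request(ip: str) -> bool:
    """Binary decision: block known-bad IPs or anything over the static threshold."""
    if ip in BLOCKLIST:
        return False
    now = time.time()
    recent = [t for t in request_log[ip] if now - t < WINDOW_SECONDS]
    request_log[ip] = recent
    if len(recent) >= RATE_LIMIT:
        return False
    request_log[ip].append(now)
    return True
```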
If security teams block everything, they risk shutting out beneficial AI traffic; if they let everything in, they open the door to fraud and data leakage. The smartest approach is one informed by intent analysis.
Instead of focusing on what the traffic is, security teams need to start asking why users are visiting their platforms.
An intent-based system constantly evaluates behavior and context to determine whether to allow, challenge, or block a request. For instance, if a visitor hits a retailer’s website during a limited-edition product drop and repeatedly targets only the highest-value items - rather than browsing organically, as a genuine shopper would - that is a telltale sign of a scalper bot, and the behavior would be flagged as suspicious.
Or take an AI agent hammering an airline’s website with thousands of price checks to scrape fare data: each individual request might look like normal browsing, but at that volume the traffic can slow down the site and distort pricing for genuine customers. An intent-based system would flag the unusual scale behind this traffic and block access before any damage is done.
By drawing on behavioral signals, device intelligence, and real-time telemetry, intent-based defenses can differentiate between a legitimate AI agent and a malicious one.
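To show the difference in practice, here is a simplified, invented intent-scoring sketch - not DataDome’s actual model. The signals, weights, and thresholds are assumptions; the point is that the decision is graded (allow, challenge, block) rather than binary, and that it keys off behavior like the scalper and scraper patterns above.

```python
# Illustrative sketch of an intent-based decision (signal names, weights, and
# thresholds are invented for the example, not a real product's logic): several
# behavioral and contextual signals feed a score that maps to allow, challenge,
# or block instead of a binary verdict.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    targets_high_value_items_only: bool   # e.g. hammering a limited-edition SKU
    requests_per_minute: float            # observed request rate
    organic_navigation: bool              # browsing pattern resembles a real shopper
    declared_ai_agent: bool               # self-identifies via user agent
    respects_robots_txt: bool             # observed compliance with crawl rules

def decide(s: RequestSignals) -> str:
    score = 0
    if s.targets_high_value_items_only:
        score += 3
    if s.requests_per_minute > 60:
        score += 2
    if not s.organic_navigation:
        score += 2
    if s.declared_ai_agent and s.respects_robots_txt:
        score -= 2          # transparent, well-behaved agents may be welcome
    if score >= 5:
        return "block"
    if score >= 3:
        return "challenge"  # e.g. present additional friction before allowing
    return "allow"

# Example: a scalper-like pattern during a product drop.
print(decide(RequestSignals(True, 120.0, False, False, False)))  # -> "block"
```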
Writing a new playbook
First, security teams need to rethink their foundations. The old processes for monitoring traffic no longer apply; teams need to re-audit their environments to understand where non-browser traffic is coming from, how it typically behaves, and what intent it serves.
Next, security teams need to move beyond static defense strategies such as rate limiting and blocklists, and instead adopt an intent-based approach that assesses behavior in real time and makes dynamic, intelligent decisions.
A clear access policy is also key. This means product, security, and legal teams must sit down and agree on which AI agents are welcome on their digital platforms, and under what conditions. Once these rules are defined, they should be enforced consistently across every platform.
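Once those rules are agreed, one way to keep enforcement consistent is to express the policy as data that every platform consults. A hypothetical sketch, with agent names, rate limits, and paths invented purely for illustration:

```python
# Hypothetical policy-as-code sketch: which AI agents are welcome, and under
# what conditions. Agent names, rate limits, and allowed paths are illustrative,
# not recommendations for any real crawler.
ACCESS_POLICY = {
    "search-crawler":   {"allowed": True,  "max_rpm": 60,  "paths": ["/products", "/blog"]},
    "summarizer-agent": {"allowed": True,  "max_rpm": 20,  "paths": ["/blog"]},
    "unknown-agent":    {"allowed": False, "max_rpm": 0,   "paths": []},
}

def is_permitted(agent: str, path: str, observed_rpm: float) -> bool:
    """Apply the same policy consistently across every platform."""
    rule = ACCESS_POLICY.get(agent, ACCESS_POLICY["unknown-agent"])
    return (
        rule["allowed"]
        and observed_rpm <= rule["max_rpm"]
        and any(path.startswith(p) for p in rule["paths"])
    )

print(is_permitted("search-crawler", "/products/item-42", observed_rpm=12))  # True
print(is_permitted("summarizer-agent", "/checkout", observed_rpm=5))         # False
```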
The future of cybersecurity isn’t about stopping every bot - or trusting every human. It’s about understanding the ‘why’ behind every request.