It’s been just three years since ChatGPT launched to huge popular acclaim. But already, the industry is looking ahead to the next big wave of innovation: agentic AI.
True to form, OpenAI is at the cutting edge again with its new ChatGPT agent offering, which promises “to handle tasks from start to finish” on behalf of its users. Unfortunately, with greater autonomy comes greater risk.
Senior Threat Researcher at Trend Micro.
The challenge for corporate IT and security teams will be to empower their users to harness the technology, without exposing the business to a new breed of threats.
Fortunately, they already have a readymade approach to help them do so: Zero Trust.
Huge gains but emerging risks
Agentic AI systems offer a major leap forward from generative AI (GenAI) chatbots.
While the latter create and summarize content reactively based on prompts, the former are designed to proactively plan, reason and act autonomously to complete complex, multi-stage tasks. Agentic AI can even adjust its plans on the fly when presented with new information.
It’s not hard to see the potentially huge productivity, efficiency and cost benefits associated with the technology. Gartner predicts that by 2029, it will “autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs”.
However, the same capabilities that make agentic AI so exciting for businesses should also be cause for concern.
Because there’s less oversight involved, malicious actors could attack and subvert the actions of an AI agent without raising any red flags for users.
Because it is capable of making decisions with irreversible consequences, like deleting files or sending emails to the incorrect recipient, more damage could be done unless safeguards are built in.
Additionally, because agents can plan and reason across numerous domains, there are more opportunities for adversaries to manipulate them, such as via indirect prompt injection. This could be done by simply embedding a malicious prompt in the webpage an agent visits.
Because agents integrate deeply into the broader digital ecosystem, there’s more potential to breach highly sensitive accounts and information. And because they can accrue deep knowledge of their users’ behavior, there are potentially significant privacy risks.
Why access controls matter
Tackling these challenges needs to start with identity and access management (IAM). If organizations are going to create a de facto digital workforce comprised of agents, they need to manage the identities, credentials and permissions needed to do that work.
Yet most agents today are more generalists than specialists. ChatGPT agent is a great example: it can schedule meetings, send emails, interact with websites and much more.
This flexibility is what makes it a powerful tool. But it also makes it harder to apply traditional access control models, which were built around human roles with clear responsibilities.
If a generalist agent was manipulated via an indirect prompt injection attack, its overly permissive access rights could be a weakness, giving an attacker potentially broad access to a range of sensitive systems. That’s why we need to rethink access controls for the agentic AI era. In short, we need to follow the Zero Trust mantra: “never trust, always verify”.
Zero Trust reimagined
How does Zero Trust look in an agentic AI environment? First, assume agents will perform unintended and difficult-to-predict actions—something even OpenAI acknowledges. And stop thinking of AI agents as extensions of existing user accounts. Instead, treat them as separate identities, with their own credentials and permissions.
Access management should be enforced both at the agent level and at the tool level—ie governing which resources agents can access. More fine-grained controls like this will ensure permissions remain aligned with each task.
Think of it as “segmentation”, although not in the traditional Zero Trust sense of network segmentation
Instead, here we’re looking at restricting agent permissions so that they are only able to access the systems and data they need to do their job, and no more. In some situations, it may also be useful to also apply time-bound permissions.
Next, multifactor authentication (MFA). Unfortunately, traditional MFA doesn’t translate well to agents. If an agent is compromised, asking it for a second factor adds little security.
Instead, human oversight can serve as a second layer of verification, especially for high-risk actions. This must be balanced against the risk of consent fatigue: if agents trigger too many confirmations, users may start approving actions reflexively.
Organizations also need visibility into what agents are doing. So set up some kind of system for logging their actions and monitoring for unusual behavior. This also mirrors a key element of Zero Trust and will be essential for both security and accountability. It’s still early days for Agentic AI.
But if organizations want to embrace the technology’s ability to take actions with minimal oversight, they need to be confident that risk is appropriately managed. The best way to do that is by never trusting anything by default.
We've featured the best online cybersecurity course.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro








English (US) ·