We should hone ‘responsible AI’ before Copilot goes autopilot


Opinion by: Merav Ozair, PhD

2025 will be the “year of AI agents,” Nvidia CEO Jensen Huang predicted in November 2024. This will pave the way to a new era: the agentic economy.

Huang described AI agents as “digital employees” and predicted that one day, Nvidia will have 50,000 human employees and over 100 million AI agents, and that every organization will probably see similar growth in AI workers.

But describing AI agents as “digital workers” is too simplistic and understates the ramifications of this technology.

The AI agent evolution

We have always approached technology as a tool, but agentic AI is more than that. It does not simply perform a single task: it represents a fundamental shift in how we interact with technology.

Unlike generative AI (GenAI), which depends on human instructions and cannot independently handle complex, multi-step reasoning or coordination, agentic AI uses networks of agents that learn, adapt and work together. AI agents can interact and learn from one another, and they possess the ability to autonomously make decisions, learn from experience, adapt to changing situations, and plan complex multi-step actions — effectively acting as a proactive partner rather than just a reactive tool to execute predefined commands.
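
To make the contrast with single-prompt GenAI concrete, below is a minimal, illustrative sketch in Python of an agent that plans and then executes multiple steps on its own. The Agent class, its hard-coded plan and the sample goal are hypothetical simplifications for illustration only, not any vendor’s product or API.

# Minimal illustrative sketch of a multi-step agent loop (hypothetical, not a real product)
class Agent:
    def __init__(self, name):
        self.name = name
        self.memory = []  # the agent "adapts" by carrying results of earlier steps forward

    def plan(self, goal):
        # Decompose a goal into ordered steps (trivially hard-coded here)
        return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

    def act(self, step):
        # Execute one step and record the outcome for later steps to build on
        result = f"{self.name} completed '{step}'"
        self.memory.append(result)
        return result

    def run(self, goal):
        # Plan once, then execute every step without a human prompt per step
        return [self.act(step) for step in self.plan(goal)]

worker = Agent("agent-1")
for outcome in worker.run("summarize quarterly sales"):
    print(outcome)

The point of the sketch is the loop: a single instruction triggers planning and several autonomous actions, and that autonomy is exactly where the privacy, security and accountability questions below arise.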

Everyone and everything could have an AI agent working autonomously on their behalf. People could use them to assist in their daily or professional lives, while organizations could deploy them as individual assistants, as workers or as entire networks of workers. You could even imagine an AI agent for an AI agent. The applications of agentic AI are limitless, bound only by our imagination.

That is all very exciting, and the benefits could be immense, but so are the risks. AI agents, especially multi-agent systems, would not only exponentially exacerbate many of the existing ethical, legal, security and other vulnerabilities we’ve experienced with GenAI but also create new ones.

AI agents bring a new risk level

AI models are data-driven. With agentic AI, the need for and reliance on personal and proprietary data is increasing exponentially, as are the vulnerabilities and risks. The complexity of these systems raises all sorts of privacy questions.

Privacy

How do we ensure that data protection principles such as data minimization and purpose limitation are respected? How can we prevent personal data from leaking within an agentic AI system? Will users of AI agents be able to exercise data subjects’ rights, such as the right to be forgotten, if they decide to stop using an agent? Would it be enough to communicate with only one agent and expect it to broadcast the request to the entire network of agents?

Security 

AI agents can control our devices, and we must examine the potential vulnerabilities of such agents if they run on our computer, smartphone or any IoT device.

If there is a security vulnerability, the breach will not be contained to the one application that was compromised. Your entire digital life, meaning all the information on all your devices and more, would be exposed. That is true for individuals and organizations alike. Furthermore, these vulnerabilities could leak into other agentic AI systems with which your compromised agent has interacted.

Suppose one agent (or set of agents) follows strict security guardrails but interacts with other agents that have been compromised, for example, because of a lack of appropriate cybersecurity measures. How can we ensure that the compromised agents will not act as a “virus,” contaminating every agent they interact with?

The implications of such a scenario could be devastating. This “virus” could spread in milliseconds, and entire systems could potentially collapse across nations. The more complex and intertwined the connections and interactions, the greater the danger of collapse.

Bias and fairness

We have already seen examples of biased GenAI systems. In the context of AI agents, any existing bias will be transmitted through the task execution chain, exacerbating the impact.

How can we prevent discrimination or enforce legal provisions ensuring fairness when the bias is “baked” into the AI agent? How do we ensure that AI agents will not exacerbate existing bias built into a particular large language model (LLM)?

Transparency

People will want to understand an agent’s decision-making process. Companies must ensure that AI interactions are transparent and allow users to intervene when needed or to opt out.

Accountability

In agentic systems and their chains of execution, how could we define accountability? Does it rest with a specific agent or with the agentic system as a whole? And what happens when agentic systems interact with each other? How could we build the appropriate traceability and guardrails?

We have not yet figured out how to address these issues in LLM and GenAI applications, so how can we expect to secure something far more complex? Beyond these risks, there could be all sorts of societal harm on a global scale.

The need for an overarching, responsible AI

Legislators have not yet considered agentic AI systems; they are still wrestling with how to guardrail LLMs and GenAI applications. In the age of the agentic economy, developers, tech companies, organizations and legislators need to reexamine the concept of “responsible AI.”

Implementing AI governance and appropriate responsible AI measures per organization or application is not enough. The approach should be more holistic and overarching, and international collaboration on safe, secure agentic AI might not be optional but rather a must.

Dr. Merav Ozair helps organizations implement responsible AI systems and mitigate AI-related risks. She develops and teaches emerging-technologies courses at Wake Forest University and Cornell University and was previously a fintech professor at Rutgers Business School. She is also the founder of Emerging Technologies Mastery, an end-to-end Web3 and AI consultancy focused on responsible innovation, and holds a PhD from NYU’s Stern School of Business.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
