OpenAI, Anthropic, and Others Create Foundation for Standardizing AI Agents

2 hours ago 9

OpenAI, Anthropic, and Block have teamed up to co-found a new foundation that promises to help standardize the development of AI agents.

The new Agentic AI Foundation (AAIF) will operate under the larger Linux Foundation, a non-profit that oversees several open-source projects including the Linux operating system.

In addition to establishing the foundation, each of the three companies donated some of their agent tech to the organization.

OpenAI handed over ownership of its AGENTS.md universal standard, which gives AI coding agents a consistent source of project-specific guidance across different platforms. Anthropic donated its Model Context Protocol (MCP), which provides a standard way to connect AI models to tools, data, and applications. And Block donated its open-source AI agent framework, Goose, which developers use to build AI agents.

“Within just one year, MCP, AGENTS.md and goose have become essential tools for developers building this new class of agentic technologies,” said Jim Zemlin, executive director of the Linux Foundation, in a press release. “Bringing these projects together under the AAIF ensures they can grow with the transparency and stability that only open governance provides.”

The foundation arrives as AI companies are attempting to move beyond simple chatbots into autonomous agents that can take actions on behalf of users, like booking reservations or shopping online. AAIF’s goal is to promote industry standards so that as more agents come online, they work securely, transparently, and seamlessly together.

But because the tech is still in its early days, researchers have already started pointing out the risks that come with using agents right now.

Last week, the analyst firm Gartner recommended that companies and organizations block their employees from using AI browsers for now. Its report defines an AI browser as a browser that includes an “AI sidebar” that can search, create summaries, and interact with webpages, and that has agentic transaction capabilities like allowing the browser to navigate, interact, and complete tasks on websites.

Gartner warned that AI sidebar features could expose sensitive user information, since they likely collect data regarding active web content, browser history, and open tabs.

The agentic capabilities of these browsers also face unique vulnerabilities. They can be susceptible to what are known as “indirect prompt-injection-induced rogue agent actions,” which occur when an agent comes across potentially malicious content that prompts it to ignore safety guardrails and execute unwanted financial transactions or expose sensitive data.

Just this week, Google introduced what it’s calling the User Alignment Critic, a separate AI model that runs alongside an AI agent but isn’t exposed to third-party content to circumvent this risk. The idea is for it to vet an agent’s plan and make sure it aligns with the user’s goals.

Gartner also warned that AI agents could simply make mistakes like booking the wrong flight or ordering the wrong number of an item.

Several other big names in AI have already joined as members of the foundation including Microsoft, AWS, and Cloudflare.

Read Entire Article