Anthropic sues Pentagon over 'supply chain risk' designation, citing free speech and due process violations — company refused to allow its AI to be used for autonomous attacks, mass surveillance

Dario Amodei, Anthropic CEO, on stage during a conference. (Image credit: Getty Images/Bloomberg)

Anthropic has filed two federal lawsuits against the Pentagon and other U.S. federal agencies, seeking to overturn the Department of War's decision to designate the AI company a "supply chain risk," a label that blocks Pentagon suppliers and contractors from using its Claude models, and that national security experts say has historically been reserved for foreign adversaries.


This dispute traces back to a contract renegotiation between Anthropic and the Department of War that collapsed in late February. The Pentagon wanted unrestricted access to Claude for "any lawful use," while Anthropic refused to remove two guardrails: a prohibition on using its models for fully autonomous weapons without human oversight, and a prohibition on mass domestic surveillance of U.S. citizens.


Defense Secretary Pete Hegseth formally issued the supply chain risk designation on February 27; Anthropic was officially notified on March 3. President Trump separately directed all federal agencies to stop using Anthropic's technology via a Truth Social post that same day, with a six-month phase-out period.

The Pentagon argued that private companies cannot dictate how the government uses technology in national security scenarios, and that Anthropic's restrictions could endanger American lives. Anthropic countered that current AI models are not reliable enough for fully autonomous weapons deployment, and that domestic surveillance at scale would violate fundamental rights.

The fallout has had immediate competitive consequences, with OpenAI's Sam Altman controversially striking a new Pentagon deal within hours of Anthropic's designation. Altman publicly stated that the Department of War shares OpenAI's principles on human oversight of weapons and opposition to mass surveillance. xAI, Elon Musk's AI company, is also understood to have since been cleared for use on classified Pentagon systems.

Anthropic was the first AI lab permitted to operate on the Department of War's classified networks, and in July 2025 it signed a contract with the department worth up to $200 million. The Wall Street Journal has previously reported that Claude was used in military operations, including intelligence assessments and target identification in the U.S.'s ongoing conflict with Iran, even after the Pentagon moved to drop the model.


"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in its filing with the U.S. District Court.


Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
