Anthropic has issued a statement refusing to lower guardrails for Claude at the request of the Department of Defense. The Pentagon gave Anthropic until Friday to comply or have its $200m contract cancelled, along with more serious repercussions such as being designated a supply-chain risk. The company let the deadline pass, and its CEO, Dario Amodei, has now said that it "cannot in good conscience" accept the DoD's demands.
There are two main points of contention on which Anthropic has taken a concrete stance: mass domestic surveillance and fully autonomous weapons. The company argues that monitoring American citizens at large is inherently undemocratic and undermines individual liberty. It adds that AI-led surveillance is dangerous and is only permitted because legal precedent has not yet caught up.
The company goes on to say that frontier AI is not ready for fully autonomous weapons, since it is incapable of human-like judgment. "We will not knowingly provide a product that puts America’s warfighters and civilians at risk," said Amodei. Partially unmanned weapons are "vital to the defense of the democracy," according to Anthropic, but AI cannot yet be trusted to select and kill targets on its own.
Anthropic offered to carry out R&D to improve the reliability of these systems — to the point where AI could be trusted to take control of automatically engaging targets — but was turned down by the DoD. "[They] need to be deployed with proper guardrails, which don’t exist today," claimed Anthropic, referring to the fact that no AI model can emulate an experienced soldier.
Anthropic labels both of these points "exceptions" to its otherwise vocal support for working with the Pentagon. Throughout the statement, the company reiterates its desire to "continue to serve the Department and our warfighters—with our two requested safeguards in place."