Claude creator Anthropic has already had an eventful 2026, and now it's challenging the recently renamed US Department of War. The Pentagon has threatened to designate the company a "supply chain risk," a move Anthropic is fighting by suing the US government.
Additionally, on Monday Anthropic requested a restraining order that would pause the Pentagon's national security blacklisting of the company, arguing the court should hear its suit before the government's order is carried out. The latest update today is that an amicus brief filing suggests Microsoft may back Anthropic's cause (via Reuters).
Microsoft requested to file its amicus brief in a federal court in San Francisco, but a judge still needs to approve its official inclusion in the case. This follows another similarly supportive filing on Monday from a group of 37 researchers and engineers hailing from OpenAI and Google.
Six months later, disagreements arose between Anthropic and the DOD over safeguards in the company's LLMs: specifically, the guardrails that prevent these models from being deployed for autonomous weapon targeting and domestic surveillance.
Anthropic's AI safety lead quit shortly afterwards, issuing a bizarre resignation letter via X that includes the phrase "The world is in peril." Shortly after that, Anthropic revised its stance on 'pausing' development of more powerful AI models if suitable safeguards weren't yet ready, effectively ditching its defining safety promise.
Even so, Anthropic ultimately stood up to the DOD and refused to remove the aforementioned LLM safeguards. "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values," Anthropic CEO Dario Amodei wrote, going on to later add: "Frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk."
Beyond threatening to blacklist Anthropic, the Pentagon has since responded by giving itself six months to phase out its use of the company's products. OpenAI has stepped in to fill the AI void, though CEO Sam Altman has said the company will be amending the language of its deal with the DOD.
Bottom line, OpenAI has recently drawn "three main red lines" in the sand that are suspiciously similar to Anthropic's original objections: "No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')."
As yet no toys have come flying out of the government's pram in response to OpenAI's stance, but it is worth noting that the company's robotics lead, Caitlin Kalinowski, resigned recently, after those 'red lines' were published. "This wasn’t an easy call," she says. "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
So, all rather messy, and this is unlikely to be the last we'll hear about the AI industry's legal wranglings with the US government.