Despite its best efforts, the Trump administration has been unable to implement a moratorium preventing states from passing laws regulating AI companies. Thus far, most states have used their authority to create guardrails that AI firms must comply with. But in Illinois, OpenAI has thrown its weight (and lobbying budget) behind a bill that would grant it legal protection from liability for large-scale harms.
Unfortunately for it, another frontier AI lab has put its thumb on the other side of the scale. According to a report from Wired, Anthropic has also decided to get involved in local politics and is lobbying against the bill that OpenAI has been pushing for.
The bill at the center of the power struggle between AI giants is Senate Bill 3444, the Artificial Intelligence Safety Act. The legislation was authored by Democratic Senator Bill Cunningham, and while the generic name would suggest the goal is to establish safety standards for AI, the law would actually offer safety to AI companies that might face litigation. Effectively, it would give frontier AI companies a legal shield preventing them from being held responsible for large-scale harms caused by their AI models, defined as the death or serious injury of 100 or more people or at least $1 billion in property damage.
OpenAI has been trying to get out in front of laws that would create any additional burden on AI companies—a push almost certainly hastened by the fact that the company faces several wrongful death lawsuits from families who lost loved ones to suicide following conversations with ChatGPT. The company also publicly backed a piece of AI safety legislation in California that, while it added transparency requirements for frontier model makers, did not impose any liability on the companies. The legislation in Illinois goes a step further: rather than merely declining to establish liability risk, it actively shields companies from it.
Per Wired, Anthropic has taken issue with that approach and has been working behind the scenes to either alter or kill the bill entirely. “We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability,” Cesar Fernandez, Anthropic’s head of US state and local government relations, told the publication.
Anthropic has been much more aggressive than OpenAI in advocating for stricter safety standards for AI companies. The two companies were previously on opposite sides of an AI safety bill in California (OpenAI eventually offered its support for that law, but only after it was pretty clear it was going to pass). Anthropic is backing a competing AI safety bill in Illinois, SB 3261, that would, among other things, require AI firms to create public safety and child protection plans that could be audited to determine their effectiveness.
While some of Anthropic's pro-safety positions come down to marketing, the idea that AI companies should face at least some scrutiny if someone were to, say, use an AI model to develop a chemical weapon does not exactly seem like a radical act of self-flagellation. It seems like a pretty reasonable expectation of accountability. And it seems particularly wild for a company like OpenAI to express concerns over the existential threats posed by its own technology while also pushing to escape liability should any of those doomsday outcomes come to fruition.
We’ve moved beyond the “At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus” stage of AI to the “we are issuing our support for the No One Is Responsible For The Harms Of The Torment Nexus Act.”