States Want to Regulate AI. Why Congress May Push Back


States wouldn't be able to enforce their own regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives.

The legislation, to be considered Tuesday by the House Energy and Commerce Committee, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years.

The proposal would need approval from both chambers of Congress and President Donald Trump's signature to become law.


AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and requirements across the country that could slow the technology's growth. The rapid rise of generative AI since ChatGPT exploded onto the scene at the end of 2022 has led companies to fit the technology into as many spaces as possible. The economic stakes are significant as the US and China race to see whose technology will predominate. But generative AI also poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.

"We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April congressional hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards."

Efforts to limit states' ability to regulate AI could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life.

"There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level, too. I think we need both."

States have already started regulating AI

The proposed language would bar states from enforcing any AI regulation, including laws already on the books. There are exceptions: rules and laws that ease AI development would still be allowed, as would those that apply the same standards to non-AI models and systems that perform similar functions.

These kinds of regulations are already starting to pop up. The biggest push so far hasn't come from the US but from Europe, where the European Union has already implemented standards for AI. US states, though, are starting to get in on the action.

Colorado passed a set of consumer protections last year that is set to take effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that deal with specific issues such as deepfakes.

So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. 

In the April House committee hearing, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said.

What a moratorium on state regulation of AI would mean

AI developers have asked for any guardrails placed on their work to be consistent and streamlined. In a hearing by the Senate Committee on Commerce, Science and Transportation last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. 

Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences."

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) 

Consumer advocates say more regulation is needed and that hampering states' ability to act could hurt users' privacy and safety.

"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse -- it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control -- simply put, it's siding with tech companies over the people they impact."

Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on AI. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. 

"It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.
