US Lawmakers Propose Ban on State-Level AI Regulation
The Proposal: Artificial Intelligence Federal Framework Act of 2025
A new legislative proposal in the U.S. House of Representatives has sparked significant debate over how artificial intelligence (AI) should be regulated at the state level. The bill would impose a 10-year moratorium on state AI regulations, raising tensions over the balance of power between federal governance and state autonomy.
Titled the Artificial Intelligence Federal Framework Act of 2025, the legislation was introduced in late May. It would prohibit state and local governments from enacting or enforcing any laws governing AI until at least 2035, unless specifically authorized by federal law. The bill defines AI broadly, covering applications that include generative models like OpenAI's GPT-4, facial recognition systems, predictive policing technologies, and automated hiring tools, effectively preempting the entire field of AI regulation at the state level.
Proponents’ Perspective
Proponents of the bill argue that a unified federal approach is necessary to avoid a confusing and inconsistent "regulatory patchwork." They worry that the current trajectory of state-level AI lawmaking could stifle innovation and complicate interstate commerce. In 2023 alone, 191 AI-related bills were proposed across 38 states, addressing issues ranging from algorithmic transparency to facial recognition bans and hiring regulations.
Rep. Jay Obernolte of California, one of the bill's sponsors and an AI expert, emphasized this point: "Startups and researchers can't afford to navigate dozens of contradictory standards. We need coherence." Major tech corporations, including Microsoft and Google, have echoed this sentiment, advocating for clear federal leadership on AI governance. A report from the Chamber of Commerce cautioned that inconsistent state rules could impose compliance costs exceeding $150 billion over the next decade.
Opposition Voices
Civil liberties organizations caution that the bill represents a dangerous concentration of authority, especially as the local impacts of AI become increasingly apparent. Illinois's Biometric Information Privacy Act (BIPA), for instance, has generated over 2,000 lawsuits against companies like Clearview AI and Facebook, with plaintiffs securing significant settlements for unauthorized use of their biometric data, including Facebook's $650 million class settlement. Critics argue that if the federal bill passes, state laws like BIPA would be frozen in place, unable to adapt to new technological developments.
Jeramie Scott, Senior Counsel at the Electronic Privacy Information Center (EPIC), remarked, "This is a corporate power grab. State legislatures are where real accountability is happening." Legislative efforts in states like California have sought to strengthen oversight of AI applications, such as the proposed Automated Decision Systems Accountability Act (AB 331), which would mandate algorithmic audits to address bias. Advocacy groups view such measures as crucial steps toward responsible AI governance, but the federal proposal would foreclose similar initiatives in other states.
Democratic Division
While Democratic lawmakers have traditionally championed state autonomy on tech issues, the bill has attracted support from some centrist Democrats concerned about maintaining the U.S.'s competitive edge in the global AI landscape. Rep. Suzan DelBene, chair of the New Democrat Coalition, has indicated she is "open to federal leadership" provided it includes clear protective measures.
However, prominent voices within the party are pushing back. Sen. Ron Wyden of Oregon has described the proposal as “deeply irresponsible,” arguing that it allows major tech firms to dictate regulations to the detriment of local accountability.
International Comparison
While the U.S. debates its regulatory future, other nations are advancing their AI frameworks. The European Union adopted the Artificial Intelligence Act in 2024, establishing tiered obligations based on risk, banning specific AI applications such as social scoring, and imposing rigorous transparency requirements. By contrast, the U.S. has yet to enact even foundational AI legislation, despite years of hearings and proposals.
China is also moving forward, having implemented regulations that require watermarking of AI-generated content and real-name registration for developers of generative AI models. Dr. Rashida Khan, a professor of law and emerging technology at Georgetown University, warned, “If this bill is enacted, the U.S. could become the only major nation that prohibits local regulations on AI while simultaneously lacking a comprehensive national standard.”
What’s at Stake?
Critics warn that a decade-long freeze on state action could leave vulnerable communities unprotected from AI's harmful applications. Predictive policing and facial recognition technologies have already produced biased outcomes: the MIT Media Lab's "Gender Shades" project found error rates as high as 34.7 percent for darker-skinned women in commercial gender-classification systems, compared with under 1 percent for lighter-skinned men.
According to a 2024 report from the National Institute of Standards and Technology (NIST), AI resume-screening tools have been found to discriminate against job candidates based on factors such as name, gender, and age. By blocking state responses to such findings, the legislation could leave affected communities without recourse for years to come.