A Dangerous Disconnect: Why Banning State AI Regulation Faces Overwhelming Public Opposition
The rapid rise of artificial intelligence (AI) has sparked intense debate about how to govern its risks and benefits, with public sentiment leaning heavily toward robust oversight. A May 2025 poll by Echelon Insights, commissioned by Common Sense Media, revealed that 81% of 1,022 registered U.S. voters oppose a proposed 10-year moratorium on state-level AI regulation, a provision embedded in the Republican-backed “One Big Beautiful Bill Act.” This bipartisan disapproval, echoed in posts on X and in public discourse, underscores widespread unease about stripping states of their ability to address AI’s potential harms, such as deepfakes, algorithmic bias, and privacy violations. This article examines the poll’s findings, the context of the proposed ban, its implications for governance, and the broader public sentiment driving the opposition.

The Echelon Insights Poll: A Clear Public Stance

The Echelon Insights poll, conducted in mid-May 2025, surveyed a diverse sample of 1,022 registered voters and found overwhelming resistance to preempting state AI regulations. Specifically, 73% of respondents favored both state and federal governments regulating AI, while only 19% supported exclusive federal oversight. The proposed moratorium, which would bar states from enforcing laws on AI models, systems, or automated decision-making for a decade, was rejected by 81% of voters across political affiliations. Posts on X, such as those from @americans4ri and @michhuan, amplified these findings, labeling the ban as “dangerous and unpopular” and emphasizing the public’s preference for state-level protections.

This opposition aligns with earlier surveys. A 2023 YouGov poll found that 73% of Americans, including majorities of Democrats (79%) and Republicans (73%), believe AI should be regulated by the government, with only 8% opposing any regulation. A 2023 Reuters/Ipsos poll further revealed that 61% of Americans see AI as a risk to humanity, reflecting deep concerns about its unchecked development. The 2025 poll’s results suggest that these anxieties have intensified, particularly as states have taken proactive steps to address AI’s societal impacts while federal efforts lag.

The Proposed Moratorium: Context and Controversy

The “One Big Beautiful Bill Act,” a budget reconciliation package advanced by House Republicans in May 2025, includes a clause under “Section 43201: Artificial Intelligence and Information Technology Modernization Initiative” that prohibits states from enforcing AI-related laws for 10 years. Introduced by Rep. Brett Guthrie (R-KY), chairman of the House Energy and Commerce Committee, the provision aims to prevent a “patchwork” of state regulations, which proponents like the Chamber of Commerce argue could hinder U.S. tech companies’ global competitiveness. The bill passed the House narrowly (215-214) but faces significant challenges in the Senate due to the Byrd Rule, which may deem the moratorium “extraneous” to budget matters.

Supporters of the moratorium, including some tech industry leaders, draw parallels to the 1998 internet tax moratorium, which they claim spurred e-commerce growth by reducing regulatory fragmentation. However, critics argue this comparison overlooks AI’s unique risks, such as discriminatory algorithms and deepfakes, which require immediate safeguards. States have been at the forefront of addressing these issues, with half of U.S. states enacting laws to regulate AI deepfakes in political campaigns, according to Public Citizen. California, for instance, mandates transparency in AI use for healthcare and hiring, protections that would be nullified under the moratorium.

The proposal has drawn sharp criticism. California state Sen. Scott Wiener called it “truly gross,” arguing that Congress’s failure to pass meaningful AI legislation makes state laws essential. A bipartisan group of state attorneys general, including South Carolina’s Alan Wilson, opposed the bill, emphasizing states’ roles in protecting citizens from AI’s “real dangers.” The Center for Democracy and Technology warned that the moratorium would “tie the hands” of state officials, undermining existing consumer protections. Over 140 organizations, as noted in a May 20, 2025, X post by @kortizart, have called for the provision’s rejection, reflecting broad resistance.

Why States Matter in AI Governance

States have emerged as critical players in AI regulation, filling a void left by federal inaction. Since ChatGPT’s 2022 release, Congress has introduced numerous AI-related bills, but only a bipartisan measure targeting nonconsensual AI-generated “revenge porn” has neared enactment. Meanwhile, state legislators have introduced nearly 600 AI bills in 2025, addressing issues like algorithmic discrimination, youth safety, and data privacy. California’s 2024 laws, for example, require AI developers to disclose training data and mandate transparency in healthcare decisions, while other states have focused on banning deepfakes in elections. These efforts reflect a localized, responsive approach to AI’s immediate risks.

The Future of Privacy Forum’s 2024 report highlighted that state laws often focus on “consequential decisions” in areas like hiring, healthcare, and housing, mandating transparency and consumer rights to opt out of automated systems. The proposed moratorium would halt these protections, leaving consumers vulnerable until federal legislation emerges—a prospect deemed unlikely given Congress’s track record. The California Privacy Protection Agency warned that the ban could strip millions of existing rights, such as those under the 2020 California Consumer Privacy Act.

Public Sentiment and Global Parallels

The public’s opposition to the moratorium reflects broader anxieties about AI’s societal impact, from job displacement to misinformation. A 2018 survey by the Center for the Governance of AI found that 84% of Americans believe AI should be carefully managed, a view reinforced by the 2025 poll. Globally, similar sentiments prevail. A 2025 YouGov poll in Britain showed 87% of respondents supporting laws that would require developers to prove their AI systems are safe before release, with 60% favoring bans on “smarter-than-human” AI, mirroring U.S. concerns about unchecked development. The European Union’s AI Act, enacted in 2024, bans high-risk AI uses like real-time facial recognition in public spaces, setting a precedent for comprehensive regulation that the U.S. lacks at the federal level.

The Path Forward

The overwhelming unpopularity of banning state AI regulation, as captured in the Echelon Insights poll, underscores a public demand for governance that balances innovation with accountability. States have proven agile in addressing AI’s risks, from deepfake bans to transparency mandates, while federal efforts remain stalled. The proposed moratorium, facing Senate hurdles and widespread opposition, risks disconnecting policy from public will. As AI’s influence grows, the call for state-level protections reflects a desire for responsive governance that safeguards citizens against immediate harms while fostering responsible innovation. The debate over this provision will likely shape the future of AI governance, highlighting the tension between federal uniformity and state sovereignty in an era of rapid technological change.