AI regulatory competition sparks federal vs. state showdown

By Editor-In-Chief | November 28, 2025


For the first time, Washington is getting close to deciding how to regulate artificial intelligence. And the battle brewing isn’t about the technology; it’s about who gets to regulate it.

In the absence of meaningful federal AI standards focused on consumer safety, states have introduced dozens of bills to protect their populations from AI-related harm, including California’s AI Safety Bill SB-53 and Texas’ Responsible AI Governance Act, which prohibits the intentional misuse of AI systems.

Silicon Valley’s big tech giants and hot startups say the laws create an unworkable patchwork that threatens innovation.

“This is going to slow us down in competition with China,” Josh Vlasto, co-founder of pro-AI PAC Leading the Future, told TechCrunch.

The industry and some of its Silicon Valley transplants in the White House are pushing for a national standard or nothing at all. On the front lines of that all-or-nothing battle, new efforts are emerging to prohibit states from enacting their own AI legislation.

House members are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. At the same time, a leaked draft White House executive order shows strong support for preempting state efforts to regulate AI.

Broad preemption measures that would strip states of their right to regulate AI are unpopular in Congress, which overwhelmingly voted against a similar moratorium earlier this year. Lawmakers argue that, without federal standards in place, blocking state laws would put consumers at risk and allow tech companies to operate without oversight.

To create that national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing federal AI legislation covering a wide range of consumer protections, including fraud, health care, transparency, child safety, and catastrophic risk. Such a sweeping bill is likely to take months, if not years, to pass, which highlights why the current rush to limit state power has become one of the most contentious battles in AI policy.

Fronts: NDAA and EO

President Trump displays an executive order on artificial intelligence signed during his speech at the “Winning the AI Race” summit on July 23, 2025. Image credit: ANDREW CABALLERO-REYNOLDS/AFP/Getty Images

Efforts to prevent states from regulating AI have intensified in recent weeks.

The House is considering adding language to the NDAA that would prevent states from regulating AI, Majority Leader Steve Scalise (R-Louisiana) told Punchbowl News. Politico reported that Congress is working to finalize a deal on the defense bill by Thanksgiving. People familiar with the matter told TechCrunch that negotiations are focused on narrowing the provision’s scope, potentially preserving state authority in areas such as child safety and transparency.

Meanwhile, a leaked White House draft EO reveals the administration’s own potential preemptive strategy. The EO, which is reportedly on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to evaluate state laws they deem “adverse,” and encourage the Federal Communications Commission and Federal Trade Commission to seek national standards to override state rules.

Specifically, the EO would give David Sacks, President Trump’s AI and crypto czar and a co-founder of VC firm Craft Ventures, a co-leading role in creating a uniform legal framework. That would hand Sacks direct influence over AI policy, displacing the role typically played by the White House Office of Science and Technology Policy and its director, Michael Kratsios.

Sacks has publicly advocated for industry self-regulation, blocking state regulation, and minimal federal oversight in order to “maximize growth.”

The patchwork debate

Sacks’ position reflects a view widely held in the AI industry. Several pro-AI super PACs have emerged in recent months, spending hundreds of millions of dollars in local and state elections to oppose candidates who support AI regulation.

Leading the Future, backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale, has raised more than $100 million. This week, Leading the Future launched a $10 million campaign urging Congress to create a national AI policy that would override state laws.

“When you’re trying to drive innovation in technology, you don’t want to have a bunch of these laws being introduced by people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.

He argued that a patchwork of state regulations would leave the U.S. “behind in competition with China.”

Nathan Riemer, executive director of Build American AI, the PAC’s advocacy arm, acknowledged that the group supports preemption even though no AI-specific federal consumer protections are yet in place. Riemer argued that existing laws, such as those covering fraud and product liability, are sufficient to address harm caused by AI. While state laws often try to prevent problems before they occur, Riemer favors a more reactive approach that lets companies move quickly and answer for harms in court after the fact.

No preemption without representation

Alex Bores speaks at an event in Washington, DC, on November 17, 2025. Image credit: TechCrunch

Alex Bores, a New York State Assembly member running for Congress, is one of Leading the Future’s first targets. He sponsored the RAISE Act, which would require large AI labs to have safety plans in place to prevent serious harm.

“I believe in the power of AI, and that’s why it’s so important to have reasonable regulation,” Bores told TechCrunch. “Ultimately, the AI that wins in the market will be the one that can be trusted, and in many cases the market will undervalue or provide low short-term incentives for investing in safety.”

Bores supports a national AI policy but argues that states can respond to emerging risks more quickly.

And it’s true that states move quickly.

As of November 2025, 38 states have passed more than 100 AI-related laws this year, primarily targeting deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of these laws impose no requirements on AI developers at all.)

Congress’s own record supports the argument that it moves more slowly than the states. Hundreds of AI bills have been introduced, but few have passed. Since 2015, Rep. Lieu has introduced 67 bills in the House Science Committee; only one became law.

More than 200 members of Congress signed an open letter opposing the NDAA’s preemption language, arguing that states must “serve as a laboratory for democracy” and “retain flexibility to confront new digital challenges as they arise.” Nearly 40 state attorneys general also sent an open letter opposing a federal ban on state AI regulation.

Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders, authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, argue that the patchwork claims are overstated.

They point out that AI companies already comply with stricter EU regulations, and that most industries have found ways to operate under varying state laws. The real motive, they say, is to avoid accountability.

What would a federal standard look like?

Lieu has drafted a mega-bill of more than 200 pages that he hopes to introduce in December. It covers a range of issues, including fraud penalties, deepfake protections, whistleblower protections, computing resources for academia, and testing and disclosure requirements for large language model companies.

The last of those provisions would require AI labs to test their models and publish the results, something most currently do only on a voluntary basis. Lieu, who has not yet introduced the bill, said it would not direct federal agencies to review AI models directly. That differs from similar legislation introduced by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program before advanced AI systems are deployed.

Lieu acknowledged that his bill is not especially strict, but said it has a good chance of becoming law.

“My goal is to get something signed into law this term,” Lieu said, noting that House Majority Leader Scalise has been openly hostile to AI regulation. “I’m not writing a bill that I would introduce if I were king. I’m trying to write a bill that can pass a Republican-controlled House, a Republican-controlled Senate, and a Republican-controlled White House.”



