In short: The Trump administration is waging a multi-pronged campaign to prevent states from regulating AI, using a DOJ litigation task force, a Commerce Department assessment of “burdensome” state laws, and a legislative framework urging Congress to preempt state-level regulation with a “minimally burdensome national standard.” But states have accelerated in the opposite direction, with 1,208 AI bills introduced in 2025 and 145 enacted, and Congress has twice declined to impose an AI moratorium, including stripping one from the One Big Beautiful Bill Act by a 99-1 Senate vote.
Doug Fiefia is a first-time Republican representative from Herriman, Utah, and a former Google salesperson who led the team working on the company’s initial AI model rollout. Earlier this year, he introduced House Bill 286, the AI Transparency Act, which would require frontier AI companies to publish their safety and child protection plans and include whistleblower protections for employees who raise safety concerns. It passed unanimously in a House committee. Then the White House killed it.
On February 12, the White House Office of Intergovernmental Relations sent a letter to Senate Majority Leader Kirk Cullimore Jr. of Utah stating: “We strongly oppose Utah HB 286 and see it as an irreparable bill that runs counter to the Administration’s AI Agenda.” White House officials had several conversations with Fiefia over the previous two weeks urging him not to advance the bill, but offered no specific changes that would make it acceptable. The bill died in the Senate.
Fiefia’s response was pointed. He said it is especially important to stand up for states’ rights, and to show that the principle is nonpartisan, when a Republican holds the White House. His bill targeted only “frontier developers,” companies that used at least 10^26 floating-point operations to train a model, and capped penalties at $1 million. That is modest by AI law standards. The White House treated it as existential.
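For a sense of scale, the 10^26 threshold can be made concrete with the common 6ND rule of thumb (training compute ≈ 6 × parameter count × training tokens). The figures below are illustrative assumptions, not details from the bill:

```python
# Rough training-compute estimate via the common 6*N*D approximation,
# where N = parameter count and D = training tokens. Values are illustrative.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
flops = training_flops(1e12, 20e12)
print(f"{flops:.1e}")   # 1.2e+26
print(flops >= 1e26)    # True: such a model would cross the "frontier" line
```

Under this approximation, only a handful of the largest models trained to date would plausibly cross the threshold, which is why the bill’s sponsors considered its scope narrow.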
Federal architecture
The Trump administration’s campaign against state AI regulation has three components, each building on the last.
The first is Executive Order 14365, “Providing a National Policy Framework for Artificial Intelligence,” signed on December 11, 2025. It created an AI Litigation Task Force at the Justice Department, effective January 10, 2026, to challenge state AI laws in federal court on the grounds that they unconstitutionally burden interstate commerce or are preempted by federal law. It directed the Commerce Secretary to publish by March 11 a comprehensive assessment identifying which state AI laws the administration deems “burdensome,” and directed the FTC to issue a policy statement on when state laws are subject to the FTC Act. It conditioned access to federal broadband funding on states declining to pass what the administration considers onerous AI laws. The order carved out child safety protections, data center zoning authority, and state government procurement from preemption.
The second is the Commerce Department’s assessment, published by the March deadline, which singled out laws in Colorado, California, and New York for special scrutiny. The assessment feeds the DOJ task force, which is expected to begin filing federal lawsuits by the summer of 2026; those cases are predicted to take two to three years to resolve.
The third is the National Policy Framework for AI, published on March 20, which contains legislative recommendations organized around seven pillars: child protection, AI infrastructure, intellectual property, censorship and free speech, innovation, workforce training, and preemption of state AI laws. The framework states that “Congress should pre-empt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty inconsistent ones.” The administration’s position on copyright is that training AI models on copyrighted material “does not violate copyright laws.” On content moderation, it calls on Congress to prevent the federal government from “prohibiting, coercing, or forcing technology providers, including AI providers, to modify content based on partisan or ideological agendas.”
David Sacks, who served as the AI and cryptocurrency czar before moving into a presidential advisory role in late March, laid out the logic plainly: “You have 50 different states regulating it in 50 different ways, and that creates a patchwork of regulation that is difficult for innovators.” Of Colorado’s algorithmic discrimination rules, he said they raise “very serious First Amendment concerns.” On blue states more broadly: “We don’t like to see blue states trying to inject their woke ideology into their AI models, and we really want to try and stop that.”
What have the states done?
States have not been idle while Washington debates whether they should be allowed to act. In 2023, fewer than 200 AI bills were introduced in state legislatures. In 2024, the number rose to 635 across 45 states, with 99 enacted. In 2025, 1,208 AI-related bills were introduced, the first year in which every one of the 50 states introduced at least one, and 145 were enacted. In the first two months of 2026 alone, 78 chatbot-specific safety bills were introduced in 27 states.
California’s frontier AI transparency act took effect on January 1, 2026, the same day as the Texas Responsible Artificial Intelligence Governance Act. Colorado’s AI law banning algorithmic discrimination had its effective date pushed back to June 30, 2026. The breadth of this legislation reflects a bipartisan consensus at the state level that AI regulation cannot wait for Congress.
Utah Gov. Spencer Cox, a Republican, argued that states should retain the power to regulate artificial intelligence. “Let us use this technology to benefit humanity and regulate it so that it does not destroy humanity,” he said. “I don’t think this is a contradiction.” He warned that if AI companies “start selling sexy chatbots to kids in my state, I have a problem with that,” and announced a $10 million AI initiative for workforce readiness.
Congress can’t agree
The administration’s framework requires acts of Congress to be legally binding. The executive order itself does not preempt, supersede, or invalidate any state AI law. Until courts rule on specific challenges, regulated parties must continue to comply with state regulations.
The most comprehensive federal AI bill is Sen. Marsha Blackburn’s TRUMP AMERICA AI Act, a 291-page discussion draft released March 18. It would impose a duty of care for high-risk AI systems, require developers to publish documentation of training data and usage, repeal Section 230 of the Communications Decency Act, and create an AI liability framework allowing the Attorney General, state attorneys general, and private plaintiffs to sue AI developers. It would preempt state laws on frontier AI catastrophic risk management and most state digital replica laws. It remains a discussion draft and has not been formally introduced.
The One Big Beautiful Bill Act originally included a ten-year moratorium on state AI regulation, later reduced to five years and tied to federal broadband funding. The Senate voted 99 to 1 to strip the AI preemption provision, with only Sen. Thom Tillis of North Carolina voting to keep it. The bill became law on July 4 without any restrictions on state AI legislation. Congress’s message was unequivocal, even if the preemption question is not permanently settled.
The money behind the fight
The lobbying infrastructure on both sides has expanded commensurately. The Leading the Future super PAC, launched in August 2025 by Andreessen Horowitz and OpenAI president Greg Brockman, raised $125 million in 2025 and had $70 million on hand by year’s end. It backs candidates who favor AI-friendly policies and uniform federal rules over state-by-state approaches.
On the other side, Anthropic donated $20 million in February 2026 to Public First Action, a bipartisan group that plans to back 30 to 50 candidates from both parties who support AI guardrails. Public First’s broader network of super PACs has pledged $50 million to pro-regulation candidates. The tech industry has reportedly spent more than $1 billion in a collective effort to prevent states from regulating AI.
A bipartisan coalition of 36 state attorneys general sent a letter to Congress opposing AI preemption, arguing that risks including fraud, deepfakes, and harmful chatbot interactions, especially for children and the elderly, necessitate state-level protections. Colorado’s attorney general has vowed to challenge the executive order in court.
Precedent that matters
Hours after taking office on January 20, 2025, the Trump administration rescinded Executive Order 14110, the Biden-era AI order, calling it an “unnecessary burden.” That order had required developers to conduct pre-release safety assessments and share the findings with the government. Its replacement, signed three days later, was titled “Removing Barriers to American Leadership in Artificial Intelligence.” The trajectory, from repealing federal safety requirements to trying to prevent states from creating their own, has a logic: if the federal government does not regulate AI and states are not allowed to, then AI goes unregulated.
The contrast with Europe is instructive. The EU’s AI Act came into full force in January 2026, creating a single regulatory framework across 27 member states. The US approach is the opposite: no mandatory federal standard, and an active campaign to prevent states from filling the gap. The result is that AI governance in America is determined not by legislation or regulation but by litigation, executive orders, and the political leverage of the companies that benefit most from the absence of rules.
Utah Republican Doug Fiefia, who watched his transparency bill die after the White House letter, is now running for state senate. His opponent, an incumbent who helped kill the bill, reportedly said it “would have put Utah out of the AI innovation business.” Fiefia co-chairs the Future Caucus’s artificial intelligence task force alongside Vermont Democrat Monique Priestley, who spent 24 years in the tech industry. They represent a generation of state legislators who have worked in technology, understand what AI can do, and believe that insight should inform regulation rather than prevent it. They are trying to fill a regulatory vacuum before it becomes permanent.