As 2026 unfolds, a regulatory battle over artificial intelligence is heating up in the United States, pitting state-level safety laws against federal efforts to centralize control. While California and New York have implemented groundbreaking AI safety legislation, the Trump administration is pushing back with executive orders and litigation task forces aimed at creating a “minimally burdensome” national framework. This clash comes at a critical moment when global AI competition is intensifying, raising questions about whether the U.S. can maintain its technological edge while ensuring responsible development.
The State-Level Safety Framework
California’s SB-53 and New York’s RAISE Act represent the first significant AI safety laws in the U.S., creating a patchwork of regulations in the absence of federal legislation. SB-53, which took effect on January 1, requires AI developers with over $500 million in annual revenue to publish risk-mitigation plans and report safety incidents within 15 days, with fines of up to $1 million for non-compliance. New York’s RAISE Act follows a similar approach but imposes stricter timelines and higher penalties, mandating incident reporting within 72 hours and fines of up to $3 million for repeat violations.
Data protection lawyer Lily Li, founder of Metaverse Law, notes the political nature of the revenue threshold: “It’s interesting that there is this revenue threshold, especially since there has been the introduction of a lot of leaner AI models that can still engage in a lot of processing, but can be deployed by smaller companies.” She suggests the threshold reflects lawmakers’ concern about imposing growth-inhibiting compliance costs on smaller firms, rather than any difference in the potential harm posed by companies of different sizes.
Federal Pushback and Legal Challenges
The Trump administration is mounting significant resistance to state-level AI regulation. In December, President Trump signed an executive order arguing that “excessive State regulation thwarts” innovation and creates a problematic patchwork of laws. The order established an AI Litigation Task Force to challenge state laws deemed inconsistent with federal policy, reviving an earlier push to impose a 10-year moratorium on state AI regulation.
However, Li remains skeptical about the task force’s impact: “The AI litigation task force will focus on laws that are unconstitutional under the dormant commerce clause and First Amendment, preempted by federal law, or otherwise unlawful. The 10th Amendment, however, explicitly reserves rights to the states if there’s no federal law, or if there’s no preemption of state laws by a federal law.”
The Global Context: China’s Strategic Moves
While U.S. regulators debate domestic policy, China is making strategic moves that could reshape the global AI landscape. According to a Financial Times analysis, China is positioning itself to win the long-term AI race through strengths in open-source models, algorithmic efficiency, and state-driven industrial strategy. Chinese researchers have generated three times as many AI patent filings as their U.S. counterparts, and Goldman Sachs projects that China’s spare energy capacity will be over three times the world’s expected data center power demand by 2030.
Angela Huyue Zhang, a law professor at the University of Southern California, frames the competition differently: “The question is no longer whose models hit technical benchmarks, but who can build and sustain an ecosystem that embeds AI into everyday products and services.” This perspective suggests that regulatory approaches may need to consider not just safety but also ecosystem development.
Simultaneously, China is tightening control over foreign technology. Reuters reports that Chinese authorities are pressuring domestic firms to avoid security software from Western providers like Fortinet, Palo Alto Networks, and VMware, citing national security concerns. This move toward “digital sovereignty” could further separate China’s tech ecosystem from Western counterparts.
Economic Implications and Job Market Concerns
The regulatory debate occurs against a backdrop of economic uncertainty about AI’s impact. The IMF warns that global economic resilience is at risk if the AI boom falters, noting that growth is overly reliant on AI investment in the U.S. technology sector. Pierre-Olivier Gourinchas, IMF chief economist, cautions: “There is a risk of a correction, a market correction, if expectations about AI gains in productivity and profitability are not realised.”
Meanwhile, concerns about AI’s impact on employment are becoming more pronounced. While initial fears of widespread layoffs haven’t materialized since ChatGPT’s 2022 launch, economists expect more visible labor market reshaping in 2026. Molly Kinder, senior fellow at the Brookings Institution, expresses concern: “I am really worried about this. It is the clear, stated intention of employers and investors to deploy this and create efficiencies with, in many cases, an objective of cutting labour costs… we are underestimating in the medium to long term how much transformation could be ahead.”
The Safety Perspective: Progress or Paperwork?
From a safety standpoint, opinions differ on the effectiveness of current regulations. Gideon Futerman, special projects associate at the Center for AI Safety, doesn’t believe SB-53 will meaningfully impact safety research: “This won’t change the day-to-day much, largely because the EU AI Act already requires these disclosures. SB-53 doesn’t impose any new burden.”
However, Futerman acknowledges the symbolic importance: “SB-53’s level of regulation is nothing compared to the dangers, but it’s a worthy first step on transparency and the first enforcement around catastrophic risk in the US. This is where we should have been years ago.” He notes that neither law requires independent third-party testing of models, though New York’s RAISE Act does mandate annual third-party audits.
Business Implications and Market Dynamics
For businesses, the regulatory landscape creates both challenges and opportunities. Li observes that governance has become a higher priority for AI companies driven by bottom-line considerations: “Enterprise customers are pushing liability onto developers, and investors are noting privacy, cybersecurity, and governance in their funding decisions.”
The revenue threshold in state laws creates an uneven compliance landscape: smaller startups fall below the cutoff and escape the new obligations, while larger companies shoulder additional compliance burdens. This comes as chipmakers navigate new tariffs on advanced semiconductors, with Nvidia expressing support for policies that “strike a thoughtful balance that is great for America.”
Looking Ahead: The Future of AI Governance
As the regulatory battle continues, several key questions emerge: Can state laws survive federal challenges? Will the U.S. develop coherent national AI policy before other nations gain competitive advantages? And how will businesses navigate this complex regulatory environment while maintaining innovation?
Futerman suggests future legislation should address remaining gaps: “That includes strengthening export controls and chip tracking, improving intelligence on frontier AI projects abroad, and coordinating with other nations on the military applications of AI to prevent unintended escalation.”
What’s clear is that the debate over AI regulation is no longer theoretical: it’s happening now, with real consequences for businesses, workers, and national competitiveness. As states and the federal government wrestle for control, the outcome will shape not just AI safety but America’s position in the global technology race.