When did a down-ballot House race become a referendum on who writes the rules for advanced algorithms? In New York's 12th district, a super PAC backed by marquee tech founders and investors has made Assembly Member Alex Bores its first target – over his push for disclosures and safety planning at large model labs.
The money, the bill, and the backlash
Leading the Future, a super PAC with support from Palantir co-founder Joe Lonsdale, OpenAI president Greg Brockman, Andreessen Horowitz, and AI search startup Perplexity, has raised $125 million to influence state races. The group has committed at least $10 million to defeat Bores, a former technologist who says he left Palantir in 2019 over its work with ICE.
Bores sponsored the RAISE Act, signed into New York law in December. It doesn't cap training runs or ban models; instead, it requires AI developers with more than $500 million in revenue to publish safety plans, follow them, and report any catastrophic incidents. That's disclosure and traceability – closer to Sarbanes-Oxley than a shutdown order.
Why target Bores? "I actually deeply understand the technology and I can't be dismissed," he told TechCrunch. He argues the PAC seeks to chill state-level action by making an example of a candidate who favors modest transparency. The PAC, for its part, has championed a light-touch posture and prefers any rules to be set federally.
Patchwork vs. preemption: the states fight on
The onslaught isn't just about one race. In the absence of a national framework, states from California to Colorado have advanced AI bills on safety, synthetic media labeling, and model disclosures. The White House has signaled a different direction: in December, President Trump signed an executive order directing agencies to challenge "onerous" state AI laws, a move aligned with industry calls for federal preemption.
Meta has poured $65 million into two state-focused super PACs, while AI companies and executives sent at least $83 million to federal campaigns in 2025, according to TechCrunch. The message: avoid a 50-state mosaic of rules that could raise compliance costs and product friction for model providers and enterprise buyers alike.
Defense deals raise the stakes – and the temperature
The Bores fight lands amid an even more combustible flashpoint: Pentagon access to cutting-edge models. Last week, OpenAI reached a deal to run its systems on a classified network with restrictions – no domestic mass surveillance, and human responsibility for any use of force – after rival Anthropic walked away over similar demands. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force," Sam Altman said, adding that the agreement reflects those principles.
The market noticed. After the DoD deal, ChatGPT mobile uninstalls surged 295% day over day on February 28, while 1-star reviews jumped 775%. Anthropic's Claude saw downloads jump 37% on February 27 and 51% on February 28, briefly hitting No. 1 on the U.S. App Store. Consumer sentiment – along with employee activism – has become another constraint on how AI firms navigate government partnerships.
A cautionary tale from another regulatory arena
Executives warn that fragmented or slow-moving oversight can choke innovation without improving safety. The chemical sector's ongoing troubles with the TSCA overhaul are instructive: companies say shifting standards and backlogs have pushed product approvals well past the law's 90-day timeline, prompting bipartisan calls in Congress to modernize the process. Even the EPA's own chemical office cites surging workloads from new manufacturing, including data center expansion.
The parallel isn't perfect, but the lesson is clear for AI buyers and vendors: clarity beats chaos. A coherent rulebook – transparent disclosures, incident reporting, and audit trails – can reduce vendor risk and procurement friction. A patchwork or perpetual re-interpretation of standards can stall deployments and raise costs, especially for regulated industries adopting foundation models and copilots at scale.
What it means for business
- Expect compliance baselines: If RAISE-style transparency proliferates, large model providers will standardize public safety plans and catastrophic incident reporting – making enterprise due diligence easier (see the sketch after this list).
- Watch federal preemption: A national AI statute could streamline compliance but narrow states' ability to set stronger requirements for sectors like healthcare, finance, and energy.
- Vendor risk is political risk: As the OpenAI–Anthropic split shows, government deals can trigger swift consumer and workforce reactions, influencing brand risk and platform choices.
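
To make the due-diligence point concrete, here is a minimal sketch, in Python, of what a standardized, machine-readable safety disclosure could look like. Everything here is hypothetical: the RAISE Act does not prescribe a format, field names, or any schema; the sketch only illustrates how a common baseline would simplify vendor review.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical schema -- the RAISE Act does not prescribe a machine-readable
# format; these record types and field names are illustrative assumptions.

@dataclass
class IncidentReport:
    occurred: date
    summary: str      # what happened and which systems were involved
    severity: str     # e.g. "major", "critical"
    remediation: str  # follow-up actions disclosed by the developer

@dataclass
class SafetyDisclosure:
    developer: str    # covered developer's name
    plan_url: str     # where the published safety plan lives
    last_updated: date
    incidents: list[IncidentReport] = field(default_factory=list)

    def review_line(self) -> str:
        """One-line summary a procurement team might log during vendor review."""
        return (f"{self.developer}: plan updated {self.last_updated}, "
                f"{len(self.incidents)} reported incident(s)")

# Example: recording a (hypothetical) vendor's disclosure during due diligence.
print(SafetyDisclosure(
    developer="Example AI Labs",
    plan_url="https://example.com/safety",
    last_updated=date(2025, 12, 1),
).review_line())
```

The point is not this particular schema; it is that one agreed disclosure format is far cheaper to audit against than fifty diverging ones.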
The bigger fight is about who gets to set the floor
Bores has released a national AI governance blueprint with 43 policy ideas and introduced bills on training data disclosures and metadata to trace synthetic content. He casts his race as a proxy battle over whether companies or elected officials set the baseline.
There�s a credible case on the other side: a single federal standard could prevent a compliance thicket that slows useful deployments in everything from code assistants to drug discovery. But if that standard arrives only after industry money muzzles state experiments, expect more whiplash – and more costly surprises for everyone building on top of large models.