In a quiet congressional race in New York’s 12th district, a proxy war is unfolding that reveals how artificial intelligence companies are increasingly using political spending to shape their regulatory future. The battle between two AI-backed political action committees – one funded by Anthropic, the other by a coalition including OpenAI executives and venture capitalists – highlights the industry’s diverging visions for how AI should be governed. This isn’t just about one election; it’s about whether AI development will prioritize transparency and safety or move forward with fewer constraints.
The New York Showdown
Assembly member Alex Bores, who sponsored New York’s RAISE Act requiring AI developers to disclose safety protocols, found himself targeted by Leading the Future, a super PAC backed by $100 million from Andreessen Horowitz, OpenAI President Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale. The group has poured $1.1 million into ads attacking Bores’ congressional bid. Bores has not backed down, and he now has reinforcements: Public First Action, a PAC backed by a $20 million donation from Anthropic, is spending $450,000 to support his campaign.
What makes this conflict particularly revealing is how it mirrors broader industry tensions. Anthropic’s PAC promotes “transparency, safety standards, and public oversight” – principles that align with the company’s cautious approach to AI development. Meanwhile, Leading the Future represents a more aggressive, innovation-first philosophy. This political spending represents a new front in the AI regulation debate, moving from boardrooms and legislative hearings directly into election campaigns.
Beyond New York: A Global Regulatory Landscape
The New York battle occurs against a backdrop of intensifying global discussions about AI governance. At the recent India AI Impact Summit, world leaders emphasized the need for international cooperation. Indian Prime Minister Narendra Modi called for AI to become “a medium for inclusion and empowerment, particularly for the Global South,” while French President Emmanuel Macron urged changing the discussion from “let’s do more” to “let’s do better together.” UN chief Antonio Guterres warned that the future of AI should not be “decided by a handful of countries” or left to the “whims of a few billionaires.”
These international perspectives matter because AI regulation is increasingly becoming a geopolitical issue. As Sundar Pichai announced Google’s plans for an AI hub in Vishakhapatnam and Mukesh Ambani pledged $110 billion to India’s AI ecosystem, the competition for AI leadership extends beyond technology to include regulatory frameworks that could give nations competitive advantages.
The Safety vs. Innovation Tension
Anthropic’s political involvement reflects its broader corporate philosophy, which recently created friction with the Pentagon. According to WIRED, the Department of Defense is reconsidering a $200 million contract with Anthropic because the safety-conscious firm objects to its technology being used in certain lethal military operations. This stance has prompted discussions about designating Anthropic a “supply chain risk” for defense contractors. Pentagon spokesperson Sean Parnell stated that “our nation requires that our partners be willing to help our warfighters win in any fight.”
This military controversy illustrates the practical implications of different AI safety approaches. While some companies prioritize rapid deployment and capability expansion, others like Anthropic establish clear ethical boundaries – even when it means turning down lucrative government contracts. The New York political spending represents the domestic political manifestation of this same philosophical divide.
Industry Rivalries Go Public
The tension between AI companies isn’t just philosophical – it’s personal and public. At the India summit, an awkward moment occurred when Prime Minister Modi asked speakers to join hands in solidarity. While most executives complied, OpenAI’s Sam Altman and Anthropic’s Dario Amodei conspicuously kept their hands apart, underscoring their rivalry. The tension stems from recent public disputes, including OpenAI’s plan to introduce ads to ChatGPT and Anthropic’s Super Bowl ads criticizing OpenAI’s approach.
Altman has called Anthropic “dishonest” and “authoritarian,” while both companies announced significant expansions in India during the summit. This public friction between industry leaders suggests that the battle over AI’s future isn’t just happening in legislative chambers or corporate boardrooms – it’s playing out in international forums, advertising campaigns, and now, political contributions.
The Business Implications
For businesses and professionals, these developments signal that AI regulation is entering a new, more politically charged phase. The traditional lobbying approach is being supplemented by direct political spending in key races. Companies must now consider not just how to comply with existing regulations, but how to influence which regulations get written in the first place.
The stakes are particularly high for AI safety. New York’s RAISE Act, which triggered the political battle, requires major AI developers to disclose safety protocols and report serious misuse of their systems. Similar legislation in other states or at the federal level could significantly impact how AI companies operate, potentially requiring more transparency about training data, safety testing, and risk mitigation strategies.
A New Era of AI Politics
As AI becomes more integrated into society, the political battles surrounding it are likely to intensify. The New York race represents just the beginning of what could become a nationwide pattern of AI companies backing candidates who align with their regulatory philosophies. This raises important questions about corporate influence in politics and whether the public interest in AI safety can be adequately represented in a system where tech companies can spend millions to support friendly candidates.
The outcome of these political battles will shape not just who gets elected, but what kind of AI future we build. Will we pursue rapid innovation with minimal oversight, or establish guardrails that put safety and transparency first? The answer may depend as much on campaign spending as on technical debates.