In a dramatic escalation of tensions between Silicon Valley and Washington, newly revealed court documents show the Pentagon told AI company Anthropic that the two sides were “very close” to resolving their differences – just days after the government publicly declared the relationship over. This revelation comes as Anthropic fights back against what it calls an “unacceptable risk” designation that could cripple its government business, raising fundamental questions about how AI companies should engage with military applications.
The Contradiction at the Heart of the Dispute
According to sworn declarations filed late Friday in California federal court, Pentagon Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei on March 4 – the day after the Defense Department finalized its supply-chain risk designation against the company – to say the two sides were “very close” on the same issues the government now cites as evidence of national security concerns. The email directly contradicts Michael’s subsequent public statements that there was “no chance” of renewed talks, creating what legal experts describe as a credibility problem for the government’s case.
Sarah Heck, Anthropic’s Head of Policy and a former National Security Council official, states in her declaration that the Pentagon’s central claim – that Anthropic demanded approval over military operations – “simply isn’t true.” She emphasizes that this concern never surfaced during months of negotiations, appearing only in court filings where Anthropic had no opportunity to respond. Meanwhile, Thiyagu Ramasamy, Anthropic’s Head of Public Sector, argues that the scenario the Pentagon fears – the company disabling military AI mid-operation – is technically impossible given how Claude models are deployed in secure government systems.
Broader Industry Implications
This isn’t just about one company. As reported by the Financial Times, the Pentagon’s move has “fractured the truce” between Silicon Valley and the Trump administration, with major tech companies including Microsoft, Apple, Meta, OpenAI, Amazon, and Google rallying behind Anthropic through legal briefs and lobbying. Dean Ball, a former Trump official, called it “by a profoundly wide margin the most damaging policy move I have ever seen,” while Alec Stapp of the Institute for Progress warned that “a lot of the tech industry is waking up and realizing we have to draw a line in the sand here.”
The stakes are enormous. Anthropic’s annualized revenue has more than doubled, from $9 billion in 2025 to $19 billion most recently, and the company raised $30 billion from at least 40 investors in February. The dispute also comes as OpenAI continues to expand its capabilities through strategic acquisitions such as Astral, the open-source Python tool-maker, highlighting the intense competition in the AI development space. Meanwhile, Meta’s recent struggles with rogue AI agents exposing sensitive data underscore the genuine security challenges that both companies and governments must navigate.
Public Sentiment and Practical Realities
Beyond the boardrooms and courtrooms, public attitudes toward AI remain deeply divided. An Anthropic survey of 80,508 Claude users across 159 countries found that 26.7% of respondents worry about AI unreliability and hallucinations and 22.3% fear job losses, yet many see AI as a tool for personal development and life organization. These mixed feelings reflect the broader tension between AI’s potential benefits and its perceived risks – a tension now playing out in the Anthropic-Pentagon standoff.
What makes this case particularly significant is that it represents the first time the U.S. government has applied a supply-chain risk designation to an American company, putting Anthropic in the same category as Chinese or Russian groups. As Tim Hwang, general counsel at the Foundation for American Innovation, noted: “It is very hard to imagine [AI] technology scaling, as a business, as an industry, even as a scientific endeavor, if ultimately the power of a state can be used to ‘murder’ a company.”
The Path Forward
As Judge Rita Lin prepares to hear arguments this Tuesday, the business community is watching closely. The outcome could set precedents for how AI companies collaborate with government agencies, what constitutes acceptable risk in national security contexts, and whether companies can maintain ethical boundaries while serving public sector clients. With separate legal battles over content scraping also on Anthropic’s docket, this moment represents a critical inflection point for the entire AI industry.
The fundamental question isn’t whether AI will play a role in national security – that ship has sailed. The real issue is how to establish clear, transparent rules that protect both national interests and technological innovation. As this case demonstrates, the current approach of contradictory communications and sudden designations serves neither purpose well. For businesses considering government contracts, the message is clear: proceed with caution, document everything, and be prepared for the rules to change without warning.

