In a bold move that could reshape the artificial intelligence hardware landscape, chip startup MatX has secured $500 million in Series B funding to challenge Nvidia’s dominance in AI processors. Founded by former Google hardware engineers Reiner Pope and Mike Gunter, the company aims to produce chips that are 10 times more efficient than Nvidia’s GPUs for training large language models, with plans to begin shipping through TSMC in 2027.
The funding round, led by Jane Street and Situational Awareness – an investment fund formed by former OpenAI researcher Leopold Aschenbrenner – includes notable backers like Marvell Technology, NFDG, Spark Capital, and Stripe co-founders Patrick and John Collison. While MatX hasn’t disclosed its latest valuation, competitor Etched recently raised a similar $500 million round at a $5 billion valuation, suggesting MatX could be approaching comparable worth.
The Geopolitical Context of AI Hardware
MatX’s emergence comes at a critical juncture in the global AI race, where hardware innovation intersects with national security concerns. Recent allegations from Anthropic reveal that Chinese AI labs – including DeepSeek, Moonshot AI, and MiniMax – have conducted “distillation attacks” on U.S. AI models, using over 24,000 fake accounts to generate more than 16 million exchanges to copy capabilities like agentic reasoning and coding.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think-tank, told TechCrunch: “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact. This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
These distillation attacks, which involve training smaller models on outputs from advanced systems, raise questions about whether U.S. export controls on advanced chips like Nvidia’s Blackwell series are sufficient. Anthropic argues that such attacks require access to advanced chips and reinforce the need for stricter export restrictions.
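To make the mechanism concrete, the sketch below shows classic knowledge distillation in miniature: a student model is trained to match a teacher's softened output distribution rather than hard labels. This is an illustrative toy in NumPy, not the attackers' actual pipeline; all names and numbers are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T yields softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student next-token distributions.

    This is the core of knowledge distillation: the student is pushed to
    reproduce the teacher's full output distribution, which transfers far
    more signal per example than a single correct label would.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy example: hypothetical logits over a 3-token vocabulary.
teacher = np.array([[4.0, 1.0, 0.5]])      # teacher strongly prefers token 0
aligned = np.array([[3.8, 1.1, 0.4]])      # student roughly agrees with teacher
misaligned = np.array([[0.5, 1.0, 4.0]])   # student prefers token 2

# A student that mimics the teacher incurs a much lower loss.
print(distillation_loss(teacher, aligned), distillation_loss(teacher, misaligned))
```

In an API-scraping "distillation attack," the teacher's logits are not available, so the attacker instead trains on millions of sampled outputs, as the 16 million exchanges alleged here suggest; the training objective is the same in spirit.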
The Military-AI Complex
Meanwhile, another frontier AI company faces pressure from a different direction. The Pentagon has given Anthropic until Friday evening to grant unrestricted military access to its AI model or face being designated a ‘supply chain risk’ – potentially having the Defense Production Act invoked to force compliance.
Defense Secretary Pete Hegseth delivered this ultimatum to CEO Dario Amodei, citing the military’s need for AI capabilities in national defense. Anthropic, which has a $200 million contract with the Department of Defense and whose Claude model was used in the capture of Venezuelan leader Nicolás Maduro in January, refuses to allow its technology to be used for mass surveillance or autonomous weapons.
Dean Ball, senior fellow at the Foundation for American Innovation and former senior policy advisor on AI in Trump’s White House, warned: “It would basically be the government saying, ‘If you disagree with us politically, we’re going to try to put you out of business.’ Any reasonable, responsible investor or corporate manager is going to look at this and think the U.S. is no longer a stable place to do business.”
Interpretability as Competitive Advantage
As companies like MatX push hardware boundaries, others focus on making AI more transparent. Guide Labs recently open-sourced Steerling-8B, an 8 billion parameter large language model with a novel architecture designed for interpretability. The model allows every token produced to be traced back to its origins in the training data, addressing challenges in understanding model behavior and controlling outputs.
Julius Adebayo, CEO of Guide Labs, explained: “The kind of interpretability people do is…neuroscience on a model, and we flip that. What we do is actually engineer the model from the ground up so that you don’t need to do neuroscience. This model demonstrates that training interpretable models is no longer a sort of science; it’s now an engineering problem.”
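Guide Labs has not published the internals of Steerling-8B's attribution mechanism here, but the general idea of tracing an output back to training data can be sketched with a simple retrieval analogue: embed the training corpus, embed a generated span, and return the nearest training examples. Everything below (the corpus, the mean-of-word-vectors "encoder") is a hypothetical stand-in for illustration only.

```python
import numpy as np

# Hypothetical toy corpus; a real system would index billions of tokens.
training_corpus = [
    "the cat sat on the mat",
    "stock prices rose sharply",
    "the dog slept on the rug",
]

rng = np.random.default_rng(0)
vocab = sorted({w for s in training_corpus for w in s.split()})
word_vecs = {w: rng.normal(size=8) for w in vocab}  # random vectors as a fake encoder

def embed(text):
    """Mean of word vectors, normalized -- a stand-in for a real text encoder."""
    vecs = [word_vecs[w] for w in text.split() if w in word_vecs]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

corpus_embeddings = np.stack([embed(s) for s in training_corpus])

def attribute(generated_text, top_k=1):
    """Return the training examples most similar to a generated span."""
    q = embed(generated_text)
    scores = corpus_embeddings @ q  # cosine similarity, since vectors are unit-length
    top = np.argsort(scores)[::-1][:top_k]
    return [(training_corpus[i], float(scores[i])) for i in top]

# A generated span that echoes a training example points straight back to it.
print(attribute("the cat sat on the mat"))
```

Post-hoc retrieval like this is exactly the "neuroscience on a model" approach Adebayo contrasts with Guide Labs' method; Steerling-8B's pitch is that attribution is built into the architecture rather than bolted on afterward.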
What This Means for Businesses
For enterprises investing in AI, these developments signal several important trends:
- Hardware diversification: MatX’s challenge to Nvidia could eventually lower costs and increase options for companies running large-scale AI training, though the 2027 timeline means Nvidia’s dominance will continue for several more years.
- Supply chain considerations: The geopolitical tensions highlighted by Anthropic’s allegations suggest companies may need to consider where their AI models and hardware originate, particularly for sensitive applications.
- Regulatory awareness: The Pentagon-Anthropic dispute illustrates how government pressure could shape what AI capabilities are available and under what conditions.
- Transparency demands: As models like Steerling-8B demonstrate, interpretability is becoming a competitive feature, especially for regulated industries like finance and healthcare.
The $500 million investment in MatX represents more than just another startup funding round – it’s a bet on a future where AI hardware innovation must navigate complex geopolitical, military, and ethical considerations. As companies like MatX, Anthropic, and Guide Labs chart different paths forward, the broader AI ecosystem faces fundamental questions about competition, security, and control that will shape the technology’s development for years to come.

