In a dramatic escalation of the global AI arms race, Anthropic has publicly accused three prominent Chinese AI companies – DeepSeek, Moonshot AI, and MiniMax – of orchestrating what appears to be the largest documented case of AI model theft to date. According to Anthropic’s investigation, these labs created over 24,000 fake accounts to engage in more than 16 million exchanges with Claude, specifically targeting its most advanced capabilities in agentic reasoning, tool use, and coding. This revelation comes at a critical moment when U.S. policymakers are debating whether to tighten or loosen export controls on advanced AI chips to China.
The Distillation Dilemma: When Innovation Becomes Imitation
At the heart of this controversy lies a technique called “distillation,” a legitimate training method where AI labs create smaller, more efficient versions of their own models. However, when competitors use this method to essentially copy another lab’s work, it crosses into murky ethical territory. Dmitri Alperovitch, chairman of the Silverado Policy Accelerator, told TechCrunch: “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact.”
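To make the technique concrete: in its standard, legitimate form, distillation trains a small "student" model to match the output distribution of a larger "teacher" model. The sketch below is a minimal, hypothetical illustration of the core loss in NumPy, not a depiction of any lab's actual pipeline; the temperature-softened KL divergence is the textbook formulation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets.

    A higher temperature flattens the teacher's distribution, exposing
    the relative probabilities it assigns to "wrong" answers -- the
    signal the student learns from. The T^2 factor is the conventional
    scaling that keeps gradient magnitudes comparable across temperatures.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return float(np.mean(kl)) * temperature ** 2

# Toy check: a student close to the teacher incurs a smaller loss
# than one with an unrelated output distribution.
teacher = np.array([[4.0, 1.0, 0.5]])
aligned = np.array([[3.9, 1.1, 0.4]])
mismatched = np.array([[0.5, 4.0, 1.0]])
assert distillation_loss(aligned, teacher) < distillation_loss(mismatched, teacher)
```

What Anthropic alleges is this same mechanism pointed outward: instead of a lab's own teacher model, the "teacher" signal is harvested from a competitor's API responses at scale.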
The scale of these operations is staggering. Anthropic tracked more than 150,000 exchanges from DeepSeek aimed at improving foundational logic and alignment, particularly around censorship-safe alternatives to policy-sensitive queries. Moonshot AI conducted over 3.4 million exchanges targeting agentic reasoning, tool use, coding, and computer vision. Most alarmingly, MiniMax redirected nearly half its traffic to siphon capabilities from Claude’s latest model immediately upon its launch, generating 13 million exchanges focused on agentic coding and tool orchestration.
The Chip Connection: Hardware as Strategic Leverage
Anthropic’s accusations directly intersect with ongoing debates about AI chip exports. The company argues that “distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation.” This comes just one month after the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips like the H200 to China – a decision critics argue could accelerate China’s AI capabilities at a critical moment.
But here’s where the story gets more complex. While U.S. labs focus on frontier dominance, Chinese companies are pursuing a different strategy. As Ritwik Gupta, an AI researcher at UC Berkeley, notes: “Chinese labs are getting better at building models that are useful for making applications. They largely view AI as a tool for building products, in contrast with the US labs, which view it as a race for ‘frontier’ dominance first, product second.” This practical approach has led to innovations like ByteDance’s Seedance 2.0 video-generating model and Moonshot’s open-sourced Kimi 2.5 coding system.
The Investment Paradox: Scarcity Over Substance?
Financial markets tell a revealing story about China’s AI landscape. While established tech giants like Alibaba and Tencent trade at depressed valuations, AI challengers like Zhipu and MiniMax Group have seen their shares more than quadruple this year. According to Financial Times analysis, this reflects “scarcity value”: investors are flocking to pure-play AI companies despite their lack of profitability, seeing them as rare vehicles for betting on groundbreaking technologies.
This investment frenzy occurs alongside troubling revelations about AI’s fundamental limitations. Recent studies show that large language models from companies including Anthropic can memorize and generate near-verbatim copies of entire copyrighted novels. As Yves-Alexandre de Montjoye, professor at Imperial College London, warns: “There’s growing evidence that memorisation is a bigger thing than previously believed.” This challenges AI companies’ fair use defenses and raises questions about whether models built through illicit distillation might inherit – or lose – critical safeguards.
The Security Stakes: When Safeguards Disappear
Anthropic’s concerns extend beyond commercial competition to national security. The company warns that “models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.” This risk multiplies if these models are open-sourced, potentially enabling authoritarian governments to deploy AI for offensive cyber operations, disinformation campaigns, and mass surveillance.
Ironically, while Anthropic faces accusations about its technology being stolen, it’s simultaneously developing tools to enhance cybersecurity. The company recently launched Claude Code Security, an AI tool that analyzes code contextually rather than through rule-based systems. This announcement caused immediate market disruption, with cybersecurity stocks like CrowdStrike dropping 8% and Cloudflare falling 8.1%. As Dennis Dick, Head Trader at Triple D Trading, observed: “This kind of market is frightening for investors because prices relentlessly go down as soon as even a hint of disruption appears.”
The Military Dimension: AI’s Dual-Use Dilemma
Adding another layer of complexity, Anthropic CEO Dario Amodei has been summoned to the Pentagon to discuss military use of Claude AI. The meeting comes after the company reportedly refused to allow its technology to be used for mass surveillance of Americans and development of autonomous weapons. With a $200 million Department of Defense contract at stake, and reports that Claude was used during the January 3 special operations raid that captured Venezuelan president Nicolás Maduro, Anthropic finds itself navigating treacherous waters between commercial interests, ethical principles, and national security requirements.
A Pattern of Industrial-Scale Theft
The Financial Times reveals that these are not isolated incidents but part of a systematic pattern. Anthropic describes the operations as “industrial-scale distillation attacks” that allow Chinese labs to train smaller models on outputs from advanced systems without requiring the same computing resources. This practice directly undermines U.S. export controls designed to maintain technological advantages.
Anthropic stated: “Distillation attacks undermine those controls by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means.” This warning comes as U.S. restrictions target Nvidia’s most advanced chips, including the Blackwell series, creating a high-stakes game of technological cat-and-mouse.
The Geopolitical Chessboard
What makes this situation particularly concerning is that it is not the first time Chinese labs have been accused of such practices. OpenAI previously suspected DeepSeek of distilling its ChatGPT models, suggesting a recurring pattern rather than isolated incidents. The U.S. House Select Committee on Strategic Competition Between the United States and the Chinese Communist Party has been monitoring these developments closely, recognizing that AI superiority has become a critical component of national security.
Meanwhile, Chinese AI labs continue to release powerful new models praised for their efficiency in practical applications like AI agents and video generation. This creates a paradoxical situation where, despite export restrictions and alleged intellectual property theft, Chinese companies are demonstrating remarkable innovation in applied AI – raising questions about whether current U.S. strategies are effectively addressing the real challenges.
New Revelations: Distillation’s Darker Implications
Recent analysis reveals that distillation attacks aren’t just about copying capabilities – they could bypass critical safety measures built into original models. When models are trained on outputs rather than original data, they may lose the ethical guardrails and alignment techniques that prevent misuse. This creates a dangerous scenario where advanced AI capabilities could proliferate without the constraints that responsible developers implement.
Anthropic’s investigation found that the 24,000 fraudulent accounts and over 16 million exchanges weren’t random queries but targeted attempts to extract specific capabilities. This systematic approach suggests sophisticated understanding of Claude’s architecture and vulnerabilities. The Financial Times reports that such attacks allow foreign labs to “close the competitive advantage that export controls are designed to preserve through other means,” creating a technological end-run around hardware restrictions.
Market Reactions and Strategic Shifts
The investment landscape reveals deeper strategic shifts. While AI challengers like Zhipu and MiniMax Group see their valuations soar, established giants like Alibaba and Tencent struggle despite their AI engagement. This “scarcity value” phenomenon suggests investors are betting on pure-play AI companies as rare vehicles for groundbreaking technology exposure, even when profitability remains distant.
This investment pattern coincides with Chinese labs’ practical innovation focus. During the recent Lunar New Year holiday, companies like ByteDance, Alibaba, and Moonshot released new AI models emphasizing practical applications – from video generation to coding systems. This contrasts sharply with U.S. labs’ frontier dominance approach, creating divergent development paths that could reshape global AI competition.
Security Implications Beyond Commercial Theft
The security implications extend far beyond intellectual property concerns. Models built through illicit distillation may lack the safety measures of their originals, potentially enabling malicious applications. This risk is particularly acute when considering military applications, where stripped-down AI could be deployed for offensive operations without ethical constraints.
Anthropic’s own security developments highlight this tension. While facing accusations of technology theft, the company launched Claude Code Security – a tool that found over 500 vulnerabilities in open-source projects. This announcement caused immediate market disruption, with cybersecurity stocks dropping significantly as investors recognized AI’s potential to transform security practices.
Emerging Threats: When AI Security Tools Become Targets
The irony deepens when considering that AI security tools themselves could become targets for similar attacks. Claude Code Security’s ability to find vulnerabilities in codebases – Anthropic claims it identified over 500 in open-source projects – demonstrates how AI can enhance cybersecurity. But what happens when these defensive capabilities are distilled and repurposed? Could attackers use similar techniques to find and exploit vulnerabilities more efficiently?
This creates a circular threat environment in which the very tools designed to protect systems might inadvertently teach adversaries how to breach them. The market’s reaction, with cybersecurity stocks dropping 8% or more, suggests investors recognize this double-edged nature of AI security innovation.
Strategic Implications for Global AI Development
As the AI industry grapples with these interconnected challenges, one thing becomes clear: the race for AI supremacy is no longer just about technological innovation. It’s about intellectual property protection, hardware access, market dynamics, security safeguards, and geopolitical positioning. The distillation attacks revealed by Anthropic may represent just the visible tip of a much larger iceberg – one that could reshape global AI development for years to come.
The question now facing policymakers, investors, and industry leaders is whether current approaches to AI governance and competition are adequate for this new reality. With Chinese labs demonstrating practical innovation despite restrictions, and U.S. companies facing complex ethical and strategic dilemmas, the global AI landscape appears headed for a period of intensified competition and potential conflict.

