Anthropic’s logo is displayed on stage during the company’s Builders Summit on Monday, February 16, 2026 in Bangalore, India. Photographer: Samyukta Lakshmi/Bloomberg via Getty Images
Anthropic on Monday accused three Chinese AI companies of engaging in a coordinated campaign to extract information from its models, becoming the latest US tech company to level such allegations after OpenAI made similar accusations.
According to Anthropic's statement, the three companies in question, DeepSeek, Moonshot AI, and MiniMax, jointly launched a "distillation attack" campaign in which Claude was bombarded with prompts crafted specifically to generate training data for their own models.
Distillation lets smaller AI models mimic the performance of larger, better-trained ones by extracting knowledge from the larger model's outputs. The technique is especially useful for small teams with limited resources.
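The idea can be illustrated with a toy sketch: a small "student" model is trained only on the outputs of a "teacher" model it can query, never on the teacher's weights or original training data. The models, sizes, and temperature below are illustrative inventions for this sketch, not anything from Anthropic's or DeepSeek's actual setup.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed model whose weights we never see; we can only query it.
W_teacher = rng.normal(size=(8, 3))
X = rng.normal(size=(200, 8))                 # queries, as feature vectors
soft_targets = softmax(X @ W_teacher, T=2.0)  # teacher's softened responses

# "Student": trained purely to match the teacher's responses.
W_student = np.zeros((8, 3))
lr = 0.5
for _ in range(1000):
    p = softmax(X @ W_student, T=2.0)
    grad = X.T @ (p - soft_targets) / len(X)  # cross-entropy gradient
    W_student -= lr * grad

# After training, the student's predictions track the teacher's.
agreement = np.mean(
    softmax(X @ W_student).argmax(1) == softmax(X @ W_teacher).argmax(1)
)
print(f"student/teacher agreement: {agreement:.2f}")
```

The point of the sketch is that nothing about the teacher leaks except its query responses, which is why distillation can be run against any model exposed through an API.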
Anthropic's terms of service bar commercial access to Claude from China, but the three companies allegedly circumvented those restrictions using commercial proxy services, giving them access to networks running tens of thousands of Claude accounts simultaneously.
“Once access is secured, the lab generates a large number of carefully crafted prompts designed to extract specific features from the model,” Anthropic said in a statement.
Claude's responses to these prompts are then collected, either to directly train the Chinese models or to drive reinforcement learning, a data-intensive process in which an AI model learns to make decisions through trial and error without human guidance.
Anthropic estimated that the three Chinese companies generated a total of more than 16 million interactions with Claude from approximately 24,000 fraudulently created accounts. Of the three, Anthropic found that MiniMax drove the most traffic, with more than 13 million exchanges.
DeepSeek, Moonshot AI, and MiniMax have not yet responded to requests for comment from CNBC.
It’s not the first time
Anthropic joins a growing chorus of American companies expressing concern about distillation from Chinese AI companies.
Earlier this month, Sam Altman’s OpenAI submitted an open letter to the U.S. Congress, claiming it had observed activity “indicating continued attempts by DeepSeek to extract frontier models from OpenAI and other U.S. frontier laboratories, including new obfuscation techniques.”
OpenAI has pointed to evidence of distillation by Chinese companies since DeepSeek launched its first model early last year; users noted that the model bore a striking resemblance to ChatGPT, the Financial Times reported in January 2025, citing OpenAI insiders.
However, distillation is not uncommon in the AI industry.
Anthropic acknowledged in a statement Monday that AI companies “routinely extract our models to create smaller, cheaper versions.”
But the company said it was concerned about the competitive advantage rivals could gain from the approach, which allows them to "acquire powerful capabilities from other laboratories in a fraction of the time and at a fraction of the cost of developing them on their own."
In their respective statements, Anthropic and OpenAI characterized distillation by these Chinese companies as a national security threat.
Similar to OpenAI, which described DeepSeek’s practices as “adversarial distillation,” Anthropic expressed concern about the potential for “authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”
However, it is unclear to what extent these statements reflect genuine security concerns rather than a desire to protect U.S. AI companies' competitive advantage.
Some online users were quick to point out the tension between Anthropic's claims and its own use of distillation to train its models.
Anthropic has long framed "computing leadership as a national security priority" and has consistently pushed for tighter export controls on advanced AI chips to China, said Louis Ma of Tech Buzz China, a specialist consulting firm.
“Stories of illegal transfer of capabilities, intentional or not, strengthen the case for stricter chip limits,” Ma added.
On the same day as Anthropic’s statement, Reuters reported that the U.S. had found evidence that DeepSeek was training its AI models on Nvidia’s flagship Blackwell chips, apparently in disregard of export controls, according to unnamed senior officials.
Such reports add to the concerns of an administration that appears increasingly anxious about China's rapid advances in AI, particularly as those advances reportedly stem in part from the use of U.S.-developed systems.
Last Friday, the White House announced a new initiative within the Peace Corps established to advance U.S. AI interests abroad and help partner countries deploy cutting-edge systems.
