Anthropic accuses DeepSeek, Moonshot and MiniMax of distillation attacks on Claude


The Anthropic logo displayed on the stage during the company’s Builder Summit in Bengaluru, India, on Monday, Feb. 16, 2026. Photographer: Samyukta Lakshmi/Bloomberg via Getty Images


Anthropic on Monday accused three Chinese AI companies of coordinated campaigns to extract information from its model, making it the latest American tech firm to level such claims after OpenAI issued similar complaints.

According to a statement from Anthropic, DeepSeek, Moonshot AI and MiniMax — the three firms in question — engaged in concerted “distillation attack” campaigns, flooding Claude with large volumes of specially crafted prompts in order to train their own proprietary models.

Through distillation, a smaller AI model can mimic the performance of a larger, pre-trained model by extracting knowledge from the better-trained model, a technique particularly useful for smaller teams with fewer resources.
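At its core, the technique trains the smaller model to match the larger model's output probabilities rather than hard labels. The following is a minimal, hypothetical sketch of that idea — not Anthropic's or any lab's actual pipeline — using made-up logits and the standard temperature-softened KL-divergence loss:

```python
# Hypothetical sketch of knowledge distillation: a "student" model is trained
# to match a "teacher" model's soft output distribution. All numbers here are
# illustrative, not taken from any real system.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's soft targets and the student's predictions."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.2, 1.1, 0.4]   # illustrative logits from a large model
student = [2.0, 1.5, 0.8]   # illustrative logits from a smaller model
loss = distillation_loss(teacher, student)  # minimized during student training
```

In practice the student's weights are updated by gradient descent to drive this loss toward zero across many training examples; when the teacher's outputs are harvested through an API, the prompts themselves become the training set.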

Despite Anthropic’s service restrictions preventing commercial access to Claude in China, the three firms allegedly used commercial proxy services to sidestep those restrictions, gaining access through networks running tens of thousands of Claude accounts simultaneously.

“Once access is secured, the labs generate large volumes of carefully crafted prompts designed to extract specific capabilities from the model,” Anthropic said in the statement.

Claude’s responses to these prompts are farmed en masse either for direct training of the Chinese models or for a process known as reinforcement learning, a data-intensive approach in which AI models learn decision-making through trial and error in the absence of human guidance.
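The trial-and-error learning described above can be illustrated with a toy example. The sketch below is hypothetical and deliberately simplified — a two-action bandit with made-up reward probabilities — but it shows the core loop: an agent improves its estimates purely from reward feedback, with no human-labeled answers:

```python
# Hypothetical sketch of trial-and-error (reinforcement) learning: an agent
# learns which of two actions pays off better using only reward signals.
# The reward probabilities are invented for illustration.
import random

random.seed(0)
true_reward = {"a": 0.2, "b": 0.8}  # hidden reward probabilities (assumed)
estimates = {"a": 0.0, "b": 0.0}    # the agent's learned value estimates
counts = {"a": 0, "b": 0}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Incremental running mean of observed rewards for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]
```

After enough trials the agent's estimates converge toward the true payoffs and it favors the better action — the same feedback-driven principle, scaled up enormously, that makes reinforcement learning so data-hungry for frontier models.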

Anthropic estimated that the three Chinese firms collectively generated over 16 million exchanges with Claude from around 24,000 fraudulently created accounts. Of the three firms, Anthropic found that MiniMax drove the most traffic, with over 13 million exchanges.

DeepSeek, Moonshot AI and MiniMax have yet to respond to a request for comment from CNBC.

Not the first time

Anthropic joins a growing chorus of American companies expressing concerns over distillation from Chinese AI firms.

Earlier this month, Sam Altman’s OpenAI submitted an open letter to U.S. legislators, claiming to have observed activity “indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other US frontier labs, including through new, obfuscated methods.”

The company has flagged evidence of distillation by Chinese firms since early last year, when DeepSeek launched a model that users found strikingly similar to ChatGPT, the Financial Times reported in January 2025, citing OpenAI insiders.

Distillation itself, however, is not uncommon in the industry; Anthropic acknowledged in its Monday statement that AI firms “routinely distill their own models to create smaller, cheaper versions.”

Anthropic’s allegations are likely less about industry malpractice than about violations of its terms of service, said Lia Raquel Neves, founder of ethics consultancy EITIC.

“If Anthropic itself recognizes that distillation is a legitimate and widely used practice … then the central point of controversy lies not only in the technique itself, but in the alleged fraudulent access and possible violation of contractual terms and access restrictions,” Neves added.

The company, however, expressed concerns about the competitive advantage rival firms would gain, as the practice can be used “to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”

In their respective statements, Anthropic and OpenAI have framed distillation by these Chinese firms as a national security threat.

Like OpenAI, which described DeepSeek’s practices as “adversarial distillation,” Anthropic expressed concern over the possibility of “authoritarian governments deploy[ing] frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”

A war of narratives

It remains unclear, however, how much these statements reflect genuine security concerns rather than a desire to preserve the competitive lead of America’s AI corporations, according to experts.

Given the general acceptance of distillation as a legitimate practice in the AI industry, “the boundary between legitimate use and adversarial exploitation is often blurry,” Erik Cambria, professor of artificial intelligence at Singapore’s Nanyang Technological University, told CNBC.

Some online users were quick to point out the tension between Anthropic’s claims and its own use of distillation to train proprietary models.

Anthropic has long framed “compute leadership as a national security priority,” consistently advocating for tighter export controls of advanced AI chips to China, according to Rui Ma from boutique consulting firm Tech Buzz China.

“Whether intentional or not, the narrative of illicit capability transfer strengthens the case for stricter chip restrictions,” Ma added.

“As the debate about national security and export controls unfolds amid a broader context of global competition and substantial financial investments in AI, it is important to separate real security risks from broader strategic narratives,” said EITIC’s Neves.

“This does not invalidate the existence of real [security] risks, but it does require analytical prudence to distinguish between … [different] narratives,” she said.

On the same day as Anthropic’s statement, Reuters reported that the U.S. had found evidence that DeepSeek had trained its AI model on Nvidia’s flagship Blackwell chips, apparently flouting export controls, citing anonymous senior officials.

Such reports add fuel to concerns from an administration that appears increasingly anxious about China’s rapid advances in the AI industry, especially as China’s gains reportedly stem from the use of American-developed systems.

Last Friday, the White House announced the establishment of a new initiative within the Peace Corps aimed at promoting American AI interests abroad and helping partner nations adopt cutting-edge systems.
