Anthropic on Feb. 23 said it identified industrial-scale distillation campaigns by three China-based AI labs — DeepSeek, Moonshot AI (known for its Kimi models) and MiniMax — that collectively generated more than 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of the company’s terms of service and regional access restrictions.
All three labs have models that rank among the top 15 on the Artificial Analysis performance leaderboard, and the companies are sometimes called China’s “AI tigers.” None of the three had responded publicly to Anthropic’s allegations or to media requests for comment as of publication.
How the campaigns were built
The campaigns, Anthropic said in its blog post, used a consistent technical architecture: fraudulent account clusters routed through commercial proxy services that resell API access to Claude and other frontier models at scale.
The company described the infrastructure as “hydra cluster” architectures — sprawling account networks spread across its API and third-party cloud platforms such that banning one account creates no gap, because another immediately takes its place.
In one documented case, a single proxy network managed more than 20,000 accounts simultaneously, interweaving distillation traffic with unrelated customer requests to confuse detection systems.
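The resilience of that architecture can be illustrated with a short conceptual simulation. This is not code from Anthropic’s disclosure; the class name, pool size and logic below are illustrative assumptions meant only to show why per-account bans fail against an account pool that replenishes itself.

```python
import random

class HydraPool:
    """Conceptual model of the account-rotation pattern described in
    the report: a large pool of fraudulent accounts in which any
    banned account is immediately replaced, so banning accounts one
    by one never reduces total throughput. Illustrative only."""

    def __init__(self, size):
        self.next_id = size
        self.active = set(range(size))

    def ban(self, account_id):
        # Banning one account creates no gap: a fresh account
        # joins the pool in its place.
        self.active.discard(account_id)
        self.active.add(self.next_id)
        self.next_id += 1

    def route_request(self):
        # Traffic is spread across the whole pool, which (per the
        # report) is also interleaved with unrelated requests.
        return random.choice(tuple(self.active))

pool = HydraPool(size=20_000)
for _ in range(1_000):
    pool.ban(pool.route_request())  # bans land one account at a time
print(len(pool.active))  # pool size is unchanged: 20000
```

The simulation makes the asymmetry concrete: a thousand individual bans leave the pool exactly as large as before, which is why Anthropic says it had to attribute and disrupt whole networks rather than single accounts.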
Anthropic said it attributed each campaign with high confidence through IP address correlation, request metadata, infrastructure indicators and, in some cases, corroboration from industry partners who observed the same actors on their own platforms.
In DeepSeek’s case, Anthropic said request metadata allowed it to trace accounts to specific researchers at the lab.
DeepSeek, Moonshot and MiniMax: What each targeted
The three campaigns differed significantly in scale and targeting. DeepSeek’s, the smallest by volume at more than 150,000 exchanges, focused on reasoning tasks, rubric-based grading that turned Claude into a reward model for reinforcement learning and — notably — the generation of censorship-safe alternatives to politically sensitive queries about dissidents, party leaders and authoritarianism, according to Anthropic’s blog post.
The company said those prompts were designed to train DeepSeek’s own models to steer conversations away from topics censored by the Chinese government.
Moonshot AI’s campaign, at more than 3.4 million exchanges, targeted agentic reasoning, tool use, coding, data analysis, computer-use agent development and computer vision. Anthropic attributed it partly through request metadata that matched the public profiles of senior Moonshot staff.
MiniMax’s campaign was the largest, at more than 13 million exchanges targeting agentic coding, tool use and orchestration. Anthropic said it detected the operation while it was still active, before MiniMax had released the model being trained.
When Anthropic launched a new Claude model during that window, MiniMax redirected nearly half its traffic to the updated system within 24 hours, according to the blog post.
A pattern across U.S. labs
The disclosure lands amid a broader pattern of similar accusations from other U.S. labs. In a Feb. 12, 2026, memo to the House Select Committee on the Chinese Communist Party, OpenAI accused DeepSeek of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs,” describing “new, obfuscated methods” designed to evade its detection systems and routing through third-party services to mask account origins.
OpenAI began raising concerns about distillation internally shortly after DeepSeek’s R1 launch in January 2025, when it opened a probe with Microsoft. Google’s Threat Intelligence Group has separately reported disrupting model extraction attempts against Gemini, including a campaign of more than 100,000 prompts aimed at coercing reasoning traces, per a Feb. 13 Google disclosure.
The enforcement gap: Export controls and copyright
Anthropic used the Feb. 23 blog post to make an explicit policy argument: that distillation attacks reinforce the case for export controls on advanced AI chips, because executing extraction at that scale requires significant compute.
“In reality, these advancements depend in significant part on capabilities extracted from American models,” the company wrote, “and executing this extraction at scale requires access to advanced chips.”
The argument directly supports export-restriction positions that Anthropic CEO Dario Amodei has advanced publicly, including in a January 2025 essay where he called export controls “the most important determinant of whether we end up in a unipolar or bipolar world.”
The same day Anthropic published its disclosure, Reuters reported that a senior Trump administration official said DeepSeek had trained an upcoming model on Nvidia’s Blackwell chips despite existing export restrictions.
That hardware enforcement picture is contested. At a Feb. 24 House Foreign Affairs subcommittee hearing, Rep. Sydney Kamlager-Dove said she had seen “different reports” citing conflicting figures for H200 sales to Chinese end users. David Peters, assistant secretary of commerce for export enforcement, replied that his “understanding” was that none had been sold to Chinese end users so far.
The legal tools available to pursue the alleged extraction are limited, independent legal analysts told multiple outlets. The U.S. Copyright Office affirmed in January 2025 that copyright protection requires human authorship and that providing prompts does not render AI-generated outputs copyrightable.
Winston & Strawn attorneys, writing in March 2025 and cited in a VentureBeat analysis of the allegations, noted that even where distillation can be proven, a lab likely does not hold copyright over its own model outputs — a structural limitation that applies equally to Anthropic’s situation. The restrictions at issue in this case are contractual, not copyright-based, meaning remedies would depend on terms-of-service enforcement rather than IP law.
Some AI researchers have also pushed back on how Anthropic and OpenAI have framed the activity. Erik Cambria, a professor of artificial intelligence at Singapore’s Nanyang Technological University, told CNBC that “the boundary between legitimate use and adversarial exploitation is often blurry,” given that distillation is standard practice across the industry, including at U.S. labs.
RAND associate political scientist Austin Horng-En Wang told Rest of World that one possible reason for public disclosure now is to build a case against Chinese companies acquiring more chips — noting the timing’s alignment with the chip export debate rather than any new enforcement mechanism.
How Anthropic says it is responding
Anthropic said it is responding with behavioral fingerprinting classifiers designed to detect coordinated distillation patterns, including chain-of-thought elicitation.
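Anthropic has not published details of those classifiers, but the general idea of behavioral fingerprinting can be sketched. The marker strings, scoring function and threshold below are hypothetical assumptions for illustration, not the company’s actual method.

```python
# Hypothetical sketch of behavioral fingerprinting: an account devoted
# to distillation concentrates on a narrow class of prompts (e.g.
# chain-of-thought elicitation, rubric-based grading), while a normal
# customer mixes many task types. All markers/thresholds are invented.
COT_MARKERS = ("think step by step", "show your reasoning", "explain each step")
RUBRIC_MARKERS = ("grade the following", "score from 1 to 10", "rubric")

def distillation_score(prompts):
    """Fraction of an account's prompts matching elicitation patterns."""
    hits = sum(
        any(m in p.lower() for m in COT_MARKERS + RUBRIC_MARKERS)
        for p in prompts
    )
    return hits / max(len(prompts), 1)

def flag_account(prompts, threshold=0.8):
    # Flag accounts whose traffic is dominated by elicitation prompts.
    return distillation_score(prompts) >= threshold

print(flag_account(["Think step by step: why is the sky blue?"] * 50))  # True
print(flag_account(["Summarize this article.", "Fix this SQL query."]))  # False
```

A production system would presumably look at far richer signals — request timing, metadata, cross-account correlation — but the principle is the same: score behavior across an account’s whole history rather than judging single requests.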
The company has tightened verification requirements for educational accounts, security research programs and startup pathways, which it described as the access routes most commonly exploited for fraudulent account setup.
The company is also developing API and model-level safeguards intended to reduce the utility of extracted outputs for training, and it’s sharing technical indicators with other AI labs, cloud providers and relevant authorities.
“No company can solve this alone,” Anthropic wrote in the blog post, calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”