Anthropic dropped a bombshell on February 24 that reshapes how we think about AI competition. The company revealed that three Chinese AI labs ran industrial-scale operations to extract the capabilities of Claude, its flagship model, through a technique called distillation, using approximately 24,000 fraudulent accounts to generate over 16 million exchanges. The accused companies are DeepSeek, Moonshot AI, and MiniMax.

Understanding Distillation Attacks
Model distillation is a legitimate machine learning technique where a smaller "student" model learns from the outputs of a larger "teacher" model. Researchers use it to create more efficient versions of expensive models. However, when done without authorization and at massive scale, it becomes a form of intellectual property extraction.
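To make the mechanics concrete, here is a minimal sketch of the textbook soft-label distillation loss in PyTorch (function and parameter names are mine, chosen for illustration). One caveat: an attacker working through an API sees only sampled text, not logits, so extraction in practice looks more like supervised fine-tuning on harvested prompt-response pairs; the loss below is the classic research formulation of the technique.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Classic soft-label distillation (Hinton et al., 2015):
    train the student to match the teacher's softened output distribution."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2
```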
The accused labs reportedly used coordinated networks of accounts to systematically query Claude across specific capability areas. They then used these responses to fine-tune their own models, essentially transferring Anthropic's research investments into their own systems. This bypasses the enormous costs of developing frontier AI capabilities from scratch.
What makes this particularly concerning is the sophistication involved. These were not random queries. According to Anthropic's analysis, each lab targeted specific capabilities with clear training objectives in mind.
Breaking Down the Three Campaigns
The scale and focus of each campaign reveal distinct strategic priorities.
DeepSeek's operation included over 150,000 interactions targeting reasoning capabilities, reinforcement learning techniques via rubric-based grading, and notably, generating "censorship-safe alternatives" to sensitive queries. This last detail is telling. It suggests DeepSeek was specifically trying to extract methods for handling content that might trigger Chinese regulatory concerns.
Moonshot AI ran more than 3.4 million exchanges focused on agentic reasoning, tool use, coding and data analysis, and computer vision capabilities. Their emphasis on agentic capabilities suggests a focus on AI systems that can take autonomous actions, a rapidly growing area of competitive development.
MiniMax drove the largest volume with over 13 million exchanges. While Anthropic provided fewer details about MiniMax's specific targets, the sheer scale suggests broad capability extraction rather than focused specialization.
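DeepSeek's reported interest in rubric-based grading is worth unpacking, since it is the least self-explanatory item in the list. The general idea is to score model outputs against explicit rubric criteria and use those scores as a reinforcement learning reward signal. Here is a hypothetical sketch; the `grader` callable, the rubric items, and the 0-to-1 scoring interface are all illustrative assumptions, not details from Anthropic's disclosure.

```python
# Hypothetical sketch: a grader model scores a response against each rubric
# criterion, and the mean score becomes the RL reward. The grader interface
# (returning a 0.0-1.0 score per criterion) is assumed for illustration.
RUBRIC = [
    "Is each step of the reasoning logically valid?",
    "Does the answer address every part of the question?",
    "Is the final answer stated clearly and unambiguously?",
]

def rubric_reward(grader, prompt, response):
    scores = [grader(prompt=prompt, response=response, criterion=c)
              for c in RUBRIC]
    return sum(scores) / len(scores)
```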
Why This Matters for AI Development
This disclosure arrives at a sensitive moment. The United States is actively debating AI chip export controls, and these revelations add fuel to arguments for tighter restrictions. Anthropic explicitly connected the dots, warning about "authoritarian governments deploying frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance."
For AI practitioners and organizations deploying frontier models, this raises serious questions about operational security. If major labs with significant resources are running these extraction campaigns, what protections exist for enterprises using AI APIs? The uncomfortable answer is that those protections are limited.
Model providers now face a difficult balance. Too many restrictions frustrate legitimate users. Too few enable exactly this type of abuse. Anthropic says it has implemented detection systems, intelligence sharing with industry partners, strengthened access controls, and model-level countermeasures. But the cat-and-mouse nature of this problem means defenses will always lag behind novel attack vectors.
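To give a sense of what that balance looks like in code, here is a toy per-account token-bucket limiter. The parameters are illustrative, and this is in no way a description of Anthropic's actual countermeasures. Tuning `capacity` and `refill_rate` is precisely the trade-off above: too tight and legitimate power users stall, too loose and bulk extraction slips through.

```python
import time

class TokenBucket:
    """Toy per-account rate limiter: tokens refill at refill_rate per second
    up to a burst capacity; each request spends one token."""
    def __init__(self, capacity=60, refill_rate=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_rate = refill_rate
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```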
The Distillation Problem Has No Easy Solutions
From a technical standpoint, preventing distillation at scale is extremely challenging. Every API response potentially teaches something to an observer. Rate limiting helps but determined actors can distribute queries across thousands of accounts. Response watermarking offers partial detection but remains an active research area. Behavioral analysis can flag suspicious patterns, but sophisticated attackers adapt.
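Behavioral analysis is the most tractable of these defenses to sketch. The distillation signature is high volume concentrated on a narrow set of capability areas, so even a crude entropy check over query topics can catch naive versions of it. Below is a minimal illustration, assuming each account's history is a list of (timestamp, topic) records; real systems would draw on embeddings, IP and payment graphs, and far richer signals, and the thresholds here are invented.

```python
import math
from collections import Counter

def flag_suspicious(accounts, min_queries=5000, max_topic_entropy=1.0):
    """Flag accounts combining high volume with low topical diversity.
    `accounts` maps account_id -> list of (timestamp, topic) records."""
    flagged = []
    for account_id, records in accounts.items():
        if len(records) < min_queries:
            continue
        counts = Counter(topic for _, topic in records)
        total = len(records)
        entropy = -sum((c / total) * math.log2(c / total)
                       for c in counts.values())
        # Thousands of queries hammering one or two capability areas
        if entropy <= max_topic_entropy:
            flagged.append(account_id)
    return flagged
```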
The more fundamental issue is economic. Training frontier models costs hundreds of millions of dollars. Distillation costs orders of magnitude less. This asymmetry creates persistent incentives for extraction, particularly for organizations that face resource constraints or sanctions limiting their access to advanced compute.
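A back-of-envelope comparison shows why. Every figure below except the query count (which comes from the disclosure) is a round, assumed number for illustration only.

```python
# Illustrative arithmetic only; every figure except `queries` is assumed.
frontier_training_cost = 300e6   # assumed: ~$300M for a frontier training run
queries = 16_000_000             # scale reported in Anthropic's disclosure
cost_per_query = 0.05            # assumed: ~$0.05 of API spend per exchange
fine_tuning_cost = 2e6           # assumed: compute to fine-tune a student model

distillation_cost = queries * cost_per_query + fine_tuning_cost
print(f"{distillation_cost / frontier_training_cost:.3f}")  # ~0.009
```

Even if every assumption here is off by a factor of a few, the roughly hundredfold asymmetry survives, and so do the incentives.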
Regional Implications for the Middle East
For those of us working in AI across the Gulf region, this development underscores the importance of sovereign AI capabilities. The UAE's investments in domestic AI infrastructure, including partnerships with major cloud providers and local compute buildout, become even more strategically relevant when global AI competition involves tactics like these.
Organizations deploying AI systems should conduct thorough due diligence on model provenance. Understanding where capabilities originate, and whether they were developed through legitimate means, matters for both ethical and practical reasons. Models built through distillation may lack the safety guardrails present in properly trained systems.
Looking Forward
Anthropic's disclosure will likely trigger industry-wide responses. Expect other major labs to share similar findings, as OpenAI has already made parallel accusations against DeepSeek. We may see coordinated technical countermeasures emerge, new terms of service provisions, and potentially regulatory attention.
The broader lesson is clear: frontier AI capabilities have become strategic assets worthy of sophisticated extraction efforts. Protecting them requires treating AI security with the same seriousness as traditional cybersecurity. For AI practitioners, this means thinking carefully about how we expose model capabilities, monitor for abuse patterns, and collaborate across organizations to maintain the integrity of the AI ecosystem.
The competition for AI dominance is no longer just about who can train the best models. It is also about who can protect them.