• Anthropic says DeepSeek, Moonshot, and MiniMax used Claude to improve rival AI models.
• Experts debate implications for export controls, IP rights, and AI safety standards.
What Happened: Allegations of Industrial‑Scale Distillation Attacks
U.S.-based AI company Anthropic has publicly accused three Chinese laboratories (DeepSeek, Moonshot AI, and MiniMax) of coordinated efforts to illicitly extract the capabilities of its Claude large language model for use in their own systems. Anthropic claims the companies generated more than 16 million interactions through roughly 24,000 fraudulent accounts, accessing Claude outputs in violation of its terms of service and regional access restrictions.
The technique in question, known as distillation, involves training a smaller or less capable model on outputs from a more powerful system—a common practice in AI research and development. Anthropic emphasizes that while distillation is legitimate in certain contexts, using it to extract proprietary model capabilities without authorization crosses into illicit behavior and undermines export controls intended to restrict access to advanced technology.
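In miniature, the idea behind distillation is that a "student" model is fit to a "teacher's" outputs rather than to ground-truth labels. The toy sketch below uses a fixed logistic function as a stand-in for a proprietary API; all names, parameters, and the query budget are illustrative assumptions, not a description of any real system.

```python
import math
import random

# Toy "teacher": a fixed logistic model (weight 3.0, bias -1.0) standing in
# for a proprietary API that returns a probability for each query input.
def teacher(x):
    return 1 / (1 + math.exp(-(3.0 * x - 1.0)))

# Step 1: query the teacher. The student never sees ground-truth labels,
# only the teacher's outputs on these queries.
random.seed(0)
xs = [random.uniform(-2, 2) for _ in range(200)]
ps = [teacher(x) for x in xs]

# Step 2: fit a student of the same functional form to the teacher's outputs.
# Inverting the sigmoid turns the fit into ordinary least squares on logits.
logits = [math.log(p / (1 - p)) for p in ps]
n = len(xs)
mx = sum(xs) / n
my = sum(logits) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, logits)) / sum(
    (x - mx) ** 2 for x in xs
)
b = my - w * mx

print(round(w, 3), round(b, 3))  # the student recovers the teacher's parameters
```

Because the toy student shares the teacher's exact functional form, the fit is perfect; real distillation between different architectures only approximates the teacher, but the workflow (query, collect outputs, train on them) is the same.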
In response, Anthropic said it has implemented technical safeguards and detection tools to identify unusual API traffic patterns and has called for industry‑wide cooperation on preventing similar campaigns in the future.
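Detection of this kind generally means looking for statistical outliers in per-account usage. The sketch below flags accounts whose daily request volume is an outlier under a simple z-score rule; the account names, volumes, and threshold are purely hypothetical and are not Anthropic's actual safeguards.

```python
import statistics

# Hypothetical per-account daily request counts; "acct_f" mimics a
# bulk-extraction pattern. All values are invented for illustration.
daily_requests = {
    "acct_a": 120, "acct_b": 95, "acct_c": 110,
    "acct_d": 130, "acct_e": 105, "acct_f": 48000,
}

mean = statistics.mean(daily_requests.values())
stdev = statistics.stdev(daily_requests.values())

# Flag any account more than two standard deviations above the mean.
flagged = sorted(
    acct for acct, n in daily_requests.items()
    if stdev and (n - mean) / stdev > 2.0
)
print(flagged)
```

Production systems would use richer signals (request timing, prompt similarity, account-creation patterns) than raw volume, but the principle of thresholding against population statistics is the same.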
The three Chinese firms have not publicly responded to these accusations as of publication.
Why It’s Important:
The allegations bring into focus several complex issues at the intersection of technology competition, intellectual property rights, and geopolitical strain. Anthropic’s claims underscore concerns among U.S. tech companies that competitors could shortcut research and development by leveraging outputs from proprietary AI models, potentially narrowing the innovation gap without equivalent investments.
This dispute also feeds into broader debates over export controls on advanced AI chips and services. Proponents argue such controls help maintain competitive advantage and protect against misuse of cutting‑edge models; critics counter that controls can stifle global collaboration and innovation in AI research.
There are also practical questions about safety and safeguards: models trained via unauthorized distillation may lack guardrails installed by original developers, raising potential for misuse in cyber operations, disinformation campaigns, or other hostile applications—though whether this risk materializes in practice remains an open question among researchers and policymakers.
Finally, the episode reflects intensifying global competition in artificial intelligence, particularly between U.S. and Chinese labs such as DeepSeek, whose rapid rise has already shifted industry assumptions about computational requirements and model performance.
