Anthropic Reveals 24,000+ Fake Accounts Used by Chinese AI Firms to Distill Claude
  • February 24, 2026
  • Sreekanth Bathalapalli


BREAKING: In a major development shaking the global AI ecosystem, Anthropic — the American company behind the powerful Claude AI models — has accused three leading Chinese AI laboratories of running large-scale, illicit operations to steal its proprietary technology. According to Anthropic’s official announcement on February 23, 2026, the companies DeepSeek, Moonshot AI (known for its Kimi models), and MiniMax created over 24,000 fraudulent accounts to generate more than 16 million interactions with Claude. These “distillation attacks” were designed to extract and replicate Claude’s advanced capabilities — including agentic reasoning, tool use, coding prowess, and complex problem-solving — to accelerate their own AI development.

This revelation comes amid heightened US-China tensions in artificial intelligence, where export controls on advanced chips aim to preserve America’s technological edge. For Non-Resident Indians (NRIs), tech professionals, investors, and students following global innovation trends, this incident highlights the fierce competition reshaping the future of AI — and its implications for international tech access, careers, and investments.

Understanding the Allegations: How the Distillation Attacks Worked

Anthropic’s detailed blog post, titled “Detecting and Preventing Distillation Attacks”, describes a sophisticated, coordinated effort by the three Chinese firms. Distillation involves using outputs from a superior “teacher” model (here, Claude) to train a smaller, more efficient “student” model. While legitimate in controlled settings, Anthropic says these campaigns crossed into outright violation of its terms of service and US-imposed regional restrictions.
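To make the teacher/student idea concrete, here is a minimal, purely illustrative sketch of distillation using NumPy. The "teacher" and "student" are tiny linear classifiers invented for this example (real distillation targets large neural networks): the student never sees ground-truth labels, only the teacher's temperature-softened output distributions, which it learns to imitate.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher": a fixed linear classifier standing in for the large model.
W_teacher = rng.normal(size=(8, 3))

# "Student": a model fitted purely to the teacher's outputs, starting from scratch.
W_student = np.zeros((8, 3))

X = rng.normal(size=(512, 8))   # stand-in for harvested query inputs
T = 2.0                         # distillation temperature

for _ in range(500):
    p_t = softmax(X @ W_teacher, T)   # soft targets from the teacher
    p_s = softmax(X @ W_student, T)   # student's current predictions
    # Gradient of cross-entropy between teacher and student distributions
    # (the 1/T factor is folded into the learning rate).
    grad = X.T @ (p_s - p_t) / len(X)
    W_student -= 0.5 * grad

# Fraction of inputs where the student now predicts the same class as the teacher.
agree = ((X @ W_teacher).argmax(1) == (X @ W_student).argmax(1)).mean()
```

The point of the sketch is the data flow, not the model: the student's entire training signal comes from the teacher's responses, which is why harvesting millions of API outputs can substitute for independent training data.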

Key tactics uncovered include:

  • Mass Creation of Fake Accounts — Over 24,000 accounts were set up using proxy services, VPNs, and automated scripts to bypass Anthropic’s ban on commercial access in China and for certain entities.
  • High-Volume Querying — The labs amassed 16+ million exchanges, far beyond typical user behavior, allowing them to harvest vast datasets of Claude’s responses.
  • Focused Extraction Campaigns:
    • DeepSeek reportedly ran over 150,000 targeted interactions to boost foundational reasoning, alignment, and even responses to sensitive topics.
    • Moonshot AI focused on agentic capabilities, tool integration, coding agents, and multimodal features through millions of exchanges.
    • MiniMax emphasized replicating Claude’s strengths in advanced reasoning and data handling, contributing significantly to the total volume.

These operations evaded detection for extended periods by rotating accounts, varying query patterns, and using diverse access methods. Anthropic stresses that such tactics undermine national security measures, as distilled models could bypass safeguards and feed into applications with military, intelligence, or surveillance implications.
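One reason such campaigns are detectable at all is that harvesting at this scale produces query volumes far outside normal user behavior. The sketch below is an invented, simplified illustration of volume-based anomaly flagging (account names and numbers are hypothetical, and this is not a description of Anthropic's actual detection system): accounts whose request counts sit many standard deviations above the mean get flagged.

```python
from collections import Counter

# Hypothetical request log: 50 ordinary accounts with modest usage,
# plus one bulk-harvesting account (all names and counts invented).
query_log = ["acct_%d" % (i % 50) for i in range(900)] + ["bulk_acct"] * 4000

counts = Counter(query_log)
volumes = list(counts.values())
mean = sum(volumes) / len(volumes)
std = (sum((v - mean) ** 2 for v in volumes) / len(volumes)) ** 0.5

# Flag accounts whose volume is an extreme outlier relative to typical behavior.
flagged = {acct for acct, v in counts.items() if v > mean + 3 * std}
```

Rotating across 24,000 accounts, as alleged here, is precisely an attempt to keep each individual account below thresholds like this one, which is why Anthropic describes correlating patterns across accounts rather than watching accounts in isolation.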

Why This Matters in the Broader US-China AI Competition

The US has implemented strict export controls on high-end AI hardware (e.g., NVIDIA GPUs) to slow rivals’ ability to train frontier models independently. Distillation offers a clever bypass: instead of building massive compute infrastructure, attackers indirectly access restricted technology via APIs.

Anthropic frames this as a direct challenge to those controls:

“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation.”

This isn’t the first time concerns have arisen. In November 2025, Anthropic disrupted what it described as the first AI-orchestrated cyber espionage campaign, linked with high confidence to a Chinese state-sponsored actor using Claude for automated attacks on global targets. While separate, these events fuel worries about unchecked access enabling misuse.

For NRIs in tech hubs across the US (including Silicon Valley), Canada, Europe, and the Middle East — many working at AI firms, startups, or research institutions — this raises questions about:

  • Job market shifts as Chinese competitors close the gap faster than anticipated.
  • Investment opportunities in AI stocks and startups amid escalating rivalry.
  • Visa and immigration policies if geopolitical tensions intensify around technology transfer.

Spotlight on the Accused Chinese AI Players

  • DeepSeek — A fast-rising open-source champion offering cost-effective, high-performance models that have challenged Western dominance in benchmarks.
  • Moonshot AI — Known for its Kimi series, which excels in long-context reasoning, coding, and agentic tasks. Heavily funded and focused on user-facing AI tools.
  • MiniMax — A multimodal specialist claiming competitive parity with global leaders, emphasizing versatility across text, vision, and more.

These firms operate in China’s aggressive AI ecosystem, backed by substantial domestic investment and government priorities to achieve AI supremacy by 2030.

Broader Implications for Innovation, Security, and the Global Tech Landscape

This incident spotlights several critical trends:

  1. Narrowing Capability Gap — Chinese models are advancing rapidly, sometimes through methods that shortcut independent R&D.
  2. Defensive Measures Needed — Anthropic is bolstering protections with improved anomaly detection, rate limiting, proxy blocking, and potential output watermarking.
  3. Policy and Regulatory Push — Greater calls for industry-wide collaboration, tighter API governance, and possibly new international norms on AI intellectual property.
  4. Risk to Incentives — If frontier models can be cheaply replicated, it could discourage the enormous investments required for breakthroughs.
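Of the defensive measures listed above, rate limiting is the most mechanical. A common building block is the token-bucket algorithm, sketched below as a hypothetical illustration (this is a textbook technique, not Anthropic's disclosed implementation): each account gets a bucket of tokens that refills at a steady rate, and requests are rejected once the bucket is empty, capping sustained query volume while still allowing short bursts.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate, capacity):
        self.rate = rate                # tokens replenished per second
        self.capacity = capacity        # maximum burst size
        self.tokens = float(capacity)   # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Per-account limits like this one raise the cost of mass harvesting, which in turn pushes attackers toward the mass account creation described earlier — each defensive layer shapes the next evasion tactic.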

Critics have pointed out parallels: Western AI firms have faced lawsuits over unauthorized web scraping for training data. However, the deliberate fraud, scale, and circumvention of export rules distinguish this case.

What Comes Next in This Escalating AI Race?

Anthropic warns that these campaigns are “growing in intensity and sophistication,” urging a coordinated response from AI companies, cloud providers, and policymakers. The “window to act is narrow” before such tactics become standard.

For the NRI community — whether you’re a software engineer in Hyderabad eyeing opportunities abroad, an investor tracking AI stocks, or a student pursuing AI/ML degrees — staying informed on these developments is crucial. The outcome could influence everything from H-1B visa policies and tech job markets to global supply chains and innovation leadership.

At NriGlobe.com, we bring you the latest on global Indian news, technology trends impacting NRIs, US policy updates, career insights, and more. Follow us for in-depth coverage of how geopolitical shifts in AI and tech affect the diaspora.
