ANALYSIS

Trump Science Advisor Cites Industrial-Scale Chinese AI Distillation in Memo

Elena Volkov · Apr 24, 2026 · 3 min read
Engine Score 8/10 — Important
  • White House science advisor Michael Kratsios issued a memo stating foreign actors are using tens of thousands of proxy accounts and jailbreaking techniques to distill capabilities from U.S. frontier AI models.
  • The distillation process produces smaller models that match large U.S. systems on select benchmarks at a fraction of the development and operating cost, with safety protocols stripped in the process.
  • The Trump administration plans to share intelligence about these campaigns with U.S. AI companies, develop joint technical countermeasures, and explore action against those responsible.
  • Anthropic, OpenAI, and Google have each previously raised concerns about coordinated attacks targeting their models’ internal reasoning chains.

What Happened

Michael Kratsios, President Trump’s science and technology advisor, issued a memo identifying systematic, large-scale campaigns—attributed primarily to Chinese actors—to extract and replicate capabilities from American frontier AI models, according to a report published by The Decoder on April 24, 2026. The memo describes the use of tens of thousands of proxy accounts combined with jailbreaking techniques to conduct what Washington characterizes as industrial-scale model distillation. Kratsios does not name specific Chinese models but points to open-source and open-weight releases that the U.S. government believes were built on illegally distilled American research.

Why It Matters

Distillation is an established technique in machine learning: a smaller “student” model is trained to replicate the outputs of a larger “teacher” model, substantially reducing compute requirements. The concern Kratsios raises is not with the method itself—his memo explicitly acknowledges legitimate distillation as a valid part of the AI ecosystem—but with systematic, state-backed campaigns using commercial API access as a covert capability transfer mechanism. Anthropic, OpenAI, and Google have all previously flagged coordinated probing of their systems, but this marks the first formal government-level framing of those incidents as a national security matter.
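For readers unfamiliar with the mechanics, the legitimate technique the memo references can be illustrated with the classic soft-label objective: the student is trained to match the teacher's temperature-softened output distribution rather than hard labels. The sketch below is illustrative only; the memo does not describe the attackers' actual training pipeline, and the function names here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The teacher's soft probabilities carry more information per example
    than hard labels, which is why a small student can approach teacher
    performance with far less training compute.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures
    return float(temperature**2 * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

The loss goes to zero as the student's output distribution converges to the teacher's, which is the sense in which distillation "extracts" a teacher's behavior through its outputs alone, no access to weights required. In the scenario the memo describes, those teacher outputs would be harvested at scale through commercial API access.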

Technical Details

According to the Kratsios memo, attackers specifically target models’ chains of thought—the internal, step-by-step reasoning processes that allow frontier models to handle complex tasks. These reasoning chains are produced through reinforcement learning, a technically demanding and data-intensive training process representing significant R&D investment by U.S. labs. The resulting distilled systems are described as matching the performance of large U.S. models on select benchmarks while operating at a fraction of the development and compute cost. Kratsios also states that safety protocols are stripped from models during the distillation process, removing behavioral constraints built into the originals. In the memo’s framing:

“There is nothing innovative about systematically extracting and copying the innovations of American industry, and there is nothing open about supposedly open models that are derived from acts of malicious exploitation.”

Who’s Affected

U.S. frontier AI developers—including Anthropic, OpenAI, and Google—are the primary targets identified in the memo. Downstream developers and enterprises relying on open-weight models that the U.S. government suspects were derived through distillation could face scrutiny as the administration works to distinguish legitimate from illegitimate model lineage. The memo signals increased pressure on U.S. AI providers to monitor and restrict API access patterns consistent with large-scale systematic distillation.

What’s Next

The Trump administration outlined four planned responses in the memo: sharing intelligence about distillation campaigns directly with U.S. AI companies, tightening public-private sector cooperation, developing joint technical countermeasures, and exploring punitive action against those responsible. No specific legislative proposals or regulatory timelines accompanied the memo. Kratsios characterized the stripping of safety guardrails during distillation as undermining mechanisms designed to keep AI systems “ideologically neutral” and “truth-seeking,” framing the issue as encompassing both economic and values-based dimensions of U.S. AI policy.
