ANALYSIS

Cursor’s Composer 2 Uses Moonshot AI’s Kimi K2.5, API Traffic Reveals

Marcus Rivera · Mar 24, 2026 · Updated Apr 7, 2026 · 4 min read
Engine Score 8/10 — Important

This story reveals a significant gap in trust and transparency in the AI developer-tool ecosystem, affecting developers and companies that rely on such tools. It raises concrete concerns about the AI supply chain and the geopolitical implications of undisclosed model origins.


A developer known publicly as Fynn (X: @fynnso) discovered in late March 2026 that Cursor’s Composer 2 — launched on March 19, 2026 and marketed by Cursor as delivering “frontier-level coding intelligence” — routes its AI requests to Kimi K2.5, an open-source model built by Chinese startup Moonshot AI. Fynn identified the connection by intercepting API traffic generated by the Cursor desktop application and tracing live requests to Kimi K2.5’s infrastructure. His post on X accumulated 2.6 million views within days of Composer 2’s launch. Full name details for Fynn were not available at time of publication.

  • Fynn intercepted outbound API traffic from the Cursor desktop application and traced requests to Kimi K2.5 infrastructure operated by Moonshot AI.
  • Cursor marketed Composer 2 as “frontier-level coding intelligence” without disclosing that the feature runs on a fine-tuned open-source model from a Chinese AI lab.
  • Moonshot AI confirmed Cursor was an authorized user of Kimi K2.5, but acknowledged the partnership had not been publicly announced before Fynn’s disclosure.
  • As of April 2, 2026, Cursor has issued no formal statement addressing the findings.

What Happened

On March 19, 2026, Cursor launched Composer 2, describing it in marketing materials as capable of delivering “frontier-level coding intelligence.” Developer Fynn, posting on X at @fynnso, disclosed shortly after launch that API traffic from the Cursor desktop application was being routed to infrastructure associated with Kimi K2.5 — an open-source model released by Moonshot AI, a Chinese AI startup.

Fynn’s post spread rapidly, accumulating 2.6 million views within days of the launch. Cursor did not respond publicly as the findings circulated. Moonshot AI subsequently confirmed that Cursor was an authorized user of Kimi K2.5, stating the arrangement was legitimate — but acknowledged that no public announcement of the partnership had been made before Fynn’s disclosure.

Why It Matters

The incident exposes a recurring gap in AI product marketing: the distance between how products are described to users and what models actually power them. Cursor’s framing of Composer 2 — “frontier-level coding intelligence” — gave no indication of whether the underlying system was proprietary, licensed, or open-source. Users had no reliable mechanism to determine which model was processing their code before Fynn’s traffic interception made the routing visible.

The episode also reflects the growing competitiveness of Chinese open-source AI models within Western commercial products. Kimi K2.5 is reported to benchmark competitively with proprietary Western coding models, and early user reception of Composer 2 was described as positive. That performance outcome does not resolve the disclosure question, but it illustrates why cost-effective open-source models from Chinese labs are increasingly attractive as backends for developer tool companies.

Technical Details

Fynn’s investigation required no privileged access to Cursor’s internal systems. By intercepting outbound API traffic from the Cursor desktop application — a technique available to any developer using standard network monitoring tools — he identified the endpoint receiving requests as Kimi K2.5 infrastructure operated by Moonshot AI.
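The mechanics are straightforward: an intercepting HTTPS proxy (a standard tool such as mitmproxy, run locally with its certificate trusted) logs the hostnames the desktop app contacts, and those hostnames are then matched against domains belonging to known model providers. As a rough sketch of that second, classification step — with all hostnames below being hypothetical illustrations, not the actual endpoints Fynn observed:

```python
# Sketch: map captured request hosts to known AI model providers.
# The domains listed here are illustrative assumptions only; they are
# not the endpoints reported in Fynn's findings.

KNOWN_PROVIDER_DOMAINS = {
    "moonshot.cn": "Moonshot AI",
    "moonshot.ai": "Moonshot AI",
    "openai.com": "OpenAI",
    "anthropic.com": "Anthropic",
}

def classify_host(host: str) -> str:
    """Return the provider a captured host belongs to, or 'unknown'."""
    for domain, provider in KNOWN_PROVIDER_DOMAINS.items():
        # Match the apex domain itself or any subdomain of it.
        if host == domain or host.endswith("." + domain):
            return provider
    return "unknown"

# Example input: hosts as they might appear in an intercepting
# proxy's flow log while the desktop app handles a coding request.
captured_hosts = [
    "api.moonshot.cn",        # hypothetical model endpoint
    "telemetry.example.com",  # unrelated app traffic
]

for host in captured_hosts:
    print(f"{host} -> {classify_host(host)}")
```

The point of the exercise is that nothing here requires privileged access: the app itself discloses its backend to anyone who watches where its requests go.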

Kimi K2.5 is released under an open-source license that permits commercial use without royalty obligations, making it a financially practical backend for companies building AI developer tools at scale. Cursor’s deployment is described as a fine-tuned version of the base Kimi K2.5 model, adapted for the Cursor editor’s specific workflows and interface conventions rather than deployed as an unmodified base release. Specific benchmark figures comparing Kimi K2.5 to proprietary alternatives were not disclosed in Cursor’s launch materials.

Who’s Affected

Cursor’s user base — developers relying on the tool for AI-assisted coding in professional environments — is most directly affected by the absence of model disclosure. For enterprise developers working on proprietary codebases or under data governance and IP confidentiality policies, the identity of the model processing their code carries practical implications beyond brand preference.

The case also applies pressure to the broader AI developer tool market. Competitors including GitHub Copilot and Tabnine similarly do not publish comprehensive model provenance information. Cursor’s visibility in this segment means the transparency question raised here is unlikely to remain isolated to a single product or company.

What’s Next

Cursor had not issued a formal public statement addressing the Kimi K2.5 disclosure as of April 2, 2026. Moonshot AI’s confirmation that the partnership was authorized clarifies the legal dimension of the arrangement, but neither company has directly addressed why the model identity was not disclosed alongside the product launch.

The incident is expected to intensify calls for standardized model provenance disclosure across AI developer tool platforms. No industry-wide standard currently requires AI product companies to disclose which models process user data, leaving developers dependent on methods like Fynn’s traffic interception to determine what runs beneath the tools they use in their daily workflows.
