BLOG

Hackers Used Prompt Injection to Turn a GitHub Bot Into a Malware Installer — 4,000 Machines Infected

M MegaOne AI Apr 1, 2026 Updated Apr 2, 2026 4 min read
Engine Score 7/10 — Important
Illustration for post 2564
  • Attackers exploited a prompt injection vulnerability in Cline’s AI-powered GitHub triage bot to compromise approximately 4,000 developer machines.
  • The attack chain involved injecting malicious instructions via a GitHub issue title, stealing npm credentials, and publishing a compromised package.
  • Security researcher Adnan Khan discovered and reported the vulnerability 40 days before the attack, but received no response from the Cline team.
  • The compromised package installed an unauthorized AI agent with full system access via a single-line change in package.json.

What Happened

On February 17, 2026, an attacker published a compromised version of the Cline VS Code extension package ([email protected]) to npm. The malicious package included a postinstall hook that silently installed OpenClaw, an AI agent with full system access, on approximately 4,000 developer machines during an eight-hour window before detection.

The attack exploited a prompt injection vulnerability in Cline’s AI-powered issue triage system. Security researcher Adnan Khan had discovered and reported the flaw on January 1, 2026, via GitHub Security Advisory. After 40 days without an adequate response, Khan disclosed publicly on February 9. Cline patched the triage workflow within 30 minutes of disclosure, but the attacker weaponized Khan’s proof-of-concept eight days later using credentials stolen during the vulnerability window.

Why It Matters

This is one of the first documented cases of a prompt injection attack being used to execute a full supply chain compromise. The incident demonstrates that AI-powered automation in software development pipelines introduces attack surfaces that traditional security tools are not designed to catch.

The attack succeeded because Cline’s AI triage workflow was configured with allowed_non_write_users: "*", meaning any GitHub user could submit issues that would be processed by the AI bot. The bot passed unsanitized input directly to Anthropic’s Claude model, which then executed the attacker’s instructions.

The incident also exposed a gap in disclosure response practices. Khan reported the vulnerability through proper channels on January 1, followed up multiple times through February 9, and received no substantive response. The 40-day delay between responsible disclosure and patch gave attackers time to study and weaponize the vulnerability.

Technical Details

The exploit followed a five-step chain. First, the attacker created GitHub Issue #8904 with embedded malicious instructions in the title targeting Cline’s AI triage workflow. Second, Claude executed npm install commands referencing glthub-actions/cline, a typosquatted fork with a missing letter. Third, a preinstall script deployed “Cacheract,” a cache poisoning tool that flooded GitHub Actions cache with over 10 GB of junk data, triggering eviction of legitimate entries.

Fourth, the compromised cache exfiltrated three secrets: NPM_RELEASE_TOKEN, VSCE_PAT, and OVSX_PAT. Fifth, the attacker used the stolen npm token to publish the compromised package. The executable remained byte-identical to the legitimate version; only one line in package.json changed.

StepSecurity’s automated monitoring flagged the malicious package approximately 14 minutes after publication. Standard tools missed the attack entirely: npm audit did not flag OpenClaw because it is technically a legitimate package, and code review missed the single-line change.

Who’s Affected

Approximately 4,000 developers who installed or updated the Cline package during the eight-hour window received the compromised version. The postinstall hook executed silently, meaning affected developers may not have known OpenClaw was running on their machines with full system access. The scope of potential damage depends on what permissions and credentials were accessible on each compromised machine.

The broader developer community using AI-powered bots in CI/CD pipelines should also take note. Any workflow that passes untrusted user input to an LLM without sanitization is potentially vulnerable to similar attacks. Open-source maintainers who use AI bots for issue triage, code review, or automated responses face particular risk if their configurations allow public input to reach the model without filtering.

What’s Next

Developers can audit their systems for compromised postinstall hooks using npm query ":attr(scripts, [postinstall])". The incident has renewed calls for npm provenance attestations, OIDC tokens instead of long-lived secrets, and strict restrictions on which users can trigger AI-powered workflows.

Cline removed its AI triage system entirely after the attack. Credential rotation was completed by February 11, though an initial rotation on February 10 mistakenly deleted the wrong token, leaving the stolen credentials active for an additional day. As of March 2026, npm has not implemented mandatory provenance attestations that would have prevented the unauthorized publish.

The incident remains a reference case for the security risks of integrating LLMs into software supply chain tooling without input sanitization, privilege constraints, and rapid disclosure response processes.

Share

Enjoyed this story?

Get articles like this delivered daily. The Engine Room — free AI intelligence newsletter.

Join 500+ AI professionals · No spam · Unsubscribe anytime

M
MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Our editorial team reviews 200+ sources with rigorous oversight to deliver accurate, scored coverage of the AI industry. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.

About Us Editorial Policy