The Python community just lived through a supply chain nightmare that exposes how fragile our trust model really is. On March 24, 2026, versions 1.82.7 and 1.82.8 of litellm, a popular library that unifies access to multiple LLM providers, arrived on PyPI containing credential-stealing malware. The kicker? You didn’t need to import litellm for your secrets to be siphoned. The attack triggered the moment Python started up.
This wasn’t a sophisticated zero-day exploit. It was a .pth file, a legitimate Python mechanism turned into a perfect delivery vehicle for silent, persistent malware.
The Attack That Didn’t Need Your Permission
The discovery came from a developer who noticed their laptop "ran out of RAM, it looked like a forkbomb was running" after installing what appeared to be a routine update. Investigation revealed a 34,628-byte file named litellm_init.pth sitting in the package’s site-packages directory.
Here’s what made this attack particularly insidious:
```text
litellm_init.pth,sha256=ceNa7wMJnNHy1kRnNCcwJaFjWX3pORLfMh7xGL8TUjg,34628
```
The .pth file contained a single line that executed on every Python interpreter startup:
```python
import os, sys, subprocess; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('...'))"])
```
Double base64-encoded to evade naive detection. No import required. No obvious code changes in the main package. Just automatic execution through a mechanism most developers have never consciously encountered.
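The layering is easy to reverse once you know to look for it. Here is a stdlib-only sketch of peeling nested base64 (the payload below is a hypothetical stand-in, not the real malware):

```python
import base64
import binascii

def peel_base64(blob: bytes, max_layers: int = 5) -> list[bytes]:
    """Repeatedly base64-decode a blob, collecting each layer.

    Stops when the current layer is no longer valid base64. Nested
    encoding defeats scanners that only grep a file's raw bytes.
    """
    layers = [blob]
    for _ in range(max_layers):
        try:
            layers.append(base64.b64decode(layers[-1], validate=True))
        except (binascii.Error, ValueError):
            break
    return layers

# Hypothetical stand-in payload -- NOT the actual malware
inner = b"import subprocess  # attacker logic here"
wrapped = base64.b64encode(base64.b64encode(inner))
print(peel_base64(wrapped)[-1])  # the plaintext resurfaces after two decodes
```

A scanner that greps only the raw file never sees the string "subprocess"; one that peels encoding layers before matching does.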
The Data Harvesting Operation
When decoded, the payload revealed a systematic exfiltration operation targeting virtually every credential store on a system:
| Target Category | Specific Files/Commands |
|---|---|
| System Intelligence | hostname, whoami, uname -a, ip addr, ip route |
| Environment Variables | Full printenv dump (API keys, tokens, secrets) |
| SSH Keys | ~/.ssh/id_rsa, id_ed25519, id_ecdsa, authorized_keys, config |
| Cloud Credentials | AWS (~/.aws/credentials, IMDS), GCP (~/.config/gcloud/), Azure (~/.azure/) |
| Kubernetes | ~/.kube/config, service account tokens, admin configs |
| Git Credentials | ~/.gitconfig, ~/.git-credentials |
| Docker Registries | ~/.docker/config.json, /kaniko/.docker/config.json |
| Package Managers | ~/.npmrc, ~/.vault-token, ~/.netrc, ~/.pgpass |
| Shell History | .bash_history, .zsh_history, .psql_history, .rediscli_history |
| Crypto Wallets | Bitcoin, Ethereum, Solana, Monero, Cardano keystores |
| SSL/TLS Keys | /etc/ssl/private/, Let’s Encrypt certificates |
| CI/CD Secrets | terraform.tfvars, .gitlab-ci.yml, Jenkinsfile, ansible.cfg |
The collected data was encrypted with AES-256 using a randomly generated session key, then that key was encrypted with a hardcoded 4096-bit RSA public key. The final payload was exfiltrated to https://models.litellm.cloud/, a domain carefully chosen to mimic the legitimate litellm.ai but controlled by attackers.
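The reported design is the textbook hybrid-encryption pattern: a fresh symmetric session key encrypts the loot, and only that small key is wrapped with the attacker’s hardcoded public key. A sketch assuming the third-party `cryptography` package; the key sizes follow the article, everything else (names, data) is illustrative:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Attacker side: a hardcoded RSA-4096 public key ships inside the payload
attacker_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = attacker_key.public_key()

loot = b"AWS_SECRET_ACCESS_KEY=..."  # stand-in for harvested data

# Fresh AES-256 session key per victim; AES-GCM shown as one AEAD choice
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, loot, None)

# Only the attacker's private key can recover the session key
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# The exfiltrated blob is then wrapped_key + nonce + ciphertext
```

The point of the pattern: defenders who capture the blob in transit cannot decrypt it, because the session key is recoverable only with the attacker’s private key.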
Why .pth Files Are the Perfect Attack Vector
Python’s .pth mechanism was designed for legitimate purposes: adding directories to sys.path and running package initialization code. These files are processed automatically when the interpreter starts, before any user code runs. This makes them ideal for:
- Persistence: Survives reboots, reinstalls of the main package
- Stealth: Executes without any explicit import
- Evasion: Often missed by security tools focused on package code
- Scope: Affects any Python process on the system, not just applications using the compromised package
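The trigger itself is only a few lines inside CPython’s site module. A simplified, hypothetical model of how each .pth line is handled at startup (the real logic in Lib/site.py additionally checks that path entries exist before adding them):

```python
import sys

def process_pth_line(line: str, sitedir: str) -> None:
    """Simplified model of CPython's site.addpackage() handling of one
    line from a .pth file. The authoritative code lives in Lib/site.py."""
    line = line.rstrip("\n")
    if not line or line.startswith("#"):
        return                                   # blank lines and comments are skipped
    if line.startswith(("import ", "import\t")):
        exec(line)                               # executed before ANY user code runs
    else:
        sys.path.append(sitedir + "/" + line)    # the intended use: extend sys.path

# Benign demonstration of the same trigger the malware abused:
process_pth_line("import os; os.environ['PTH_DEMO'] = 'ran'", "/tmp/site-packages")
```

Any line beginning with `import` is handed straight to exec, which is exactly why a single malicious line in site-packages runs in every Python process on the machine.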
The OWASP Top 10 for LLM Applications (2025) ranks supply chain vulnerabilities as the third-highest risk, noting that "supply chain security for AI demands the same rigor as traditional software, from verified sources and signed artifacts to dependency scanning and runtime monitoring." This incident proves that demand is far from being met.

The Verification Gap: Why We Keep Failing
The litellm compromise highlights a fundamental architectural failure: we have no reliable way to verify that the code on PyPI matches the code in the source repository.
PyPI’s current security model relies on:
- Package signing (optional, rarely used correctly)
- Basic malware scanning (easily evaded with encoding)
- Community reporting (reactive, not preventive)
- Version pinning (doesn’t help when the pinned version is compromised)
What we don’t have:
- Mandatory reproducible builds
- Cryptographic attestation of source-to-binary correspondence
- Runtime behavioral verification
- Automated .pth file analysis
The 2025 OWASP update specifically calls out that "new fine-tuning methods like LoRA and PEFT also introduce attack vectors that the 2023 edition didn’t address." But the litellm attack didn’t use novel ML techniques; it exploited basic Python packaging mechanics that have been understood for decades.
Defensive Architecture: What Actually Works
Immediate Detection
For those who may have installed the compromised versions:
```bash
# Check for malicious .pth files
find "$(python -c 'import site; print(site.getsitepackages()[0])')" -name "*.pth" \
  -exec grep -l "subprocess\|exec\|__import__" {} \;

# Verify litellm specifically
pip show litellm | grep Version
# If 1.82.7 or 1.82.8: rotate ALL credentials and check for litellm_init.pth
```
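The find one-liner assumes a Unix shell. A portable, stdlib-only sketch of the same heuristic (the pattern list is illustrative, not exhaustive):

```python
import re
import site
from pathlib import Path

# Heuristic: legitimate .pth files rarely reference these primitives
SUSPICIOUS = re.compile(r"subprocess|exec\s*\(|__import__|b64decode")

def scan_pth(dirs=None) -> list[Path]:
    """Return .pth files whose contents match the suspicious pattern.
    A triage heuristic, not proof of compromise."""
    dirs = dirs if dirs is not None else site.getsitepackages()
    hits = []
    for d in dirs:
        for pth in sorted(Path(d).glob("*.pth")):
            if SUSPICIOUS.search(pth.read_text(errors="replace")):
                hits.append(pth)
    return hits

# Usage: print(scan_pth()) audits the current interpreter's site-packages
```

Expect occasional false positives (some legitimate tools ship import-bearing .pth files); the goal is a short list worth reading by hand.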
Structural Defenses
1. Private Registry Proxying
Tools like JFrog Artifactory can "proxy public registries, preventing direct download from internet" and "identify and block vulnerable packages." The key feature is curation with attestation: not just caching approved artifacts, but verifying and signing them.
2. Dependency Freezing with Hash Verification
```text
# requirements.txt with hashes
litellm==1.82.6 \
    --hash=sha256:abc123... \
    --hash=sha256:def456...
```
This prevents silent upgrades to compromised versions but requires maintaining a trusted hash database.
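pip already enforces this at install time when hashes are present; conceptually, the check reduces to comparing a SHA-256 digest against the pinned set. A minimal sketch:

```python
import hashlib

def verify_artifact(path: str, pinned_hashes: set) -> bool:
    """Sketch of pip's hash check: accept a downloaded artifact only if
    its SHA-256 digest matches one of the pinned requirement hashes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large wheels don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() in pinned_hashes
```

Running `pip install --require-hashes -r requirements.txt` applies the same check and refuses any dependency, including transitive ones, that lacks a pinned hash.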
3. Runtime Monitoring
The payload’s behavior (spawning subprocesses, reading sensitive files, making network requests) exhibits clear signatures that runtime security tools can detect. Harness’s artifact security approach emphasizes "validating artifact integrity using checksums or cryptographic signatures after builds" and maintaining "a detailed chain of custody, from build creation to deployment."
4. Sandboxed Installation
Containerized or virtualized environments with restricted filesystem access can limit the blast radius. The litellm malware attempted to read host SSH keys and cloud credentials, operations that should fail in properly isolated build environments.
The Deeper Problem: Trust Without Verification
This attack surfaces a tension at the heart of open-source software. We routinely execute code from strangers with root-equivalent access to our development environments and production systems. The social contract (that maintainers act in good faith, and that the community will catch malicious actors) breaks down when:
- Attackers compromise maintainer credentials (likely what happened here)
- Popular packages change hands without scrutiny
- Automated dependency updates bypass human review
- Security tooling focuses on known vulnerabilities, not novel attack vectors
The security risks hidden within open models parallel this problem: we assume transparency equals safety, when in fact complexity enables concealment. Similarly, verifying tools claiming local privacy requires looking past surface claims to actual behavior.
What PyPI Could Do (But Hasn’t)
| Measure | Implementation Complexity | Effectiveness |
|---|---|---|
| Mandatory 2FA for all publishers | Low | Prevents credential-based account takeover |
| Reproducible build verification | Medium | Ensures source matches binary |
| Signed package requirements | Medium | Cryptographic provenance chain |
| Automated .pth analysis | Low | Flags suspicious initialization code |
| Publisher reputation scoring | Medium | Risk-based installation warnings |
| Runtime behavioral sandboxing | High | Prevents actual malicious execution |
The current state: 2FA is encouraged but not universally enforced. Signing exists but is poorly adopted. Reproducible builds are not required. Behavioral analysis is minimal.
The Cost of Convenience
The litellm library exists because developers wanted a unified interface to multiple LLM providers. The compromise succeeded because developers wanted easy installation (pip install litellm) without friction. These desires are reasonable. The failure is in the infrastructure that doesn’t protect reasonable desires from exploitation.
Organizations now face an unmonitored security surface in automation that grows faster than our ability to secure it. Every pip install, every npm install, every cargo add is a trust decision with potentially catastrophic consequences.
Concrete Recommendations
For Individual Developers
- Pin dependencies with hashes using `pip-compile --generate-hashes`
- Use virtual environments with restricted network access for installation
- Audit `.pth` files in your Python environments: `find . -name "*.pth" -exec cat {} \;`
- Consider tools like Safety for known vulnerability scanning
For Organizations
- Implement private package registries with approval workflows
- Require SBOM generation and verification for all dependencies
- Deploy runtime application security protection (RASP) to detect anomalous behavior
- Segment build environments with minimal credential exposure
For the Ecosystem
- Advocate for mandatory signing and reproducible builds on PyPI
- Support initiatives like Sigstore for transparent, auditable software supply chains
- Contribute to security-focused package managers and verification tools
The Uncomfortable Truth
The litellm attack wasn’t sophisticated. It used well-known Python mechanisms, basic encoding, and a look-alike domain. It succeeded because our defenses are optimized for known threats, not for fundamental architectural vulnerabilities in how we distribute and execute code.
Until we build verification into the fabric of package management (cryptographic attestation, reproducible builds, behavioral sandboxing), we’re relying on the digital equivalent of a neighborhood watch in an era of organized crime. The community will catch some attacks. Others will harvest secrets for months before detection.
The .pth file that stole your API keys didn’t need your permission. It just needed your trust.