The LiteLLM Supply Chain Attack: What Every AI Builder Needs to Know

AI Security · Supply Chain · Incident Analysis


On March 24, 2026, malicious PyPI packages silently stole credentials from tens of thousands of AI developers. Here is how it happened, what it stole, and what you must do right now.

Supply Chain · AI Security · PyPI · Incident Response · DevSecOps

🚨 Active Incident — TeamPCP Campaign Ongoing as of March 28, 2026
If you installed any LiteLLM version between 10:39–16:00 UTC on March 24, treat your environment as compromised and rotate all credentials immediately. The telnyx PyPI package (v4.87.1 / v4.87.2) was also hit on March 27 — the campaign is expanding.

  • 95M · Monthly PyPI Downloads
  • 5.5h · Exposure Window
  • 300GB · Data Claimed Exfiltrated
  • 5+ · Ecosystems Compromised

If you build AI applications in Python, there is a very good chance LiteLLM is somewhere in your stack — or in the stack of a tool you depend on. It is the de facto universal adapter for LLM APIs: one library that speaks to OpenAI, Anthropic, AWS Bedrock, Google VertexAI, and over a hundred other providers. It touches your .env files, your API keys, your Kubernetes secrets. That is exactly why it was targeted.

On March 24, 2026, two versions of LiteLLM were quietly pushed to PyPI. They looked legitimate. They were published entirely outside the project's official CI/CD release process. They contained a multi-stage credential stealer that executed silently in the background of every Python process on infected machines. By the time the community raised the alarm, the window had been open for over five hours.

This is not just an incident report. It is a blueprint for how the next wave of AI supply chain attacks will work — and why your current defenses are probably not sufficient.

—   The Attack Timeline   —

From a Vulnerability Scanner to 95 Million Downloads

This attack did not begin with LiteLLM. It began weeks earlier, as part of a coordinated campaign by a threat group known as TeamPCP, which emerged in late 2025 and has since systematically targeted the developer tooling ecosystem.

Mar 1, 2026 · First breach
Trivy (Aqua Security) — a popular open-source vulnerability scanner — is compromised. Attackers gain a foothold in CI/CD infrastructure. Aqua rotates credentials, but the rotation is not atomic: valid tokens briefly coexist with new ones during the changeover window, handing the attacker a live credential.

Mar 19, 2026 · Escalation
Trivy GitHub Actions are compromised. Attackers abuse mutable release references and trusted CI/CD workflows to move laterally. The attacker likely used a token captured during Trivy’s incomplete credential rotation window.

Mar 23, 2026 · Expansion
Checkmarx VS Code extensions and GitHub Actions are hit using the same playbook. The campaign is now clearly targeting developer security tooling — software that runs with broad, trusted access by design.

Mar 24, 2026 · Main event
LiteLLM v1.82.7 and v1.82.8 are uploaded to PyPI at 10:39 and 10:52 UTC, bypassing all official release workflows. A malicious .pth file executes automatically on every Python startup. The window closes at ~16:00 UTC when versions are yanked from PyPI.

Mar 27, 2026 · Still expanding
Telnyx PyPI package (3.75M+ downloads) is the latest casualty. Compromised versions 4.87.1 and 4.87.2 carry the same cloud-secret exfiltration pattern. The C2 server is 83[.]142.209.203. The campaign is ongoing.

“TeamPCP did not need to attack LiteLLM directly. They compromised Trivy, a vulnerability scanner running inside LiteLLM’s CI pipeline without version pinning. That single unmanaged dependency handed over the PyPI publishing credentials — and from there the attacker backdoored a library that serves 95 million downloads per month. One dependency. One chain reaction. Five supply chain ecosystems compromised in under a month.”

— Jacob Krell, Senior Director for Secure AI Solutions, Suzu Labs

—   Technical Deep Dive   —

What the Malware Actually Did

The payload was layered, encrypted, and persistent. Understanding it mechanically is the first step toward building defenses that actually work.

01
Silent Execution
Malicious .pth file auto-runs on every Python process startup — no import needed.
02
Credential Harvest
Sweeps SSH keys, cloud tokens, API keys, Kubernetes configs, and shell history.
03
Encrypted Exfil
AES-256-CBC + RSA-4096 encrypted bundle POSTed to attacker-controlled domain.
04
Persistence
Kubernetes backdoor pod + local systemd service survive package removal.

Stage 01 — Silent Execution via .pth Files

Python’s .pth file mechanism is a legitimate feature for extending the module search path. When the malicious LiteLLM was installed, it dropped a file called litellm_init.pth into the site-packages directory. Python executes .pth files automatically on startup — no import needed, no user action required. Every Python process on the machine became an execution vector.

litellm_init.pth — malicious .pth file (simplified)
# What a normal .pth file looks like (benign)
/home/user/myproject/src

# What litellm_init.pth contained
import base64; exec(base64.b64decode('..obfuscated multi-stage payload..'))

💡 Why .pth files are dangerous
Unlike a regular import, .pth files are processed by Python’s site module at interpreter startup — before any user code runs. This means the malware executed in CI runners, Docker builds, local dev environments, and production containers alike, simply because LiteLLM was installed.
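The same startup quirk the malware abused can be turned around for detection. The sketch below is an illustrative triage helper (not an official LiteLLM or PyPA tool): legitimate .pth lines are bare directory paths, and the only lines Python's site module will execute are those beginning with `import`, so flagging such lines across site-packages surfaces anything suspicious.

```python
import pathlib
import site


def suspicious_pth_lines(text):
    """Flag .pth lines that execute code instead of adding a path.

    Python's site module only executes .pth lines that start with
    'import' (a documented quirk); everything else is treated as a
    path entry. Those executable lines are what the malware abused.
    """
    flagged = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("import ") or stripped.startswith("import\t"):
            flagged.append(stripped)
    return flagged


def scan_site_packages():
    """Scan every site-packages directory for code-executing .pth files."""
    hits = {}
    for sp in site.getsitepackages() + [site.getusersitepackages()]:
        for pth in pathlib.Path(sp).glob("*.pth"):
            flagged = suspicious_pth_lines(pth.read_text(errors="ignore"))
            if flagged:
                hits[str(pth)] = flagged
    return hits
```

Note that some legitimate packages (setuptools, coverage tooling) also ship executable .pth lines, so treat hits as leads to review, not automatic verdicts.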

Stage 02 — Credential Harvesting

The decoded payload systematically swept the host for every secret a modern AI developer keeps on their machine. Its target list was comprehensive by design — LiteLLM was chosen precisely because it sits adjacent to all of these:

⚠️ Credential Targets

  • SSH private keys (~/.ssh/)
  • Cloud provider credentials — AWS access keys, GCP Application Default Credentials, Azure tokens
  • Kubernetes configs (~/.kube/config) and service account tokens
  • API keys from .env files and environment variables
  • Database connection strings and passwords
  • Shell history files — which frequently contain secrets passed as CLI arguments
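To scope what an infected machine may have leaked, it helps to enumerate which of these locations actually exist on the host. The sketch below is a hypothetical triage helper built from the target list above; the path list is an assumption you should extend for your own environment.

```python
from pathlib import Path

# Locations matching the harvester's reported target list; extend
# this for credential files specific to your environment.
SENSITIVE_PATHS = [
    "~/.ssh",
    "~/.aws/credentials",
    "~/.config/gcloud/application_default_credentials.json",
    "~/.azure",
    "~/.kube/config",
    "~/.bash_history",
    "~/.zsh_history",
]


def exposed_paths(candidates=SENSITIVE_PATHS):
    """Return the secret-bearing paths that exist on this host.

    On a machine that ran a compromised LiteLLM build, everything
    this returns should be treated as stolen and rotated.
    """
    return [p for p in candidates if Path(p).expanduser().exists()]
```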

Stage 03 — Encrypted Exfiltration

Collected data was not sent in plaintext. The malware used a hardcoded 4096-bit RSA public key to encrypt a random AES-256-CBC session key, then encrypted the harvested data with that session key, bundled everything into a tar archive, and POSTed it to https://models.litellm.cloud — a convincingly named domain that is not part of legitimate LiteLLM infrastructure.

Check if you are affected — run these commands now
# 1. Check your installed version
pip show litellm | grep Version

# 2. Look for the malicious .pth file in site-packages
find $(python -c "import site; print(site.getsitepackages()[0])") \
  -name "litellm_init.pth" 2>/dev/null

# 3. Check for persistence mechanisms
ls ~/.config/sysmon/sysmon.py 2>/dev/null
ls ~/.config/systemd/user/sysmon.service 2>/dev/null

# 4. Purge package cache to prevent re-infection from cached wheels
pip cache purge
# Or if using uv:
rm -rf ~/.cache/uv

Stage 04 — Lateral Movement & Kubernetes Persistence

This is where the attack shifts from credential theft to full infrastructure control. If a Kubernetes service account token was present, the malware read all cluster secrets across all namespaces and attempted to create a privileged alpine:latest pod on every node in kube-system. Each pod mounted the host filesystem and installed a persistent backdoor at /root/.config/sysmon/sysmon.py via a systemd user service.
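A quick way to act on this is to pull pod names from kube-system and match them against the reported node-setup-* naming pattern. The sketch below is illustrative only: the pattern comes from this incident's reporting, and audit_kube_system assumes a locally configured kubectl.

```python
import fnmatch
import subprocess

# Pod-name pattern reported for this campaign's backdoor pods.
SUSPICIOUS_PATTERNS = ["pod/node-setup-*"]


def flag_suspicious(pod_names, patterns=SUSPICIOUS_PATTERNS):
    """Return pod names matching known-bad name patterns."""
    return [name for name in pod_names
            if any(fnmatch.fnmatch(name, pat) for pat in patterns)]


def audit_kube_system():
    """List kube-system pods via kubectl and flag suspicious ones.

    Assumes kubectl is installed and pointed at the target cluster.
    """
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", "kube-system", "-o", "name"],
        capture_output=True, text=True, check=True,
    ).stdout
    return flag_suspicious(out.split())
```

An empty result is necessary but not sufficient: the attacker holds cluster credentials, so review audit logs and secrets access regardless of what the pod listing shows.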

⚠️ Critical

  • Removing the package alone is not sufficient. The malware establishes persistence that survives uninstallation.
  • If running Kubernetes, audit kube-system for pods named node-setup-* and review cluster secrets for unauthorized access.

—   The Bigger Picture   —

Why AI Tooling Is the New High-Value Attack Surface

This attack is not a one-off. It is a proof of concept for a systematic strategy targeting the AI software supply chain. Security-adjacent tooling has broad, trusted access by design — compromising one tool hands the attacker everything that tool was trusted to touch.

TeamPCP Threat Actors
Compromise one unpinned CI/CD dependency to steal PyPI publish credentials

Trivy — Aqua Security Vulnerability Scanner
Trusted tool, runs with broad access. Non-atomic credential rotation creates a live capture window.

LiteLLM v1.82.7 / v1.82.8 on PyPI
95M monthly downloads. Accesses every LLM API key, .env file, and cloud credential in your stack.

Your Dev Machine / CI Runner / Kubernetes Cluster
SSH keys, cloud credentials, API keys, DB passwords — harvested, encrypted, exfiltrated.

“What makes it especially notable is that the LiteLLM compromise appears to have been downstream fallout from the earlier Trivy breach — meaning attackers may have used one trusted CI/CD compromise to poison another widely used AI-layer dependency. That is exactly the kind of cascading, transitive risk security teams worry about most.”

— Cory Michal, CISO, AppOmni

—   Incident Response   —

If You Were Affected: Stop, Rotate, Audit

🚨 You are likely affected if…

  • You ran pip install litellm on March 24, 2026 between 10:39–16:00 UTC
  • You received v1.82.7 or v1.82.8 without a pinned version
  • You built a Docker image during this window with an unpinned pip install litellm
  • A dependency in your project pulled LiteLLM in transitively — e.g. via AI agent frameworks, MCP servers, or LLM orchestration tools

✅ You are not affected if…

  • You are running the official LiteLLM Proxy Docker image (the pre-built image was not compromised)
  • Your version was pinned to anything other than v1.82.7 or v1.82.8
  • You had no pip installs or upgrades during the 10:39–16:00 UTC window on March 24
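If you need to sweep many machines or virtualenvs, the version check can be scripted. This is a minimal sketch: the known-bad versions come from this advisory, and check_installed uses the standard importlib.metadata lookup.

```python
from importlib.metadata import PackageNotFoundError, version

# Exact versions reported compromised in this incident.
COMPROMISED = {"litellm": {"1.82.7", "1.82.8"}}


def is_compromised(package, pkg_version):
    """Return True if this exact package version is a known-bad build."""
    return pkg_version in COMPROMISED.get(package, set())


def check_installed(package="litellm"):
    """Check the locally installed version against the known-bad list.

    Returns (compromised, installed_version); version is None if the
    package is not installed in this environment.
    """
    try:
        v = version(package)
    except PackageNotFoundError:
        return False, None
    return is_compromised(package, v), v
```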

Immediate Response Checklist

  • Remove LiteLLM v1.82.7 and v1.82.8 from all environments and purge pip/uv caches
  • Delete ~/.config/sysmon/sysmon.py and ~/.config/systemd/user/sysmon.service if present
  • If running Kubernetes: audit kube-system for pods named node-setup-* and review all cluster secrets
  • Rotate all credentials on any affected machine — SSH keys, cloud provider keys, API keys, database passwords
  • Review outbound network logs for connections to models.litellm.cloud
  • Treat affected CI/CD runners as fully compromised — rotate secrets and rebuild clean base images
  • Engage forensic analysis if production infrastructure or customer data systems were exposed
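For the log-review step, a simple indicator sweep goes a long way. The sketch below searches log lines for the two network IOCs reported for this campaign (the exfiltration domain, plus the telnyx C2 IP refanged from its defanged form); treat any hit as a strong compromise signal.

```python
# Network indicators of compromise reported for this campaign.
IOCS = ["models.litellm.cloud", "83.142.209.203"]


def scan_log(lines, iocs=IOCS):
    """Yield (line_number, ioc, line) for every log line containing an IOC."""
    for n, line in enumerate(lines, start=1):
        for ioc in iocs:
            if ioc in line:
                yield n, ioc, line.rstrip()


def scan_log_file(path, iocs=IOCS):
    """Scan a log file on disk and return all IOC hits."""
    with open(path, errors="ignore") as f:
        return list(scan_log(f, iocs))
```

Run this over proxy, DNS, and egress logs covering March 24 onward; the IOC list will grow as the campaign develops, so refresh it from current threat reporting before each sweep.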

—   Systemic Defenses   —

Hardening Your AI Dependency Chain

Incident response addresses the immediate crisis. The structural problem is that most AI development workflows are built for speed, not security. Here is how to change that without killing velocity.

01 — Pin Your Dependencies. Always.

The single most impactful mitigation for supply chain attacks is version pinning. An unpinned pip install litellm will silently pull whatever the latest version is. A pinned install will not.

pip — safe dependency management
# Vulnerable: always pulls latest, no verification
pip install litellm

# Safe: locked to a specific verified version
pip install litellm==1.82.6

# Even better: lock file with hash verification
pip install --require-hashes -r requirements.txt
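Enforcing the pinning rule is easy to automate in CI. Here is a minimal, illustrative checker that flags any requirements line without an exact `==` pin; it deliberately skips comments and pip options, and it is a sketch rather than a full PEP 508 parser.

```python
def unpinned(requirements_text):
    """Return requirement lines that are not pinned to an exact version.

    A line counts as pinned only if it contains '=='. Comments, blank
    lines, and pip options (lines starting with '-') are ignored.
    """
    flagged = []
    for line in requirements_text.splitlines():
        req = line.split("#", 1)[0].strip()  # drop trailing comments
        if not req or req.startswith("-"):
            continue
        if "==" not in req:
            flagged.append(req)
    return flagged
```

Wiring this into a pre-merge check (fail the build if unpinned() returns anything) complements, but does not replace, hash verification via --require-hashes.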

02 — Deploy a Dependency Firewall for PyPI

Tools like Sonatype Repository Firewall, Socket.dev, or a private package mirror with allowlisting can intercept malicious packages before they reach your environment. This is the automated equivalent of a security review at the point of ingestion — and tooling of this kind could plausibly have blocked this attack in real time.

03 — Apply Zero-Trust Principles to Your Build Pipeline

  • Pin all GitHub Actions to specific commit SHAs, not mutable tags like @v3
  • Use ephemeral, short-lived credentials in CI (OIDC tokens where possible — never static API keys)
  • Perform atomic credential rotation: revoke old credentials before activating new ones — a rotation window is an attack window
  • Separate build environments from environments with production secret access
  • Audit and minimize the secrets accessible in each individual pipeline job
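The rotation ordering above can be made explicit in code. This is a hedged sketch, not a real secret-manager API: issue_new and revoke are hypothetical callables for your backend, and the point is purely the ordering (old token revoked before the new one goes live, so no two valid tokens ever coexist).

```python
def rotate_credential(store, name, issue_new, revoke):
    """Rotate a credential without a coexistence window.

    The old token is revoked *before* the new one is minted and
    activated: a brief outage for the pipeline, but no interval in
    which two valid tokens are live, which is the overlap that
    reportedly handed TeamPCP a usable Trivy token.

    `store` is any mapping of credential names to tokens;
    `issue_new` and `revoke` are hypothetical backend callables.
    """
    old = store.get(name)
    if old is not None:
        revoke(old)       # old token dead first
    new = issue_new()     # only then mint the replacement
    store[name] = new
    return new
```

In CI, pair this with a retry-on-auth-failure policy so jobs tolerate the short gap between revocation and activation.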

04 — Scan New Dependencies Before Merging

Integrate automated supply chain scanning into your PR process. Tools like pip-audit, Snyk, or Grype can flag known-malicious packages before they land in your main branch.

pip-audit — scan your environment
# Install and run pip-audit against your current environment
pip install pip-audit
pip-audit

# Audit a specific requirements file with hash verification
pip-audit --require-hashes -r requirements.txt

05 — Treat AI Credentials Like Production Secrets From Day One

Okta Threat Intelligence warned in 2025 that rapid AI agent adoption was creating “identity debt” — developers connecting AI agents directly to production resources with static, long-lived tokens stored in plaintext config files. This attack is that prediction materializing. The fix is governance: short-lived tokens, scoped access, and a formal review process before any AI component touches production data.

—   Threat Landscape   —

Attack Surface Mapping: Why AI Tooling Is Targeted

TeamPCP has exclusively targeted security-adjacent and AI-adjacent tooling. The pattern is intentional — these tools run with broad access because that is how they function. The following table maps each tool type to its blast radius when compromised:

| Tool / Component | Access Level | Attacker Gain on Compromise | Risk |
| --- | --- | --- | --- |
| LiteLLM | All LLM API keys, env vars | Every AI provider credential in the stack | Critical |
| Trivy | CI/CD pipeline, image registries | Build secrets, registry credentials, downstream packages | Critical |
| Telnyx SDK | Cloud secrets, telephony APIs | Communication infrastructure access | High |
| VS Code Extensions | IDE filesystem, git credentials | Source code, all local secrets, commit signing keys | High |
| AI Agent Frameworks | Production APIs, databases | Data exfiltration, unauthorized downstream actions | Critical |

—   Conclusion   —

The AI Supply Chain Is Now a Primary Attack Surface

The LiteLLM attack is not an isolated incident. It is the clearest signal yet that threat actors have identified the AI software supply chain as a high-leverage, underdefended target. LLM gateway libraries, agent frameworks, and AI developer tools all share a common characteristic: they are adopted fast, often without security review, and they sit in privileged positions between applications and sensitive infrastructure.

The organizations most at risk are not necessarily the least sophisticated. They are the most experimental: teams racing to build with AI before establishing the governance practices that security-mature organizations apply to production dependencies. The blast radius of this attack is largely a story about AI adoption outpacing AI security.

📌 Key Takeaways for Engineering and Security Leaders

  • Every open-source AI library in your stack is a potential entry point — treat them with the same diligence as production code
  • CI/CD pipelines are now a primary attack vector — audit the trust chain of every action and dependency in your build process
  • Cascading supply chain attacks are the new normal — a breach in one tool can propagate to everything downstream
  • AI gateway libraries and agent tools have broad, privileged access by design — making them the highest-value targets in your environment
  • Atomic credential rotation is non-negotiable — a rotation window is an attack window

This article was written on March 28, 2026, four days after the incident. The TeamPCP campaign is ongoing. For the latest indicators of compromise, follow ReversingLabs and Help Net Security. If you believe your systems are affected, contact LiteLLM via their official security disclosure channel and engage your incident response team immediately.

Sources: LiteLLM official security update (docs.litellm.ai), FutureSearch technical analysis (futuresearch.ai), Sonatype Security Research, ReversingLabs TeamPCP campaign tracking, Help Net Security, Okta Threat Intelligence, AppOmni CISO statement.
