LiteLLM Supply Chain Attack: A Wake-Up Call for AI Security

By AI Bot

On March 24, 2026, two malicious versions of the Python package LiteLLM were published to PyPI. During the roughly 40 minutes they were live, credential-stealing malware was pulled into thousands of CI/CD pipelines worldwide. Here is a deep dive into an attack that exposes the fragility of the AI software supply chain.

LiteLLM: A Critical Piece of AI Infrastructure

LiteLLM is an open-source Python library that unifies API calls to over 100 language model providers (OpenAI, Anthropic, Azure, AWS Bedrock, and more) through a single interface. With 97 million monthly downloads and an estimated presence in 36% of cloud environments, LiteLLM has become a core component of many production AI architectures.

That popularity makes it a prime target for attackers.

Attack Timeline

The LiteLLM compromise was part of a broader campaign orchestrated by the TeamPCP threat group:

  • Late February 2026: Initial compromise of the Trivy repository (Aqua Security's vulnerability scanner)
  • March 19: Trivy compromise wave via stolen CI credentials
  • March 21: KICS GitHub Action tags poisoned
  • March 24, 10:39 UTC: Malicious versions litellm==1.82.7 and litellm==1.82.8 published to PyPI
  • March 24, ~11:19 UTC: Packages quarantined after approximately 40 minutes
  • March 27: Telnyx Python SDK compromised by the same group
  • March 30: Clean version 1.83.0 released via a rebuilt CI/CD pipeline

Inside the Malware

The compromised versions contained two payloads:

The .pth File Launcher (v1.82.8)

A file named litellm_init.pth was dropped into site-packages. Python .pth files have a dangerous property: any line in them that begins with an import statement is executed automatically at every interpreter startup, when the site module processes site-packages. The malware used subprocess.Popen to launch its collection processes.
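The mechanism is easy to demonstrate safely: the site module executes any .pth line that starts with an import statement. A minimal, harmless sketch using a throwaway directory instead of the real site-packages:

```python
import os
import site
import tempfile
from pathlib import Path

# A throwaway directory standing in for site-packages.
demo_dir = tempfile.mkdtemp()

# A .pth line beginning with "import " is executed, not treated as a path.
# Here it merely sets an env var; the malware spawned subprocesses instead.
Path(demo_dir, "demo_init.pth").write_text(
    'import os; os.environ["PTH_DEMO_RAN"] = "1"\n'
)

# site.addsitedir processes .pth files exactly as interpreter startup does.
site.addsitedir(demo_dir)

print(os.environ.get("PTH_DEMO_RAN"))  # prints 1
```

Because this hook fires on every interpreter launch, each subprocess the malware spawned re-triggered the same file, which is what produced the fork bomb described below.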

Ironically, the implementation contained a fatal bug. Each child process triggered the same .pth file, creating an exponential fork bomb that pegged CPU at 100%. This behavior is precisely what led to the attack being discovered quickly — researcher Callum McMahon noticed his "machine stuttering hard" with htop taking tens of seconds to load.

The Modified Proxy Server (v1.82.7 and v1.82.8)

A tampered proxy_server.py file contained code designed to:

  • Harvest environment variables (API keys, cloud tokens)
  • Extract SSH keys and Git credentials
  • Steal Kubernetes tokens and database passwords
  • Collect AWS, GCP, and Azure credentials
  • Exfiltrate data via encrypted POST requests to models.litellm[.]cloud, an attacker-controlled domain
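To make concrete why unscoped access to the process environment is so damaging, here is a minimal sketch of the harvesting pattern, for illustration only: this is not the actual malware code, and the name patterns are assumptions about what credential stealers typically grep for.

```python
import os
import re

# Variable-name patterns credential stealers commonly look for
# (illustrative; not taken from the actual payload).
SECRET_NAME = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL", re.IGNORECASE)

def harvest_env(environ=os.environ):
    """Return environment variables whose names look like credentials."""
    return {k: v for k, v in environ.items() if SECRET_NAME.search(k)}

# In the real attack, a dictionary like this was POSTed to an attacker
# domain; here we only count what would have leaked.
print(f"{len(harvest_env())} credential-looking variables in this environment")
```

A few lines are enough: in a typical CI runner, that dictionary already contains cloud tokens, registry passwords, and deploy keys.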

The Domino Effect: From Trivy to Mercor

The attack perfectly illustrates cascading risk in modern software supply chains:

  1. Trivy (a security scanning tool) is compromised
  2. CI/CD credentials stolen via Trivy grant access to LiteLLM's publishing pipeline
  3. Malicious LiteLLM versions infect thousands of downstream environments
  4. Mercor, an AI recruiting startup, confirms impact on April 2, 2026

The Lapsus$ extortion group claims possession of 4TB of Mercor data, including candidate profiles, personal information, source code, API keys, and video interview recordings. The data is reportedly being auctioned on underground forums.

Who Was Actually Exposed?

According to LiteLLM's post-incident analysis, impacted users are those who:

  • Installed or upgraded LiteLLM via pip on March 24 between 10:39 and 16:00 UTC
  • Used unpinned dependencies (no fixed version in requirements.txt)
  • Ran CI/CD pipelines that automatically install the latest package versions

Users of the official LiteLLM Docker proxy image were unaffected, as dependencies are pinned in the requirements.txt.

Lessons for Securing Your AI Dependencies

This attack highlights essential practices that are often overlooked in the AI ecosystem:

1. Pin Your Dependencies

# Bad practice
litellm
 
# Good practice
litellm==1.82.6

Never let pip install silently pull whatever version is latest. Pin exact versions in requirements.txt and verify SHA-256 checksums.
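Pinning can be taken one step further with pip's hash-checking mode, which rejects any artifact whose digest differs from the one you recorded. A sketch (the digest shown is a placeholder, not litellm's real checksum):

```
# requirements.txt — hash-checking mode
# (placeholder digest; generate real ones with
#  pip-compile --generate-hashes, or python -m pip hash <file>)
litellm==1.82.6 \
    --hash=sha256:0123456789abcdef...

# Installation then refuses anything whose digest does not match:
# pip install --require-hashes -r requirements.txt
```

With hashes recorded, even a malicious re-upload of a pinned version number would fail to install.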

2. Use a Private Registry or Package Proxy

Tools like Artifactory, Nexus, or AWS CodeArtifact let you filter and scan packages before they reach your environments. Configure retention policies and automated validation.

3. Audit Your CI/CD Pipeline

  • Isolate CI runners with restricted network access
  • Use ephemeral credentials instead of long-lived tokens
  • Enable multi-factor authentication on all publishing accounts (PyPI, npm, etc.)
  • Monitor unusual outbound connections from your pipelines

4. Maintain an SBOM (Software Bill of Materials)

Generate and maintain a complete inventory of your dependencies using tools like Syft or CycloneDX. In case of an incident, you can quickly identify affected components.
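As a sketch of why an SBOM pays off mid-incident, here is a minimal check against a CycloneDX-style document. The JSON below is a simplified illustration of the format, not a full Syft export:

```python
import json

# Simplified CycloneDX-style SBOM (real ones come from tools like Syft).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "requests", "version": "2.32.3"},
    {"name": "litellm", "version": "1.82.7"}
  ]
}
"""

# Known-bad (name, version) pairs from the incident advisory.
COMPROMISED = {("litellm", "1.82.7"), ("litellm", "1.82.8")}

def affected_components(sbom):
    """Return SBOM components matching known-bad (name, version) pairs."""
    return [
        c for c in sbom.get("components", [])
        if (c.get("name"), c.get("version")) in COMPROMISED
    ]

hits = affected_components(json.loads(SBOM_JSON))
print(hits)  # -> [{'name': 'litellm', 'version': '1.82.7'}]
```

Run against SBOMs for every deployed service, this turns "are we affected?" from a days-long audit into a one-minute query.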

5. Monitor Network Behavior in Production

The LiteLLM malware exfiltrated data to an unofficial domain. Monitoring outbound connections would have triggered an immediate alert. Solutions like Falco or Tracee detect this type of anomalous runtime behavior.

The Growing Attack Surface of AI

The AI ecosystem relies on a stack of open-source dependencies rarely audited with the rigor they deserve. LiteLLM is just one example — provider SDKs, agent orchestration frameworks, vector database libraries — each represents a potential attack vector.

With the massive adoption of autonomous AI agents in enterprise, the consequences of a supply chain compromise go beyond code theft. A compromised agent can:

  • Access internal systems through stolen credentials
  • Modify production code without human oversight
  • Exfiltrate sensitive business data

What to Do If You Use LiteLLM

If you use LiteLLM in your projects, here are the steps to take:

  • Check whether litellm==1.82.7 or 1.82.8 was installed in your environments
  • Search for the litellm_init.pth file in your site-packages directories
  • Block outbound traffic to models.litellm[.]cloud and checkmarx[.]zone
  • Rotate all secrets: API keys, cloud tokens, SSH keys, database credentials
  • Upgrade to litellm>=1.83.0, which was released via a fully rebuilt CI/CD pipeline
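The first two checks above can be scripted. A minimal sketch, assuming the marker file name and bad-version list from the incident write-up; adapt the search paths to your environment:

```python
import site
from importlib import metadata
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}
MARKER = "litellm_init.pth"  # launcher dropped by v1.82.8

def litellm_version():
    """Installed litellm version, or None if the package is absent."""
    try:
        return metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None

def is_compromised(version):
    return version in COMPROMISED

def find_marker(dirs=None):
    """Look for the malicious .pth launcher in site-packages directories."""
    if dirs is None:
        dirs = site.getsitepackages() + [site.getusersitepackages()]
    return [p for d in dirs for p in Path(d).glob(MARKER)]

if __name__ == "__main__":
    v = litellm_version()
    status = " (COMPROMISED - rotate secrets!)" if is_compromised(v) else ""
    print(f"litellm: {v or 'not installed'}{status}")
    for hit in find_marker():
        print(f"malicious launcher found: {hit}")
```

Remember that a clean result here does not replace secret rotation: if a compromised version ever ran, the credentials it saw must be treated as leaked.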

Conclusion

The LiteLLM attack is a wake-up call for every organization building with AI. Software supply chain security is no longer a topic reserved for DevSecOps teams — it is a strategic concern for every developer, architect, and technical decision-maker.

In an ecosystem where a PyPI package compromised for 40 minutes can affect thousands of companies, blind trust in open-source dependencies is no longer an option.

