
LLM Supply Chain Attack: Prevention Strategies

LLM supply chain vulnerabilities have emerged as a top risk for machine learning models. Third-party training data, pre-trained models, and LLM plugin extensions add new attack surfaces on top of the vulnerabilities inherent in traditional software supply chains. Left unchecked, these vulnerabilities can lead to biased output, security breaches, or system shutdowns.

In this article, we'll share what security teams need to know about LLM supply chain attacks and how to prevent them. We'll cover:

  • What LLM supply chain attacks are
  • Examples of LLM supply chain attacks
  • Common vulnerabilities of LLM supply chains
  • How to secure LLMs against supply chain attacks

What Is an LLM Supply Chain Attack?

LLM supply chain attacks target vulnerabilities in the components, libraries, tools, and processes used to build large language models (LLMs). These include vulnerabilities associated with traditional software supply chains as well as LLM-specific supply chain elements, such as third-party:

  • Training data
  • Pre-trained models
  • LLM plugin extensions

LLM supply chain attacks can expose models to numerous risks. Consequences may include sensitive data breaches, model output manipulation, and denials of service.

Examples of LLM Supply Chain Attacks

To illustrate what LLM supply chain attacks look like, let's consider some real-world examples:

OpenAI Python Library Bug

In March 2023, OpenAI temporarily took ChatGPT offline after discovering a bug in redis-py, the open-source Redis client library for Python. OpenAI had been using the library to interface with Redis in order to cache user information. Flawed synchronization between the incoming request queue and outgoing response queue caused some users to see titles and initial messages from other active users' chat histories. In some instances, the bug also exposed user payment information, though fortunately not full credit card numbers.

Upon investigation, OpenAI traced the bug to the asyncio redis-py client for Redis Cluster. OpenAI fixed the bug and performed testing to confirm the fix.

Python Package Index (PyPI) Code Repository Dependency Chain Abuse

In December 2022, bad actors targeted the PyTorch machine learning library in a type of attack known as dependency chain abuse. Dependency chain abuse attacks deceive software engineers or build systems into downloading and executing malicious packages disguised as legitimate required software.

In this instance, attackers exploited the fact that public package registries such as PyPI do not namespace packages, allowing anyone to register any unclaimed package name. The perpetrators went to the pypi.org registry and registered a package with the same name as torchtriton, a dependency of the PyTorch-nightly build, giving it a high version number so that package managers would prefer it over the legitimate copy. The package contained malicious code that read files and exfiltrated data back to a command-and-control server.

The compromised package affected users who installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022. Users of stable releases were unaffected.

PyTorch advised affected users to uninstall PyTorch-nightly and the compromised torchtriton package. The maintainers then fixed the problem by renaming their torchtriton dependency to pytorch-triton and reserving the torchtriton name on PyPI.
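
Hash-pinning makes this kind of substitution much harder: pip's --require-hashes mode refuses any package file whose digest does not match a recorded pin. The sketch below illustrates the same check in plain Python; the package file name and digest are placeholders, not real values:

    import hashlib
    from pathlib import Path

    # Hypothetical pin list: file name -> expected SHA-256 digest, as a
    # requirements file with --hash entries would record it. The digest
    # shown here is a placeholder, not a real package hash.
    PINNED_HASHES = {
        "example_pkg-1.0-py3-none-any.whl":
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def verify_artifact(path: Path) -> None:
        """Refuse any downloaded package whose digest does not match its pin."""
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != PINNED_HASHES.get(path.name):
            raise SystemExit(f"{path.name}: digest mismatch, refusing to install")

    # Usage, once the wheel has actually been downloaded:
    # verify_artifact(Path("example_pkg-1.0-py3-none-any.whl"))

A typo-squatted or dependency-confused package would fail this check even if it carried a higher version number, because its bytes would not match the recorded digest.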

ChatGPT Plugin Vulnerabilities

In March 2024, API security provider Salt Security published research exposing security flaws that could have let attackers install malicious plugins on ChatGPT users' accounts without their consent and take over their accounts on third-party services in the ChatGPT ecosystem. The vulnerability stemmed from ChatGPT's ability to use plugins to interact with third-party services such as Google Drive, GitHub, and Salesforce. Allowing ChatGPT to send sensitive data to these services and access accounts on them created the potential for ChatGPT to be used to take over external accounts.

To begin addressing these vulnerabilities, OpenAI introduced custom versions of ChatGPT called GPTs, which have narrower use cases and less third-party dependency. It also deprecated the plugin system, removing users' ability to install new plugins or start new conversations with existing ones.

Hugging Face Safetensors Compromise

In February 2024, AI security provider HiddenLayer published research showing how hackers could compromise the Safetensors conversion service used by pre-trained model provider Hugging Face. Hugging Face developed the Safetensors format to address a well-known vulnerability in the popular Pickle serialization format: loading a Pickle file can execute arbitrary code. To convert Pickle data files into the Safetensors format, Hugging Face runs a conversion service that returns converted files to repositories through pull requests.
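
To see why loading Pickle files is dangerous, consider this minimal, self-contained Python sketch. The payload here is a harmless echo, but it demonstrates that unpickling alone executes attacker-chosen code:

    import os
    import pickle

    class MaliciousPayload:
        # __reduce__ tells the unpickler to call an arbitrary callable at
        # load time -- here a harmless echo, but it could be any command.
        def __reduce__(self):
            return (os.system, ("echo code executed during unpickling",))

    blob = pickle.dumps(MaliciousPayload())

    # Merely *loading* the blob runs the attacker's command.
    pickle.loads(blob)

Safetensors, by contrast, stores only raw tensor bytes and metadata, so simply loading a file cannot trigger code execution.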

Unfortunately, the researchers discovered that the conversion service ran in a shared, containerized environment on the Hugging Face platform where any user could execute code. This not only allowed the conversion service itself to be hijacked, but also allowed a compromised service to issue malicious pull requests to any repository on the platform and tamper with models submitted for conversion.

Hugging Face received an alert about the discovered vulnerability prior to its public disclosure. In follow-up security tests, Hugging Face detected unauthorized access to its Spaces platform.

Common Vulnerabilities Found in the LLM Supply Chain

LLM supply chain vulnerabilities can be classified into a few major categories, including:

  1. Traditional third-party supply-chain vulnerabilities
  2. Pre-trained model vulnerabilities (model poisoning)
  3. Poisoned training data vulnerabilities (data poisoning)
  4. Outdated model dependencies
  5. Terms and conditions, data privacy, and copyright oversights

1. Traditional Third-party Supply-chain Vulnerabilities

LLM supply chains may suffer from the same vulnerabilities that plague traditional supply chains. Misconfigurations, outdated patches, and zero-day vulnerabilities in sources such as third-party libraries may introduce risks into LLM ecosystems.

2. Pre-trained Model Vulnerabilities (Model Poisoning)

Pre-trained models can introduce vulnerabilities into the LLM supply chain. Poisoned pre-trained models can contain triggers that open backdoors, giving attackers unauthorized access to systems.

3. Poisoned Training Data Vulnerabilities (Data Poisoning)

Crowdsourced training data also runs the risk of poisoning. Bad actors may inject malicious values to corrupt data aggregates, for example to maximize a model's estimation errors.
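
A toy example makes the mechanism concrete: a handful of injected values can drag an aggregate statistic far from its true value. All numbers below are made up for illustration:

    # A few poisoned values shift an aggregate a model might rely on.
    clean = [0.9, 1.1, 1.0, 0.95, 1.05]           # legitimate values
    poisoned = clean + [100.0, 100.0]             # two injected outliers

    mean_clean = sum(clean) / len(clean)           # ~1.0
    mean_poisoned = sum(poisoned) / len(poisoned)  # ~29.3

    print(f"clean mean: {mean_clean:.2f}, poisoned mean: {mean_poisoned:.2f}")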

4. Outdated Model Dependencies

LLM supply chains may suffer vulnerabilities stemming from reliance on models that no longer receive security updates. This can occur when models are outdated or deprecated.

5. Terms and Conditions, Data Privacy, and Copyright Oversights

Flaws in third-party providers' terms and conditions (T&Cs) and data privacy policies may result in sensitive data being introduced into model training. Similarly, suppliers may introduce copyrighted material into supply chains, creating plagiarism and copyright liabilities.

How Can You Secure Your LLM against Supply Chain Attacks?

You can reduce the risk of LLM supply chain attacks by following best practices. Here are ten of the most important strategies to implement:

  1. Screen data sources and suppliers
  2. Use reputable plugins
  3. Mitigate vulnerable and outdated components
  4. Maintain component inventories
  5. Secure LLM models
  6. Use model and code signing
  7. Run anomaly detection and adversarial robustness tests
  8. Monitor vulnerabilities, plugins, and outdated components
  9. Apply patching policies
  10. Monitor supplier security and access

1. Screen Data Sources and Suppliers

Keep bad data out of your supply chain by carefully vetting the sources and suppliers of your data. Your review should include close examination of T&Cs and privacy policies. Make sure third-party partners' policies align with your own: for example, confirm that your sensitive data is not used for training models, and verify that sources and suppliers safeguard against copyright violations.

2. Use Reputable Plugins

Restrict plugin use to recommended plugins consistent with industry standards and tested for your intended application. Well-designed plugins should enforce parameterized input, input validation and sanitization, access control, and authenticated identities, among other safeguards. The Open Worldwide Application Security Project (OWASP) provides detailed guidance on avoiding insecure plugin design.
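
As a rough illustration of the input-validation controls a plugin should enforce, here is a minimal Python sketch. The action names and sanitization rules are simplified assumptions for a hypothetical file-access plugin, not a complete defense:

    import re

    ALLOWED_ACTIONS = {"read", "list"}  # hypothetical actions for an example plugin

    def validate_plugin_input(action: str, filename: str) -> str:
        """Allowlist the action and sanitize the file name before the plugin acts."""
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"action {action!r} is not permitted")
        # Permit only simple file names; reject path separators, traversal,
        # and shell metacharacters outright.
        if not re.fullmatch(r"[\w.-]+", filename) or filename.startswith("."):
            raise ValueError(f"file name {filename!r} fails sanitization")
        return filename

    print(validate_plugin_input("read", "notes.txt"))  # passes
    try:
        validate_plugin_input("read", "../etc/passwd")
    except ValueError as exc:
        print("rejected:", exc)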

3. Mitigate Vulnerable and Outdated Components

Avoid contamination of your supply chain from insecure and obsolete components. Mitigation strategies include removing unused dependencies, using secure sources, scanning for vulnerabilities, and maintaining patches. OWASP provides guidelines for mitigating vulnerable and outdated components.
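
For Python-based supply chains, one way to automate vulnerability scanning is pip-audit, a PyPA tool that checks dependencies against known-vulnerability databases. A minimal sketch of wiring it into a build step, assuming pip-audit is installed and a requirements.txt file exists:

    import subprocess

    # pip-audit (install with: pip install pip-audit) checks dependencies
    # against known-vulnerability databases such as the PyPI advisory feed.
    result = subprocess.run(
        ["pip-audit", "--requirement", "requirements.txt"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)

    # pip-audit exits non-zero when it finds known vulnerabilities.
    if result.returncode != 0:
        raise SystemExit("vulnerable dependencies found; block the build")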

4. Maintain Component Inventories

Secure your supply chain by deploying a Software Bill of Materials (SBOM): an inventory of your third-party components along with their version and patch status. Keeping your inventory up to date can reduce your exposure to zero-day vulnerabilities.
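
As a rough sketch of the data an SBOM records, the snippet below inventories every package in the current Python environment. Production SBOMs normally use a standard format such as CycloneDX or SPDX; this only illustrates the underlying idea:

    import json
    from importlib.metadata import distributions

    # Build a minimal component inventory of the current environment.
    components = sorted(
        ({"name": dist.metadata["Name"], "version": dist.version}
         for dist in distributions()),
        key=lambda component: str(component["name"]).lower(),
    )
    print(json.dumps({"components": components}, indent=2))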

5. Secure LLM Models

SBOMs typically don't cover LLM models, their datasets, or their artifacts. If you're using your own LLM model, maintain security by following best practices for securing model repositories and tracking data, models, and experiments.
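
One simple tracking practice is recording a cryptographic digest of every model artifact so later tampering is detectable. A minimal sketch, with hypothetical file names:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream the file through SHA-256 so large weights don't exhaust memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical artifacts; substitute your own model and dataset files.
    artifacts = ["model.safetensors", "training_data.csv"]
    manifest = {name: sha256_of(Path(name))
                for name in artifacts if Path(name).exists()}
    Path("model_manifest.json").write_text(json.dumps(manifest, indent=2))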

6. Use Model and Code Signing

Digital signatures ensure publisher authenticity and software integrity. Use model and code signing to protect your supply chain from counterfeit and tampered elements.
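
A minimal sketch of the signing idea, using Ed25519 keys from the third-party cryptography package. The artifact bytes below are a stand-in; in practice the publisher signs the real model file, keeps the private key secret, and distributes only the public key:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side: sign the artifact bytes with a private key.
    private_key = Ed25519PrivateKey.generate()
    model_bytes = b"...model weights..."  # stand-in for the real artifact bytes
    signature = private_key.sign(model_bytes)

    # Consumer side: verify with the publisher's public key before loading.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, model_bytes)
        print("signature valid: artifact is authentic and untampered")
    except InvalidSignature:
        raise SystemExit("signature check failed; do not load this model")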

7. Run Anomaly Detection and Adversarial Robustness Tests

Scanning for anomalies and testing for vulnerabilities can protect your supply chain from poisoned data. Key mitigation strategies include data and model validation, secure data storage, data separation, access controls, anomaly monitoring, and adversarial robustness tests. OWASP provides best practices for preventing data poisoning attacks.
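
Anomaly detection can be as simple as a robust outlier screen over incoming training values. The sketch below uses the median absolute deviation, which the injected outliers cannot easily distort; real pipelines use far more sophisticated methods, and all numbers here are made up:

    import statistics

    # Toy feature values with two injected outliers.
    values = [1.02, 0.98, 1.00, 0.97, 1.03, 0.99, 42.0, -37.5]

    # Median and median absolute deviation resist distortion by the outliers
    # themselves, unlike a plain mean and standard deviation.
    center = statistics.median(values)
    spread = statistics.median(abs(v - center) for v in values)

    anomalies = [v for v in values if abs(v - center) > 10 * spread]
    print("flagged for review:", anomalies)  # [42.0, -37.5]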

8. Monitor Vulnerabilities, Plugins, and Outdated Components

Vigilant monitoring protects your supply chain against bad elements. Comprehensive monitoring should encompass scans for vulnerabilities, unauthorized plugins, and outdated components. Both models and artifacts should be monitored.
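
Artifact monitoring can build directly on the digest manifest sketched earlier: recompute each digest on a schedule and alert on drift. A minimal, hypothetical example:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Compare current digests against the manifest recorded at inventory time.
    manifest_path = Path("model_manifest.json")
    manifest = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    for name, expected in manifest.items():
        if sha256_of(Path(name)) != expected:
            print(f"ALERT: {name} has changed since it was inventoried")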

9. Apply Patching Policies

Good patching policies can help you avoid supply chain contamination from vulnerable or outdated components. Make sure all patches have been applied to APIs and models in your supply chain.

10. Monitor Supplier Security and Access

Monitoring should extend to supplier security and access. Conduct regular reviews and audits to verify that suppliers have not modified their security policies or T&Cs.

Implement Pentesting to Fortify Your LLM Supply Chain Security

Penetration testing is an invaluable aid in applying the LLM supply chain security strategies recommended here. By simulating attacks on your LLM supply chain, pentesting allows you to identify and fix vulnerabilities before bad actors can exploit them.

Cobalt provides next-generation LLM and AI penetration testing services to help you mitigate risks to your supply chain and other attack surfaces in your LLM and AI applications. The Cobalt Core, an elite group of highly vetted pentesters, works with OWASP and leading security authorities to keep ahead of emerging LLM vulnerabilities and develop effective mitigation strategies.

Our platform makes it easy for you to set up tests in collaboration with our on-demand network of pentesters, allowing you to scale up your testing as needed. Connect with Cobalt to learn how we can help you level up your LLM supply chain security.

About Andrew Obadiaru
Andrew Obadiaru is the Chief Information Security Officer at Cobalt. In this role, Andrew is responsible for maintaining the confidentiality, integrity, and availability of Cobalt's systems and data. Prior to joining Cobalt, Andrew was the Head of Information Security for BBVA USA Corporate Investment Banking, where he oversaw the creation and execution of cybersecurity strategy. Andrew has 20+ years in the security and technology space, with a history of managing and mitigating risk across changing technologies, software, and diverse platforms.