CVE-2026-22807

8.8 HIGH

📋 TL;DR

This vulnerability allows arbitrary code execution on vLLM servers during model loading. Attackers who can influence the model repository or path (local directory or remote Hugging Face repo) can execute malicious Python code at server startup. This affects vLLM versions 0.10.1 through 0.13.x.

💻 Affected Systems

Products:
  • vLLM
Versions: 0.10.1 through 0.13.x
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Vulnerable when loading models from any source (local or remote) whose configuration contains an auto_map entry.
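For context, the auto_map mechanism maps model classes to Python modules shipped inside the model repository itself. A minimal sketch of what a malicious configuration could look like (the module and class names below are hypothetical, not taken from any real exploit):

```python
import json

# Hypothetical malicious config.json shipped inside a model repo. The auto_map
# entry points the loader at a Python module in the same repo; when remote code
# is trusted, that module is imported (and thus executed) at model load time.
malicious_config = {
    "model_type": "llama",
    "auto_map": {
        # "evil_module.EvilModel" is a hypothetical attacker-controlled module;
        # any top-level code in evil_module.py runs on import, i.e. at startup.
        "AutoModelForCausalLM": "evil_module.EvilModel",
    },
}

print(json.dumps(malicious_config, indent=2))
```

Because the import happens during model loading, the payload fires before the server accepts its first API request.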

📦 What is this software?

vLLM is an open-source, high-throughput inference and serving engine for large language models. It is commonly deployed as an OpenAI-compatible API server and loads model weights and configuration from local directories or Hugging Face repositories.

⚠️ Risk & Real-World Impact

🔴

Worst Case

Complete compromise of the vLLM host with full remote code execution, allowing data theft, lateral movement, and persistent backdoor installation.

🟠

Likely Case

Attacker gains initial foothold on the server, potentially leading to data exfiltration, cryptocurrency mining, or credential harvesting.

🟢

If Mitigated

No impact if proper controls prevent untrusted model sources and the system is patched.

🌐 Internet-Facing: HIGH - If vLLM servers are internet-facing and load models from user-controlled sources, exploitation is straightforward.
🏢 Internal Only: MEDIUM - Internal attackers or compromised internal systems could exploit this if they can influence model paths.

🎯 Exploit Status

Public PoC: ✅ No
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires control over the model repository or path but no authentication to the vLLM API. The vulnerability triggers during server startup, before any API requests are served.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.14.0

Vendor Advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr

Restart Required: Yes

Instructions:

1. Update vLLM to version 0.14.0 or later using pip: pip install --upgrade "vllm>=0.14.0" (quote the specifier so the shell does not interpret >= as a redirect)
2. Restart all vLLM services
3. Verify the fix by checking the installed version (see How to Verify below)

🔧 Temporary Workarounds

Restrict model sources (applies to: all)

Only load models from trusted, verified sources and avoid user-controlled model paths.
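One way to enforce this is an allowlist gate in your own launch wrapper, sketched below. TRUSTED_MODELS, resolve_model, and the entries shown are hypothetical names for illustration, not part of vLLM:

```python
# Hypothetical allowlist gate: refuse to hand an untrusted model identifier
# to the serving engine. The entries are examples; use your own inventory.
TRUSTED_MODELS = {
    "meta-llama/Llama-3.1-8B-Instruct",
    "/srv/models/verified-llama",
}

def resolve_model(requested: str) -> str:
    """Return the model identifier only if it is explicitly allowlisted."""
    if requested not in TRUSTED_MODELS:
        raise PermissionError(f"untrusted model source: {requested}")
    return requested
```

Calling resolve_model before constructing the engine means a user-supplied repo name can never reach the loader unvetted.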

Network segmentation (applies to: all)

Isolate vLLM servers from the internet and restrict outbound connections to Hugging Face.

🧯 If You Can't Patch

  • Implement strict access controls on model directories and repositories
  • Run vLLM in isolated containers with minimal privileges and network access

🔍 How to Verify

Check if Vulnerable:

Check vLLM version with python -c "import vllm; print(vllm.__version__)". If the version is between 0.10.1 and 0.13.x inclusive, the system is vulnerable.
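The range check above can be scripted with a small helper. This is a simplified sketch: it compares plain MAJOR.MINOR.PATCH strings and does not handle pre-release suffixes such as rc tags:

```python
# Simplified version comparison: parse "MAJOR.MINOR.PATCH" into a tuple of ints.
# Note: this sketch does not handle pre-release suffixes like "0.14.0rc1".
def version_tuple(ver: str):
    return tuple(int(part) for part in ver.split(".")[:3])

def is_vulnerable(ver: str) -> bool:
    """True if ver falls in the affected range 0.10.1 through 0.13.x."""
    return version_tuple("0.10.1") <= version_tuple(ver) < version_tuple("0.14.0")

print(is_vulnerable("0.13.2"))  # True: affected release
print(is_vulnerable("0.14.0"))  # False: first fixed release
```

In practice you would feed it the output of vllm.__version__ or importlib.metadata.version("vllm").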

Check Version:

python -c "import vllm; print(vllm.__version__)"

Verify Fix Applied:

After patching, verify version is 0.14.0 or higher and test loading a model with auto_map configuration from a controlled source.

📡 Detection & Monitoring

Log Indicators:

  • Unexpected Python module imports during model loading
  • Errors related to auto_map or trust_remote_code
  • Unusual process execution from vLLM context
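A quick way to sweep existing logs for the indicators above. The pattern set and sample log line are illustrative; adjust them to your actual log format:

```python
import re

# Illustrative indicator patterns; tune these to your logging format.
SUSPICIOUS = re.compile(r"auto_map|trust_remote_code|dynamic module", re.IGNORECASE)

def flag_log_lines(log_text: str) -> list[str]:
    """Return log lines that mention dynamic-code-loading indicators."""
    return [line for line in log_text.splitlines() if SUSPICIOUS.search(line)]

sample = "INFO loading model\nWARNING trust_remote_code enabled for repo X\n"
print(flag_log_lines(sample))
```

Flagged lines are a starting point for triage, not proof of compromise; correlate them with the network indicators below.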

Network Indicators:

  • Unexpected outbound connections from vLLM servers
  • Downloads from untrusted Hugging Face repositories

SIEM Query:

process.name:vllm AND (process.cmd_line:*auto_map* OR process.cmd_line:*trust_remote_code*)

🔗 References

  • Vendor advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr
