CVE-2025-46722

4.2 MEDIUM

📋 TL;DR

A flaw in vLLM's image hashing function allows hash collisions: two images with different dimensions but identical raw pixel data produce the same hash. This can cause incorrect cache hits, potentially serving the wrong image or leaking another user's image data. Deployments running vLLM versions 0.7.0 through 0.8.x for multimodal LLM inference are affected.
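The bug class is easy to see in miniature. The sketch below is illustrative, not vLLM's actual code: an image is modeled as `(width, height, raw_pixel_bytes)`, and the vulnerable pattern hashes only the pixel buffer, omitting the dimensions.

```python
import hashlib

# Illustrative sketch of the bug class (NOT vLLM's actual code): an image
# is modeled as (width, height, raw_pixel_bytes).
def unsafe_hash(width: int, height: int, pixels: bytes) -> str:
    # Vulnerable pattern: only the pixel buffer is hashed, so two images
    # with different dimensions but identical raw bytes collide.
    return hashlib.sha256(pixels).hexdigest()

def safe_hash(width: int, height: int, pixels: bytes) -> str:
    # Fixed pattern: mix the dimensions into the digest so a reshaped
    # image no longer collides with the original.
    digest = hashlib.sha256(f"{width}x{height}:".encode())
    digest.update(pixels)
    return digest.hexdigest()

pixels = bytes(64)            # 64 identical zero bytes
wide = (16, 4, pixels)        # 16x4 image
square = (8, 8, pixels)       # 8x8 image, same raw bytes
assert unsafe_hash(*wide) == unsafe_hash(*square)  # collision: wrong cache hit
assert safe_hash(*wide) != safe_hash(*square)      # dimensions break the collision
```

The 0.9.0 fix follows the same idea conceptually: image metadata is incorporated into the cache key so that pixel-identical, differently-shaped images hash differently.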

💻 Affected Systems

Products:
  • vLLM
Versions: 0.7.0 to 0.8.x
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects systems using vLLM's multimodal capabilities with image processing. Text-only deployments are unaffected.

📦 What is this software?

vLLM is an open-source, high-throughput inference and serving engine for large language models. Its multimodal support accepts images alongside text prompts and hashes incoming images so preprocessing results can be cached and reused across requests.

⚠️ Risk & Real-World Impact

🔴 Worst Case

An attacker could craft specially sized images to trigger hash collisions, causing the system to serve incorrect cached responses, potentially leaking sensitive image data or causing denial of service through cache poisoning.

🟠 Likely Case

Accidental hash collisions from legitimate image processing workflows cause incorrect cache hits, leading to serving wrong images or degraded performance.

🟢 If Mitigated

With proper input validation and monitoring, impact is limited to occasional cache misses or minor performance degradation.

🌐 Internet-Facing: MEDIUM
🏢 Internal Only: LOW

🎯 Exploit Status

Public PoC: ✅ No
Weaponized: NO
Unauthenticated Exploit: ✅ No
Complexity: MEDIUM

Exploitation requires an understanding of image formats and the ability to submit images to the vLLM endpoint. No public exploits are known.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.9.0

Vendor Advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-c65p-x677-fgj6

Restart Required: Yes

Instructions:

1. Upgrade vLLM to version 0.9.0 or later using pip: pip install "vllm>=0.9.0" (quote the specifier so the shell does not interpret >= as a redirect)
2. Restart all vLLM services
3. Clear any existing image caches to remove potentially corrupted entries

🔧 Temporary Workarounds

Disable image caching

Applies to: all affected versions

Temporarily disable image hashing/caching in the vLLM configuration to prevent collisions, e.g. cache_config.enable_image_cache = false (configuration keys vary across releases; confirm the exact option name for your vLLM version).

🧯 If You Can't Patch

  • Implement external image validation to detect size mismatches before processing
  • Monitor cache hit rates and implement alerts for abnormal patterns
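The second bullet can be partially automated. A hypothetical log-scanning sketch (the event format and function name are assumptions, not part of vLLM): flag any cache hash that has been observed with more than one image size, which is the collision signature of this CVE.

```python
from collections import defaultdict

# Hypothetical collision detector: `events` is an iterable of
# (hash_hex, (width, height)) pairs extracted from your serving logs.
def find_suspect_hashes(events):
    """Return hashes observed with more than one distinct image size."""
    sizes_by_hash = defaultdict(set)
    for hash_hex, size in events:
        sizes_by_hash[hash_hex].add(size)
    # One hash mapping to multiple sizes is exactly the CVE-2025-46722
    # collision pattern (identical pixel bytes, different dimensions).
    return {h: sizes for h, sizes in sizes_by_hash.items() if len(sizes) > 1}

events = [
    ("abc123", (64, 64)),
    ("abc123", (32, 128)),  # same hash, different size -> suspicious
    ("def456", (64, 64)),
]
suspects = find_suspect_hashes(events)  # flags "abc123" only
```

Feeding this from real logs requires extracting the hash and image dimensions per request, which depends on your logging setup.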

🔍 How to Verify

Check if Vulnerable:

Check the installed vLLM version: python -c "import vllm; print(vllm.__version__)" — if the version is between 0.7.0 and 0.8.x, you are vulnerable.

Verify Fix Applied:

After upgrading, run the same command and confirm the version is 0.9.0 or later.
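For scripted checks across a fleet, the version comparison can be automated. A minimal sketch (the helper name is ours, and the parsing is deliberately naive — it assumes a plain "major.minor.patch" version string):

```python
def is_patched(version: str) -> bool:
    """Return True if this vLLM version includes the CVE-2025-46722 fix (0.9.0+)."""
    # Naive parse: assumes a plain "major.minor.patch" numeric string;
    # pre-release suffixes like "0.9.0rc1" would need real version parsing.
    major, minor, patch = (int(part) for part in version.split(".")[:3])
    return (major, minor, patch) >= (0, 9, 0)

# Check the running installation with:
#   import vllm; print(is_patched(vllm.__version__))
print(is_patched("0.8.5"))  # False: inside the affected range
print(is_patched("0.9.0"))  # True: first fixed release
```

For robust comparisons (dev builds, release candidates), prefer packaging.version.Version over this hand-rolled tuple compare.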

📡 Detection & Monitoring

Log Indicators:

  • Unusual cache hit patterns for images
  • Multiple images with same hash but different metadata

Network Indicators:

  • Increased cache-related errors in API responses

SIEM Query:

vllm AND (cache_hit OR hash_collision OR image_error)

🔗 References

  • Vendor advisory: https://github.com/vllm-project/vllm/security/advisories/GHSA-c65p-x677-fgj6
  • CVE record: CVE-2025-46722
