CVE-2025-23254

8.8 HIGH

📋 TL;DR

This vulnerability in NVIDIA TensorRT-LLM allows an attacker with local access to the TRTLLM server to exploit a data validation flaw in the Python executor, potentially leading to code execution, information disclosure, or data tampering. It affects all platforms running vulnerable versions of TensorRT-LLM, and the 8.8 CVSS score indicates significant security risk.

💻 Affected Systems

Products:
  • NVIDIA TensorRT-LLM
Versions: All versions prior to the patched release
Operating Systems: All platforms supported by TensorRT-LLM
Default Config Vulnerable: ⚠️ Yes
Notes: Vulnerability exists in the python executor component and requires local access to the TRTLLM server.

⚠️ Manual Verification Required

This CVE does not have specific version information in our database, so automatic vulnerability detection cannot determine if your system is affected.

Why? The CVE database entry doesn't specify which versions are vulnerable (no version ranges provided by the vendor/NVD).


Recommended Actions:
  1. Review the CVE details at NVD
  2. Check vendor security advisories for your specific version
  3. Test if the vulnerability is exploitable in your environment
  4. Consider updating to the latest version as a precaution
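Step 1 can be automated against the public NVD REST API (v2.0). A minimal sketch; the query requires outbound network access, so the actual call is left to the caller:

```python
import json
import urllib.request

CVE_ID = "CVE-2025-23254"
# Public NVD CVE API, version 2.0, filtered to this CVE.
NVD_URL = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

def fetch_cve(url: str = NVD_URL) -> dict:
    # Requires network access; NVD rate-limits unauthenticated clients,
    # so cache the response rather than polling.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Example (network required):
# details = fetch_cve()
# print(details["vulnerabilities"][0]["cve"]["id"])
```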

⚠️ Risk & Real-World Impact

🔴 Worst Case: Full system compromise, with the attacker gaining code execution, accessing sensitive AI model data, and tampering with inference results.

🟠 Likely Case: Information disclosure of AI model parameters or training data, potentially leading to model theft or data leakage.

🟢 If Mitigated: Limited impact if proper network segmentation and access controls prevent unauthorized local access to TRTLLM servers.

🌐 Internet-Facing: MEDIUM
🏢 Internal Only: HIGH

🎯 Exploit Status

Public PoC: ✅ No
Weaponized: UNKNOWN
Unauthenticated Exploit: ✅ No
Complexity: MEDIUM

Exploitation requires local access to the TRTLLM server and knowledge of the data validation flaw.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: Check NVIDIA advisory for specific patched versions

Vendor Advisory: https://nvidia.custhelp.com/app/answers/detail/a_id/5648

Restart Required: Yes

Instructions:

1. Review NVIDIA advisory at provided URL
2. Download and install the patched version of TensorRT-LLM
3. Restart TRTLLM services
4. Verify the update was successful
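For pip-managed installs, step 2 can be scripted. A dry-run sketch that assembles the upgrade command for the current interpreter (the PyPI package name `tensorrt-llm` is an assumption; confirm the exact package and channel for your install method):

```python
import sys

# Assemble the pip upgrade command for the running interpreter.
# NOTE: "tensorrt-llm" as the PyPI package name is an assumption.
upgrade_cmd = [sys.executable, "-m", "pip", "install", "--upgrade", "tensorrt-llm"]
print(" ".join(upgrade_cmd))

# To actually run it (kept commented so this stays a dry run):
# import subprocess
# subprocess.check_call(upgrade_cmd)
```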

🔧 Temporary Workarounds

Restrict Local Access (Linux)

Implement strict access controls to limit who can access TRTLLM servers locally:

# Use firewall rules to restrict access to the TRTLLM port
sudo ufw deny from any to any port 8000
# Or with iptables:
sudo iptables -A INPUT -p tcp --dport 8000 -j DROP

Network Segmentation (all platforms)

Isolate TRTLLM servers in separate network segments with strict access controls.

🧯 If You Can't Patch

  • Implement strict network segmentation to isolate TRTLLM servers from untrusted networks
  • Deploy additional monitoring and logging for suspicious access to TRTLLM services

🔍 How to Verify

Check if Vulnerable:

Check TensorRT-LLM version and compare against NVIDIA's advisory for vulnerable versions

Check Version:

python -c "import tensorrt_llm; print(tensorrt_llm.__version__)"

Verify Fix Applied:

Verify installed TensorRT-LLM version matches or exceeds the patched version specified in NVIDIA advisory
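Scripting that comparison is straightforward once the patched release is known. A minimal sketch; `PATCHED` below is a placeholder, since this entry does not specify the patched version, and for production use a proper version library (e.g. `packaging.version`) is preferable:

```python
from importlib.metadata import PackageNotFoundError, version

# Placeholder: substitute the patched version from the NVIDIA advisory.
PATCHED = "0.0.0"

def parse(v: str) -> tuple:
    """Naive version parse: leading digits of each dot-separated part.
    Pre-release suffixes (e.g. "0.19.0rc1") are treated as the base version."""
    parts = []
    for piece in v.split("."):
        num = ""
        for ch in piece:
            if not ch.isdigit():
                break
            num += ch
        parts.append(int(num) if num else 0)
    return tuple(parts)

try:
    installed = version("tensorrt-llm")
    status = "patched" if parse(installed) >= parse(PATCHED) else "VULNERABLE"
    print(f"tensorrt-llm {installed}: {status}")
except PackageNotFoundError:
    print("tensorrt-llm is not installed via pip; check your install method")
```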

📡 Detection & Monitoring

Log Indicators:

  • Unusual local access patterns to TRTLLM server
  • Failed authentication attempts to TRTLLM services
  • Unexpected process execution from TRTLLM context

Network Indicators:

  • Unusual data transfers from TRTLLM servers
  • Suspicious local network connections to TRTLLM ports

SIEM Query:

source="tensorrt-llm" AND (event_type="access_denied" OR event_type="unusual_process")
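Outside a SIEM, the log indicators above can be screened with a short script. A sketch; the patterns and the example log path are hypothetical and must be adapted to your deployment's actual log format:

```python
import re

# Hypothetical patterns matching the indicators listed above;
# adjust to the actual log format of your TRTLLM deployment.
SUSPICIOUS = [
    re.compile(r"access[_ ]denied", re.IGNORECASE),
    re.compile(r"authentication failed", re.IGNORECASE),
    re.compile(r"unexpected (process|executable)", re.IGNORECASE),
]

def scan(lines):
    """Return (line_number, line) pairs matching any suspicious pattern."""
    hits = []
    for lineno, line in enumerate(lines, 1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((lineno, line.rstrip()))
    return hits

# Example with an assumed log path (adapt to your deployment):
# with open("/var/log/trtllm/server.log") as f:
#     for lineno, text in scan(f.read().splitlines()):
#         print(lineno, text)
```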

🔗 References

  • NVD entry: https://nvd.nist.gov/vuln/detail/CVE-2025-23254
  • NVIDIA Security Bulletin: https://nvidia.custhelp.com/app/answers/detail/a_id/5648
