CVE-2025-23254
📋 TL;DR
This vulnerability in NVIDIA TensorRT-LLM allows an attacker with local access to the TRTLLM server to exploit a data validation issue, potentially leading to code execution, information disclosure, or data tampering. It affects all platforms running vulnerable versions of TensorRT-LLM, and the high CVSS score indicates significant security risk.
💻 Affected Systems
- NVIDIA TensorRT-LLM
⚠️ Manual Verification Required
This CVE does not have specific version information in our database, so automatic vulnerability detection cannot determine if your system is affected.
Why? The CVE database entry doesn't specify which versions are vulnerable (no version ranges provided by the vendor/NVD).
- Review the CVE details at NVD
- Check vendor security advisories for your specific version
- Test if the vulnerability is exploitable in your environment
- Consider updating to the latest version as a precaution
⚠️ Risk & Real-World Impact
Worst Case
Full system compromise with attacker gaining code execution, accessing sensitive AI model data, and tampering with inference results.
Likely Case
Information disclosure of AI model parameters or training data, potentially leading to model theft or data leakage.
If Mitigated
Limited impact if proper network segmentation and access controls prevent unauthorized local access to TRTLLM servers.
🎯 Exploit Status
Exploitation requires local access to the TRTLLM server and knowledge of the data validation flaw.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: Check NVIDIA advisory for specific patched versions
Vendor Advisory: https://nvidia.custhelp.com/app/answers/detail/a_id/5648
Restart Required: Yes
Instructions:
1. Review NVIDIA advisory at provided URL
2. Download and install the patched version of TensorRT-LLM
3. Restart TRTLLM services
4. Verify the update was successful
🔧 Temporary Workarounds
Restrict Local Access
Linux: Implement strict access controls to limit who can access TRTLLM servers locally
# Use firewall rules to restrict access.
# Port 8000 is the default TRTLLM/Triton HTTP port; adjust for your deployment.
sudo ufw deny from any to any port 8000
# Or with iptables:
sudo iptables -A INPUT -p tcp --dport 8000 -j DROP
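To confirm the rule took effect, a minimal reachability probe can be run from a host that should no longer have access. This is a sketch, not part of the advisory: the host address and port 8000 (the default TRTLLM/Triton HTTP port) are assumptions to substitute with your deployment's values.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed values: replace with your TRTLLM server's address and port.
if port_reachable("127.0.0.1", 8000):
    print("WARNING: TRTLLM port is still reachable")
else:
    print("Port blocked or service not listening")
```

Run the probe from each network segment that should be cut off; a `True` result from an untrusted segment means the firewall rule is not being applied where you expect.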
Network Segmentation
All platforms: Isolate TRTLLM servers in separate network segments with strict access controls
🧯 If You Can't Patch
- Implement strict network segmentation to isolate TRTLLM servers from untrusted networks
- Deploy additional monitoring and logging for suspicious access to TRTLLM services
🔍 How to Verify
Check if Vulnerable:
Check the installed TensorRT-LLM version and compare it against the vulnerable versions listed in NVIDIA's advisory
Check Version:
python -c "import tensorrt_llm; print(tensorrt_llm.__version__)"
Verify Fix Applied:
Confirm that the installed TensorRT-LLM version matches or exceeds the patched version specified in the NVIDIA advisory
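That comparison can be scripted. The sketch below uses only the standard library; the `PATCHED` string is a placeholder to replace with the actual patched version named in the NVIDIA advisory, and the crude numeric parse is an assumption that works for simple version strings like `0.17.0`.

```python
import re
from importlib.metadata import PackageNotFoundError, version

# Placeholder: substitute the patched version from the NVIDIA advisory.
PATCHED = "0.18.0"

def parse(v: str) -> tuple:
    """Crude version parse: take the first three numeric components."""
    return tuple(int(n) for n in re.findall(r"\d+", v)[:3])

def is_patched(package: str = "tensorrt_llm") -> bool:
    """Return True if the installed package version is at least PATCHED."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        raise SystemExit(f"{package} is not installed")
    return parse(installed) >= parse(PATCHED)
```

For pre-release or vendor-specific version strings, prefer a proper comparison via the third-party `packaging.version` module over this simplified tuple compare.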
📡 Detection & Monitoring
Log Indicators:
- Unusual local access patterns to TRTLLM server
- Failed authentication attempts to TRTLLM services
- Unexpected process execution from TRTLLM context
Network Indicators:
- Unusual data transfers from TRTLLM servers
- Suspicious local network connections to TRTLLM ports
SIEM Query:
source="tensorrt-llm" AND (event_type="access_denied" OR event_type="unusual_process")
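If no SIEM is available, the log indicators above can be scanned offline with a small script. The patterns below are illustrative assumptions matching the indicator names in this section; they would need tuning to the actual format of your TRTLLM service logs.

```python
import re

# Illustrative patterns for the indicators listed above; adjust to your log format.
SUSPICIOUS = [
    re.compile(r"access_denied"),
    re.compile(r"unusual_process"),
    re.compile(r"auth(entication)?\s*fail", re.IGNORECASE),
]

def flag_lines(log_lines):
    """Yield (line_number, line) for entries matching any suspicious pattern."""
    for i, line in enumerate(log_lines, 1):
        if any(p.search(line) for p in SUSPICIOUS):
            yield i, line

# Example usage against a log file:
# with open("/var/log/trtllm/server.log") as f:   # path is an assumption
#     for lineno, entry in flag_lines(f):
#         print(f"{lineno}: {entry.rstrip()}")
```

Flagged lines are a starting point for triage, not proof of compromise; correlate them with the network indicators above before escalating.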