CVE-2025-13708
📋 TL;DR
This vulnerability allows remote attackers to execute arbitrary code as root on systems running Tencent NeuralNLP-NeuralClassifier. The flaw stems from unsafe deserialization of checkpoint data in the `_load_checkpoint` function; attackers can exploit it by tricking users into visiting malicious web pages or opening malicious files. The vulnerability affects installations where this machine learning framework is deployed and accessible.
💻 Affected Systems
- Tencent NeuralNLP-NeuralClassifier
⚠️ Manual Verification Required
This CVE does not have specific version information in our database, so automatic vulnerability detection cannot determine if your system is affected.
Why? The CVE database entry doesn't specify which versions are vulnerable (no version ranges provided by the vendor/NVD).
- Review the CVE details at NVD
- Check vendor security advisories for your specific version
- Test if the vulnerability is exploitable in your environment
- Consider updating to the latest version as a precaution
⚠️ Risk & Real-World Impact
Worst Case
Complete system compromise with root-level code execution, data theft, lateral movement, and persistent backdoor installation.
Likely Case
Remote code execution leading to data exfiltration, cryptocurrency mining, or ransomware deployment on vulnerable systems.
If Mitigated
No impact if proper input validation and deserialization controls are implemented.
🎯 Exploit Status
Requires user interaction (visiting a malicious page or opening a malicious file). The ZDI-CAN-27184 identifier indicates the flaw was reported through Trend Micro's Zero Day Initiative coordinated-disclosure program.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: Commit 8dea5ffdb45cf0a33b3d116de38507afaee87594 and later
Vendor Advisory: https://github.com/Tencent/NeuralNLP-NeuralClassifier/commit/8dea5ffdb45cf0a33b3d116de38507afaee87594
Restart Required: No
Instructions:
1. Update to the latest version from the GitHub repository.
2. Replace the vulnerable `_load_checkpoint` function with the patched version.
3. Validate that all checkpoint files come from trusted sources.
🔧 Temporary Workarounds
Restrict checkpoint file sources
Only allow loading checkpoint files from trusted, authenticated sources
Implement input validation
Add validation to reject untrusted serialized data in `_load_checkpoint`
🧯 If You Can't Patch
- Network segmentation to isolate NeuralNLP-NeuralClassifier instances
- Implement strict file integrity monitoring for checkpoint files
🔍 How to Verify
Check if Vulnerable:
Check whether your NeuralNLP-NeuralClassifier checkout predates commit 8dea5ffdb45cf0a33b3d116de38507afaee87594
Check Version:
git log --oneline | head -20
git merge-base --is-ancestor 8dea5ffdb45cf0a33b3d116de38507afaee87594 HEAD && echo patched || echo "fix commit not present"
Verify Fix Applied:
Verify the _load_checkpoint function includes proper input validation and safe deserialization
📡 Detection & Monitoring
Log Indicators:
- Unusual process execution from NeuralNLP context
- Abnormal checkpoint file loading patterns
- Python deserialization errors
Network Indicators:
- Unexpected outbound connections from NeuralNLP systems
- Checkpoint file downloads from untrusted sources
SIEM Query:
process_name:"python" AND cmdline:"NeuralNLP" AND (cmdline:"pickle" OR cmdline:"deserialize")