CVE-2024-11039
📋 TL;DR
This vulnerability allows remote, unauthenticated attackers to execute arbitrary commands on systems running vulnerable versions of binary-husky/gpt_academic by exploiting insecure pickle deserialization. Uploading a malicious compressed package containing specially crafted pickle files is enough to trigger code execution. Users of gpt_academic versions up to 3.83 are affected.
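To see why loading untrusted pickle data amounts to remote code execution, consider this minimal, harmless sketch. The `Payload` class and the echo command are illustrative only, not the actual exploit used against gpt_academic:

```python
import os
import pickle


class Payload:
    """Any class can tell pickle to call an arbitrary callable on load."""

    def __reduce__(self):
        # On deserialization, pickle invokes os.system with this argument.
        # A real attacker would substitute a reverse shell or dropper.
        return (os.system, ("echo pickle deserialization ran a command",))


blob = pickle.dumps(Payload())
# The victim only has to *load* the bytes for the command to run:
pickle.loads(blob)
```

This is inherent to the pickle format: the byte stream can name any importable callable, which is why whitelisting modules like numpy (whose objects can be chained into gadgets) is dangerous.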
💻 Affected Systems
- binary-husky/gpt_academic
📦 What is this software?
gpt_academic (GPT Academic) is an open-source web interface by binary-husky that applies large language models to academic workflows such as paper reading, polishing, and LaTeX translation, with an extensible plugin system.
⚠️ Risk & Real-World Impact
Worst Case
Full system compromise with attacker gaining complete control over the server, allowing data theft, lateral movement, and persistent backdoor installation.
Likely Case
Remote code execution leading to data exfiltration, cryptocurrency mining, or ransomware deployment on vulnerable instances.
If Mitigated
Limited impact with proper network segmentation and minimal privileges, potentially only affecting the application service account.
🎯 Exploit Status
Exploitation requires constructing malicious pickle files with numpy payloads; this takes specific knowledge but no authentication.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: Commit 91f5e6b and later
Vendor Advisory: https://github.com/binary-husky/gpt_academic/commit/91f5e6b8f754beb47b02f7c1893804c1c9543ccb
Restart Required: No
Instructions:
1. Update to the latest version from GitHub.
2. If pinned to an older version, apply commit 91f5e6b.
3. Remove numpy from the pickle deserialization whitelist in the code.
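The spirit of step 3 can be sketched with Python's documented `Unpickler.find_class` hook, which rejects any class not explicitly allowed. The `SAFE_CLASSES` set and `safe_loads` helper below are assumptions for illustration, not the actual gpt_academic patch (which removes numpy from the project's own allow-list):

```python
import io
import pickle

# Only plain built-in containers and scalars may be instantiated.
# Anything else (os.system, numpy internals, ...) is refused.
SAFE_CLASSES = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("builtins", "str"),
    ("builtins", "int"),
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) not in SAFE_CLASSES:
            raise pickle.UnpicklingError(
                f"blocked unpickling of {module}.{name}"
            )
        return super().find_class(module, name)


def safe_loads(data: bytes):
    """Deserialize bytes while refusing non-whitelisted classes."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even with such a restriction, prefer a format like JSON for untrusted input; a whitelist is defense in depth, not a guarantee.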
🔧 Temporary Workarounds
Disable LaTeX error correction feature
Temporarily disable the vulnerable plugin function until patching is complete.
Modify configuration to disable the LaTeX English error correction plugin
Restrict file uploads
Block the upload of compressed packages containing .pkl files.
Implement file upload filtering to reject .zip/.tar files containing .pkl extensions
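Such a filter can be sketched with only the standard library. The function name and blocked-suffix list below are assumptions to adapt to your upload handler; note it checks entry names only, not file contents:

```python
import tarfile
import zipfile

BLOCKED_SUFFIXES = (".pkl", ".pickle")


def archive_contains_pickle(path: str) -> bool:
    """Return True if a .zip or .tar archive lists a pickle file.

    Name-based only: a renamed pickle would slip through, so treat
    this as one layer of a broader mitigation, not a complete fix.
    """
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            names = zf.namelist()
    elif tarfile.is_tarfile(path):
        with tarfile.open(path) as tf:
            names = tf.getnames()
    else:
        return False
    return any(n.lower().endswith(BLOCKED_SUFFIXES) for n in names)
```

Reject the upload (or quarantine it for review) whenever this returns True.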
🧯 If You Can't Patch
- Network segmentation to isolate gpt_academic instances from critical systems
- Implement strict file upload validation and sandboxing for LaTeX processing
🔍 How to Verify
Check if Vulnerable:
Check whether you are running gpt_academic version ≤ 3.83 and whether numpy appears in the pickle deserialization whitelist in the code.
Check Version:
Check the version in the application configuration or repository commit history
Verify Fix Applied:
Verify that commit 91f5e6b is applied and that numpy has been removed from the pickle deserialization whitelist.
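If the application was deployed as a git clone, one way to check is whether the fix commit is an ancestor of the deployed revision. The helper below is a sketch, not an official tool, and assumes the `git` CLI is available:

```python
import subprocess

# Full hash of the fix commit from the vendor advisory.
FIX_COMMIT = "91f5e6b8f754beb47b02f7c1893804c1c9543ccb"


def fix_commit_applied(repo_path: str) -> bool:
    """Return True if the checkout at repo_path contains the fix commit.

    Uses `git merge-base --is-ancestor`, which exits 0 only when the
    commit is reachable from HEAD. If gpt_academic was installed from
    an archive instead of a clone, inspect the whitelist in the source.
    """
    result = subprocess.run(
        ["git", "-C", repo_path, "merge-base", "--is-ancestor",
         FIX_COMMIT, "HEAD"],
        capture_output=True,
    )
    return result.returncode == 0
```

A cherry-picked backport of the fix would have a different hash, so on patched forks fall back to inspecting the whitelist directly.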
📡 Detection & Monitoring
Log Indicators:
- Unusual file uploads containing .pkl files
- Unexpected process execution from gpt_academic context
- Errors in LaTeX processing logs
Network Indicators:
- Outbound connections from gpt_academic to unknown IPs
- Unusual data exfiltration patterns
SIEM Query:
Process execution from gpt_academic with suspicious command-line arguments OR file upload containing .pkl extension