CVE-2024-10954

CVSS 8.8 (HIGH)

📋 TL;DR

This vulnerability allows remote code execution (RCE) on servers running vulnerable versions of the gpt_academic manim plugin. An attacker can inject malicious code through LLM prompts, and the generated code is executed without proper sandboxing. Any gpt_academic installation with the manim plugin enabled, prior to the fix, is affected.

💻 Affected Systems

Products:
  • binary-husky/gpt_academic with manim plugin
Versions: All versions prior to the fix
Operating Systems: All platforms running Python
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects installations with the manim plugin enabled and using LLM-generated code execution.

📦 What is this software?

gpt_academic (binary-husky/gpt_academic) is a popular open-source web interface that applies large language models to academic reading and writing workflows through a system of function plugins. Its manim plugin asks the LLM to generate scene code for Manim, a Python math-animation library, and then renders it. That LLM-code-to-execution path is what this vulnerability abuses.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Full compromise of the backend server, allowing attackers to execute arbitrary commands, steal data, install malware, or pivot to other systems.

🟠 Likely Case

Attacker gains shell access to the application server, potentially accessing sensitive data and disrupting services.

🟢 If Mitigated

With proper input validation and sandboxing, the risk is reduced to minimal, though prompt injection attempts may still occur.
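As a rough illustration of what "sandboxing" could mean here (this is an assumption, not the project's actual fix), generated code can at minimum be run in a separate interpreter with a stripped environment and a hard timeout. Note this limits, but does not fully contain, malicious code; a container or seccomp-based sandbox is needed for real RCE containment:

```python
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 10) -> subprocess.CompletedProcess:
    """Run untrusted generated code in a separate, partially isolated process.

    Illustrative hardening sketch only: -I enables Python's isolated mode,
    env={} drops inherited secrets, and timeout bounds runtime. This is
    NOT a full sandbox.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],  # absolute interpreter path, isolated mode
            env={},                        # drop inherited credentials/env vars
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    finally:
        os.unlink(path)
```

Even with these limits, the child process can still read world-readable files and open network connections, which is why the residual risk above is "minimal" rather than zero.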

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ❌ No
Weaponized: ❓ UNKNOWN
Unauthenticated Exploit: ❌ No
Complexity: MEDIUM

Requires ability to submit prompts to the LLM interface and knowledge of prompt injection techniques.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: Upgrade to the latest release containing the fix

Vendor Advisory: https://huntr.com/bounties/72d034e3-6ca2-495d-98a7-ac9565588c09

Restart Required: No

Instructions:

1. Update gpt_academic to the latest version.
2. Ensure the manim plugin is updated.
3. Verify sandboxing is properly implemented.

🔧 Temporary Workarounds

Workaround 1: Disable the manim plugin (all platforms)

Temporarily disable the vulnerable manim plugin to prevent exploitation: remove or comment out the manim plugin loading in the configuration.

Workaround 2: Implement input validation (all platforms)

Add strict input validation to reject suspicious prompts containing code execution patterns.
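An input-validation workaround could be sketched as a simple deny-list filter. This is an assumption for illustration, not gpt_academic's actual validation, and a deny-list is a stopgap (it can be bypassed), not a substitute for sandboxing:

```python
import re

# Illustrative deny-list (hypothetical patterns, extend for your deployment):
# reject prompts that try to smuggle code-execution primitives past the plugin.
SUSPICIOUS_PATTERNS = [
    r"\bimport\s+os\b",
    r"\bimport\s+subprocess\b",
    r"\b(eval|exec|__import__|compile)\s*\(",
    r"\bos\.system\b",
]

def prompt_looks_safe(prompt: str) -> bool:
    """Return False if the prompt matches any known injection pattern."""
    return not any(re.search(p, prompt) for p in SUSPICIOUS_PATTERNS)
```

Reject (or flag for review) any prompt for which `prompt_looks_safe` returns False before it reaches the LLM or the plugin.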

🧯 If You Can't Patch

  • Implement network segmentation to isolate the vulnerable system
  • Deploy application-level firewall with WAF rules to detect prompt injection attempts

🔍 How to Verify

Check if Vulnerable:

Check whether you are running a version before the fix and whether the manim plugin executes LLM-generated code without sandboxing.
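A quick heuristic audit is to scan the plugin's source for direct execution of generated code. The sketch below is generic (file paths in your install are an assumption; point it at the manim plugin module):

```python
import re
from pathlib import Path

# Calls that directly execute code or shell commands.
RISKY_CALLS = re.compile(r"\b(exec|eval|os\.system|subprocess\.(run|Popen|call))\s*\(")

def audit_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs containing risky execution calls."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if RISKY_CALLS.search(line):
            hits.append((lineno, line.strip()))
    return hits
```

Any hit on LLM-generated input that is not wrapped in sandboxing logic is a sign the installation is still vulnerable; a clean scan is suggestive but not proof of the fix.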

Check Version:

Check gpt_academic version in configuration or package manager

Verify Fix Applied:

Verify the latest version is installed and test that malicious prompts no longer execute arbitrary code.

📡 Detection & Monitoring

Log Indicators:

  • Unusual command execution patterns
  • Suspicious prompt submissions
  • Error logs from sandbox violations

Network Indicators:

  • Unexpected outbound connections from the application server
  • Unusual traffic patterns to/from LLM interface

SIEM Query:

Search for 'manim plugin execution' or 'LLM code execution' events with suspicious patterns
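If your logs are not yet in a SIEM, the same triage can be done with a small script. The indicator strings below are assumptions; tune them to your log format:

```python
# Illustrative log triage (log format and keywords are assumptions):
# surface lines that suggest LLM-driven code execution or sandbox escapes.
INDICATORS = ("os.system", "subprocess", "sandbox violation", "/bin/sh")

def suspicious_log_lines(lines):
    """Yield log lines containing any known indicator substring."""
    for line in lines:
        lowered = line.lower()
        if any(ind in lowered for ind in INDICATORS):
            yield line
```

Feed it application logs from the gpt_academic server and review every hit against the indicators listed above.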

🔗 References

  • Vendor advisory (huntr): https://huntr.com/bounties/72d034e3-6ca2-495d-98a7-ac9565588c09
  • NVD entry: https://nvd.nist.gov/vuln/detail/CVE-2024-10954
