CVE-2024-4343

9.8 CRITICAL

📋 TL;DR

This CVE describes a critical command injection vulnerability in PrivateGPT's SageMaker integration that allows remote code execution. Attackers who can manipulate responses from AWS SageMaker LLM endpoints can execute arbitrary Python code on the hosting system. Users running PrivateGPT versions up to and including 0.3.0 with the SageMaker LLM integration are affected.
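The root cause is response parsing with Python's eval() rather than a JSON parser (the fix replaces it with json.loads()). A minimal sketch of the difference; the function names below are illustrative, not PrivateGPT's actual code:

```python
import json

def parse_unsafe(body: str):
    # Vulnerable pattern: eval() executes any Python expression it is handed
    return eval(body)

def parse_safe(body: str):
    # Patched pattern: json.loads() only parses data, never runs code
    return json.loads(body)

# A benign endpoint response parses fine with the safe parser
benign = '{"generated_text": "hello"}'
assert parse_safe(benign) == {"generated_text": "hello"}

# A crafted "response" reaches the interpreter under eval():
crafted = "__import__('os').getpid()"
pid = parse_unsafe(crafted)   # attacker-controlled expression executes here
assert isinstance(pid, int)   # it ran and returned the process ID

# json.loads() rejects the same payload instead of executing it
try:
    parse_safe(crafted)
except json.JSONDecodeError:
    print("rejected, not executed")
```

Because the endpoint response is attacker-influenced (a poisoned model or a hijacked endpoint), eval() turns every completion request into a potential code-execution channel.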

💻 Affected Systems

Products:
  • PrivateGPT
Versions: up to and including 0.3.0
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects deployments using the SageMaker LLM integration; local LLM deployments are not vulnerable.

📦 What is this software?

PrivateGPT is an open-source project for querying your own documents with large language models. It can run inference fully locally or delegate it to remote backends, including AWS SageMaker endpoints, which is the integration affected by this CVE.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Full system compromise allowing attackers to execute arbitrary commands, steal data, install malware, or pivot to other systems on the network.

🟠 Likely Case

Remote code execution leading to data exfiltration, cryptocurrency mining, or ransomware deployment on vulnerable instances.

🟢 If Mitigated

Limited impact if proper network segmentation and endpoint protection are in place, though code execution would still be possible.

🌐 Internet-Facing: HIGH
🏢 Internal Only: HIGH

🎯 Exploit Status

Public PoC: ✅ No
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires control over SageMaker endpoint responses, which could be achieved through compromised AWS credentials or malicious model outputs.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.6.0

Vendor Advisory: https://github.com/imartinez/privategpt/commit/86368c61760c9cee5d977131d23ad2a3e063cbe9

Restart Required: Yes

Instructions:

1. Update PrivateGPT to version 0.6.0 or later.
2. Replace the vulnerable sagemaker.py file with the patched version.
3. Restart the PrivateGPT service.

🔧 Temporary Workarounds

Disable SageMaker Integration (applies to all platforms)

Temporarily disable the vulnerable SageMaker LLM component until patching is possible:

  • Modify the configuration to use a local LLM instead of SageMaker
  • Comment out or remove the SageMaker LLM initialization in code
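In recent PrivateGPT releases the LLM backend is selected in settings.yaml, so switching the mode away from SageMaker removes the vulnerable code path. A sketch of that change (key names follow PrivateGPT's settings convention; verify against the settings file shipped with your installed version):

```yaml
# settings.yaml (sketch) - route inference to a local model instead of SageMaker
llm:
  mode: local        # previously: sagemaker
```

Restart the PrivateGPT service after changing the configuration so the new mode takes effect.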

🧯 If You Can't Patch

  • Implement strict network controls to limit SageMaker endpoint access to trusted sources only
  • Deploy application-level firewalls or WAF rules to monitor and block suspicious responses from LLM endpoints

🔍 How to Verify

Check if Vulnerable:

Check whether you are running PrivateGPT version ≤0.3.0, and review sagemaker.py for use of the eval() function in the complete() method.

Check Version:

Check the package version with pip show privategpt, or review the project's version file.

Verify Fix Applied:

Verify PrivateGPT version is ≥0.6.0 and confirm sagemaker.py uses json.loads() instead of eval() for response parsing.
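The source check above can be scripted. A small heuristic sketch (the file path in the usage comment is an assumption about your deployment layout; confirm the exact strings against the fix commit):

```python
from pathlib import Path

def looks_patched(source: str) -> bool:
    """Heuristic: a patched sagemaker.py parses responses with json.loads, not eval."""
    return "eval(" not in source and "json.loads(" in source

# Example usage against a checkout (path is a guess; adjust to your layout):
# print(looks_patched(Path("private_gpt/components/llm/custom/sagemaker.py").read_text()))

# Self-test on representative snippets
assert looks_patched("data = json.loads(line)")
assert not looks_patched("data = eval(line)")
```

This is a quick triage aid, not proof of safety; the authoritative check remains comparing your sagemaker.py against the patched version in the vendor advisory.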

📡 Detection & Monitoring

Log Indicators:

  • Unusual Python execution errors
  • Suspicious system commands in application logs
  • Unexpected process spawns from PrivateGPT
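The log indicators above can be turned into a simple scanner. The patterns below are illustrative starting points, not validated signatures; tune them to your actual log format:

```python
import re

# Illustrative patterns for the indicators above (assumptions, not validated IOCs)
SUSPICIOUS = [
    re.compile(r"Traceback \(most recent call last\)"),   # unusual Python errors
    re.compile(r"(os\.system|subprocess\.|__import__)"),  # command-execution attempts
]

def scan_log(lines):
    """Return (line_number, line) pairs matching any suspicious pattern."""
    hits = []
    for n, line in enumerate(lines, 1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((n, line.rstrip()))
    return hits

sample = [
    "INFO completion served in 0.4s",
    "ERROR while parsing response: __import__('os').system('id')",
]
print(scan_log(sample))   # flags only the second line
```

For process-spawn monitoring, pair a scanner like this with host-level telemetry (auditd, EDR) rather than application logs alone.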

Network Indicators:

  • Unusual outbound connections from PrivateGPT host
  • Suspicious payloads in SageMaker endpoint responses

SIEM Query:

source="privategpt" AND (process_execution OR command_injection OR eval_function)
