CVE-2025-53944

7.7 HIGH

📋 TL;DR

AutoGPT versions 0.6.15 and below contain an authorization bypass in the external API's get_graph_execution_results endpoint. Authenticated users can retrieve any execution's results by supplying arbitrary execution IDs, potentially exposing sensitive AI agent data. All deployments running vulnerable versions are affected.

💻 Affected Systems

Products:
  • AutoGPT
Versions: v0.6.15 and below
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects the external API endpoint; internal API has proper validation. Requires authenticated access.

📦 What is this software?

AutoGPT is an open-source platform for building, deploying, and running autonomous AI agents, maintained by Significant Gravitas. Its external API lets integrations trigger agent (graph) executions and retrieve their results.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Attackers could access sensitive execution results from all AI agents, potentially exposing proprietary AI workflows, confidential data processed by agents, and internal system information.

🟠 Likely Case

Authenticated users accessing execution results they shouldn't have permission to view, leading to data leakage and potential privacy violations.

🟢 If Mitigated

Limited to authenticated users only, with proper access controls preventing unauthorized data access.

🌐 Internet-Facing: HIGH - External API endpoints are directly accessible if exposed to the internet, allowing remote exploitation.
🏢 Internal Only: MEDIUM - Internal attackers or compromised accounts could exploit this vulnerability to access unauthorized data.

🎯 Exploit Status

Public PoC: No
Weaponized: UNKNOWN
Unauthenticated Exploit: No
Complexity: LOW

Exploitation requires authenticated access and knowledge of the API endpoint structure; the authorization check can then be bypassed with simple parameter manipulation.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: v0.6.16

Vendor Advisory: https://github.com/Significant-Gravitas/AutoGPT/security/advisories/GHSA-x77j-qg2x-fgg6

Restart Required: Yes

Instructions:

1. Update AutoGPT to version 0.6.16 or later.
2. Restart the AutoGPT service.
3. Verify the fix by testing the get_graph_execution_results endpoint with unauthorized execution IDs.

🔧 Temporary Workarounds

Disable External API Endpoint

Platforms: all

Temporarily disable or restrict access to the vulnerable external API endpoint until patching is complete.

# Configure firewall rules to block access to the external API endpoint
# Modify AutoGPT configuration to disable external API if not needed

Implement API Gateway Authorization

Platforms: all

Add additional authorization layer at API gateway level to validate both graph_id and graph_exec_id parameters.

# Configure API gateway (e.g., Kong, NGINX) to validate execution ownership
# Implement custom middleware for parameter validation
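
The ownership check the middleware needs can be sketched as below. This is a minimal illustration, not AutoGPT's actual schema: the `execution_owners` table, `user_id`, and `graph_exec_id` names are assumptions standing in for however your gateway resolves execution ownership.

```python
# Hypothetical ownership-check logic for a gateway middleware. The request
# is blocked before it reaches the vulnerable endpoint unless the caller
# owns the requested execution.

# Illustrative ownership table: execution ID -> owning user ID.
execution_owners = {
    "exec-001": "user-a",
    "exec-002": "user-b",
}

def is_authorized(user_id: str, graph_exec_id: str) -> bool:
    """Allow the request only if the execution belongs to the caller."""
    return execution_owners.get(graph_exec_id) == user_id

def handle_request(user_id: str, graph_exec_id: str) -> int:
    """Return an HTTP status: 200 if authorized, 403 otherwise."""
    if not is_authorized(user_id, graph_exec_id):
        return 403  # deny at the gateway; AutoGPT never sees the request
    return 200

print(handle_request("user-a", "exec-001"))  # 200: caller owns the execution
print(handle_request("user-a", "exec-002"))  # 403: execution owned by user-b
```

The key design point is that authorization is enforced on the *execution* ID, not just the graph ID, since the vulnerability allows a valid graph_id to be paired with someone else's graph_exec_id.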

🧯 If You Can't Patch

  • Implement strict network segmentation to isolate AutoGPT instances and limit access to trusted users only.
  • Enable detailed audit logging for all API calls to the get_graph_execution_results endpoint and monitor for unauthorized access patterns.

🔍 How to Verify

Check if Vulnerable:

Test the external API endpoint with authenticated credentials: attempt to access execution results using a valid graph_id but unauthorized graph_exec_id. If successful, the system is vulnerable.
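
The probe above can be scripted roughly as follows. The URL path, bearer-token auth scheme, and placeholder values are assumptions about a typical deployment; adjust them to match yours.

```python
# Sketch of a vulnerability probe: fetch another user's execution results
# with your own credentials and interpret the HTTP status.
import urllib.request
import urllib.error

def interpret(status_code: int) -> str:
    """200 for a foreign execution ID means the bypass is present."""
    if status_code == 200:
        return "vulnerable"
    if status_code in (403, 404):
        return "patched or access denied"
    return "inconclusive"

def probe(base_url: str, token: str, graph_id: str, foreign_exec_id: str) -> str:
    # Path and auth header are illustrative assumptions, not AutoGPT's
    # documented route.
    url = f"{base_url}/api/external/graphs/{graph_id}/executions/{foreign_exec_id}/results"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req) as resp:
            return interpret(resp.status)
    except urllib.error.HTTPError as err:
        return interpret(err.code)

# Example usage (requires a live deployment and a token; use an execution ID
# owned by a DIFFERENT user than the token's owner):
#   print(probe("https://autogpt.example.com", "<token>", "<graph_id>", "<other_users_exec_id>"))
```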

Check Version:

Check AutoGPT version in configuration files or via API endpoint if available. For containerized deployments: docker inspect <container> | grep version
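
If you script the version check, a simple tuple comparison suffices. This assumes the version is reported as a dotted string like "v0.6.15"; adjust the parsing if your deployment reports it differently.

```python
# Minimal version gate: 0.6.15 and below are affected, 0.6.16+ carries the fix.
def is_vulnerable(version: str) -> bool:
    """Compare the dotted version numerically against 0.6.15."""
    parts = tuple(int(p) for p in version.lstrip("v").split("."))
    return parts <= (0, 6, 15)

print(is_vulnerable("v0.6.15"))  # True
print(is_vulnerable("v0.6.16"))  # False
```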

Verify Fix Applied:

After updating to v0.6.16, repeat the vulnerability test. Access should be denied when using unauthorized graph_exec_id parameters.

📡 Detection & Monitoring

Log Indicators:

  • Multiple failed authorization attempts followed by successful access to execution results
  • Access patterns showing users querying execution IDs outside their normal range
  • API logs showing successful get_graph_execution_results calls with mismatched graph_id/graph_exec_id ownership

Network Indicators:

  • Unusual API call patterns to the external API endpoint
  • High volume of requests to get_graph_execution_results from single users

SIEM Query:

source="autogpt_logs" AND endpoint="/api/external/get_graph_execution_results" AND response_code=200 | stats count by user_id, graph_exec_id | where count > threshold
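
If your logs are not in a SIEM, the same aggregation can be done in a few lines of Python. The field names (`user_id`, `endpoint`, `response_code`) are assumptions about your log schema, mirroring the query above.

```python
# Count successful get_graph_execution_results calls per user and flag
# anyone exceeding a threshold -- the Python equivalent of the SIEM query.
from collections import Counter

# Toy records standing in for parsed log events.
events = [
    {"user_id": "user-a", "endpoint": "/api/external/get_graph_execution_results", "response_code": 200},
    {"user_id": "user-a", "endpoint": "/api/external/get_graph_execution_results", "response_code": 200},
    {"user_id": "user-a", "endpoint": "/api/external/get_graph_execution_results", "response_code": 200},
    {"user_id": "user-b", "endpoint": "/api/external/get_graph_execution_results", "response_code": 200},
    {"user_id": "user-b", "endpoint": "/api/external/get_graph_execution_results", "response_code": 403},
]

def flag_heavy_users(events, threshold):
    """Flag users whose successful-call count exceeds the threshold."""
    hits = Counter(
        e["user_id"]
        for e in events
        if e["endpoint"].endswith("get_graph_execution_results")
        and e["response_code"] == 200
    )
    return {user: n for user, n in hits.items() if n > threshold}

print(flag_heavy_users(events, threshold=2))  # {'user-a': 3}
```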
