CVE-2025-53944
📋 TL;DR
AutoGPT versions 0.6.15 and below have an authorization bypass vulnerability in the external API's get_graph_execution_results endpoint. Authenticated users can access any execution results by providing arbitrary execution IDs, potentially exposing sensitive AI agent data. This affects all deployments using vulnerable versions of AutoGPT.
💻 Affected Systems
- AutoGPT
📦 What is this software?
AutoGPT is an open-source platform for building, deploying, and running autonomous AI agents that chain large-language-model calls into multi-step workflows.
⚠️ Risk & Real-World Impact
Worst Case
Attackers could access sensitive execution results from all AI agents, potentially exposing proprietary AI workflows, confidential data processed by agents, and internal system information.
Likely Case
Authenticated users accessing execution results they shouldn't have permission to view, leading to data leakage and potential privacy violations.
If Mitigated
With proper access controls in place, exposure is limited to each user's own execution results, preventing unauthorized data access.
🎯 Exploit Status
Exploitation requires authenticated access and knowledge of the API endpoint structure; the bypass itself is simple parameter manipulation, supplying an execution ID the caller does not own.
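The flaw is a classic insecure direct object reference (IDOR): the handler verifies that the caller is authenticated, but not that the caller owns the requested execution. The sketch below is purely illustrative (the names and data are hypothetical, not AutoGPT's actual code) and contrasts the vulnerable check with the patched behavior:

```python
# Hypothetical in-memory store standing in for the execution-results database.
EXECUTIONS = {
    "exec-1": {"owner": "alice", "results": "alice's agent output"},
    "exec-2": {"owner": "bob", "results": "bob's agent output"},
}

def get_results_vulnerable(user: str, exec_id: str) -> str:
    # Pre-0.6.16 behavior (illustrative): any authenticated user can read
    # any execution's results just by supplying its ID.
    return EXECUTIONS[exec_id]["results"]

def get_results_patched(user: str, exec_id: str) -> str:
    # Post-0.6.16 behavior (illustrative): ownership is checked before
    # results are returned.
    record = EXECUTIONS[exec_id]
    if record["owner"] != user:
        raise PermissionError("caller does not own this execution")
    return record["results"]
```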
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: v0.6.16
Vendor Advisory: https://github.com/Significant-Gravitas/AutoGPT/security/advisories/GHSA-x77j-qg2x-fgg6
Restart Required: Yes
Instructions:
1. Update AutoGPT to version 0.6.16 or later.
2. Restart the AutoGPT service.
3. Verify the fix by testing the get_graph_execution_results endpoint with unauthorized execution IDs.
🔧 Temporary Workarounds
Disable External API Endpoint
Temporarily disable or restrict access to the vulnerable external API endpoint until patching is complete.
# Configure firewall rules to block access to the external API endpoint
# Modify AutoGPT configuration to disable external API if not needed
Implement API Gateway Authorization
Add an additional authorization layer at the API gateway level to validate both the graph_id and graph_exec_id parameters.
# Configure API gateway (e.g., Kong, NGINX) to validate execution ownership
# Implement custom middleware for parameter validation
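In middleware terms, the gateway-side check could be sketched as follows. All names here are hypothetical; in practice the ownership map would be populated from AutoGPT's database or an internal API rather than hard-coded:

```python
# Hypothetical cache of (graph_id, graph_exec_id) -> owning user,
# e.g. refreshed periodically from the backing database.
OWNERSHIP = {
    ("graph-A", "exec-1"): "alice",
    ("graph-B", "exec-2"): "bob",
}

def gateway_authorize(user: str, graph_id: str, graph_exec_id: str) -> int:
    """Return the HTTP status the gateway should emit before proxying."""
    owner = OWNERSHIP.get((graph_id, graph_exec_id))
    if owner is None:
        return 404  # unknown pair: don't leak whether the execution exists
    if owner != user:
        return 403  # authenticated but not authorized
    return 200      # safe to forward to the backend
```

Returning 404 for unknown pairs (rather than 403) also avoids confirming which execution IDs exist.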
🧯 If You Can't Patch
- Implement strict network segmentation to isolate AutoGPT instances and limit access to trusted users only.
- Enable detailed audit logging for all API calls to the get_graph_execution_results endpoint and monitor for unauthorized access patterns.
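Those audit logs can then be screened with a small script. This sketch assumes one JSON object per log line with user_id, graph_exec_id, and endpoint fields (adjust to your actual log schema):

```python
import json
from collections import defaultdict

def flag_wide_readers(log_lines, max_distinct_execs=10):
    """Flag users who read an unusually broad set of execution IDs.

    Assumed log schema: one JSON object per line with "endpoint",
    "user_id", and "graph_exec_id" keys.
    """
    seen = defaultdict(set)
    for line in log_lines:
        event = json.loads(line)
        if event.get("endpoint") == "get_graph_execution_results":
            seen[event["user_id"]].add(event["graph_exec_id"])
    # Report only users whose distinct-execution count exceeds the threshold.
    return {user: len(ids) for user, ids in seen.items()
            if len(ids) > max_distinct_execs}
```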
🔍 How to Verify
Check if Vulnerable:
Test the external API endpoint with authenticated credentials: attempt to access execution results using a valid graph_id but unauthorized graph_exec_id. If successful, the system is vulnerable.
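A probe along these lines can automate that check. The URL path below is an assumption for illustration (consult the advisory for the real route); the helpers only build the request and classify the response code, so any HTTP client can be plugged in:

```python
def build_probe(base_url: str, graph_id: str,
                foreign_exec_id: str, token: str) -> dict:
    """Assemble a request for an execution the caller does NOT own.

    The path here is illustrative, not AutoGPT's exact route.
    """
    return {
        "url": f"{base_url}/api/external/graphs/{graph_id}"
               f"/executions/{foreign_exec_id}/results",
        "headers": {"Authorization": f"Bearer {token}"},
    }

def classify(status_code: int) -> str:
    # A 200 on a foreign execution ID means the ownership check is missing.
    return "vulnerable" if status_code == 200 else "likely patched"
```

For example, feed the result to your client (e.g. requests.get(**build_probe(...))); a 403/404 on an execution you don't own indicates the fix is in place.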
Check Version:
Check the AutoGPT version in the configuration files or, if available, via an API endpoint. For containerized deployments: docker inspect <container> | grep version
Verify Fix Applied:
After updating to v0.6.16, repeat the vulnerability test. Access should be denied when using unauthorized graph_exec_id parameters.
📡 Detection & Monitoring
Log Indicators:
- Multiple failed authorization attempts followed by successful access to execution results
- Access patterns showing users querying execution IDs outside their normal range
- API logs showing successful get_graph_execution_results calls with mismatched graph_id/graph_exec_id ownership
Network Indicators:
- Unusual API call patterns to the external API endpoint
- High volume of requests to get_graph_execution_results from single users
SIEM Query:
source="autogpt_logs" AND endpoint="/api/external/get_graph_execution_results" AND response_code=200 | stats count by user_id, graph_exec_id | where count > threshold
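If the logs are not in a SIEM, the same aggregation can be run offline. This sketch mirrors the stats/where clauses of the query above (field names are assumed to match the query):

```python
from collections import Counter

def stats_by_user_exec(events, threshold=50):
    """Count successful result fetches per (user_id, graph_exec_id) pair
    and keep only pairs above the threshold, like the SIEM query above."""
    counts = Counter(
        (e["user_id"], e["graph_exec_id"])
        for e in events
        if e.get("endpoint") == "/api/external/get_graph_execution_results"
        and e.get("response_code") == 200
    )
    return {pair: n for pair, n in counts.items() if n > threshold}
```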
🔗 References
- https://github.com/Significant-Gravitas/AutoGPT/commit/309114a727baa2063357810d444e9a119f8dd7f6
- https://github.com/Significant-Gravitas/AutoGPT/releases/tag/autogpt-platform-beta-v0.6.16
- https://github.com/Significant-Gravitas/AutoGPT/security/advisories/GHSA-x77j-qg2x-fgg6