CVE-2025-27780
📋 TL;DR
This vulnerability allows remote attackers to execute arbitrary code on servers running the Applio voice conversion tool by exploiting unsafe deserialization in the model loading process: user-supplied model files are passed to PyTorch's pickle-based loader. Attackers can achieve remote code execution by providing malicious model files. All users running vulnerable versions are affected.
💻 Affected Systems
- Applio
📦 What is this software?
Applio is an open-source voice conversion tool built on RVC (Retrieval-based Voice Conversion), developed by the IAHispano project.
⚠️ Risk & Real-World Impact
Worst Case
Full system compromise with attacker gaining complete control over the server, allowing data theft, lateral movement, and persistent backdoor installation.
Likely Case
Remote code execution leading to application compromise, data exfiltration, and potential ransomware deployment.
If Mitigated
Limited impact with proper network segmentation and minimal privileges, potentially only affecting the application service.
🎯 Exploit Status
The vulnerability is straightforward to exploit with knowledge of PyTorch deserialization attacks. No authentication is required.
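The root cause is the class of attack referenced above: Python's pickle format, which `torch.load` uses by default, can invoke arbitrary callables during deserialization. The minimal sketch below illustrates the mechanism with plain `pickle` and a harmless `print` payload; it is not the actual Applio exploit, and a real attacker would substitute a destructive callable such as `os.system`.

```python
import pickle

# Illustration of why pickle-based deserialization is dangerous:
# unpickling a crafted object can invoke an arbitrary callable.
class MaliciousPayload:
    def __reduce__(self):
        # A real attack would return something like (os.system, ("<cmd>",)).
        # print is used here as a harmless stand-in.
        return (print, ("code executed during deserialization",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # the callable runs here, before any "model" is returned
```

Because `torch.load` accepts the same pickle stream format, a model file crafted this way executes code the moment it is loaded, with no valid model data required.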
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: Main branch after commit 11d139508d615a6db4d48b76634a443c66170dda
Vendor Advisory: https://securitylab.github.com/advisories/GHSL-2024-341_GHSL-2024-353_Applio/
Restart Required: Yes
Instructions:
1. Update to the latest main branch version.
2. Replace vulnerable model_information.py files.
3. Restart the Applio service.
4. Verify the fix by checking the updated code references.
🔧 Temporary Workarounds
Disable model information functionality
(All platforms) Temporarily disable the vulnerable model loading feature until patching is complete.
# Comment out or remove calls to model_information function
# Disable web interface endpoints that trigger model loading
Network isolation
(Linux) Restrict network access to Applio instances to trusted sources only.
# Firewall rule example: iptables -A INPUT -p tcp --dport [Applio-port] -s [trusted-ip] -j ACCEPT
# iptables -A INPUT -p tcp --dport [Applio-port] -j DROP
🧯 If You Can't Patch
- Isolate Applio instances in a dedicated network segment with strict firewall rules
- Implement application-level input validation and sanitization for model file paths
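The path validation suggested above can be sketched as follows. This is a hypothetical helper, not code from Applio: the directory name and the `.pth` extension are illustrative assumptions, and it only constrains *which* files can be loaded. For the file *contents*, PyTorch's `torch.load(..., weights_only=True)` restricts deserialization to tensor data and should be combined with a check like this where the installed PyTorch version supports it.

```python
from pathlib import Path

# Hypothetical allowlisted model directory (adjust to your deployment).
ALLOWED_DIR = Path("/opt/applio/models").resolve()

def validate_model_path(user_path: str) -> Path:
    """Reject model paths that escape the allowed directory or have an
    unexpected extension, before any torch.load call is made."""
    candidate = (ALLOWED_DIR / user_path).resolve()
    # Resolve symlinks and ".." first, then confirm the result is still
    # inside the sandbox directory.
    if not candidate.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"model path escapes allowed directory: {user_path}")
    if candidate.suffix != ".pth":
        raise ValueError(f"unexpected model file extension: {candidate.suffix}")
    return candidate
```

This blocks directory traversal (e.g. `../../etc/passwd`) and off-extension files, but it does not make a malicious `.pth` file safe by itself; it should be layered with the network isolation above.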
🔍 How to Verify
Check if Vulnerable:
Check if your version is 3.2.8-bugfix or earlier. Examine model_information.py for unsafe torch.load usage with user input.
Check Version:
Check Applio version in application interface or configuration files
Verify Fix Applied:
Verify that model_information.py no longer passes user-controlled input directly to torch.load. Check for input validation or safe loading mechanisms.
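A rough heuristic for the check above can be scripted. This is an assumption on my part, not the official verification procedure: it flags `torch.load(` calls that do not pass `weights_only=True`, which is one safe-loading pattern; the actual patch may use a different mechanism, so compare any hits against the fix commit linked in the references.

```python
# Heuristic scan: flag torch.load calls without weights_only=True.
# Run it over rvc/train/process/model_information.py and similar files.
def find_unsafe_loads(source: str) -> list[str]:
    unsafe = []
    for line in source.splitlines():
        if "torch.load(" in line and "weights_only=True" not in line:
            unsafe.append(line.strip())
    return unsafe

# Example usage:
# hits = find_unsafe_loads(open("rvc/train/process/model_information.py").read())
# if hits: print("review these lines:", hits)
```

An empty result does not prove the fix is applied (the call could span multiple lines, or the patch may validate input elsewhere), so treat this as a triage aid, not a verdict.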
📡 Detection & Monitoring
Log Indicators:
- Unusual model file loading patterns
- Errors from torch.load with unexpected file types
- Process execution from Applio context
Network Indicators:
- Unexpected outbound connections from Applio server
- Large file uploads to model endpoints
SIEM Query:
source="applio" AND (event="model_load" OR event="torch.load") AND file_path CONTAINS suspicious_pattern
🔗 References
- https://github.com/IAHispano/Applio/blob/29b4a00e4be209f9aac51cd9ccffcc632dfb2973/rvc/train/process/model_information.py#L16
- https://github.com/IAHispano/Applio/blob/29b4a00e4be209f9aac51cd9ccffcc632dfb2973/tabs/extra/model_information.py#L11-L16
- https://github.com/IAHispano/Applio/commit/11d139508d615a6db4d48b76634a443c66170dda
- https://securitylab.github.com/advisories/GHSL-2024-341_GHSL-2024-353_Applio/