CVE-2026-2069
📋 TL;DR
A stack-based buffer overflow exists in llama.cpp's GBNF grammar handler. A local attacker who can supply a crafted grammar can crash the application and may be able to execute arbitrary code. Only users who run llama.cpp with GBNF grammar functionality are affected.
💻 Affected Systems
- ggml-org llama.cpp
⚠️ Manual Verification Required
This CVE does not have specific version information in our database, so automatic vulnerability detection cannot determine if your system is affected.
Why? The CVE database entry doesn't specify which versions are vulnerable (no version ranges provided by the vendor/NVD).
- Review the CVE details at NVD
- Check vendor security advisories for your specific version
- Test if the vulnerability is exploitable in your environment
- Consider updating to the latest version as a precaution
⚠️ Risk & Real-World Impact
Worst Case
Arbitrary code execution with the privileges of the llama.cpp process; depending on how the process is deployed, this could serve as a local privilege-escalation step.
Likely Case
Application crash (denial of service) when processing malicious GBNF grammar input.
If Mitigated
Minimal impact if proper input validation and memory protections are in place.
🎯 Exploit Status
Exploit requires local access and ability to provide malicious GBNF grammar input to llama.cpp.
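For context, a GBNF grammar is a plain-text rule file passed to llama.cpp tools (for example via a `--grammar-file` style option; check the exact flags of your build). A minimal benign grammar looks like this:

```
# "root" is the entry production; alternatives are separated by "|".
root ::= answer
answer ::= "yes" | "no"
```

The vulnerability is triggered while parsing such input, so any code path that accepts an attacker-controlled grammar file or string is in scope.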
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: Any build that includes the fix merged in PR #18993
Vendor Advisory: https://github.com/ggml-org/llama.cpp/pull/18993
Restart Required: Yes
Instructions:
1. Update llama.cpp to a revision that includes the fix from PR #18993.
2. Rebuild the application.
3. Restart any running llama.cpp instances.
🔧 Temporary Workarounds
Disable GBNF Grammar Functionality
All platforms: Disable or avoid using GBNF grammar features in llama.cpp
Run with Memory Protections
Linux: Enable ASLR and stack protection on the host system
sysctl -w kernel.randomize_va_space=2
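To confirm the setting took effect, read the value back (0 = off, 1 = partial, 2 = full randomization, the recommended level):

```shell
# Read the current ASLR level from procfs; expect 2 after applying the
# sysctl above.
aslr=$(cat /proc/sys/kernel/randomize_va_space)
echo "ASLR level: $aslr"
```

Note that ASLR and stack protectors only make exploitation harder; they do not remove the overflow itself.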
🧯 If You Can't Patch
- Restrict local access to llama.cpp instances
- Monitor for abnormal process crashes or memory usage
🔍 How to Verify
Check if Vulnerable:
Check the llama.cpp build: if it is at commit 55abc39 or earlier, the system is vulnerable
Check Version:
git log --oneline -1
Verify Fix Applied:
Verify llama.cpp is built from a revision that includes the fix from PR #18993
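One way to check this from a git checkout is `git merge-base --is-ancestor`, using the merge commit of PR #18993 (look the hash up on the PR page; it is not reproduced here). The sketch below demonstrates the pattern on a throwaway repository:

```shell
set -e
# Demonstration on a scratch repo; in your llama.cpp checkout you would run:
#   git merge-base --is-ancestor <fix-commit> HEAD && echo "fix present"
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@example.com -c user.name=test \
    commit -q --allow-empty -m "fix"
fix_commit=$(git rev-parse HEAD)          # stand-in for the PR's merge commit
git -c user.email=a@example.com -c user.name=test \
    commit -q --allow-empty -m "later work"
if git merge-base --is-ancestor "$fix_commit" HEAD; then
  result="fix present"
else
  result="fix missing"
fi
echo "$result"
```

`git merge-base --is-ancestor` exits 0 only when the first commit is reachable from the second, which makes it a reliable scripted check regardless of branch names.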
📡 Detection & Monitoring
Log Indicators:
- Segmentation fault errors
- Abnormal termination of llama.cpp processes
Network Indicators:
- None - local exploitation only
SIEM Query:
Process: llama.cpp AND (EventID: 1000 OR Signal: SIGSEGV)
🔗 References
- https://github.com/ggml-org/llama.cpp/
- https://github.com/ggml-org/llama.cpp/issues/18988
- https://github.com/ggml-org/llama.cpp/issues/18988#event-4426704865
- https://github.com/ggml-org/llama.cpp/pull/18993
- https://github.com/user-attachments/files/24761101/poc.zip
- https://vuldb.com/?ctiid.344636
- https://vuldb.com/?id.344636
- https://vuldb.com/?submit.745263