CVE-2026-2069

3.3 LOW

📋 TL;DR

A stack-based buffer overflow exists in llama.cpp's GBNF grammar handler. A local attacker who can supply a malicious grammar can crash the application or potentially execute arbitrary code. Only deployments that use GBNF grammar functionality are affected.

💻 Affected Systems

Products:
  • ggml-org llama.cpp
Versions: All builds up to and including commit 55abc39 (prior to the fix in PR #18993)
Operating Systems: All platforms running llama.cpp
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects systems using llama.cpp with GBNF grammar functionality enabled.

⚠️ Manual Verification Required

NVD does not publish machine-readable version ranges for this CVE, so automated vulnerability scanners cannot reliably determine whether your system is affected.

Why? The database entry identifies the vulnerable code only by commit, not by release version ranges, so affected status must be checked manually against your build.


Recommended Actions:
  1. Review the CVE details at NVD
  2. Check vendor security advisories for your specific version
  3. Test if the vulnerability is exploitable in your environment
  4. Consider updating to the latest version as a precaution

⚠️ Risk & Real-World Impact

🔴 Worst Case: Arbitrary code execution with the privileges of the llama.cpp process; if that process runs with elevated rights, this amounts to local privilege escalation.

🟠 Likely Case: Application crash (denial of service) when processing malicious GBNF grammar input.

🟢 If Mitigated: Minimal impact if proper input validation and memory protections (ASLR, stack canaries) are in place.

🌐 Internet-Facing: LOW - Attack requires local access to the system.
🏢 Internal Only: MEDIUM - Local attackers could exploit this to compromise llama.cpp instances on shared systems.

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ✅ No
Complexity: LOW

Exploit requires local access and ability to provide malicious GBNF grammar input to llama.cpp.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: Any build that includes the fix merged in PR #18993

Vendor Advisory: https://github.com/ggml-org/llama.cpp/pull/18993

Restart Required: Yes

Instructions:

1. Update llama.cpp to a version that includes the fix from PR #18993.
2. Rebuild the application.
3. Restart any running llama.cpp instances.
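The update steps above can be sketched as a short script. The checkout path, remote, and branch names below are assumptions; adjust them to match your deployment:

```shell
# Sketch: update and rebuild llama.cpp from source.
# Assumes an existing checkout in ./llama.cpp; adjust paths, remote,
# and branch to match your deployment.
if [ -d llama.cpp/.git ]; then
    git -C llama.cpp pull origin master                 # pull in the fix
    cmake -S llama.cpp -B llama.cpp/build               # configure
    cmake --build llama.cpp/build --config Release -j   # rebuild
    echo "Rebuilt; now restart any running llama.cpp instances."
else
    echo "No llama.cpp checkout found at ./llama.cpp; clone it first."
fi
```

Remember that a rebuild alone is not enough: long-running server processes keep the old code in memory until restarted.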

🔧 Temporary Workarounds

Disable GBNF Grammar Functionality

Platform: all

Disable or avoid using GBNF grammar features in llama.cpp
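One quick (if incomplete) way to check whether grammar features are in use: llama.cpp's CLI tools accept `--grammar` / `--grammar-file` flags, so auditing process command lines catches the common case. A sketch, assuming Linux `ps`:

```shell
# Quick audit: look for running llama.cpp processes started with grammar flags.
# Incomplete by design: grammars can also arrive via the server's HTTP API
# ("grammar" / "json_schema" request fields), which this won't catch.
ps axww -o args= 2>/dev/null \
  | grep -E -- 'llama.*--grammar' \
  | grep -v grep \
  || echo "no llama.cpp processes using --grammar found"
```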

Run with Memory Protections

Platform: Linux

Enable ASLR and stack protection on the host system

sysctl -w kernel.randomize_va_space=2
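To confirm the setting took effect (2 means full randomization, including the stack, mmap base, and VDSO):

```shell
# Verify ASLR state on Linux; 2 = full randomization (the default on most
# modern distributions), 1 = partial, 0 = disabled.
aslr=$(cat /proc/sys/kernel/randomize_va_space 2>/dev/null || echo "unknown")
echo "kernel.randomize_va_space=$aslr"
if [ "$aslr" != "2" ]; then
    echo "WARNING: ASLR is not at full randomization"
fi
```

Note that ASLR and stack canaries raise the cost of turning the overflow into code execution; they do not prevent the crash/denial-of-service outcome.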

🧯 If You Can't Patch

  • Restrict local access to llama.cpp instances
  • Monitor for abnormal process crashes or memory usage

🔍 How to Verify

Check if Vulnerable:

Check the llama.cpp build: if the checkout is at commit 55abc39 or earlier in history (i.e., it does not yet include the fix from PR #18993), the system is vulnerable

Check Version:

git log --oneline -1

Verify Fix Applied:

Confirm that the build includes the fix merged in PR #18993 (https://github.com/ggml-org/llama.cpp/pull/18993)
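Since the advisory identifies the vulnerable code by commit rather than by release, one way to test a checkout is to ask git whether HEAD predates the last known-vulnerable commit. A sketch, to be run inside the llama.cpp repository (inconclusive for commits between 55abc39 and the merge of PR #18993):

```shell
# If HEAD is an ancestor of (or equal to) the last known-vulnerable commit,
# the fix from PR #18993 cannot be present in this build.
VULN_COMMIT=55abc39
if git merge-base --is-ancestor HEAD "$VULN_COMMIT" 2>/dev/null; then
    echo "VULNERABLE: HEAD is at or before commit $VULN_COMMIT"
else
    echo "Inconclusive from the commit graph alone; confirm PR #18993 is merged into this build"
fi
```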

📡 Detection & Monitoring

Log Indicators:

  • Segmentation fault errors
  • Abnormal termination of llama.cpp processes

Network Indicators:

  • None - local exploitation only

SIEM Query:

Process: llama.cpp AND (EventID: 1000 OR Signal: SIGSEGV)
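On a plain Linux host without a SIEM, the same signal can be pulled from the kernel log. Binary names such as `llama-server`/`llama-cli` are typical, but adjust the pattern to your deployment:

```shell
# Scan the kernel log for recent segfaults in llama.cpp binaries.
# Requires systemd-journald; the grep pattern is an assumption about
# binary names and should be adapted to your installation.
journalctl -k --since "24 hours ago" 2>/dev/null \
  | grep -Ei 'segfault.*(llama|ggml)' \
  || echo "no matching segfaults found in the last 24 hours"
```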

🔗 References

  • Fix: https://github.com/ggml-org/llama.cpp/pull/18993
  • NVD: https://nvd.nist.gov/vuln/detail/CVE-2026-2069