CVE-2024-42488

CVSS Score: 6.8 (MEDIUM)

📋 TL;DR

A race condition in the Cilium agent can cause node labels to be ignored during policy computation, allowing CiliumClusterwideNetworkPolicies that use node label selectors to be bypassed. The flaw affects 1.14.x releases before 1.14.14 and 1.15.x releases before 1.15.8, in Kubernetes environments where network policy enforcement relies on node labels.

💻 Affected Systems

Products:
  • Cilium
Versions: 1.14.x before 1.14.14; 1.15.x before 1.15.8
Operating Systems: Linux
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects deployments using CiliumClusterwideNetworkPolicies with node label selectors. The vulnerability requires specific timing conditions to trigger.
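
To illustrate the exposure, a CiliumClusterwideNetworkPolicy that selects nodes by label might look like the following (a hypothetical example; the policy name and labels are placeholders, not taken from the advisory). If the race causes the agent to ignore node labels, such a policy can silently fail to match its intended nodes:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: ingress-node-lockdown          # placeholder name
spec:
  # Node label selector -- the labels a vulnerable agent can ignore
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/ingress: ""   # placeholder label
  ingress:
    - fromEntities:
        - cluster    # only traffic from within the cluster is allowed
```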

📦 What is this software?

Cilium is an open-source, eBPF-based networking, observability, and security platform for Kubernetes and other cloud-native environments. Its agent runs on every node and enforces network policies, including CiliumClusterwideNetworkPolicies, on pod and node traffic.

⚠️ Risk & Real-World Impact

🔴

Worst Case

Attackers could bypass critical network security policies, potentially gaining unauthorized access to sensitive workloads or performing lateral movement within the cluster.

🟠

Likely Case

Intermittent policy enforcement failures that could allow unintended network traffic between pods or nodes, depending on timing and specific policy configurations.

🟢

If Mitigated

With proper network segmentation and defense-in-depth controls, the impact would be limited to specific policy bypass rather than full cluster compromise.

🌐 Internet-Facing: MEDIUM
🏢 Internal Only: HIGH

🎯 Exploit Status

Public PoC: ✅ No
Weaponized: UNKNOWN
Unauthenticated Exploit: ✅ No
Complexity: HIGH

Exploitation requires precise timing to trigger the race condition and depends on specific policy configurations. No public exploit code has been identified.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 1.14.14 and 1.15.8

Vendor Advisory: https://github.com/cilium/cilium/security/advisories/GHSA-q7w8-72mr-vpgw

Restart Required: Yes

Instructions:

1. Upgrade Cilium to version 1.14.14 or 1.15.8 (or later).
2. Restart the Cilium agent on all nodes.
3. Verify that network policies are correctly applied.

🔧 Temporary Workarounds

Restart Cilium Agent

Restarting the Cilium agent can temporarily clear the race condition; afterwards, confirm that the affected policies are enforced as expected:

kubectl rollout restart daemonset/cilium -n kube-system

🧯 If You Can't Patch

  • Implement additional network segmentation controls independent of Cilium policies
  • Monitor for policy bypass attempts and implement compensating detective controls

🔍 How to Verify

Check if Vulnerable:

Inspect the image used by the Cilium daemonset:

kubectl get daemonset cilium -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'

Check Version:

Extract just the version tag and compare it against the fixed releases (1.14.14 / 1.15.8):

kubectl get daemonset cilium -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}' | grep -o 'v[0-9.]*'
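
The version comparison can be scripted. The sketch below parses a version tag and reports whether it predates the fixed releases; `check_version` is a helper written for this example, not a Cilium or kubectl command:

```shell
# Report whether a Cilium version tag is affected by CVE-2024-42488.
# Fixed releases per the advisory: 1.14.14 and 1.15.8.
check_version() {
  v=${1#v}                         # strip a leading "v" (e.g. v1.15.6 -> 1.15.6)
  major=${v%%.*}
  rest=${v#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  case "$major.$minor" in
    1.14) [ "$patch" -ge 14 ] && echo fixed || echo vulnerable ;;
    1.15) [ "$patch" -ge 8 ]  && echo fixed || echo vulnerable ;;
    1.*)  [ "$minor" -ge 16 ] && echo fixed || echo vulnerable ;;
    *)    echo fixed ;;            # assume later major versions are fixed
  esac
}

# In a real cluster, feed it the tag extracted from the daemonset image:
#   tag=$(kubectl get daemonset cilium -n kube-system \
#     -o jsonpath='{.spec.template.spec.containers[0].image}' | grep -o 'v[0-9.]*')
check_version v1.15.6    # vulnerable
check_version v1.14.14   # fixed
```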

Verify Fix Applied:

After upgrading, confirm that CiliumClusterwideNetworkPolicies with node label selectors are applied to the intended nodes, for example by comparing each policy's nodeSelector against the node labels shown by kubectl get nodes --show-labels.

📡 Detection & Monitoring

Log Indicators:

  • Cilium agent logs showing label synchronization issues
  • Unexpected policy evaluation failures in Cilium logs
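
These indicators can be searched for in collected agent logs. The sample lines below are invented for illustration and do not reproduce real Cilium message formats; in a live cluster, collect logs with `kubectl logs -n kube-system ds/cilium`:

```shell
# Write a small sample log; these lines are made up for illustration --
# real Cilium agent log formats will differ.
cat > cilium-agent.log <<'EOF'
level=info msg="Policy imported" policy=allow-dns
level=warning msg="Unable to synchronize node labels" node=worker-2
level=warning msg="Policy evaluation failed" policy=ccnp-example
EOF

# Flag warning lines that mention labels or policies
grep -Ei 'warning.*(label|policy)' cilium-agent.log
```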

Network Indicators:

  • Network traffic flows that should be blocked by policies but are observed
  • Changes in allowed connections between pods/nodes

SIEM Query:

source="cilium" AND ("label" OR "policy" OR "bypass")
