📦 Triton Inference Server
by NVIDIA
🔍 What is Triton Inference Server?
NVIDIA Triton Inference Server is open-source inference-serving software for deploying trained AI models from multiple frameworks (TensorRT, PyTorch, TensorFlow, ONNX Runtime, and others) on GPUs or CPUs, in the cloud, in the data center, or at the edge.
🛡️ Security Overview
⚠️ Known Vulnerabilities
NVIDIA Triton Inference Server's HTTP server has a heap-based buffer overflow vulnerability (CWE-122) that allows attackers to execute arbitrary code via specially crafted HTTP requests. This affects ...
CVE-2025-23310 is a critical stack buffer overflow vulnerability in NVIDIA Triton Inference Server that allows attackers to execute arbitrary code remotely by sending specially crafted inputs. This af...
CVE-2024-0095 is a log injection vulnerability in NVIDIA Triton Inference Server that allows attackers to inject forged logs and executable commands by manipulating log entries. This could lead to rem...
This vulnerability in NVIDIA Triton Inference Server allows attackers to set the logging location to arbitrary files, enabling log injection attacks. If exploited, it could lead to code execution, pri...
NVIDIA Triton Server for Linux has an input-validation vulnerability in which an attacker can trigger improper validation of a specified quantity, potentially causing denial of service. This affects organizations running...
NVIDIA Triton Inference Server has a vulnerability where sending excessively large payloads can trigger improper condition checking, potentially causing denial of service. This affects organizations u...
NVIDIA Triton Inference Server contains an integer overflow vulnerability where sending an invalid request can cause a segmentation fault and crash the service. This affects all users running vulnerab...
NVIDIA Triton Inference Server contains a vulnerability where specially crafted inputs can trigger uncontrolled recursion, potentially causing denial of service. This affects both Windows and Linux de...
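To illustrate the uncontrolled-recursion class just described (a generic sketch, not Triton's actual code), a parser over nested request data can carry an explicit depth cap so that crafted deeply nested input fails with a clean error instead of exhausting the stack:

```python
def parse_nested(value, depth: int = 0, max_depth: int = 64):
    """Walk a nested structure with an explicit depth cap, so crafted
    deeply nested input raises a clean error instead of overflowing
    the call stack (the uncontrolled-recursion failure mode)."""
    if depth > max_depth:
        raise ValueError("input nesting exceeds maximum depth")
    if isinstance(value, list):
        return [parse_nested(v, depth + 1, max_depth) for v in value]
    return value
```

The cap turns an attacker-controlled resource exhaustion into an ordinary, catchable validation error.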
NVIDIA Triton Inference Server contains an integer overflow vulnerability (CWE-190) where specially crafted inputs could cause denial of service or data tampering. This affects both Windows and Linux ...
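A minimal sketch of the CWE-190 mitigation, assuming a fixed-width 32-bit accumulator as the at-risk type (this is an illustration of the vulnerability class, not Triton's code): size arithmetic is checked against the accumulator's limit before any allocation happens.

```python
def checked_buffer_size(num_elements: int, element_size: int,
                        limit: int = 2**31 - 1) -> int:
    """Compute num_elements * element_size, rejecting values that would
    overflow a signed 32-bit accumulator (the CWE-190 failure mode).
    Python ints are unbounded, so the product here is exact."""
    if num_elements < 0 or element_size < 0:
        raise ValueError("negative dimensions are invalid")
    total = num_elements * element_size
    if total > limit:
        raise OverflowError(f"request of {total} bytes exceeds limit {limit}")
    return total
```

In a C or C++ server the same check is done before the multiplication (e.g. `num_elements > limit / element_size`), since the overflow itself is the bug being prevented.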
NVIDIA Triton Inference Server's Python backend has a buffer overflow vulnerability where specially crafted requests can trigger out-of-bounds writes. This could allow attackers to execute arbitrary c...
NVIDIA Triton Inference Server contains a divide-by-zero vulnerability in request processing that could cause denial of service. Attackers can exploit this by sending specially crafted invalid request...
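As a hypothetical illustration of that failure mode (the function and field names are assumptions, not Triton's API), the fix for a divide-by-zero in request processing is to validate the divisor before it reaches the division:

```python
def batches_needed(total_items: int, batch_size: int) -> int:
    """Return the number of batches needed, rejecting a non-positive
    batch size instead of letting the division crash the worker
    (the divide-by-zero failure mode)."""
    if batch_size <= 0:
        raise ValueError("batch_size must be a positive integer")
    return -(-total_items // batch_size)  # ceiling division
```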
NVIDIA Triton Inference Server has a path traversal vulnerability when launched with the --model-control explicit option. Attackers can exploit this via the model load API to access files outside inte...
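A minimal sketch of the standard defense against this path-traversal class (the function name and repository layout are assumptions for illustration, not Triton's implementation): resolve the requested name against the repository root and reject anything that escapes it.

```python
import os

def resolve_model_path(model_repository: str, model_name: str) -> str:
    """Resolve a requested model name inside the repository root,
    rejecting names that escape it via '..' segments or absolute
    paths (the path-traversal failure mode)."""
    root = os.path.realpath(model_repository)
    candidate = os.path.realpath(os.path.join(root, model_name))
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"{model_name!r} escapes the model repository")
    return candidate
```

Resolving symlinks with `realpath` before the containment check matters: a naive string-prefix comparison on the raw path can be bypassed by a symlink inside the repository.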
NVIDIA Triton Inference Server contains a stack overflow vulnerability where attackers can send oversized payloads to cause denial of service. This affects all deployments of Triton Inference Server...
CVE-2025-23333 is an out-of-bounds read vulnerability in NVIDIA Triton Inference Server's Python backend that allows attackers to read memory beyond allocated bounds by manipulating shared memory data...
CVE-2025-23334 is an out-of-bounds read vulnerability in NVIDIA Triton Inference Server's Python backend that could allow information disclosure. Attackers can exploit this by sending specially crafte...
NVIDIA Triton Inference Server contains an integer underflow vulnerability in its TensorRT backend that could allow attackers to cause denial of service. The vulnerability affects both Windows and Lin...
NVIDIA Triton Inference Server has an out-of-bounds read vulnerability where users can release shared memory regions while they're in use. This could allow attackers to cause denial of service by cras...
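One common guard against this use-after-release class is reference counting: an unregister call on a shared-memory region fails while any request still holds it. The sketch below is a hypothetical illustration of that pattern, not Triton's shared-memory implementation:

```python
import threading

class SharedRegionRegistry:
    """Track in-flight uses of named shared-memory regions so that an
    unregister call cannot free a region a request is still reading
    (the use-after-release failure mode)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._in_use: dict = {}  # region name -> active use count

    def acquire(self, name: str) -> None:
        with self._lock:
            self._in_use[name] = self._in_use.get(name, 0) + 1

    def release(self, name: str) -> None:
        with self._lock:
            self._in_use[name] -= 1
            if self._in_use[name] == 0:
                del self._in_use[name]

    def unregister(self, name: str) -> None:
        with self._lock:
            if self._in_use.get(name, 0) > 0:
                raise RuntimeError(f"region {name!r} is still in use")
            # Safe to actually free the underlying region here.
```

The design choice is to fail the unregister rather than block it, which keeps the management API non-blocking and pushes the retry decision to the caller.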
NVIDIA Triton Inference Server for Linux has a vulnerability where improper resource initialization during network operations can lead to information disclosure. This affects users running vulnerable ...