Critical vLLM RCE Vulnerability Lets Attackers Take Over Servers via Video Links

A critical remote code execution vulnerability has been discovered in vLLM, one of the most popular frameworks for serving large language models locally. The flaw, tracked as CVE-2026-22778, lets a remote attacker execute arbitrary code and take full control of an affected server simply by getting it to process a malicious video link submitted through the inference API.
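
To see the attack surface, the sketch below shows how a video link is submitted to a vLLM server through its OpenAI-compatible chat API. The server address, model name, and video URL are all illustrative; the request itself is an ordinary multimodal call, and the danger lies in how a vulnerable server fetches and decodes the linked file on its own side.

```python
# Sketch: how a video link reaches a vLLM server via its OpenAI-compatible
# chat API. Server address, model name, and URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2-VL-7B-Instruct",  # any video-capable multimodal model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this video."},
            # A vulnerable server fetches and processes this URL server-side;
            # that processing step is where the reported flaw lives.
            {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
        ],
    }],
)
print(response.choices[0].message.content)
```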

The vulnerability is particularly concerning because vLLM is widely deployed for self-hosted LLM serving in production, and many installations run with default configurations that leave this attack vector exposed. The flaw lies in the video processing support recently added for multimodal models.

Local LLM practitioners should update their vLLM installations immediately and review their network security configurations. Production deployments should also audit API access controls and consider additional network isolation. The full security advisory lists affected versions and detailed mitigation steps.
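
As a first step, a quick version check like the one below can flag installations that still need patching. This is a minimal sketch: the patched version string is a deliberate placeholder and must be replaced with the fixed release named in the advisory.

```python
# Sketch: flag a vLLM installation that predates the patched release.
# The PATCHED version is a placeholder; take the real value from the advisory.
from importlib.metadata import version
from packaging.version import Version

PATCHED = Version("99.0.0")  # placeholder: substitute the fixed release
installed = Version(version("vllm"))

if installed < PATCHED:
    print(f"vLLM {installed} may be vulnerable; upgrade with `pip install -U vllm`.")
else:
    print(f"vLLM {installed} is at or above the assumed patched release.")
```

Beyond patching, vLLM's OpenAI-compatible server accepts `--host` and `--api-key` flags, so binding the endpoint to 127.0.0.1 and requiring an API key are low-effort ways to keep it off the open network while a fuller isolation review is underway.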


Source: OX Security · Relevance: 10/10