
vLLM patches OOM DoS via unbounded n parameter
A GitHub-reviewed advisory warns that vLLM's OpenAI-compatible API server can be OOM-crashed by a single request carrying an extremely large `n` value, affecting `vllm` versions earlier than `0.19.0`.
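For deployments that cannot upgrade immediately, one common stopgap is to clamp the `n` (completions-per-prompt) parameter before a request reaches the inference engine. The sketch below is a hypothetical pre-validation step, not code from vLLM or the advisory; the parameter names follow the OpenAI completions schema, and the cap value is an assumed deployment-specific choice.

```python
MAX_N = 8  # assumed per-deployment limit, not from the advisory

def sanitize_request(payload: dict) -> dict:
    """Clamp `n` (and the related `best_of`, if present) to MAX_N.

    A single request with a huge `n` would otherwise ask the server to
    generate, and hold buffers for, that many completions at once.
    """
    cleaned = dict(payload)
    for key in ("n", "best_of"):
        value = cleaned.get(key)
        if isinstance(value, int) and value > MAX_N:
            cleaned[key] = MAX_N
    return cleaned

# Example: a request shaped like the reported attack gets clamped.
attack = {"model": "demo", "prompt": "hi", "n": 10_000_000}
print(sanitize_request(attack)["n"])  # prints 8
```

A check like this would typically live in a reverse proxy or API gateway in front of the model server, so oversized requests are rejected or rewritten before any memory is allocated for them.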
News · AI Security · Python
1 min · 03 Apr 2026
