JustAppSec
CRITICAL Severity · CVSS 3.1: 9.8 · CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

CVE-2026-34159

Last updated Apr 01, 2026 · Published Apr 01, 2026


Description

llama.cpp is a C/C++ library for inference of several LLM models. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required, only TCP access to the RPC server port. This issue has been patched in version b8492.

Affected products

1 listed
  • ggml-org:llama.cpp

Mappings

CWE

CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer

CAPEC

None listed.


CVE® content © MITRE Corporation. Licensed under the CVE Terms of Use. Terms
