Llama.cpp is LLM inference in C/C++. There is a use-of-uninitialized-heap-variable vulnerability in gguf_init_from_file: the code later frees this uninitialized variable. In a simple POC this directly causes a crash. If the file is carefully constructed, it may be possible to control the uninitialized value and trigger an arbitrary-address free, which could be further exploited. This causes llama.cpp to crash (DoS) and may even lead to arbitrary code execution (RCE). The vulnerability has been patched in version b2740.
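For illustration, below is a minimal C sketch of the bug class (CWE-456, listed in the CWE section of this entry). The struct, function name, and file-format check are hypothetical stand-ins, not the actual llama.cpp / gguf_init_from_file code; the point is the pattern: a heap-allocated struct whose pointer member is never initialized before an error path frees it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical context struct; the real gguf context in llama.cpp differs. */
struct demo_ctx {
    void * data;   /* only assigned after the file parses successfully */
};

/* Hypothetical loader illustrating the flawed pattern. */
static struct demo_ctx * demo_init_from_file(const char * path) {
    /* malloc() leaves ctx->data holding whatever garbage is on the heap */
    struct demo_ctx * ctx = malloc(sizeof(*ctx));
    if (!ctx) {
        return NULL;
    }

    FILE * f = fopen(path, "rb");
    if (!f) {
        free(ctx);
        return NULL;
    }

    char magic[4] = {0};
    if (fread(magic, 1, sizeof(magic), f) != sizeof(magic) ||
        memcmp(magic, "GGUF", 4) != 0) {
        fclose(f);
        /* BUG (CWE-456): ctx->data was never initialized, yet this error
         * path frees it. The freed value is uninitialized heap garbage,
         * which a crafted file and heap layout might influence, turning
         * a crash into an arbitrary free. */
        free(ctx->data);
        free(ctx);
        return NULL;
    }

    /* ... on the success path, the file is parsed and ctx->data is allocated ... */
    fclose(f);
    return ctx;
}
```

The usual fix for this pattern is to zero-initialize the structure (for example with calloc(), or an explicit `ctx->data = NULL;` right after allocation) so that no error path can free a garbage pointer.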
References
Configurations
No configuration.
History
26 Apr 2024, 21:15

| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2024-04-26 21:15
Updated : 2024-04-29 12:42
NVD link : CVE-2024-32878
Mitre link : CVE-2024-32878
CVE.ORG link : CVE-2024-32878
Products Affected
No product.
CWE
CWE-456
Missing Initialization of a Variable