llama.cpp provides LLM inference in C/C++. The unvalidated `type` member in the `rpc_tensor` structure can cause a `global-buffer-overflow` (out-of-bounds read), which may leak memory contents. The vulnerability is fixed in release b3561.
History
15 Aug 2024, 14:02

Type | Values Removed | Values Added |
---|---|---|
First Time | | Ggerganov llama.cpp; Ggerganov |
References | | https://github.com/ggerganov/llama.cpp/commit/b72942fac998672a79a1ae3c03b340f7e629980b (Patch) |
References | | https://github.com/ggerganov/llama.cpp/security/advisories/GHSA-mqp6-7pv6-fqjf (Vendor Advisory) |
Summary | | |
CPE | | cpe:2.3:a:ggerganov:llama.cpp:*:*:*:*:*:*:*:* |
CWE | | CWE-401 |
CVSS | | v2: unknown; v3: 7.5 |
12 Aug 2024, 15:15

Type | Values Removed | Values Added |
---|---|---|
New CVE | | |
Information
Published : 2024-08-12 15:15
Updated : 2024-08-15 14:02
NVD link : CVE-2024-42477
Mitre link : CVE-2024-42477
CVE.ORG link : CVE-2024-42477
Products Affected
ggerganov
- llama.cpp