llama-cpp-python provides the Python bindings for llama.cpp. `llama-cpp-python` depends on the `Llama` class in `llama.py` to load llama.cpp models in the `.gguf` format. The `__init__` constructor of `Llama` takes several parameters that configure how the model is loaded and run. In addition to NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also loads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct `self.chat_handler` for this model. However, `Jinja2ChatFormatter` parses the chat template from that metadata with a sandbox-less `jinja2.Environment`, which is then rendered in `__call__` to construct the prompt of the interaction. This allows jinja2 Server-Side Template Injection (SSTI), which leads to remote code execution via a carefully constructed payload.
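The sketch below is a minimal, self-contained illustration of the vulnerability class described above, not llama-cpp-python's actual code. It contrasts rendering an attacker-controlled chat template with a plain, sandbox-less `jinja2.Environment` against a sandboxed environment. The variable names (`malicious_template`, `unsafe_env`, `safe_env`) and the `id` command are illustrative assumptions, and the payload is a generic, publicly documented Jinja2 SSTI gadget rather than the specific payload used against llama-cpp-python.

```python
# Minimal sketch of the vulnerability class (NOT llama-cpp-python's actual code):
# rendering an attacker-controlled chat template with and without a sandbox.
import jinja2
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# A chat template as it could be embedded in a malicious .gguf file's metadata.
# Generic, publicly documented Jinja2 SSTI gadget: it walks from the template's
# `self` object to Python builtins and runs a shell command.
malicious_template = (
    "{{ self.__init__.__globals__.__builtins__"
    ".__import__('os').popen('id').read() }}"
)

# Vulnerable pattern: a plain Environment places no restriction on attribute
# access, so the template can reach arbitrary Python objects and execute `id`.
unsafe_env = jinja2.Environment()
print(unsafe_env.from_string(malicious_template).render())

# Hardened pattern: a sandboxed environment rejects underscore-prefixed
# attribute access and raises SecurityError when the payload traverses it.
safe_env = ImmutableSandboxedEnvironment()
try:
    safe_env.from_string(malicious_template).render()
except SecurityError as exc:
    print(f"blocked by sandbox: {exc}")
```

The first render executes the injected `id` command on the host, which is the remote-code-execution outcome described above; the second render fails with a `SecurityError`. A sandboxed Jinja2 environment of this kind is the usual mitigation for this class of issue.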
References

https://github.com/abetlen/llama-cpp-python/commit/b454f40a9a1787b2b5659cd2cb00819d983185df
https://github.com/abetlen/llama-cpp-python/security/advisories/GHSA-56xg-wfcc-g829
Configurations
No configuration.
History
21 Nov 2024, 09:18
Type | Values Removed | Values Added
---|---|---
References | | https://github.com/abetlen/llama-cpp-python/commit/b454f40a9a1787b2b5659cd2cb00819d983185df
References | | https://github.com/abetlen/llama-cpp-python/security/advisories/GHSA-56xg-wfcc-g829
14 May 2024, 15:38
Type | Values Removed | Values Added
---|---|---
New CVE | |
Information
Published : 2024-05-14 15:38
Updated : 2024-11-21 09:18
NVD link : CVE-2024-34359
Mitre link : CVE-2024-34359
CVE.ORG link : CVE-2024-34359
Products Affected
No product.
CWE
CWE-76
Improper Neutralization of Equivalent Special Elements