CVE-2023-29374

In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
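The root cause is that the chain treats LLM output as trusted code: the model is asked to translate a natural-language question into a Python expression, and that expression is executed directly. Below is a minimal sketch of the vulnerable pattern, using a hypothetical vulnerable_math_chain helper (illustrative only, not LangChain's actual source):

```python
def vulnerable_math_chain(llm_output: str) -> str:
    # Hypothetical sketch: in affected versions the model's output reached
    # Python's exec() directly, so whatever the LLM emits runs on the host.
    local_vars: dict = {}
    exec(f"result = {llm_output}", {}, local_vars)  # arbitrary code execution
    return str(local_vars["result"])

# A benign question yields an expression like "2 * 2" ...
print(vulnerable_math_chain("2 * 2"))  # prints: 4
# ... but a prompt-injected question can steer the model into emitting, e.g.,
#   "__import__('os').system('id')"
# which exec() then runs with the application's privileges.
```

The referenced pull request (hwchase17/langchain#1119) is tagged as the patch; later LangChain releases evaluate math expressions with the numexpr library rather than passing model output to exec.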
Configurations

Configuration 1

cpe:2.3:a:langchain:langchain:*:*:*:*:*:*:*:*

History

21 Nov 2024, 07:56

Type: References. The values removed and the values added are identical for each entry:

  • https://github.com/hwchase17/langchain/issues/1026 (Issue Tracking)
  • https://github.com/hwchase17/langchain/issues/814 (Exploit, Issue Tracking, Patch)
  • https://github.com/hwchase17/langchain/pull/1119 (Patch)
  • https://twitter.com/rharang/status/1641899743608463365/photo/1 (Exploit)

Information

Published : 2023-04-05 02:15

Updated : 2024-11-21 07:56


NVD link : CVE-2023-29374

Mitre link : CVE-2023-29374

CVE.ORG link : CVE-2023-29374



Products Affected

langchain (vendor)

  • langchain (product)
CWE
CWE-74: Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')
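In CWE-74 terms, the model's output is the attacker-influenced data and the Python interpreter is the downstream component that interprets it. A hedged mitigation sketch follows (an illustration of neutralization, not the actual LangChain patch): parse the model's output and allow-list pure arithmetic before evaluating anything.

```python
import ast
import operator

# Allowed operators: anything outside this map is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate expr only if it is pure arithmetic; raise otherwise."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # Calls, attribute access, and name lookups (e.g. __import__) land here.
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4)"))  # prints: 14
# safe_eval("__import__('os').system('id')") raises ValueError
```

Rejecting anything outside the allow-list fails closed: function calls, attribute access, and name lookups never reach evaluation, so injected payloads are neutralized before the downstream component sees them.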