With the following crawler configuration:
```python
from bs4 import BeautifulSoup as Soup
from langchain_community.document_loaders import RecursiveUrlLoader

url = "https://example.com"
loader = RecursiveUrlLoader(
    url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text
)
docs = loader.load()
```
An attacker in control of the contents of `https://example.com` could place a malicious HTML file there containing links such as `https://example.completely.different/my_file.html`, and the crawler would proceed to download that file as well, even though `prevent_outside=True` (the default).
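Note that `https://example.com` is a literal string prefix of `https://example.completely.different`, so a naive `startswith`-style boundary check would accept the outside link. A minimal sketch of the difference (hypothetical helper names, not LangChain's actual implementation):

```python
from urllib.parse import urlparse

def naive_prefix_check(base_url: str, link: str) -> bool:
    # Flawed: treats any link whose string starts with the base URL as inside.
    return link.startswith(base_url)

def host_check(base_url: str, link: str) -> bool:
    # Safer: compare the parsed hosts, so links to other domains are rejected.
    return urlparse(link).netloc == urlparse(base_url).netloc

base = "https://example.com"
outside = "https://example.completely.different/my_file.html"

print(naive_prefix_check(base, outside))  # True  -- outside link slips through
print(host_check(base, outside))          # False -- outside link rejected
```

The chosen example domain illustrates exactly this trap: `example.com` is a prefix of `example.completely`.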
https://github.com/langchain-ai/langchain/blob/bf0b3cc0b5ade1fb95a5b1b6fa260e99064c2e22/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L51-L51
Resolved in https://github.com/langchain-ai/langchain/pull/15559
References
Configurations
No configuration.
History
- 13 Mar 2024, 21:15: References updated
- 26 Feb 2024, 16:32: New CVE
Information
Published : 2024-02-26 16:27
Updated : 2024-03-13 21:15
NVD link : CVE-2024-0243
Mitre link : CVE-2024-0243
CVE.ORG link : CVE-2024-0243
Products Affected
No product.
CWE
CWE-918
Server-Side Request Forgery (SSRF)