# LangChain Core patches path traversal file read in `load_prompt`
TL;DR — A path traversal in legacy LangChain Core prompt-loading APIs can let remote attackers read arbitrary host files when applications accept user-influenced prompt configs.
## What happened
LangChain is a popular framework for building agentic and LLM-powered applications; langchain-core provides shared primitives used by the broader ecosystem.
CVE-2026-34070 describes a High-severity path traversal where multiple functions in `langchain_core.prompts.loading` read files from paths embedded in deserialized configuration dictionaries without validating against directory traversal (`..`) or absolute-path injection. If an application passes user-influenced prompt configuration to `load_prompt()` or `load_prompt_from_config()`, an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (e.g., `.txt`, `.json`, `.yaml`).
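The core problem can be illustrated with a minimal sketch. This is not the actual `langchain-core` code; the function name and logic below are simplified stand-ins showing why an extension allow-list alone does not stop traversal — a payload like `../../etc/app-secrets.yaml` still carries an allowed suffix:

```python
from pathlib import Path

ALLOWED_SUFFIXES = {".txt", ".json", ".yaml"}

def load_template_file(path_str: str) -> str:
    """Simplified stand-in for the vulnerable loading pattern:
    only the file extension is validated, never the directory."""
    path = Path(path_str)
    if path.suffix not in ALLOWED_SUFFIXES:
        raise ValueError(f"unsupported template extension: {path.suffix}")
    # The real code would now read the file from disk; nothing has
    # checked whether the path escapes the intended prompt directory.
    return str(path)

# A traversal payload sails straight through the extension check:
print(load_template_file("../../etc/app-secrets.yaml"))
```

An absolute path such as `/etc/hosts.txt` passes the same check, which is why the fix must constrain the resolved directory, not just the suffix.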
This is a high-risk pattern for platform teams because AI app stacks frequently treat “prompt configuration” as data (often loaded from external sources) while the runtime environment commonly holds file-based secrets and configuration that become attractive disclosure targets.
## Who is impacted
- Projects using `langchain-core` versions < 1.2.22.
- Applications that pass untrusted or user-influenced prompt configuration dictionaries into `load_prompt()` or `load_prompt_from_config()`.
| Component | Affected versions (per CVE record) | Patched version (per CVE record) |
|---|---|---|
| `langchain-core` | < 1.2.22 | 1.2.22 |
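When auditing many services, a quick version gate can flag affected installs. A minimal sketch, assuming plain numeric `X.Y.Z` version strings (real projects should prefer `packaging.version` to handle pre-releases correctly):

```python
PATCHED = (1, 2, 22)  # first patched langchain-core release per the CVE record

def is_patched(version: str) -> bool:
    """Naively compare a numeric X.Y.Z version string against the
    patched release. Assumes no pre-release/local version segments."""
    return tuple(int(part) for part in version.split(".")) >= PATCHED

print(is_patched("1.2.21"))  # still affected
print(is_patched("1.2.22"))  # patched
```

The installed version can be read with `importlib.metadata.version("langchain-core")` on the target environment.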
## What to do now
- Follow vendor remediation guidance and apply the patched release. Per the upstream advisory: "This issue has been patched in version 1.2.22."
- Inventory usage paths: find where your code (or wrappers/SDKs) calls `load_prompt()` / `load_prompt_from_config()`, and identify whether inputs can be influenced by users, tenants, or external content.
- Reduce exposure while rolling out fixes:
  - Treat prompt configs as untrusted inputs and avoid passing user-controlled paths into prompt-loading utilities.
  - Harden runtime file exposure (e.g., avoid mounting sensitive files into the container/host path visible to the service when not required).
- If you suspect exposure, review request/application logs for unexpected prompt config usage patterns and assess what filesystem content could have been readable in the service context.
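Until every service is on the patched release, a containment guard can be applied before any prompt-loading call. A minimal sketch, assuming prompts live under a fixed base directory (`/app/prompts` here is a hypothetical example; `Path.is_relative_to` requires Python 3.9+):

```python
from pathlib import Path

# Assumption for illustration: all legitimate prompt files live here.
PROMPT_DIR = Path("/app/prompts").resolve()

def safe_prompt_path(user_supplied: str) -> Path:
    """Resolve a user-influenced path and refuse anything that escapes
    the allowed prompt directory (blocks both `..` traversal and
    absolute-path injection, since joining with an absolute path
    discards the base)."""
    candidate = (PROMPT_DIR / user_supplied).resolve()
    if not candidate.is_relative_to(PROMPT_DIR):
        raise ValueError(f"path escapes prompt directory: {user_supplied!r}")
    return candidate
```

Only the returned, validated path should ever reach a prompt-loading utility; rejecting at resolution time means traversal payloads never touch the filesystem.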
## Additional Information
- The CVE record references the upstream advisory and patch context: GHSA-qh6h-p6c9-ff54 (GitHub Security Advisory for `langchain-ai/langchain`).
Content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly; send corrections to [email protected]. Always consult official documentation and validate key implementation decisions before making design or security choices.
