Critical LangChain and LangGraph Vulnerabilities: Your AI Stack Just Became Your Biggest Liability

Three critical vulnerabilities in LangChain and LangGraph let attackers steal files from filesystems, siphon API keys and environment secrets, and pillage conversation histories. With 84 million weekly downloads, most enterprise AI deployments are affected.

By Danny Mercer, CISSP, Lead Security Analyst | Mar 28, 2026

If your organization has jumped on the AI bandwagon in the past year, congratulations. You might also have three new holes in your security perimeter that you did not know about.

Cybersecurity researchers at Cyera dropped a bombshell this week, disclosing three critical vulnerabilities in LangChain and LangGraph that could let attackers steal files from your filesystem, siphon environment secrets like API keys, and pillage entire conversation histories from your AI workflows. These are not theoretical proof-of-concept issues gathering dust in an academic paper. These are exploitable flaws in frameworks that collectively see over 84 million downloads per week on PyPI alone.

Let that sink in for a moment. LangChain has become the de facto standard for building applications powered by large language models. Whether you are building a customer service chatbot, an internal knowledge assistant, or an automated document analysis pipeline, chances are LangChain is somewhere in your stack. LangGraph extends it for more sophisticated agentic workflows, enabling AI systems that can reason, plan, and execute multi-step tasks. Together, they form the backbone of countless enterprise AI deployments. And until this week, they came with three independent paths for attackers to drain your most sensitive data.

The path traversal flaw, tracked as CVE-2026-34070 with a CVSS score of 7.5, lives in LangChain's prompt-loading mechanism. Specifically, it resides in the langchain_core/prompts/loading.py file. An attacker can supply a specially crafted prompt template that tricks the application into fetching arbitrary files from the filesystem without any validation.

What makes this particularly dangerous is how innocuous it seems. Prompt templates are a core feature of LangChain, designed to let developers create reusable patterns for interacting with language models. Few developers would think twice about loading a prompt template. But if that template comes from untrusted input, or if an attacker can influence the template loading process, suddenly your Docker configurations, SSH keys, application configs, and anything else your AI application can read becomes fair game.
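Until you can upgrade, one interim defense is to refuse any template path that escapes a trusted directory before it ever reaches the loader. A minimal sketch of that pattern, assuming templates live under a single directory (the `ALLOWED_PROMPT_DIR` path and `safe_prompt_path` helper are illustrative, not LangChain API; requires Python 3.9+ for `Path.is_relative_to`):

```python
from pathlib import Path

# Hypothetical trusted location for prompt templates; adjust per deployment.
ALLOWED_PROMPT_DIR = Path("/app/prompts")

def safe_prompt_path(requested: str) -> Path:
    """Resolve a user-influenced template name and refuse anything that
    escapes the trusted directory (e.g. '../../etc/passwd')."""
    candidate = (ALLOWED_PROMPT_DIR / requested).resolve()
    if not candidate.is_relative_to(ALLOWED_PROMPT_DIR.resolve()):
        raise ValueError(f"path traversal blocked: {requested!r}")
    return candidate
```

The resolved path can then be handed to whatever loading call your code already makes. This is a stopgap, not a substitute for updating to a patched langchain-core release.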

Organizations running LangChain applications in containerized environments should be especially concerned. Container deployments often include sensitive configuration files, environment variable dumps, and credential stores that were never intended to be accessible to end users. A single path traversal exploit could expose your entire deployment architecture.

CVE-2025-68664 carries a critical CVSS score of 9.3 and represents a deserialization vulnerability that security researchers have nicknamed LangGrinch. Details of this flaw were actually first shared by Cyata back in December 2025, but many organizations remain unpatched.

The flaw allows attackers to pass malicious input that the application mistakenly interprets as an already-serialized LangChain object rather than regular user data. This is a classic deserialization attack pattern, but applied to the AI context it becomes particularly devastating. The result is that API keys and environment secrets get leaked through prompt injection attacks.
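One defensive pattern while you patch: LangChain's serialization format wraps objects in an envelope dict with an `lc` version key and a `type` field, so user-supplied data shaped like that envelope should never reach a deserialization call. A rough input-screening sketch under that assumption (the helper names are mine; upgrading remains the real fix):

```python
# Type tags used by LangChain's serialization envelope, e.g.
# {"lc": 1, "type": "constructor", "id": [...], "kwargs": {...}}.
LC_ENVELOPE_TYPES = {"constructor", "secret", "not_implemented"}

def looks_like_lc_envelope(value) -> bool:
    """Heuristic check for the serialized-object envelope shape."""
    return (
        isinstance(value, dict)
        and "lc" in value
        and value.get("type") in LC_ENVELOPE_TYPES
    )

def reject_envelope_shapes(payload):
    """Refuse user input that mimics a serialized LangChain object,
    recursing into nested dicts and lists."""
    if looks_like_lc_envelope(payload):
        raise ValueError("user input shaped like a serialized LangChain object")
    if isinstance(payload, dict):
        for v in payload.values():
            reject_envelope_shapes(v)
    elif isinstance(payload, list):
        for v in payload:
            reject_envelope_shapes(v)
    return payload
```

Screening like this belongs at the boundary where untrusted input enters the application, before it is mixed into anything LangChain will later reconstruct.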

Think about what your LangChain deployment has access to. Your OpenAI API key, which probably costs real money to abuse. Your database credentials, which grant access to production data. Your cloud provider tokens, which could be used to spin up crypto miners or access sensitive storage buckets. All of them become accessible to anyone who knows how to craft the right payload.

The SQL injection flaw, CVE-2025-67644, scores 7.3 on the CVSS scale and targets LangGraph's SQLite checkpoint implementation. By manipulating metadata filter keys, attackers can inject arbitrary SQL queries into the database.

LangGraph uses checkpoints to save and restore conversation state, enabling sophisticated multi-turn interactions and agentic behaviors. But if those checkpoints are stored in SQLite and the checkpoint queries are injectable, an attacker gains access to conversation histories. In enterprise deployments, those conversations often contain sensitive business discussions, customer data, internal strategy documents, and proprietary information that was never meant to leave the system.
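The underlying mistake in this class of bug is interpolating filter keys into the SQL text. The safe pattern is to allowlist-validate the keys, which unavoidably become part of the query string, and bind the values as parameters. A generic sketch (the `checkpoints` table layout here is illustrative, not LangGraph's actual schema):

```python
import re
import sqlite3

# Only plain identifier-style keys may appear in the SQL text.
SAFE_KEY = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def query_checkpoints(conn: sqlite3.Connection, metadata_filters: dict):
    """Filter checkpoints by JSON metadata fields: keys are allowlisted,
    values are bound as parameters, never string-interpolated."""
    clauses, params = [], []
    for key, value in metadata_filters.items():
        if not SAFE_KEY.match(key):
            raise ValueError(f"illegal metadata key: {key!r}")
        clauses.append(f"json_extract(metadata, '$.{key}') = ?")
        params.append(value)
    sql = "SELECT thread_id FROM checkpoints"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()
```

A key like `user') = '' OR ('1'='1` fails the allowlist instead of rewriting the query, which is exactly the manipulation the CVE describes.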

Consider a legal firm using LangGraph to analyze case documents. Consider a healthcare organization using it to process patient queries. Consider a financial services company running AI-powered investment analysis. The conversation histories in those systems represent exactly the kind of sensitive information that regulations like GDPR, HIPAA, and SOX are designed to protect.

Vladimir Tokarev, the Cyera researcher who uncovered these flaws, put it bluntly when he noted that each vulnerability exposes a different class of enterprise data. Filesystem files through path traversal. Environment secrets through deserialization abuse. Conversation history through SQL injection. An attacker with patience could chain these techniques to build a comprehensive picture of your AI infrastructure and everything flowing through it.

The timing could not be worse for organizations that have rushed to deploy AI solutions without adequate security review. Enterprise adoption of LangChain has exploded precisely because it makes building AI applications accessible to developers who might not have deep machine learning expertise. But that accessibility comes with inherited risk.

When a vulnerability exists in LangChain's core, it does not just affect direct users. It ripples outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path. The dependency web stretching across the AI stack means hundreds of libraries wrap LangChain, extend it, or depend on it. Your organization might not even know you are running LangChain if it came bundled inside another AI tool.
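Surfacing that hidden usage is scriptable. A quick sketch that lists installed packages declaring a dependency on a given distribution, to catch LangChain arriving transitively (the requirement-string parsing is rough; a real audit should use pip tooling or an SBOM scanner):

```python
import re
from importlib.metadata import distributions

def reverse_dependencies(target: str):
    """List installed distributions that declare a dependency on `target`,
    to surface transitive LangChain usage you may not know about."""
    dependents = set()
    for dist in distributions():
        for req in dist.requires or []:
            # A requirement string looks like 'langchain-core>=0.3; extra == "x"'.
            base = re.split(r"[<>=!~;\[ ]", req, maxsplit=1)[0]
            if base.lower().replace("_", "-") == target.lower():
                name = dist.metadata["Name"]
                if name:
                    dependents.add(name)
    return sorted(dependents)
```

Running `reverse_dependencies("langchain-core")` in each deployment environment gives a first-pass map of which of your installed tools would inherit a vulnerable code path.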

The good news is that fixes have been released. CVE-2026-34070 requires updating langchain-core to version 1.2.22 or later. CVE-2025-68664 is addressed in langchain-core versions 0.3.81 and 1.2.5. CVE-2025-67644 requires updating langgraph-checkpoint-sqlite to version 3.0.1.

The bad news is that most organizations running AI workloads have no idea which version of these libraries they are actually using. AI development has moved so fast that dependency management often gets treated as an afterthought. How many of your data scientists pinned specific versions in requirements.txt versus just grabbing whatever was latest six months ago? How many AI prototypes have quietly made their way into production without ever going through proper change management?

This disclosure comes just days after a critical vulnerability in Langflow, tracked as CVE-2026-33017 with a CVSS score of 9.3, came under active exploitation within 20 hours of public disclosure. Twenty hours. That is how fast threat actors are moving to exploit newly disclosed AI framework vulnerabilities.

The pattern emerging is clear. AI tooling is being built at a pace that outstrips security review. Developers excited about the capabilities are not always thinking about the attack surface they are creating. And attackers have noticed.

So what should you do? First, audit your AI deployments. Identify every system running LangChain, LangGraph, Langflow, or similar frameworks. Check the versions. Update them. This is not optional. If you do not know what AI tools your developers are experimenting with, that is a problem you need to solve today.

Second, treat your AI infrastructure with the same paranoia you apply to your production databases. These systems often have access to sensitive data and API credentials. They should not be running with default configurations or exposed to untrusted input without validation. Network segmentation, input sanitization, and least-privilege access controls all apply here.

Third, implement monitoring for unusual access patterns in your AI workloads. If someone is exploiting these vulnerabilities, you want to know before they have exfiltrated your entire conversation history.

Finally, recognize that this is not the last disclosure we will see targeting AI frameworks. The rush to deploy AI has created a massive attack surface that security teams are only beginning to understand. Building security review into your AI adoption process is no longer optional. It is survival.

The AI gold rush has everyone excited about capabilities. Maybe it is time to get equally excited about the risks.
