JustAppSec

CrewAI patches prompt-injection paths to RCE and SSRF

2 min read · Published 09 Apr 2026 · Updated 09 Apr 2026 · Source: Japan Vulnerability Notes (JVN / JPCERT/CC)

TL;DR — JVN warns that prompt injection against CrewAI agents can trigger tool-assisted RCE, SSRF into internal/cloud services, and local file reads; treat agent toolchains as an application attack surface and apply vendor fixes/workarounds.

What happened

CrewAI is a Python framework for building multi-agent AI systems, commonly used to wire LLM “agents” to tools (code execution, retrieval/RAG, and file/JSON handling) to perform tasks.

JVN (JPCERT/CC) published an advisory describing multiple CrewAI vulnerabilities that become high impact when an attacker can influence what the agent processes (directly or indirectly) via prompt injection. The advisory calls out four CVEs spanning code execution, SSRF, and arbitrary file read paths:

  • CVE-2026-2275 — CodeInterpreterTool falls back to SandboxPython when Docker is unavailable. Impact: remote code execution. Fixed in CrewAI 1.11.0.
  • CVE-2026-2286 — Multiple RAG search tools do not properly validate URLs. Impact: SSRF (fetching content from internal/cloud services). Fix not specified in the JVN advisory.
  • CVE-2026-2287 — The Docker running state may be checked incorrectly, causing the sandbox to be used. Impact: remote code execution. Fixed in CrewAI 1.11.0.
  • CVE-2026-2285 — JSON loading does not validate input paths. Impact: arbitrary local file read. Fix not specified in the JVN advisory.

This is notable because it reinforces a pattern platform teams keep relearning: agent tool adapters (code execution, retrieval, file access) collapse traditional trust boundaries. Once a prompt can steer tool invocation, “LLM behavior” turns into concrete application-layer exploitability (RCE/SSRF/file read) with real blast radius in CI runners and backend services.

Who is impacted

  • Organizations running CrewAI where agents process untrusted input (user prompts, tickets, emails, chat messages, docs, web content) that could carry direct or indirect prompt injection.
  • Deployments using CodeInterpreterTool and/or explicitly enabling code execution.
  • Deployments using CrewAI’s RAG/search tools that fetch URLs on behalf of the agent.
  • Services that run CrewAI with access to sensitive local files, environment variables, or instance metadata—making SSRF and file reads immediately operational.
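The SSRF point above is worth making concrete: a fetch layer that resolves the target hostname and refuses private, loopback, link-local, and cloud-metadata address space blocks the most common SSRF targets. A minimal stdlib sketch under that assumption — `is_safe_fetch_target` is a hypothetical helper for illustration, not a CrewAI API:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_fetch_target(url: str) -> bool:
    """Reject URLs whose host resolves to private, loopback, link-local,
    reserved, or cloud-metadata address space (illustrative guard only)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # 169.254.169.254 (cloud metadata) is already link-local, but the
        # explicit check documents the intent.
        if (addr.is_private or addr.is_loopback or addr.is_link_local
                or addr.is_reserved
                or addr == ipaddress.ip_address("169.254.169.254")):
            return False
    return True
```

Note that a resolve-then-fetch check races with DNS rebinding; pinning the resolved address for the actual request closes that gap.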

Per JVN, the risk is framed explicitly around prompt injection leading to:

  • Remote code execution (CVE-2026-2275, CVE-2026-2287)
  • Server-side request forgery (CVE-2026-2286)
  • Arbitrary file read (CVE-2026-2285)
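For the arbitrary-file-read path (CVE-2026-2285), one common guard against this class of bug is confining loads to an allowlisted base directory and refusing anything that resolves outside it. A minimal sketch, assuming Python 3.9+ (`Path.is_relative_to`); `load_json_safely` is a hypothetical helper, not CrewAI's fix:

```python
from pathlib import Path

def load_json_safely(base_dir: str, requested: str) -> Path:
    """Resolve `requested` against an allowlisted base directory and refuse
    any path (including `..` traversal or absolute paths) that escapes it."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes {base}: {requested!r}")
    return target
```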

What to do now

  • Follow vendor remediation guidance and apply the latest patched release available at the time of writing.

    "Update to the latest version based on the information provided by the developer. CVE-2026-2275 and CVE-2026-2287 have been fixed in CrewAI 1.11.0."

  • Apply the JVN-listed workarounds to reduce exposure where immediate upgrading isn’t feasible:
    • Remove, restrict, or disable CodeInterpreterTool.


    • Do not set allow_code_execution=True.


    • Prevent untrusted input from reaching agents, or sanitize input appropriately.


    • Ensure Docker is always running so the sandbox is not used.


  • If you suspect compromise, treat the CrewAI runtime as potentially exposed: review agent/tool invocation logs (especially code execution and outbound fetches), and rotate any credentials reachable by the agent process.
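On the "prevent or sanitize untrusted input" workaround: pattern matching cannot reliably stop prompt injection, but a crude flagging layer can quarantine obvious cases for review before they reach an agent. A minimal illustrative sketch — the patterns below are assumptions for demonstration, not from the JVN advisory:

```python
import re

# Crude heuristics only: prompt injection cannot be reliably filtered by
# pattern matching, which is why the advisory's first preference is keeping
# untrusted input away from agents entirely.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"allow_code_execution", re.I),
    re.compile(r"(run|execute).{0,40}(shell|bash|python) (code|command)", re.I),
]

def flag_untrusted_input(text: str) -> list[str]:
    """Return the patterns that matched, so a policy layer or human can
    quarantine the input instead of handing it to an agent."""
    return [p.pattern for p in SUSPECT_PATTERNS if p.search(text)]
```

Treat any match as a reason to route the input through review, not as proof of attack, and treat an empty result as "not obviously malicious" rather than safe.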



Content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly; send corrections to [email protected]. Always consult official documentation and validate key implementation decisions before making design or security choices.

Need help? Get in touch.