JustAppSec

Keras safe_mode bypass enables code execution via .keras models

2 min read · Published 13 Apr 2026 · Updated 13 Apr 2026 · Source: CVEProject (cvelistV5)

TL;DR — Keras safe_mode=True can be bypassed, allowing attacker-controlled .keras models to trigger code execution during model deserialization under the victim’s privileges.

What happened

Keras is a widely used Python deep-learning library for building, serializing, and loading models for training and inference. CVE-2026-1462 describes a safe mode bypass in the TFSMLayer class: attacker-controlled TensorFlow SavedModels can be loaded during deserialization of .keras model files even when safe_mode=True.

The CVE record states this bypass can enable arbitrary attacker-controlled code execution during model inference under the victim’s privileges. The described root causes include unconditional loading of external SavedModels, serialization of attacker-controlled file paths, and lack of validation in from_config().
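Because the bypass fires at deserialization time, one pragmatic mitigation is to inspect a .keras file before handing it to a loader. A .keras archive is a zip containing a config.json that records each layer's class_name, so it can be scanned with only the standard library and without executing any model code. The sketch below is illustrative; the set of flagged class names is an assumption (TFSMLayer comes from the CVE record, the rest is up to your policy), not an official allow/deny list.

```python
import json
import zipfile

# Layer class names whose presence warrants manual review before loading.
# TFSMLayer is the class named in the CVE record; this set is illustrative,
# not an exhaustive or official deny list.
SUSPICIOUS_CLASSES = {"TFSMLayer", "Lambda"}

def scan_keras_archive(path):
    """Return the suspicious layer class names referenced by a .keras file.

    A .keras file is a zip archive containing config.json, which records
    every layer's class_name. Reading it with zipfile/json never runs
    model code, so this check is safe to apply to untrusted artifacts.
    """
    with zipfile.ZipFile(path) as zf:
        config = json.loads(zf.read("config.json"))

    found = set()

    def walk(node):
        # The config is nested dicts/lists; collect every class_name hit.
        if isinstance(node, dict):
            if node.get("class_name") in SUSPICIOUS_CLASSES:
                found.add(node["class_name"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return found
```

A non-empty result does not prove the file is malicious, and an empty result does not prove it is safe; it is a cheap triage signal to gate automated pipelines on, not a substitute for patching.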

  • Vulnerability: Safe mode bypass leading to code execution during deserialization
  • CWE: CWE-502 Deserialization of Untrusted Data
  • Severity: CVSS v3.0 8.8 (High)
  • CVSS vector: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
  • Affected versions (CVE record): keras versions less than 3.13.2
  • Patch reference (CVE record): upstream commit b6773d3decaef1b05d8e794458e148cb362f163f

This is a high-signal issue because “safe” deserialization toggles are frequently treated as a hard boundary in ML supply chains; bypasses like this re-open the risk of model artifacts becoming an RCE delivery vehicle in CI, batch inference, and model registry workflows.

Who is impacted

  • Services that load .keras models in production or CI/CD where model artifacts can be influenced by untrusted sources (third-party model downloads, customer uploads, shared registries).
  • Environments using keras in the CVE’s affected range (the CVE record lists versions less than 3.13.2 as affected).
  • Higher-risk pipelines that automatically fetch and load models (or run “evaluation”/“inference” jobs) without strong provenance controls, because exploitation happens at load/deserialization time.
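A quick way to check whether an environment falls in the affected range is to compare the installed keras version against the 3.13.2 boundary from the CVE record. The sketch below uses only the standard library; the version comparison is a deliberately simple numeric-tuple compare, not a full PEP 440 parser, so treat it as a triage aid rather than an authoritative check.

```python
from importlib import metadata

# Boundary from the CVE record: keras versions less than 3.13.2 are listed
# as affected.
PATCHED = (3, 13, 2)

def parse_version(text):
    """Parse the leading numeric dotted components of a version string.

    "3.13.1" -> (3, 13, 1); trailing non-numeric parts are ignored.
    This is a simplification, not a PEP 440 parser.
    """
    parts = []
    for piece in text.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def keras_is_affected():
    """True if installed keras is below 3.13.2, None if keras is absent."""
    try:
        installed = metadata.version("keras")
    except metadata.PackageNotFoundError:
        return None
    return parse_version(installed) < PATCHED
```

Running this across CI images and inference hosts gives a fast inventory of where remediation is still needed.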

What to do now

  • Follow vendor remediation guidance and move to a release not listed as affected in the CVE record (the CVE lists keras versions less than 3.13.2 as affected).
  • Treat .keras model files as untrusted code unless they come from a trusted, integrity-verified source; avoid loading untrusted model artifacts in sensitive environments.
  • Audit where model loading happens (offline batch jobs, inference APIs, evaluation pipelines, notebooks, CI runners) and identify paths where an attacker could introduce or swap a .keras artifact.
  • If you must handle untrusted model artifacts, isolate model loading/inference in a tightly sandboxed environment (least-privilege runtime identity, constrained filesystem/network, separate credentials) and monitor for anomalous file access or process behavior around model-load events.
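The integrity-verification step above can be enforced in code by pinning the SHA-256 of every approved artifact and refusing to pass anything else to the loader. A minimal stdlib sketch, assuming you maintain an allowlist of known-good digests (how that allowlist is populated and distributed is up to your release process):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file's SHA-256 so large model artifacts are not read into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path, allowlist):
    """Refuse to hand a model file to the loader unless its digest is pinned.

    `allowlist` is a set of hex SHA-256 digests of approved artifacts.
    Returns the path on success so it can be chained into a load call.
    """
    digest = sha256_of(path)
    if digest not in allowlist:
        raise ValueError(f"unexpected model digest {digest} for {path}")
    return path
```

Gating every load call through a check like this turns "treat model files as untrusted code" from a policy statement into an enforced control, and the raised error gives you a concrete event to alert on.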

Content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly; send corrections to [email protected]. Always consult official documentation and validate key implementation decisions before making design or security choices.

Need help? Get in touch.