Crafted experiment file triggers RCE via LabOne Q deserialization
TL;DR - LabOne Q's import_cls mechanism accepts attacker-controlled fully-qualified Python class names directly from serialised experiment files. No validation, no module allowlist. A crafted file lets an attacker import and instantiate arbitrary classes with controlled constructor arguments - arbitrary code execution in the process context of whoever opened the file.
What happened
laboneq is Zurich Instruments' Python package for orchestrating quantum computing experiments. CVE-2026-7584 is an unsafe deserialisation flaw (CWE-502) in its serialisation framework.
During deserialisation, the class-loading helper import_cls dynamically imports and instantiates Python classes named in the serialised data. Before the fix, it accepted any fully-qualified class name with no module restrictions or type validation. An attacker who can get a victim to load a crafted experiment file - shared over email, a support upload, a shared drive, or a CI artefact - gets code execution in the user's Python process.
Exploitation requires the victim to open the file. That bar is not high in research and lab environments where experiment files are routinely shared for collaboration or debugging. This is the same failure mode as dozens of high-impact deserialisation CVEs before it: treating a data file as an executable by allowing untrusted type selection.
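To make the failure mode concrete, here is a minimal sketch of the general unsafe pattern and an allowlist-hardened variant. The function names and the allowlist are illustrative assumptions for this post, not LabOne Q's actual implementation:

```python
import importlib

def import_cls_unsafe(qualname: str):
    # Sketch of the vulnerable pattern: whatever fully-qualified name
    # appears in the serialised file gets imported and returned.
    module_name, _, cls_name = qualname.rpartition(".")
    module = importlib.import_module(module_name)  # attacker chooses the module
    return getattr(module, cls_name)               # attacker chooses the class

# Hypothetical allowlist: only classes from the library's own package.
ALLOWED_PREFIXES = ("laboneq.",)

def import_cls_safe(qualname: str):
    # Hardened variant: reject names outside the allowlist before importing.
    if not qualname.startswith(ALLOWED_PREFIXES):
        raise ValueError(f"refusing to import {qualname!r}")
    return import_cls_unsafe(qualname)
```

With the unsafe variant, a crafted file that names `subprocess.Popen` as its "class" hands the attacker process execution the moment the deserialiser instantiates it with file-supplied constructor arguments.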
Who is impacted
- Users of the `laboneq` PyPI package in the affected version ranges:
| Package | Affected versions |
|---|---|
| laboneq | >= 2.41.0 and < 26.1.2 |
| laboneq | 26.4.0b1 through 26.4.0b5 |
- Any environment where experiment files can cross a trust boundary: email, chat, vendor support uploads, shared drives, CI artefacts, or handoffs between teams.
- Highest risk where the Python process running `laboneq` has access to credentials, SSH keys, cloud tokens, or sensitive lab and automation infrastructure.
What to do now
- Apply the fixed release immediately. Update LabOne Q to version 26.1.2 (security backport on the 26.1.x line) or to 26.4.0 or later: `pip install --upgrade laboneq`.
- Treat inbound experiment files as untrusted input, not safe data:
- Avoid loading experiment files (JSON, YAML) from unknown or unverifiable sources.
- Validate provenance for externally provided files, especially those arriving via support or collaboration flows.
- Where feasible, inspect serialised experiment files before loading and confirm only expected classes are referenced.
- Scope your exposure based on workflow:
- Identify who can supply experiment files to affected users and automation jobs.
- Review where those files are stored and how they are distributed - shared drives, artefact repositories, ticket attachments.
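The file-inspection advice above can be sketched as a small audit script. The `__type__` key and the `laboneq.` prefix here are assumptions for illustration; check how class names are actually encoded in your serialised files before relying on a scan like this:

```python
import json

CLASS_KEY = "__type__"          # hypothetical key holding a class name
EXPECTED_PREFIX = "laboneq."    # classes we expect an experiment file to reference

def referenced_classes(obj):
    """Recursively collect fully-qualified class names from parsed JSON."""
    found = set()
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == CLASS_KEY and isinstance(value, str):
                found.add(value)
            found |= referenced_classes(value)
    elif isinstance(obj, list):
        for item in obj:
            found |= referenced_classes(item)
    return found

def audit(path):
    """Return any referenced class names outside the expected package."""
    with open(path) as f:
        data = json.load(f)
    return {name for name in referenced_classes(data)
            if not name.startswith(EXPECTED_PREFIX)}
```

A non-empty result from `audit` means the file names classes outside the expected package and should not be loaded until its provenance is verified.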
Additional Information
- Vendor advisory: https://www.zhinst.com/support/security/2026/zi-sa-2026-002/
Content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly; send corrections to [email protected]. Always consult official documentation and validate key implementation decisions before making design or security choices.
