JustAppSec
Back to news

575+ trojanized AI skills found on ClawHub in active supply chain attack

3 min readPublished 30 Apr 2026Source: Acronis Threat Research Unit

TL;DR - Acronis found 575+ trojanized OpenClaw skills across 13 accounts on clawhub.ai. The skills use indirect prompt injection to make agents execute attacker-controlled instructions - running encoded commands, pulling password-protected archives, and installing unverified binaries. Hugging Face repositories are also being used as payload staging infrastructure.

What happened

On April 30, 2026, the Acronis Threat Research Unit published findings on active, in-the-wild abuse of two AI distribution platforms: Hugging Face (model and dataset hosting) and ClawHub (the extension marketplace for OpenClaw skills).

This is not a platform breach. The attack is content-level supply chain abuse combined with social engineering. Trojanized artifacts - skills, repositories, shared files - are dressed up to look legitimate. The victim, or the agent acting on their behalf, is then nudged into running encoded commands, downloading password-protected archives, or installing hidden dependencies.

The OpenClaw angle is the most dangerous for developer environments. Acronis identified 575+ malicious skills spread across 13 developer accounts on clawhub.ai. These skills masquerade as useful tooling but instruct users - or agents - to run encoded commands or install unverified binaries.

The technique underpinning this is indirect prompt injection: hidden instructions embedded in content that an agent reads can redirect the agent into executing malicious actions, appearing to act on the user's behalf. The agent is not compromised. It is following instructions - just not yours.

Why this is worth taking seriously now: agents and automation pipelines are increasingly pulling models, datasets, and skills directly into build and runtime environments. When they do, those artifacts have execution paths. Trusting them by default is the same mistake the industry made with package registries, repeatedly, for years.

Who is impacted

  • Teams using OpenClaw and installing community skills from clawhub.ai, particularly where the agent can spawn processes or execute external code with meaningful host privileges.
  • Organisations pulling models, datasets, or code from Hugging Face into developer workflows without provenance controls or sandboxing.
  • Any environment where developers follow README-style install instructions - download this archive, run this one-liner, install this driver - without independent verification.
SurfaceReported activityWhy engineers should care
clawhub.ai OpenClaw skills575+ malicious skills across 13 accountsSkills can trigger installs and execution paths that reach the host filesystem and developer secrets
huggingface.co repositoriesUsed as payload hosting and staging for multi-step infection chainsCI jobs and developer tooling increasingly treat Hugging Face as a trusted artifact source for AI workloads

What to do now

  • Treat AI artifacts as untrusted inputs by default. The Acronis report is explicit:

    "Download AI models, skills and tools only from verified sources and official repositories. Avoid downloading and executing files distributed through password-protected archives or unverified binaries from third-party links."

  • Enforce least privilege for agent capabilities:
    • Restrict what the agent can do to the minimum required for the task.
    • Prevent arbitrary process spawning or script execution unless explicitly required and reviewed.
  • Add detection for agent-driven execution on developer endpoints and CI runners:
    • Encoded shell or PowerShell execution.
    • In-memory injection patterns.
    • Suspicious outbound HTTPS to newly registered domains.
    • Unexpected scheduled task creation.
  • Establish an approved list for AI tooling:
    • Document which models, datasets, and skills are sanctioned for use.
    • Require review before any new third-party model or agent extension enters production workflows.
  • Train your team to recognise social engineering in the AI ecosystem. Flag these as stop-and-verify events:
    • "Download this password-protected archive."
    • "Run this base64 one-liner."
    • "Install this driver from GitHub."

Additional information

The Acronis report includes an IOC section with SHA256 hashes and URLs covering both OpenClaw and Hugging Face-related samples, plus a list of clawhub.ai developer accounts associated with the malicious skill uploads.

Related


Content is AI-assisted and reviewed by our team, but issues may be missed and best practices evolve rapidly, send corrections to [email protected]. Always consult official documentation and validate key implementation decisions before making design or security choices.

Need help?Get in touch.