If you've been watching the rise of "agentic AI" tools, you'll know the pitch is simple: give an AI assistant the ability to actually do things, not just talk about them. Trigger workflows. Read files. Connect to services. Run tasks across your devices. It's powerful, and it's genuinely useful.
But the moment you allow an AI agent to install "skills" or plug-ins from a marketplace, you also inherit the same problem that browser extensions and app stores have dealt with for years: people will upload bad stuff, and some of it will look perfectly legit.
That's exactly the situation OpenClaw is responding to now.
What's new: every ClawHub skill gets scanned with VirusTotal
OpenClaw (previously known as Moltbot and Clawdbot) says it's partnering with VirusTotal to scan skills uploaded to ClawHub, its skill marketplace.
The idea is to add a security gate before a skill is approved and made available. According to OpenClaw's announcement, this scanning uses VirusTotal threat intelligence and also includes VirusTotal's newer "Code Insight" capability.
In plain terms: skills are now being checked against a massive threat database, plus they get deeper analysis if they don't already have a known history.
How the scanning workflow works
OpenClaw described a process that's fairly standard in malware detection workflows, but applied to "skills" bundles:
• Each uploaded skill bundle is hashed, so identical files map to a single identity
• That hash is checked against VirusTotal's database to see if the exact file has already been seen and classified
• If VirusTotal doesn't have a match, the skill is uploaded for scanning and analysis using Code Insight
Then the marketplace takes action based on the result:
• Suspicious verdict: it gets flagged with a warning
• Malicious verdict: it gets blocked from download
And importantly, it's not a one-time check. OpenClaw says active skills are re-scanned daily, which addresses a real-world problem: something that was clean yesterday can become malicious tomorrow if it's updated, swapped, or quietly modified.
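The lookup-then-gate flow above reduces to a small piece of decision logic. Here is a minimal sketch of it; the names, the local dictionary standing in for VirusTotal's database, and the stubbed analysis step are all illustrative assumptions, not OpenClaw's actual implementation:

```python
import hashlib

# Stand-in for VirusTotal's database of known files, keyed by SHA-256 hex digest.
# In the real pipeline this would be an API lookup; here it is a local dict.
KNOWN_VERDICTS: dict[str, str] = {}  # digest -> "malicious" / "suspicious" / "clean"

def sha256_of(bundle: bytes) -> str:
    """Hash the skill bundle so the exact same file always maps to one verdict."""
    return hashlib.sha256(bundle).hexdigest()

def submit_for_analysis(bundle: bytes) -> str:
    """Placeholder for the upload-and-deep-scan step (Code Insight in the article).
    This stub always returns 'clean'; a real scanner obviously would not."""
    return "clean"

def gate(bundle: bytes) -> str:
    """Decide what the marketplace does with an uploaded skill bundle."""
    digest = sha256_of(bundle)
    verdict = KNOWN_VERDICTS.get(digest)
    if verdict is None:
        # No prior history for this exact file: send it for full analysis.
        verdict = submit_for_analysis(bundle)
        KNOWN_VERDICTS[digest] = verdict
    if verdict == "malicious":
        return "blocked"    # blocked from download
    if verdict == "suspicious":
        return "flagged"    # listed, but with a warning
    return "approved"
```

Because a skill can change after approval, the same gate has to be re-run against active skills, which is what the daily re-scan OpenClaw describes amounts to.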
"Not a silver bullet," and they're right
OpenClaw's maintainers were quick to admit an uncomfortable truth: even a strong scanning pipeline can miss things.
This is especially relevant in the agentic world, because not all attacks look like classic malware. Some can be "language-based," like prompt injection, where the payload is hidden in text that manipulates the agent's behavior rather than dropping an obvious executable.
So yes, VirusTotal scanning raises the bar. But it doesn't magically make a marketplace safe.
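To see why language-based payloads are a different category of problem, consider a deliberately naive heuristic for spotting instruction-style text inside a skill. This is a toy, not a real defense (the patterns below are my own illustrative examples), but it shows the kind of signal a binary-oriented scanner never looks for:

```python
import re

# Naive patterns for instruction-style payloads hidden in skill text.
# Real prompt injection is far more varied and easily rephrased; the point
# is only that the "payload" here is prose, not an executable.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"do not (tell|inform) the user", re.I),
    re.compile(r"send .* (credentials|tokens|keys)", re.I),
]

def looks_like_injection(skill_text: str) -> bool:
    """Flag text that tries to steer the agent rather than drop an executable."""
    return any(p.search(skill_text) for p in INJECTION_PATTERNS)
```

A rephrased attack sails straight past a list like this, which is exactly the "raises the bar but isn't a silver bullet" point.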
Why this matters now: reports of malicious skills triggered the response
This move comes after multiple reports that highlighted hundreds of malicious skills on ClawHub. Investigations described skills that appear to be helpful tools, but actually contain hidden malicious capabilities.
The kinds of behavior researchers have flagged include:
• Backdoors for remote access
• Stealer malware behavior
• Prompt-injection-based manipulation to get the agent to do unsafe actions
OpenClaw has also added a reporting option so signed-in users can flag suspicious skills, which is basically the marketplace equivalent of "Report this extension."
The deeper issue: agents aren't just software, they're "software with hands"
Traditional apps follow instructions written in code. Agentic systems interpret natural language and decide what actions to take. That creates a weird new security gap: the instruction can be the prompt, and prompts don't look like malware in the way antivirus expects.
Cisco summed it up in a way security folks immediately recognize: agents can become covert data-leak channels that slip past standard DLP, proxies, and endpoint monitoring, and they can also act like execution orchestrators where language becomes the trigger.
Backslash Security has even described OpenClaw as "AI with hands." It's a dramatic phrase, but it's accurate. If a skill has access to your services and files, then a malicious skill isn't just annoying. It's potentially catastrophic.
Moltbook and the "viral agent ecosystem" problem
OpenClaw's popularity also connects to a broader ecosystem, including Moltbook, a social platform where autonomous agents interact in a Reddit-style environment.
That combination raises alarms because it creates an environment where:
• skills increase the attack surface
• the agent has real privileges and real credentials
• malicious content can spread quickly and influence other agents
Security researchers have referred to these kinds of combined risks as the "Lethal Trifecta" in agent ecosystems, where autonomy, tool access, and untrusted inputs collide.
Shadow AI is the quiet enterprise nightmare
One of the scariest parts of this story isn't even the marketplace itself. It's the way these tools can spread inside companies.
People install them because they're helpful. They connect them to accounts because it saves time. And sometimes they do all of that without any formal IT approval.
That's the definition of Shadow AI risk: powerful autonomous tooling running on employee endpoints with elevated privileges, operating outside the normal security playbook.
As one researcher put it, the question isn't whether these tools will show up in an organisation, it's whether the organisation will even know they're there.
The long list of security issues being discussed
Alongside the malicious skills issue, researchers have also raised a pile of other concerns around OpenClaw and related services, including:
• insecure defaults that bind gateways to all network interfaces
• plaintext credential storage and weak uninstall cleanup
• remote-code-execution style risks in UI components
• indirect prompt injections hidden in documents or web pages
• marketplace cloning tactics where malicious skills reappear under slightly different names
• reports of exposed databases and leaked API keys in adjacent platforms
Not every claim applies to every user, and some issues have been patched. But collectively, they paint a picture: the ecosystem is moving faster than its security maturity.
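To make the "insecure defaults" item concrete: a gateway that binds to 0.0.0.0 listens on every network interface and is reachable from other hosts, while binding to 127.0.0.1 keeps it loopback-only. A minimal sketch using generic socket code (this is not OpenClaw's actual gateway):

```python
import socket

def open_gateway(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Open a listening TCP socket; defaulting to loopback keeps it off the LAN.

    Passing "0.0.0.0" instead binds every interface, which is the kind of
    insecure default researchers flagged. Port 0 lets the OS pick a free port.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    return srv
```

The safe choice costs nothing for local use; exposure to the network should be an explicit opt-in, not the starting position.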
What OpenClaw says it will do next
Beyond VirusTotal scanning, OpenClaw says it plans to publish:
• a public security roadmap
• a formal security reporting process
• details of a security audit for the entire codebase
Those are the right moves, at least on paper. The challenge will be execution and transparency, because trust in a marketplace is earned slowly and lost instantly.
Final thoughts
VirusTotal scanning on ClawHub is a solid, overdue step, especially after reports of malicious skills hiding in plain sight. But the bigger takeaway is this: agent marketplaces are higher-stakes than app stores or browser extensions, because an AI agent often holds credentials to your digital life and can act on your behalf. In that world, one bad skill isn't just one bad download, it's potentially a shortcut into every service your agent can reach.