OpenClaw said it is integrating Google-owned VirusTotal to scan skills published to ClawHub, its marketplace for extending agent capabilities.
OpenClaw is an open-source AI agent that can take actions across apps and systems using large language models. Users extend what it can do through installable “skills,” many of which are distributed via ClawHub, its public skills marketplace. The project was previously named Clawdbot, then Moltbot, before settling on OpenClaw.
In its announcement, OpenClaw founder Peter Steinberger and co-authors Jamieson O’Reilly and Bernardo Quintero said the change is now live and is meant to add a security layer for the project’s fast-growing ecosystem.
The company described a workflow that fingerprints every uploaded skill bundle with a SHA-256 hash, checks that hash against VirusTotal’s corpus and uploads the bundle for deeper analysis when there is no match or no prior AI analysis.
VirusTotal’s Code Insight then produces a security-focused assessment of what the skill’s code does, not just what it claims to do.
According to the company, ClawHub will automatically approve skills receiving a “benign” Code Insight verdict, add warnings for “suspicious” results and block downloads for skills deemed malicious. The company also said it will rescan all active skills daily to catch cases where a previously clean skill becomes malicious later.
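The announced pipeline amounts to a hash-first triage with a verdict-driven policy. The sketch below illustrates that flow under stated assumptions: `vt_lookup` and `analyze` are hypothetical stand-ins for the real VirusTotal hash lookup and Code Insight submission, and the verdict strings mirror the announcement’s wording rather than OpenClaw’s actual implementation.

```python
import hashlib

# Verdict-to-action policy as described in the announcement:
# benign -> auto-approve, suspicious -> warn, malicious -> block downloads.
POLICY = {
    "benign": "approve",
    "suspicious": "warn",
    "malicious": "block",
}

def fingerprint(bundle_bytes: bytes) -> str:
    """SHA-256 fingerprint of an uploaded skill bundle."""
    return hashlib.sha256(bundle_bytes).hexdigest()

def triage(bundle_bytes: bytes, vt_lookup, analyze) -> str:
    """Check the bundle's hash against a corpus; if there is no match
    (or no prior AI analysis), submit the full bundle for deeper review.

    `vt_lookup(digest)` returns a known verdict string or None;
    `analyze(bundle)` returns a fresh verdict. Both are hypothetical
    stand-ins for the VirusTotal and Code Insight calls."""
    digest = fingerprint(bundle_bytes)
    verdict = vt_lookup(digest)
    if verdict is None:  # unknown hash: upload for deeper analysis
        verdict = analyze(bundle_bytes)
    # Unrecognized verdicts conservatively fall back to a warning.
    return POLICY.get(verdict, "warn")

# Example: an unknown bundle whose deeper analysis comes back "benign"
# is auto-approved; a bundle with a known-malicious hash is blocked.
assert triage(b"new skill", lambda h: None, lambda b: "benign") == "approve"
assert triage(b"bad skill", lambda h: "malicious", lambda b: "benign") == "block"
```

The daily rescan the company describes would simply re-run this triage over every active skill’s stored hash, so that a verdict change in the corpus flips a previously approved skill to warned or blocked.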
Why OpenClaw is tightening skill supply chain checks
The move follows a wave of security research arguing that agent “skills” and integrations are becoming a new distribution channel for malware and credential theft. VirusTotal said it has detected “hundreds” of actively malicious OpenClaw skills in recent days, describing the ecosystem as an emerging supply-chain attack surface.
Separately, Snyk said it scanned 3,984 skills on ClawHub and found 283 skills it categorized as having “critical” security flaws that exposed sensitive credentials. In a second write-up, Snyk said a larger set of skills contained security issues and that its human-in-the-loop review confirmed malicious payloads in some agent-skill instructions.
OpenClaw also emphasized that the VirusTotal approach rests on behavioral analysis rather than signature matching alone, saying it uploads full skill bundles for Code Insight review.
“Not a silver bullet”: what the maintainers say scanning will miss
The maintainers cautioned that VirusTotal scanning will not eliminate risk, including scenarios where malicious skills embed cleverly concealed prompt-injection payloads. The company said it plans to publish a comprehensive threat model, a public security roadmap, a formal security reporting process and details about a security audit of its codebase.
That limitation lines up with recent demonstrations showing how indirect prompt injection can steer agent behavior using untrusted content the agent is asked to read. Zenity Labs described scenarios where indirect prompt injection can establish persistence and backdoor-like control without exploiting a traditional software flaw.
HiddenLayer published an example in which injected instructions embedded in a web page lead OpenClaw to write attacker-controlled instructions into a HEARTBEAT.md file and wait for commands.
The bigger enterprise signal: agents concentrate three risk ingredients
Security researchers have pointed to a broader “lethal trifecta” risk pattern for AI agents, a concept Simon Willison has used to describe systems that combine access to private data, exposure to untrusted content and the ability to communicate externally. Willison has separately argued that OpenClaw setups often place that combination “in play” when users connect agents to private accounts and data.
Cisco, in a post focused on why enterprises should care about personal agent tools, argued that agents with system access can become covert data-leak channels that bypass traditional controls such as DLP and endpoint monitoring, particularly when malicious skills execute successfully.
The concerns are not limited to marketplaces. They also extend to Moltbook, a separate social network where OpenClaw-built agents can post and interact.
Wiz said a misconfigured Supabase database at Moltbook exposed 1.5 million API keys and other sensitive data, enabling full agent takeover scenarios tied to authentication tokens and messaging.
Regulators are also starting to weigh in. China’s MIIT-run National Vulnerability Database (NVDB) issued an alert warning that OpenClaw deployments can carry elevated security risks under default or improper configurations and urged users to audit public exposure and strengthen identity authentication and access controls.