Feb 14, 2026
OpenClaw's Security Nightmare Proves Why AI Agent Monitoring Is Non-Negotiable
Here's what just happened: 30,000 AI agents got exposed to the internet, and one click could have given attackers full control of your machine
The AI agent revolution just hit its first major security crisis. And if you are running OpenClaw, Claude Code, or any autonomous AI assistant without proper monitoring, you need to read this immediately.
On January 30, 2026, the OpenClaw team released version 2026.1.29. It was not a feature update. It was a critical security patch for CVE-2026-25253, a vulnerability with a CVSS score of 8.8 that allowed one-click remote code execution through nothing more than a malicious link. Click one wrong URL, and an attacker could gain full operator-level access to your system, disable your safety guardrails, and execute arbitrary commands on your host machine.
This is not theoretical. Security researchers at BitSight identified over 30,000 exposed OpenClaw instances on the open internet during a single analysis period from January 27 to February 8, 2026. That is 30,000 potential entry points for attackers, and 30,000 systems where AI agents have been granted broad access to user data, accounts, and infrastructure.
The creator of OpenClaw, Peter Steinberger, has been refreshingly honest about the risks: "Most non-techies should not install this." But here is the uncomfortable truth: even technical users are deploying these tools without the observability infrastructure to detect when things go wrong.
That is where Rectify comes in. But before we get to the solution, let us understand exactly how we got here.
The OpenClaw Explosion: From Hobby Project to 149,000 GitHub Stars
OpenClaw (formerly Clawdbot, formerly Moltbot) represents something genuinely new in the AI landscape. Unlike cloud-based assistants where your data lives on someone else's servers, OpenClaw runs locally on your machine. It integrates with the messaging platforms you already use: WhatsApp, Telegram, Discord, Slack, Microsoft Teams. You DM it like a friend, and it executes tasks on your behalf.
The promise is seductive. Imagine opening Slack on Monday morning to find a message: "I noticed your staging server was running low on disk space, so I cleaned it up and pushed the new build. I also replied to a few emails in your personal inbox and booked dinner for you and your partner at 7PM."
One assistant. Always available. Capable of carrying out real actions across your entire digital life.
The project has captured the imagination of the developer community like nothing else. It crossed 100,000 GitHub stars faster than any major open-source project in recent memory. As of February 2026, it sits at 149,000 stars and climbing.
But explosive growth creates explosive risk.
CVE-2026-25253: Anatomy of a One-Click RCE
The vulnerability discovered by Mav Levin at Depth Security is elegant in its simplicity and terrifying in its impact.
Here is how it worked:
OpenClaw's Control UI trusted the gatewayUrl parameter from the query string without validation. When you loaded the UI, it would auto-connect and send your stored gateway token in the WebSocket connection payload. The server did not validate the WebSocket origin header, meaning it would accept requests from any website.
An attacker could craft a malicious web page that, when visited, would:
Extract your authentication token through client-side JavaScript
Establish a WebSocket connection to your local OpenClaw instance
Use the stolen token to bypass authentication
Disable user confirmation requirements by setting "exec.approvals.set" to "off"
Escape the Docker container by setting "tools.exec.host" to "gateway"
Execute arbitrary commands directly on your host machine
The entire exploit chain completes in milliseconds. One click. Full compromise.
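The root cause was the missing origin check on the WebSocket handshake. A minimal sketch of the defensive fix, in Python, might look like the following; the allowlist contents, port number, and handshake-header shape are illustrative assumptions, not OpenClaw's actual API:

```python
# Hypothetical sketch of the missing server-side check: reject cross-origin
# WebSocket upgrade requests instead of accepting connections from any page.
# ALLOWED_ORIGINS and the port are illustrative, not OpenClaw's real values.

ALLOWED_ORIGINS = {"http://localhost:18789", "http://127.0.0.1:18789"}

def validate_ws_handshake(headers: dict) -> bool:
    """Accept the upgrade only if the Origin header is explicitly trusted.

    Browsers always attach the page's origin to cross-site WebSocket
    connections, so an allowlist check blocks the hijack even when the
    server listens on localhost only.
    """
    origin = headers.get("Origin")
    return origin in ALLOWED_ORIGINS

# A page served from an attacker's domain cannot pass the check:
assert validate_ws_handshake({"Origin": "http://localhost:18789"}) is True
assert validate_ws_handshake({"Origin": "https://attacker.example"}) is False
assert validate_ws_handshake({}) is False  # missing Origin is rejected too
```

Because the browser, not the attacker's script, controls the `Origin` header, this single check defeats cross-site WebSocket hijacking regardless of what the malicious page does.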
The token's privileged scopes, operator.admin and operator.approvals, gave attackers god-mode access. They could modify configuration, invoke privileged actions, and effectively own the system.
Steinberger noted in the security advisory that "the vulnerability is exploitable even on instances configured to listen on localhost only." This is the particularly nasty part. Even users who thought they were safe because they had not exposed their instance to the internet were vulnerable through cross-site WebSocket hijacking.
The 30,000 Exposed Instances Problem
BitSight's research revealed a staggering statistic: over 30,000 OpenClaw instances were exposed to the open internet during their observation window.
Think about what that means. Each of these instances represents a system where someone has:
Deliberately chosen to run an AI agent with broad system access
Likely integrated it with sensitive accounts and services
Possibly exposed it to the internet without proper security controls
This is not a knock on OpenClaw specifically. This is a pattern we see repeatedly with powerful new tools. The default configuration makes it easy to get started. The powerful configuration makes it easy to shoot yourself in the foot.
And OpenClaw is uniquely positioned to cause damage because of its architecture. It is designed to have "omnipotent control over whatever you integrate it with," as BitSight researchers put it. When you grant OpenClaw access to your email, your calendar, your servers, your databases, you are giving an AI agent the keys to your digital kingdom.
That is fine when the AI is doing what you expect. It is catastrophic when an attacker takes control.
The Malicious Skills Problem: 17% of ClawHub Extensions
The CVE-2026-25253 vulnerability was not the only security issue facing OpenClaw users in early 2026.
Bitdefender Labs conducted an analysis of ClawHub, OpenClaw's marketplace of community-built extensions. They found that approximately 17% of the 3,000+ skills exhibited malicious behavior.
Let that sink in. Roughly one in six community extensions had red flags that suggested they could be used to compromise users.
Attackers also exploited the confusion around OpenClaw's multiple rebranding phases. The project started as Clawdbot, became Moltbot, and finally settled on OpenClaw. During these transitions, malicious actors created typosquat domains and cloned repositories to distribute potentially compromised versions of the software.
Steinberger has implemented defensive measures in response. ClawHub now requires GitHub accounts that have been active for at least a week to upload skills. Users can flag malicious skills for review. These are good steps, but they are reactive, not proactive.
Why Traditional Security Tools Fail with AI Agents
Here is the fundamental problem: AI agents break the traditional security model.
Most security tools are designed around human actors. They monitor for human-like behavior patterns, detect when humans access sensitive resources, and alert when human users do suspicious things.
AI agents do not behave like humans. They:
Operate at machine speed, completing in seconds what would take humans minutes
Access resources in patterns that look automated (because they are)
Make decisions based on probabilistic models rather than explicit logic
Can be compromised through prompt injection, tool poisoning, or supply chain attacks
Traditional monitoring tools see an AI agent executing a series of commands and either ignore it (because it looks like automation) or generate noise (because the patterns do not match human behavior).
What you need is monitoring designed specifically for AI agents. You need to see what tools your agent is invoking, what data it is accessing, and what actions it is taking on your behalf. You need session replay for AI operations. You need to know when your agent starts behaving outside its normal parameters.
The Rectify Approach: Monitoring for the AI Agent Era
Rectify was built for exactly this moment.
We saw the wave of AI agents coming. We understood that giving AI systems access to production infrastructure, customer data, and business-critical operations without proper observability was a recipe for disaster.
Here is how Rectify solves the AI agent monitoring problem:
Session Replay for AI Operations
Just as session replay tools let you watch exactly what users did on your web application, Rectify lets you replay exactly what your AI agent did. Every tool invocation. Every API call. Every file access. Every decision point.
When something goes wrong, you do not have to guess. You can see the full context.
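In spirit, session replay for an agent is just an append-only, timestamped event log that can be walked back in order. The sketch below shows the idea with an illustrative event schema; the field names and tools are assumptions for the example, not Rectify's actual format:

```python
import json
import time

# Toy sketch of recording agent actions as replayable, structured events.
# The schema (ts/tool/args/result) is illustrative, not Rectify's format.

def record_event(log: list, tool: str, args: dict, result: str) -> None:
    """Append one tool invocation as a timestamped event."""
    log.append({
        "ts": time.time(),
        "tool": tool,
        "args": args,
        "result": result,
    })

session = []
record_event(session, "shell.exec", {"cmd": "df -h /srv"}, "ok")
record_event(session, "email.send", {"to": "ops@example.com"}, "ok")

# "Replay": walk the log in order to reconstruct what the agent did.
for event in session:
    print(f'{event["tool"]} {json.dumps(event["args"])} -> {event["result"]}')
```

Even this trivial version answers the key incident-response questions: which tool ran, with which arguments, in what order.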
Real-Time Anomaly Detection
Rectify establishes baselines for your AI agents' normal behavior. When an agent starts accessing unusual resources, invoking unexpected tools, or operating outside its typical parameters, you get alerted immediately.
If your OpenClaw instance suddenly starts executing shell commands at 3 AM from an unfamiliar IP address, you will know within seconds, not days.
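Baseline-driven alerting can be surprisingly simple. As a toy sketch (the metric, data, and threshold are illustrative assumptions, far simpler than a production detector), flag any hour whose shell-command count sits several standard deviations above the agent's historical rate:

```python
from statistics import mean, stdev

# Toy baseline anomaly check: flag an hour whose shell-command count
# deviates sharply from the agent's historical rate. The data and the
# z-score threshold are illustrative, not a production-grade detector.

def is_anomalous(history: list, current: int, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it is more than z_threshold std devs above the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > z_threshold

# Typical overnight activity: a handful of commands per hour.
baseline = [2, 3, 1, 2, 4, 2, 3, 2]
assert not is_anomalous(baseline, 4)   # within the normal range
assert is_anomalous(baseline, 40)      # a sudden 3 AM burst stands out
```

Real systems would profile many signals at once (tools invoked, resources touched, source addresses), but the principle is the same: learn normal, alert on deviation.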
Audit Logging for Compliance
For organizations in regulated industries, AI agent operations need to be auditable. Rectify maintains complete logs of every action your agents take, with tamper-proof timestamps and cryptographic verification.
When your compliance team asks, "Exactly what did the AI do with customer data?", you will have the answer.
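One common way to make a log tamper-evident is a hash chain: each entry commits to the hash of the entry before it, so any retroactive edit breaks verification from that point on. A minimal sketch (illustrative only, not Rectify's actual storage format):

```python
import hashlib
import json

# Sketch of a tamper-evident audit log using a hash chain. Each entry
# commits to the previous entry's hash, so editing history invalidates
# every later entry. Illustrative only, not Rectify's storage format.

def append_entry(chain: list, action: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    chain.append({"action": action, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"action": entry["action"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"tool": "db.query", "rows": 12})
append_entry(log, {"tool": "email.send", "to": "customer@example.com"})
assert verify_chain(log)

log[0]["action"]["rows"] = 0   # tamper with history...
assert not verify_chain(log)   # ...and verification fails
```

Production systems add trusted timestamps and external anchoring, but the chained-hash core is what makes after-the-fact edits detectable.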
Integration with Your Existing Stack
Rectify does not require you to rip and replace your current monitoring infrastructure. We integrate with the tools you already use: Slack, PagerDuty, Datadog, Splunk, and more.
Get AI agent observability without disrupting your existing workflows.
What You Should Do If You Are Running OpenClaw
If you are currently running OpenClaw (or any autonomous AI agent), here is your immediate action plan:
1. Update to Version 2026.1.29 or Later
This is non-negotiable. The CVE-2026-25253 vulnerability is severe and easily exploitable. If you have not updated, do it now.
2. Audit Your Network Exposure
Check whether your OpenClaw instance is accessible from the internet. If it is, ask yourself why. Default configurations should keep these services local-only. If you need remote access, use a VPN or secure tunnel, not direct exposure.
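A quick local sanity check is to test whether the service answers on a non-loopback interface. The sketch below assumes a hypothetical gateway port of 18789; substitute whatever port your instance actually uses (and note this only checks reachability from your own machine, not what a firewall exposes):

```python
import socket

# Quick sketch: check whether a local service answers on a non-loopback
# interface, and is therefore potentially reachable from the network.
# Port 18789 is a placeholder; use your instance's actual port.

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

port = 18789  # hypothetical gateway port
lan_ip = socket.gethostbyname(socket.gethostname())

print("loopback reachable:", reachable("127.0.0.1", port))
if not lan_ip.startswith("127.") and reachable(lan_ip, port):
    print(f"WARNING: service also answers on {lan_ip} and may be exposed")
```

If the service answers on the LAN address, verify your firewall rules and bind configuration before assuming you are safe.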
3. Review Your Installed Skills
Go through your ClawHub extensions. Remove anything you do not actively use. Research the authors of the skills you keep. Remember: 17% of skills showed malicious indicators in Bitdefender's analysis.
4. Implement Monitoring
You need visibility into what your AI agent is doing. Without monitoring, you are flying blind. Set up logging, alerting, and ideally session replay for your agent's operations.
5. Enable Approval Requirements
Do not disable user confirmation requirements. Yes, it makes the AI feel more magical when it just "does things" without asking. But that magic becomes a nightmare when an attacker takes control.
The Broader Lesson: AI Agents Require AI-Grade Monitoring
The OpenClaw security crisis is a preview of what is coming.
AI agents are moving from experimental toys to production infrastructure. Salesforce's 2026 State of Sales report found that 54% of sales teams are now using AI agents for prospecting and automation. Anthropic's Cowork platform is bringing customizable agentic workflows to marketing, legal, and support teams. OpenAI's Frontier service is helping enterprises build and manage AI agents integrated with their existing systems.
This is not a fad. This is the new operating model for knowledge work.
But every time we give AI systems more power and access, we increase the blast radius of security failures. An AI agent with access to your email, your calendar, your servers, and your databases is not just a productivity tool. It is a potential single point of compromise for your entire digital life.
The teams that win in the AI era will not be the ones that deploy the most agents. They will be the ones that deploy agents with the right observability, security controls, and monitoring infrastructure.
Rectify Is Building the Operating System for AI Agent Observability
We are not just building a monitoring tool. We are building the infrastructure that makes AI agents safe to deploy at scale.
Our roadmap includes:
Agent Behavior Profiling: Machine learning models that understand the difference between normal AI agent behavior and compromised behavior
Cross-Agent Correlation: Detect when multiple agents in your environment are being coordinated by an attacker
Automated Response: Trigger containment actions when anomalies are detected, not just alerts
Compliance Frameworks: Pre-built monitoring templates for SOC 2, GDPR, HIPAA, and other regulatory requirements
The future of work is AI agents working alongside humans. The future of security is monitoring those agents with the same rigor we apply to human actors.
Rectify is that future.
Final Thoughts: The Cost of Invisibility
The OpenClaw vulnerability was patched quickly. The creator was responsive and transparent. The community rallied to improve security practices.
But the underlying issue remains: most AI agent deployments have zero observability. When things go wrong, teams have no way to understand what happened, when it started, or how to prevent it next time.
That is unacceptable for production systems.
If you are building with AI agents, you need to be monitoring with Rectify. Not tomorrow. Not after your first incident. Today.
Because the next CVE might not be discovered by a friendly security researcher. It might be exploited by an attacker who found your exposed instance on Shodan and decided to see what they could access.
Do not let your AI agent become someone else's backdoor.
Ready to secure your AI agent operations? Get started with Rectify today and get full observability into what your agents are actually doing. Your future self will thank you when the next security crisis hits.