OpenClaw vulnerabilities exposed by AI-powered code scanner
AI-powered security testing uncovers six high- and critical-severity vulnerabilities in OpenClaw, including SSRF and authentication bypass flaws.
Researchers at Endor Labs identified six high- to critical-severity vulnerabilities in the open-source AI agent framework OpenClaw, using an AI-powered static application security testing (SAST) engine to trace untrusted data flows. The flaws included server-side request forgery (SSRF), authentication bypass, and path traversal.
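A path traversal flaw of the kind described lets a crafted relative path like `../../etc/passwd` escape an intended directory. The usual defence is to resolve the combined path and verify it still sits under the base directory. The sketch below is a generic illustration of that check, not OpenClaw's actual code; the function name and paths are hypothetical.

```python
from pathlib import Path

def safe_join(base: str, user_path: str) -> Path:
    """Resolve a user-supplied relative path under `base` and refuse
    attempts to escape it (e.g. '../../etc/passwd').
    Hypothetical helper for illustration only."""
    base_p = Path(base).resolve()
    candidate = (base_p / user_path).resolve()
    # Path.is_relative_to (Python 3.9+) checks containment after
    # symlinks and '..' segments have been resolved.
    if not candidate.is_relative_to(base_p):
        raise ValueError("path traversal attempt")
    return candidate
```

The key point is that the check runs on the *resolved* path, so `..` segments cannot smuggle the result outside the base directory.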
The bugs affected multiple components of the agentic system, which integrates large language models with external tools and web services. Several SSRF issues were found in the gateway and authentication modules, potentially exposing internal services or cloud metadata depending on the deployment context.
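An SSRF flaw of this sort typically arises when an attacker-controlled URL is fetched server-side without checking where it resolves, reaching internal services or the cloud metadata endpoint (commonly `169.254.169.254`). A minimal sketch of the standard mitigation is below; it is a generic illustration, not the patch the OpenClaw maintainers shipped, and the function name is an assumption.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local, or
    reserved addresses before the server fetches them.
    Hypothetical helper for illustration only."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname; an attacker may hide an internal IP
        # behind a DNS name, so we check every resolved address.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a robust deployment would also pin the resolved address when making the actual request, since DNS rebinding can change the answer between the check and the fetch.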
Access control failures were also found in OpenClaw. A webhook handler lacked proper verification, enabling forged requests, while another flaw allowed unauthenticated access to protected functionality. Researchers confirmed exploitability with proof-of-concept demonstrations.
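Webhook handlers normally defend against forged requests by checking an HMAC signature over the request body with a shared secret, compared in constant time. The sketch below shows that pattern in general terms; the header format and function name are assumptions, not OpenClaw's actual scheme.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Constant-time verification of an HMAC-SHA256 webhook signature.
    Illustrative sketch; the 'sha256=' prefix mirrors a common
    convention and is an assumption here."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels that a plain ==
    # comparison would leak.
    return hmac.compare_digest(expected, signature_header)
```

Skipping this check entirely, as the flawed handler reportedly did, lets anyone who knows the endpoint submit forged events.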
The team said that traditional static analysis tools struggle with modern AI software stacks, where inputs undergo multiple transformations before reaching sensitive operations. Their AI-based SAST engine preserved context across layers, tracing untrusted data from entry points to critical functions.
OpenClaw maintainers were notified through responsible disclosure and have since issued patches and advisories. Researchers argue that as AI agent frameworks expand into enterprise environments, security analysis must adapt to address both conventional vulnerabilities and AI-specific attack surfaces.
