defense 2026

Don't Let the Claw Grip Your Hand: A Security Analysis and Defense Framework for OpenClaw

Zhengyang Shan, Jiayun Xin, Yue Zhang, Minghui Xu

Published on arXiv (2603.10387)

Prompt Injection (OWASP LLM Top 10: LLM01)

Excessive Agency (OWASP LLM Top 10: LLM08)

Key Finding

HITL defense layer intercepts up to 8 severe attacks that fully bypassed native defenses, improving overall defense rate from 17% to a range of 19–92% across six LLM backends.

Human-in-the-Loop (HITL) Defense Layer

Novel technique introduced


Code agents powered by large language models can execute shell commands on behalf of users, introducing severe security vulnerabilities. This paper presents a two-phase security analysis of the OpenClaw platform. As a locally run, open-source AI agent framework, OpenClaw can be integrated with various commercial large language models; because its native architecture lacks built-in security constraints, it serves as an ideal subject for evaluating baseline agent vulnerabilities. First, we systematically evaluate OpenClaw's native resilience against malicious instructions. Testing 47 adversarial scenarios across six major attack categories derived from the MITRE ATLAS and ATT&CK frameworks, we demonstrate that OpenClaw exhibits significant inherent security issues: it relies primarily on the security capabilities of the backend LLM and is highly susceptible to sandbox escape attacks, with an average defense rate of only 17%. To mitigate these critical security gaps, we propose and implement a novel Human-in-the-Loop (HITL) defense layer, and we use a dual-mode testing framework to evaluate the system with and without this intervention. Our findings show that the HITL layer significantly hardens the system, successfully intercepting up to 8 severe attacks that completely bypassed OpenClaw's native defenses. Combining native capabilities with the HITL approach improves the overall defense rate to between 19% and 92%, depending on the backend. Our study not only exposes the intrinsic limitations of current code agents but also demonstrates the effectiveness of human-agent collaborative defense strategies.


Key Contributions

  • Systematic attack surface analysis of LLM code agents across six threat categories (encoding evasion, sandbox escape, indirect prompt injection, supply chain, resource exhaustion, privilege escalation) using 47 adversarial scenarios derived from MITRE ATLAS/ATT&CK
  • Novel Human-in-the-Loop (HITL) defense layer combining an allowlist, pattern-based risk classification (35 rules), semantic intent judge, and mandatory human approval for high-risk tool calls
  • Dual-mode evaluation framework showing HITL raises overall defense rates from a 17% baseline to 19–92% across six LLM backends, with sandbox escape identified as the most critical unresolved vulnerability
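The HITL defense layer described above can be sketched as a gating pipeline in front of the agent's shell-execution tool: an allowlist fast path, pattern-based risk rules, a semantic intent check, and mandatory human approval for anything flagged. The sketch below is illustrative only; the names (`ALLOWLIST`, `RISK_PATTERNS`, `semantic_judge`) and the handful of rules standing in for the paper's 35 are assumptions, not the authors' implementation.

```python
# Illustrative sketch of an HITL gating pipeline for agent shell commands.
# The rules and helper names here are hypothetical stand-ins for the
# paper's allowlist, 35 pattern rules, semantic judge, and approval step.
import re

ALLOWLIST = {"ls", "pwd", "cat", "echo"}  # pre-approved, low-risk commands

# A small subset standing in for the paper's 35 pattern-based risk rules.
RISK_PATTERNS = [
    (re.compile(r"rm\s+-rf"), "destructive file deletion"),
    (re.compile(r"curl .*\|\s*(sh|bash)"), "piping remote content to a shell"),
    (re.compile(r"/etc/passwd|/etc/shadow"), "credential file access"),
    (re.compile(r"\bsudo\b"), "privilege escalation"),
]

def semantic_judge(command: str) -> bool:
    """Placeholder for an LLM-based intent check; returns True if risky.
    Here it only flags sandbox-relative path traversal."""
    return "../" in command

def human_approval(command: str, reason: str) -> bool:
    """Mandatory human confirmation for high-risk tool calls."""
    answer = input(f"[HITL] '{command}' flagged ({reason}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def gate_tool_call(command: str) -> bool:
    """Return True if the shell command may execute."""
    parts = command.split()
    base = parts[0] if parts else ""
    if base in ALLOWLIST and not semantic_judge(command):
        return True  # fast path: allowlisted and semantically benign
    for pattern, reason in RISK_PATTERNS:
        if pattern.search(command):
            return human_approval(command, reason)  # human decides
    if semantic_judge(command):
        return human_approval(command, "suspicious intent")
    return True  # no rule fired; treat as low risk
```

The layered order matters: cheap checks (allowlist, regex rules) run first, and the human is consulted only for calls that something upstream flagged, which keeps approval fatigue low while still catching attacks that bypass the backend LLM's own refusals.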

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm
Threat Tags: black_box, inference_time
Datasets: 47 adversarial scenarios (custom, MITRE ATLAS/ATT&CK-derived)
Applications: llm code agents, shell execution agents, ai-powered developer tools