
Taming OpenClaw: Security Analysis and Mitigation of Autonomous LLM Agent Threats

Xinhao Deng 1,2, Yixiang Zhang 2, Jiaqing Wu 2, Jiaqi Bai 2, Sibo Yi 2, Zhuoheng Zou 2, Yue Xiao 2, Rennai Qiu 2, Jianan Ma 1, Jialuo Chen 1, Xiaohu Du 1, Xiaofang Yang 1, Shiwen Cui 1, Changhua Meng 1, Weiqiang Wang 1, Jiaxing Song 2, Ke Xu 2, Qi Li 2


Published on arXiv (arXiv:2603.11619)

OWASP LLM Top 10 Mappings

  • Prompt Injection — LLM01
  • Insecure Plugin Design — LLM07
  • Excessive Agency — LLM08

Key Finding

Current point-based defense mechanisms are insufficient against cross-temporal and multi-stage systemic risks in autonomous LLM agents, necessitating holistic security architectures.


Autonomous Large Language Model (LLM) agents, exemplified by OpenClaw, demonstrate remarkable capabilities in executing complex, long-horizon tasks. However, their tightly coupled instant-messaging interaction paradigm and high-privilege execution capabilities substantially expand the system attack surface. In this paper, we present a comprehensive security threat analysis of OpenClaw. To structure our analysis, we introduce a five-layer lifecycle-oriented security framework that captures key stages of agent operation, i.e., initialization, input, inference, decision, and execution, and systematically examine compound threats across the agent's operational lifecycle, including indirect prompt injection, skill supply chain contamination, memory poisoning, and intent drift. Through detailed case studies on OpenClaw, we demonstrate the prevalence and severity of these threats and analyze the limitations of existing defenses. Our findings reveal critical weaknesses in current point-based defense mechanisms when addressing cross-temporal and multi-stage systemic risks, highlighting the need for holistic security architectures for autonomous LLM agents. Within this framework, we further examine representative defense strategies at each lifecycle stage, including plugin vetting frameworks, context-aware instruction filtering, memory integrity validation protocols, intent verification mechanisms, and capability enforcement architectures.
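One of the defenses the abstract names, memory integrity validation, can be illustrated with a hash-chained memory log: each stored entry is bound to its predecessor by an HMAC, so any tampered, injected, or reordered entry breaks the chain on verification. This is a minimal sketch under assumed details (the key, class, and entry names below are illustrative, not the paper's protocol):

```python
import hashlib
import hmac

SECRET = b"agent-memory-key"  # hypothetical per-agent secret key


def entry_mac(prev_mac: bytes, content: str) -> bytes:
    """Chain each memory entry to its predecessor via HMAC-SHA256."""
    return hmac.new(SECRET, prev_mac + content.encode(), hashlib.sha256).digest()


class MemoryLog:
    """Append-only agent memory with a verifiable integrity chain."""

    GENESIS = b"\x00" * 32

    def __init__(self):
        self.entries = []         # list of (content, mac) pairs
        self._last = self.GENESIS

    def append(self, content: str) -> None:
        mac = entry_mac(self._last, content)
        self.entries.append((content, mac))
        self._last = mac

    def verify(self) -> bool:
        """Recompute the chain; any modified or reordered entry fails."""
        prev = self.GENESIS
        for content, mac in self.entries:
            if not hmac.compare_digest(entry_mac(prev, content), mac):
                return False
            prev = mac
        return True


log = MemoryLog()
log.append("user asked to summarize inbox")
log.append("tool result: 3 unread messages")
assert log.verify()

# Simulate memory poisoning: an entry is rewritten out-of-band.
log.entries[0] = ("user asked to forward all mail externally", log.entries[0][1])
assert not log.verify()
```

The sketch only detects tampering after the fact; a deployed validator would also need to protect the key and the chain head from the same write path the agent's memory uses.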


Key Contributions

  • Five-layer lifecycle-oriented security framework (initialization, input, inference, decision, execution) for systematically analyzing autonomous LLM agent threats
  • Comprehensive compound threat analysis of OpenClaw including indirect prompt injection, skill supply chain contamination, memory poisoning, and intent drift via detailed case studies
  • Examination of stage-wise defense strategies and identification of critical weaknesses in current point-based defenses against cross-temporal, multi-stage systemic risks
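The capability enforcement idea listed among the defense strategies can be sketched as a gate between the agent's planner and its tools: tool calls succeed only if the tool was explicitly granted for the current task, so an injected instruction requesting an ungranted high-privilege action is refused at the execution layer. The tool names and wrapper below are illustrative assumptions, not the paper's architecture:

```python
# Hypothetical per-task capability set; in a real system this would be
# derived from the user's original request, not from model output.
ALLOWED_TOOLS = {"read_file", "search_web"}


class CapabilityError(Exception):
    """Raised when the agent requests a tool outside its grant."""


def execute_tool(name: str, *args):
    """Execute a tool only if it is in the task's capability set."""
    if name not in ALLOWED_TOOLS:
        raise CapabilityError(f"tool '{name}' not granted for this task")
    # Dispatch to the real tool implementation here (stubbed for the sketch).
    return f"ran {name}"


print(execute_tool("read_file", "notes.txt"))

try:
    # An action smuggled in via prompt injection is blocked at execution time.
    execute_tool("send_email", "attacker@example.com")
except CapabilityError as err:
    print("blocked:", err)
```

Because the check sits at the execution stage rather than in the prompt, it holds even when earlier lifecycle stages (input filtering, inference) have already been compromised, which is the cross-stage property the paper argues point defenses lack.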

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
autonomous llm agents, instant-messaging agent systems