From Assistant to Double Agent: Formalizing and Benchmarking Attacks on OpenClaw for Personalized Local AI Agent
Yuhang Wang 1, Feiming Xu 1, Zheng Lin 1, Guangyu He 1, Yuzhe Huang 1, Haichang Gao 1, Zhenxing Niu 1, Shiguo Lian 2, Zhaoxiang Liu 2
Published on arXiv
2602.08412
Prompt Injection
OWASP LLM Top 10 — LLM01
Insecure Plugin Design
OWASP LLM Top 10 — LLM07
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
OpenClaw exhibits critical vulnerabilities at all execution stages — user prompt processing, tool usage, and memory retrieval — enabling information leakage, unsafe tool invocations, and persistent behavioral manipulation in personalized deployments
PASB (Personalized Agent Security Bench)
Novel technique introduced
Although large language model (LLM)-based agents, exemplified by OpenClaw, are increasingly evolving from task-oriented systems into personalized AI assistants for solving complex real-world tasks, their practical deployment also introduces severe security risks. However, existing agent security research and evaluation frameworks primarily focus on synthetic or task-centric settings, and thus fail to accurately capture the attack surface and risk propagation mechanisms of personalized agents in real-world deployments. To address this gap, we propose Personalized Agent Security Bench (PASB), an end-to-end security evaluation framework tailored for real-world personalized agents. Building upon existing agent attack paradigms, PASB incorporates personalized usage scenarios, realistic toolchains, and long-horizon interactions, enabling black-box, end-to-end security evaluation on real systems. Using OpenClaw as a representative case study, we systematically evaluate its security across multiple personalized scenarios, tool capabilities, and attack types. Our results indicate that OpenClaw exhibits critical vulnerabilities at different execution stages, including user prompt processing, tool usage, and memory retrieval, highlighting substantial security risks in personalized agent deployments. The code for the proposed PASB framework is available at https://github.com/AstorYH/PASB.
Key Contributions
- PASB: an end-to-end black-box security evaluation framework modeling personalized usage scenarios, private assets (honey tokens, confidential files), and realistic high-privilege toolchains for real deployed LLM agents
- Systematic multi-stage vulnerability analysis of OpenClaw covering prompt processing, tool usage, and memory retrieval execution stages with automated adjudication of harm criteria
- Characterization of cross-stage attack propagation in long-horizon personalized agent interactions, moving evaluation beyond output-level analysis to action-chain and system-level risk assessment