ClawSafety: "Safe" LLMs, Unsafe Agents
Bowen Wei¹, Yunbei Zhang², Jinhao Pan¹, Kai Mei³, Xiao Wang⁴, Jihun Hamm², Ziwei Zhu¹, Yingqiang Ge³
Published on arXiv (2604.01438)
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
Sonnet 4.6 achieves 40% ASR with 0% success on credential/destructive actions, while other models reach 55-75% ASR; scaffold choice shifts ASR by 8.6pp
ClawSafety
Novel technique introduced
Personal AI agents like OpenClaw run with elevated privileges on users' local machines, where a single successful prompt injection can leak credentials, redirect financial transactions, or destroy files. This threat goes well beyond conventional text-level jailbreaks, yet existing safety evaluations fall short: most test models in isolated chat settings, rely on synthetic environments, and do not account for how the agent framework itself shapes safety outcomes. We introduce CLAWSAFETY, a benchmark of 120 adversarial test scenarios organized along three dimensions (harm domain, attack vector, and harmful action type) and grounded in realistic, high-privilege professional workspaces spanning software engineering, finance, healthcare, law, and DevOps. Each test case embeds adversarial content in one of three channels the agent encounters during normal work: workspace skill files, emails from trusted senders, and web pages. We evaluate five frontier LLMs as agent backbones, running 2,520 sandboxed trials across all configurations. Attack success rates (ASR) range from 40% to 75% across models and vary sharply by injection vector, with skill instructions (highest trust) consistently more dangerous than email or web content. Action-trace analysis reveals that the strongest model maintains hard boundaries against credential forwarding and destructive actions, while weaker models permit both. Cross-scaffold experiments on three agent frameworks further demonstrate that safety is not determined by the backbone model alone but depends on the full deployment stack, calling for safety evaluation that treats model and framework as joint variables.
Key Contributions
- 120-scenario benchmark organized by harm domain, attack vector, and harmful action type, grounded in realistic high-privilege professional workspaces
- Evaluation of 5 frontier LLMs across 2,520 sandboxed trials, revealing ASRs of 40%-75% with skill-file injection the most dangerous vector
- Cross-scaffold experiments showing safety depends on the model-framework pair, with ASR shifting by 8.6pp across frameworks
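The headline metric, ASR broken down by injection vector, can be made concrete with a small sketch. The trial records, field names, and numbers below are hypothetical illustrations, not data or code from the paper; each sandboxed trial is assumed to end in a binary succeeded/failed outcome:

```python
from collections import defaultdict

def attack_success_rate(trials):
    """ASR = fraction of trials in which the injected instruction was executed."""
    if not trials:
        return 0.0
    return sum(t["attack_succeeded"] for t in trials) / len(trials)

def asr_by_vector(trials):
    """Group trials by injection channel (skill file, email, web page)
    and compute a per-vector ASR, as in the paper's vector breakdown."""
    groups = defaultdict(list)
    for t in trials:
        groups[t["vector"]].append(t)
    return {vector: attack_success_rate(ts) for vector, ts in groups.items()}

# Hypothetical trial outcomes illustrating the reported trust gradient:
# skill-file injections succeed more often than email or web injections.
trials = [
    {"vector": "skill", "attack_succeeded": True},
    {"vector": "skill", "attack_succeeded": True},
    {"vector": "email", "attack_succeeded": True},
    {"vector": "email", "attack_succeeded": False},
    {"vector": "web",   "attack_succeeded": False},
    {"vector": "web",   "attack_succeeded": False},
]

print(asr_by_vector(trials))
```

On this toy data the per-vector ASR ordering is skill > email > web, mirroring the finding that higher-trust channels are the more dangerous injection surface.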