A Trajectory-Based Safety Audit of Clawdbot (OpenClaw)

Tianyu Chen 1, Dongrui Liu 2, Xia Hu 2, Jingyi Yu 1, Wenjie Wang 1

0 citations · 29 references · arXiv (Cornell University)

Published on arXiv · 2602.14364

  • Prompt Injection — OWASP LLM Top 10 — LLM01
  • Excessive Agency — OWASP LLM Top 10 — LLM08

Key Finding

Across 34 canonical cases, Clawdbot shows a non-uniform safety profile: it is generally reliable on structured tasks but prone to failure under underspecified intent and jailbreak prompts, where tool fan-out escalates small errors into irreversible consequences.


Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which raises heightened safety and security concerns under ambiguity and adversarial steering. We present a trajectory-centric evaluation of Clawdbot across six risk dimensions. Our test suite samples and lightly adapts scenarios from prior agent-safety benchmarks (including ATBench and LPS-Bench) and supplements them with hand-designed cases tailored to Clawdbot's tool surface. We log complete interaction trajectories (messages, actions, tool-call arguments/outputs) and assess safety using both an automated trajectory judge (AgentDoG-Qwen3-4B) and human review. Across 34 canonical cases, we find a non-uniform safety profile: performance is generally consistent on reliability-focused tasks, while most failures arise under underspecified intent, open-ended goals, or benign-seeming jailbreak prompts, where minor misinterpretations can escalate into higher-impact tool actions. We supplement the aggregate results with representative case studies, summarize their commonalities, and analyze the security vulnerabilities and typical failure modes Clawdbot is prone to trigger in practice.


Key Contributions

  • First systematic trajectory-centric safety evaluation of Clawdbot covering six risk dimensions including prompt injection, jailbreaking, and excessive tool-action risks
  • Curated test suite combining adapted scenarios from ATBench and LPS-Bench with hand-designed cases targeting Clawdbot's specific tool surface
  • Failure mode analysis showing that most safety failures arise under underspecified intent or jailbreak prompts where tool fan-out amplifies minor misinterpretations into high-impact irreversible actions

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time · digital
Datasets
ATBench · LPS-Bench
Applications
personal AI agents · tool-using agents · self-hosted AI assistants