benchmark arXiv Feb 16, 2026 · 7w ago
Tianyu Chen, Dongrui Liu, Xia Hu et al. · ShanghaiTech University · Shanghai Artificial Intelligence Laboratory
Trajectory-based safety audit of Clawdbot AI agent revealing jailbreak and excessive tool-action failures across 34 test cases
Prompt Injection Excessive Agency nlp
Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which heightens safety and security concerns under ambiguity and adversarial steering. We present a trajectory-centric evaluation of Clawdbot across six risk dimensions. Our test suite samples and lightly adapts scenarios from prior agent-safety benchmarks (including ATBench and LPS-Bench) and supplements them with hand-designed cases tailored to Clawdbot's tool surface. We log complete interaction trajectories (messages, actions, tool-call arguments/outputs) and assess safety using both an automated trajectory judge (AgentDoG-Qwen3-4B) and human review. Across 34 canonical cases, we find a non-uniform safety profile: performance is generally consistent on reliability-focused tasks, while most failures arise under underspecified intent, open-ended goals, or benign-seeming jailbreak prompts, where minor misinterpretations can escalate into higher-impact tool actions. We supplement the overall results with representative case studies, summarize their commonalities, and analyze the security vulnerabilities and typical failure modes that Clawdbot is prone to trigger in practice.
llm
defense arXiv Mar 18, 2026 · 19d ago
Zhihua Wei, Qiang Li, Jian Ruan et al. · Tongji University · Shanghai Artificial Intelligence Laboratory
Proposes JRS-Rem defense that prevents VLM jailbreaks by removing image-induced representation shifts toward jailbreak states at inference time
Input Manipulation Attack Prompt Injection multimodal vision nlp
Large vision-language models (VLMs) often exhibit weakened safety alignment with the integration of the visual modality. Even when text prompts contain explicit harmful intent, adding an image can substantially increase jailbreak success rates. In this paper, we observe that VLMs can clearly distinguish benign inputs from harmful ones in their representation space. Moreover, even among harmful inputs, jailbreak samples form a distinct internal state that is separable from refusal samples. These observations suggest that jailbreaks do not arise from a failure to recognize harmful intent. Instead, the visual modality shifts representations toward a specific jailbreak state, thereby leading to a failure to trigger refusal. To quantify this transition, we identify a jailbreak direction and define the jailbreak-related shift as the component of the image-induced representation shift along this direction. Our analysis shows that the jailbreak-related shift reliably characterizes jailbreak behavior, providing a unified explanation for diverse jailbreak scenarios. Finally, we propose a defense method that enhances VLM safety by removing the jailbreak-related shift (JRS-Rem) at inference time. Experiments show that JRS-Rem provides strong defense across multiple scenarios while preserving performance on benign tasks.
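The core inference-time operation described above is a projection removal: decompose the image-induced representation shift, take its component along the identified jailbreak direction, and subtract that component from the hidden state. A minimal sketch of this step, assuming the hidden states and jailbreak direction are already extracted as vectors (the names below are illustrative, not the paper's implementation):

```python
import numpy as np

def remove_jailbreak_shift(h_img: np.ndarray,
                           h_txt: np.ndarray,
                           d_jb: np.ndarray) -> np.ndarray:
    """Remove the jailbreak-related component of an image-induced shift.

    h_img : hidden state for the (image + text) input
    h_txt : hidden state for the text-only input
    d_jb  : jailbreak direction (e.g. the difference between mean
            jailbreak and mean refusal representations)
    """
    d = d_jb / np.linalg.norm(d_jb)        # unit jailbreak direction
    shift = h_img - h_txt                  # image-induced representation shift
    jb_component = np.dot(shift, d) * d    # component along the jailbreak direction
    return h_img - jb_component            # hidden state with that component removed
```

After this operation, the residual shift `h_clean - h_txt` is orthogonal to the jailbreak direction, so the benign parts of the image-induced shift are preserved while the component driving the jailbreak state is zeroed out.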
vlm transformer multimodal