Dongrui Liu

Papers in Database (2)

benchmark · arXiv · Feb 16, 2026

A Trajectory-Based Safety Audit of Clawdbot (OpenClaw)

Tianyu Chen, Dongrui Liu, Xia Hu et al. · ShanghaiTech University · Shanghai Artificial Intelligence Laboratory

A trajectory-based safety audit of the Clawdbot AI agent, revealing jailbreak and excessive tool-action failures across 34 test cases

Prompt Injection · Excessive Agency · nlp
PDF Code
defense · arXiv · Mar 18, 2026

Understanding and Defending VLM Jailbreaks via Jailbreak-Related Representation Shift

Zhihua Wei, Qiang Li, Jian Ruan et al. · Tongji University · Shanghai Artificial Intelligence Laboratory

Proposes the JRS-Rem defense, which prevents VLM jailbreaks by removing image-induced representation shifts toward jailbreak states at inference time

Input Manipulation Attack · Prompt Injection · multimodal · vision · nlp
PDF Code
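
The summary above only states that JRS-Rem removes image-induced representation shifts toward jailbreak states at inference time; the paper's actual procedure is not reproduced here. The sketch below is a minimal, assumption-laden illustration of that general idea: given a hidden state with and without the image and a precomputed "jailbreak direction", it projects out the shift component along that direction. The function name, the direction vector, and the projection step are all hypothetical, not the authors' method.

```python
# Illustrative sketch only (not the JRS-Rem implementation): remove the part of an
# image-induced representation shift that points along a hypothetical, precomputed
# "jailbreak direction".
import numpy as np


def remove_shift_toward_direction(hidden: np.ndarray,
                                  baseline: np.ndarray,
                                  jailbreak_dir: np.ndarray) -> np.ndarray:
    """Project out the shift component aligned with a jailbreak-related direction.

    hidden:        hidden state with the image present, shape (d,)
    baseline:      hidden state for the same prompt without the image, shape (d,)
    jailbreak_dir: direction estimated from known jailbreak examples (assumed), shape (d,)
    """
    direction = jailbreak_dir / np.linalg.norm(jailbreak_dir)
    shift = hidden - baseline                    # image-induced representation shift
    toward_jailbreak = np.dot(shift, direction)  # scalar projection onto the direction
    # Subtract only the shift component aligned with the jailbreak direction,
    # leaving the remaining image information in the hidden state untouched.
    return hidden - toward_jailbreak * direction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    jb_dir = rng.normal(size=d)
    base = rng.normal(size=d)
    # Simulate an image input that pushes the representation toward the jailbreak direction.
    shifted = base + 2.0 * jb_dir / np.linalg.norm(jb_dir) + 0.1 * rng.normal(size=d)
    cleaned = remove_shift_toward_direction(shifted, base, jb_dir)
    unit = jb_dir / np.linalg.norm(jb_dir)
    print("alignment before:", float(np.dot(shifted - base, unit)))
    print("alignment after: ", float(np.dot(cleaned - base, unit)))
```

Running the toy example prints a near-zero alignment after removal, showing only that the projection does what the sketch claims; how the real defense estimates jailbreak-related directions and where it intervenes in the VLM is described in the paper itself.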