Yi Zhang

h-index: 2 · 7 citations · 2 papers (total)

Papers in Database (1)

attack · arXiv · Sep 30, 2025

STAC: When Innocent Tools Form Dangerous Chains to Jailbreak LLM Agents

Jing-Jing Li, Jianfeng He, Chao Shang et al. · AWS AI Labs · UC Berkeley

A multi-turn attack that chains innocuous tool calls on LLM agents to achieve harmful goals, exceeding a 90% attack success rate (ASR) on GPT-4.1.

Tags: Insecure Plugin Design · Prompt Injection · NLP
4 citations