Devang Kulshreshtha

h-index: 7 · 196 citations · 15 papers (total)

Papers in Database (2)

attack · arXiv · Sep 30, 2025

STAC: When Innocent Tools Form Dangerous Chains to Jailbreak LLM Agents

Jing-Jing Li, Jianfeng He, Chao Shang et al. · AWS AI Labs · UC Berkeley

A multi-turn attack that chains innocuous tool calls on LLM agents to achieve harmful goals, exceeding a 90% attack success rate (ASR) on GPT-4.1

Insecure Plugin Design · Prompt Injection · nlp

4 citations · PDF · Code
attack · arXiv · Jan 6, 2026

Multi-Turn Jailbreaking of Aligned LLMs via Lexical Anchor Tree Search

Devang Kulshreshtha, Hang Su, Chinmay Hegde et al. · Amazon · New York University +1 more

An attacker-LLM-free multi-turn jailbreak via lexical anchor injection, achieving 97–100% ASR on GPT, Claude, and Llama models in ~6.4 queries

Prompt Injection · nlp

PDF