Chaowei Xiao

h-index: 5 348 citations 12 papers (total)

Papers in Database (1)

defense arXiv Nov 30, 2025 · Nov 2025

Mitigating Indirect Prompt Injection via Instruction-Following Intent Analysis

Mintong Kang, Chong Xiang, Sanjay Kariyappa et al. · NVIDIA · University of Illinois Urbana-Champaign +1 more

Defends LLM agents against indirect prompt injection by analyzing whether the model intends to follow untrusted instructions, cutting attack success from 100% to 8.5%

Prompt Injection nlp
1 citations PDF