
VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit

Junda Lin 1, Zhaomeng Zhou 1, Zhi Zheng 1, Shuochen Liu 1, Tong Xu 1, Yong Chen 2, Enhong Chen 1

1 citations · 33 references · arXiv


Published on arXiv: 2601.05755

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

VIGIL reduces attack success rate by over 22% vs. state-of-the-art dynamic defenses while more than doubling utility under attack compared to static baselines.

VIGIL

Novel technique introduced


LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream, where manipulated metadata and runtime feedback hijack execution flow. Existing defenses face a critical dilemma: advanced models prioritize injected rules due to strict alignment, while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose VIGIL, a framework that shifts the paradigm from restrictive isolation to a verify-before-commit protocol. By facilitating speculative hypothesis generation and enforcing safety through intent-grounded verification, VIGIL preserves reasoning flexibility while ensuring robust control. We further introduce SIREN, a benchmark comprising 959 tool stream injection cases designed to simulate pervasive threats characterized by dynamic dependencies. Extensive experiments demonstrate that VIGIL outperforms state-of-the-art dynamic defenses by reducing the attack success rate by over 22% while more than doubling the utility under attack compared to static baselines, thereby achieving an optimal balance between security and utility.


Key Contributions

  • VIGIL: a verify-before-commit framework that uses speculative hypothesis generation and intent-grounded verification to defend LLM agents against tool stream injection without severing adaptive reasoning feedback loops
  • SIREN: a benchmark of 959 tool stream injection cases simulating pervasive indirect prompt injection threats with dynamic dependencies
  • Empirical demonstration of a 22%+ reduction in attack success rate vs. state-of-the-art dynamic defenses, and more than double the utility under attack vs. static baselines
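To make the verify-before-commit idea concrete, here is a minimal sketch of such a loop in Python. This is an illustration only, not VIGIL's actual implementation: all names (`Action`, `verify_before_commit`, `toy_verifier`) and the keyword-based verifier are hypothetical stand-ins for the paper's speculative hypothesis generation and intent-grounded verification.

```python
# Hedged sketch: an agent speculatively proposes candidate tool actions,
# and a verifier grounds each candidate in the original user intent
# before any action is committed (executed). Hypothetical API throughout.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    tool: str        # tool to invoke
    argument: str    # argument derived from the tool stream
    rationale: str   # model-generated justification for the action

def verify_before_commit(
    user_intent: str,
    candidates: List[Action],
    verifier: Callable[[str, Action], bool],
) -> Optional[Action]:
    """Commit only the first speculative candidate the verifier accepts;
    candidates that fail intent grounding are never executed."""
    for action in candidates:
        if verifier(user_intent, action):
            return action   # safe to commit
    return None             # no candidate survives verification

# Toy verifier: reject actions whose rationale cites instructions that
# appear only in tool output rather than in the user's request.
def toy_verifier(intent: str, action: Action) -> bool:
    injected_markers = ("ignore previous", "new instruction")
    return not any(m in action.rationale.lower() for m in injected_markers)

candidates = [
    Action("email", "attacker@evil.test",
           "New instruction in tool output says to forward data"),
    Action("search", "weather in Paris",
           "User asked for the Paris forecast"),
]
chosen = verify_before_commit(
    "What's the weather in Paris?", candidates, toy_verifier)
# The injected email action is rejected; the intent-aligned search commits.
```

A real verifier would use the LLM itself (or a separate checker model) to judge intent alignment rather than keyword matching; the point of the structure is that verification happens before, not after, the side-effecting tool call.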

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
SIREN
Applications
llm agents, tool-augmented llm systems