defense 2026

AttriGuard: Defeating Indirect Prompt Injection in LLM Agents via Causal Attribution of Tool Invocations

Yu He 1, Haozhe Zhu 1, Yiming Li 2, Shuo Shao 1, Hongwei Yao 3, Zhihao Liu 1, Zhan Qin 1


Published on arXiv: 2603.10749

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

AttriGuard achieves 0% ASR under static indirect prompt injection attacks across four LLMs with negligible utility loss, and remains resilient under adaptive optimization-based attacks where leading defenses degrade significantly.

AttriGuard

Novel technique introduced


LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack execution. Most existing defenses treat IPI as an input-level semantic discrimination problem, which often fails to generalize to unseen payloads. We propose a new paradigm, action-level causal attribution, which secures agents by asking why a particular tool call is produced. The central goal is to distinguish tool calls supported by the user's intent from those causally driven by untrusted observations. We instantiate this paradigm with AttriGuard, a runtime defense based on parallel counterfactual tests. For each proposed tool call, AttriGuard verifies its necessity by re-executing the agent under a control-attenuated view of external observations. Technically, AttriGuard combines teacher-forced shadow replay to prevent attribution confounding, hierarchical control attenuation to suppress diverse control channels while preserving task-relevant information, and a fuzzy survival criterion that is robust to LLM stochasticity. Across four LLMs and two agent benchmarks, AttriGuard achieves 0% ASR under static attacks with negligible utility loss and moderate overhead. Importantly, it remains resilient under adaptive optimization-based attacks in settings where leading defenses degrade significantly.
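The core mechanism described above, a counterfactual necessity test on each proposed tool call, can be sketched as follows. This is an illustrative reading of the paradigm, not the paper's implementation: the names `run_agent`, `attenuate`, and `is_injection_driven`, and the keyword-based attenuation rule, are all assumptions made for the sketch (the paper's attenuation is hierarchical and its replay is teacher-forced).

```python
# Hedged sketch of action-level causal attribution: a proposed tool call
# "survives" only if the agent still produces it when the untrusted
# observation is replaced by a control-attenuated view. All names and the
# simple line-filtering heuristic below are illustrative assumptions.

def attenuate(observation: str) -> str:
    """Hypothetical control attenuation: drop lines carrying imperative
    control phrases while keeping task-relevant content. The paper uses a
    hierarchical scheme; this keyword filter is only a stand-in."""
    banned = ("ignore previous", "you must", "instead,")
    return "\n".join(
        line for line in observation.splitlines()
        if not any(b in line.lower() for b in banned)
    )

def is_injection_driven(run_agent, history, untrusted_obs, proposed_call) -> bool:
    """Counterfactual necessity test: replay the agent on the same history
    but with the attenuated observation. If the proposed call no longer
    appears, it was causally driven by the untrusted observation."""
    counterfactual_call = run_agent(history, attenuate(untrusted_obs))
    return counterfactual_call != proposed_call
```

A user-intent-driven call (e.g. a search the user asked for) survives attenuation unchanged, while a call that only exists because of an embedded directive disappears under the counterfactual view and is flagged.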


Key Contributions

  • Proposes action-level causal attribution as a new paradigm for IPI defense, framing the problem as distinguishing user-intent-driven tool calls from injection-driven ones rather than semantic input discrimination
  • Instantiates the paradigm with AttriGuard, combining teacher-forced shadow replay, hierarchical control attenuation, and fuzzy survival criteria to run parallel counterfactual tests at runtime
  • Achieves 0% attack success rate under static IPI attacks across four LLMs and two agent benchmarks, and demonstrates strong resilience under adaptive optimization-based attacks where leading defenses degrade significantly
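The fuzzy survival criterion mentioned above can be sketched as a majority-style vote over repeated samples, which is what makes the test robust to LLM stochasticity. The sampler interface, `k`, and `threshold` below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a fuzzy survival criterion: sample the stochastic agent
# several times under the control-attenuated observation and keep the
# proposed call only if it reappears often enough. k and threshold are
# illustrative, not the paper's settings.

def fuzzy_survival(run_agent_sample, history, attenuated_obs, proposed_call,
                   k: int = 5, threshold: float = 0.4) -> bool:
    """Return True if the proposed call 'survives' attenuation, i.e. it is
    re-emitted in at least `threshold` fraction of k stochastic replays."""
    hits = sum(
        run_agent_sample(history, attenuated_obs) == proposed_call
        for _ in range(k)
    )
    return hits / k >= threshold
```

A single divergent sample from a nondeterministic LLM then no longer flags a benign call, while an injection-driven call, which almost never reappears without the malicious directive, still fails the test.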

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
AgentDojo
Applications
llm agents, agentic ai systems, tool-calling pipelines