Defense · 2026

ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction

Che Wang¹˒², Fuyao Zhang², Jiaming Zhang², Ziqi Zhang¹, Yinghui Wang³, Longtao Huang⁴, Jianbo Gao¹, Zhong Chen¹, Wei Yang Bryan Lim

0 citations · 37 references · arXiv (Cornell University)


Published on arXiv

arXiv:2602.20708

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ICON achieves a 0.4% attack success rate, matching commercial-grade detectors, while yielding an over-50% task utility gain over existing defenses across multiple LLM backbones, including Qwen3.

ICON

Novel technique introduced


Large Language Model (LLM) agents are susceptible to Indirect Prompt Injection (IPI) attacks, where malicious instructions embedded in retrieved content hijack the agent's execution. Existing defenses typically rely on strict filtering or refusal mechanisms, which suffer from a critical limitation: over-refusal, prematurely terminating valid agentic workflows. We propose ICON, a probing-to-mitigation framework that neutralizes attacks while preserving task continuity. Our key insight is that IPI attacks leave distinct over-focusing signatures in the latent space. We introduce a Latent Space Trace Prober to detect attacks based on high intensity scores. Subsequently, a Mitigating Rectifier performs surgical attention steering that selectively suppresses adversarial query-key dependencies while amplifying task-relevant elements to restore the LLM's functional trajectory. Extensive evaluations on multiple backbones show that ICON achieves a competitive 0.4% attack success rate (ASR), matching commercial-grade detectors, while yielding an over-50% task utility gain. Furthermore, ICON demonstrates robust out-of-distribution (OOD) generalization and extends effectively to multi-modal agents, establishing a superior balance between security and efficiency.
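The abstract does not spell out how the intensity score is computed; as a minimal sketch of the "over-focusing" intuition, the toy prober below flags an input when the action-deciding (final) token concentrates most of its attention mass on the retrieved-content span. The function names, the span boundaries, the +6.0 injection boost, and the 0.8 threshold are all hypothetical choices for illustration, not the paper's method:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over attention logits
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def probe_intensity(attn, retrieved_span):
    """Fraction of the final token's attention mass that falls on the
    retrieved-content span, averaged over heads (a toy 'intensity score')."""
    lo, hi = retrieved_span
    last_row = attn[:, -1, :]                # (heads, seq_len)
    return float(last_row[:, lo:hi].sum(axis=-1).mean())

# toy example: 2 heads, 8-token sequence; tokens 4..8 are retrieved content
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 8, 8))
logits[:, -1, 4:] += 6.0                     # simulate an injected instruction
                                             # pulling attention onto itself
attn = softmax(logits)
score = probe_intensity(attn, (4, 8))
flagged = score > 0.8                        # hypothetical detection threshold
```

A real probe would operate on the model's actual latent traces across layers rather than a single synthetic attention map, but the thresholding logic is the same shape.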


Key Contributions

  • Latent Space Trace Prober that detects IPI attacks by identifying over-focusing signatures in latent representations, without relying on surface-level pattern matching
  • Mitigating Rectifier that performs surgical attention steering — suppressing adversarial query-key dependencies while amplifying task-relevant elements — to neutralize attacks without terminating the agentic workflow
  • Demonstrates OOD generalization and extension to multi-modal agents, achieving 0.4% ASR with over 50% task utility gain compared to existing defenses
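To make the "surgical attention steering" contribution concrete, the sketch below edits pre-softmax attention scores directly: keys inside a detected adversarial span are penalized while keys in the task span are boosted, then the distribution is renormalized. The span arguments, the `suppress`/`amplify` magnitudes, and all names are illustrative assumptions, not ICON's actual rectifier:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rectify_attention(logits, adversarial_span, task_span,
                      suppress=6.0, amplify=1.0):
    """Steer attention logits: penalize adversarial keys, boost task keys,
    then re-softmax. (Spans and magnitudes are hypothetical here.)"""
    lo_a, hi_a = adversarial_span
    lo_t, hi_t = task_span
    steered = logits.copy()
    steered[..., lo_a:hi_a] -= suppress      # suppress query-key dependencies
    steered[..., lo_t:hi_t] += amplify       # amplify task-relevant elements
    return softmax(steered)

# toy setup: attack tokens 4..8 dominate the final token's attention
rng = np.random.default_rng(1)
logits = rng.normal(size=(2, 8, 8))
logits[:, -1, 4:] += 6.0
before = softmax(logits)[:, -1, 4:].sum(axis=-1).mean()
after_attn = rectify_attention(logits, adversarial_span=(4, 8), task_span=(0, 4))
after = after_attn[:, -1, 4:].sum(axis=-1).mean()
# attention mass on the adversarial span drops; the workflow continues
```

The point of steering rather than refusing is visible in the last lines: the agent keeps a valid attention distribution over its original task tokens instead of terminating, which is how this style of defense avoids the over-refusal failure mode described above.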

🛡️ Threat Analysis


Details

Domains
nlp, multimodal
Model Types
llm, vlm
Threat Tags
inference_time, black_box
Datasets
InjecAgent
Applications
llm agents, multi-modal agents, tool-calling systems