defense 2026

Defense Against Indirect Prompt Injection via Tool Result Parsing

Qiang Yu , Xinran Cheng , Chuanyi Liu

3 citations · 39 references · arXiv

α

Published on arXiv

2601.04795

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

Achieves the lowest Attack Success Rate (ASR) to date on AgentDojo while maintaining competitive Utility under Attack (UA), outperforming both model-based and prompt-based baselines across three LLMs.

Tool Result Parsing Defense

Novel technique introduced


As LLM agents transition from digital assistants to physical controllers in autonomous systems and robotics, they face an escalating threat from indirect prompt injection. By embedding adversarial instructions into the results of tool calls, attackers can hijack the agent's decision-making process to execute unauthorized actions. This vulnerability poses a significant risk as agents gain more direct control over physical environments. Existing defense mechanisms against Indirect Prompt Injection (IPI) generally fall into two categories. The first involves training dedicated detection models; however, this approach entails high computational overhead for both training and inference, and requires frequent updates to keep pace with evolving attack vectors. Alternatively, prompt-based methods leverage the inherent capabilities of LLMs to detect or ignore malicious instructions via prompt engineering. Despite their flexibility, most current prompt-based defenses suffer from high Attack Success Rates (ASR), demonstrating limited robustness against sophisticated injection attacks. In this paper, we propose a novel method that provides LLMs with precise data via tool result parsing while effectively filtering out injected malicious code. Our approach achieves competitive Utility under Attack (UA) while maintaining the lowest Attack Success Rate (ASR) to date, significantly outperforming existing methods. Code is available at GitHub.


Key Contributions

  • A prompt-based, training-free defense that parses and sanitizes tool call results to return only essential, format-validated data to the LLM, filtering adversarial injections
  • An additional detection and sanitization module for large text chunks that cannot be fully structured
  • Empirical evaluation on AgentDojo across gpt-oss-120b, llama-3.1-70b, and qwen3-32b demonstrating state-of-the-art lowest ASR with competitive utility under attack

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_timeblack_box
Datasets
AgentDojo
Applications
llm agentsautonomous systemsroboticsfunction-calling pipelines