Defense · 2025

MindGuard: Intrinsic Decision Inspection for Securing LLM Agents Against Metadata Poisoning

Zhiqiang Wang, Haohua Du, Guanquan Shi, Junyang Zhang, HaoRan Cheng, Yunhao Yao, Kaiwen Guo, Xiang-Yang Li

Published on arXiv: 2508.20412

OWASP LLM Top 10 mappings:

  • LLM07 — Insecure Plugin Design
  • LLM01 — Prompt Injection

Key Finding

MindGuard achieves over 97.6% average precision in detecting poisoned tool invocations and over 98.6% attribution accuracy with sub-second processing time and no additional token overhead, outperforming existing SOTA defenses.

MindGuard / Decision Dependency Graph (DDG)

Novel technique introduced


The Model Context Protocol (MCP) is increasingly adopted to standardize the interaction between LLM agents and external tools. However, this trend introduces a new threat: Tool Poisoning Attacks (TPA), in which tool metadata is poisoned to induce the agent to perform unauthorized operations. Existing defenses that focus primarily on behavior-level analysis are fundamentally ineffective against TPA, because a poisoned tool need not be executed, leaving no behavioral trace to monitor. We therefore propose MindGuard, a decision-level guardrail for LLM agents that provides provenance tracking of call decisions, policy-agnostic detection, and poisoning-source attribution against TPA. While fully explaining LLM decisions remains challenging, our empirical findings uncover a strong correlation between LLM attention mechanisms and tool-invocation decisions. We therefore adopt attention as an empirical signal for decision tracking and formalize it as the Decision Dependency Graph (DDG), which models the LLM's reasoning process as a weighted, directed graph in which vertices represent logical concepts and edges quantify attention-based dependencies. We further design robust DDG-construction and graph-based anomaly-analysis mechanisms that efficiently detect and attribute TPAs. Extensive experiments on real-world datasets demonstrate that MindGuard achieves 94%–99% average precision in detecting poisoned invocations and 95%–100% attribution accuracy, with processing times under one second and no additional token cost. Moreover, the DDG can be viewed as an adaptation of the classical Program Dependence Graph (PDG), providing a solid foundation for applying traditional security policies at the decision level.
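To make the DDG idea concrete, the sketch below shows one plausible way to aggregate token-level attention into concept-level edge weights. This is an illustrative toy, not the paper's implementation: the span boundaries, the `build_ddg` name, and the aggregation rule (mean attention from the decision-emitting tokens to each concept span) are all assumptions for the sake of the example.

```python
# Toy Decision Dependency Graph (DDG) construction from an attention matrix.
# Edge weight = mean attention that the tool-call "decision" tokens pay to
# each concept span (user query, each tool's metadata, ...).

def build_ddg(attn, spans, decision):
    """attn: square matrix, attn[i][j] = attention from token i to token j.
    spans: dict mapping concept name -> (start, end) token range, end exclusive.
    decision: (start, end) range of the tokens emitting the tool call.
    Returns a dict mapping concept name -> edge weight in the DDG."""
    d0, d1 = decision
    ddg = {}
    for name, (s0, s1) in spans.items():
        total, count = 0.0, 0
        for i in range(d0, d1):          # each decision token
            for j in range(s0, s1):      # each token of the concept span
                total += attn[i][j]
                count += 1
        ddg[name] = total / count if count else 0.0
    return ddg
```

In practice the attention matrix would come from the serving model itself (e.g. per-layer attention maps averaged over heads), which is why the approach adds no extra token cost: the signal is a by-product of the forward pass that produced the decision.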


Key Contributions

  • Decision Inspection paradigm: a novel security approach that scrutinizes the LLM's internal attention-based reasoning to verify tool-call decision provenance rather than relying on pre-decision isolation or post-decision behavioral monitoring
  • Decision Dependency Graph (DDG): a weighted directed graph built from LLM attention weights that models how the agent's tool-call decision depends on user query vs. tool metadata, enabling policy-agnostic anomaly detection
  • MindGuard implementation: a self-inspection plugin achieving >97.6% average precision in TPA detection and >98.6% attribution accuracy with sub-second latency and zero additional token cost
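One simple way to read such a graph for detection and attribution (an illustrative heuristic, not MindGuard's actual policy): a benign tool call should depend chiefly on the user query, so if the decision's dependence on some tool's metadata outweighs its dependence on the query by a margin, flag the call and attribute it to the dominant metadata source. The `detect_tpa` name and the `ratio` threshold below are assumptions for the sketch.

```python
# Heuristic TPA detection/attribution over DDG edge weights (illustrative).

def detect_tpa(ddg, query_key="user_query", ratio=1.5):
    """ddg: dict mapping span name -> decision-dependence weight.
    Flags the call when any non-query edge outweighs the user-query
    edge by `ratio` (an arbitrary threshold for this sketch).
    Returns (is_poisoned, attributed_source_or_None)."""
    q = ddg.get(query_key, 0.0)
    metadata_edges = {k: v for k, v in ddg.items() if k != query_key}
    if not metadata_edges:
        return False, None
    source, weight = max(metadata_edges.items(), key=lambda kv: kv[1])
    if weight > ratio * q:
        return True, source   # decision driven by tool metadata, not the user
    return False, None
```

Because this check inspects the decision itself rather than subsequent behavior, it can fire even when the poisoned tool is never executed, which is the failure mode that defeats behavior-level defenses.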

Details

Domains: NLP
Model Types: LLM, transformer
Threat Tags: inference_time, grey_box
Datasets: multiple real-world MCP/tool-use datasets (names not specified in excerpt)
Applications: LLM agents, MCP tool-calling systems, autonomous AI agents