
The Trust Paradox in LLM-Based Multi-Agent Systems: When Collaboration Becomes a Security Vulnerability

Zijie Xu 1, Minfeng Qi 2, Shiqing Wu 2, Lefeng Zhang 2, Qiwen Wei 1, Han He 2, Ningran Li 3

2 citations · 47 references · arXiv

Published on arXiv: 2510.18563

Excessive Agency (OWASP LLM Top 10 — LLM08)

Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)

Key Finding

Higher inter-agent trust (τ=0.9 vs. τ=0.1) consistently improves task success but also significantly increases Over-Exposure Rate and Authorization Drift across all tested LLM backends and orchestration frameworks, with heterogeneous trust-to-risk mappings across systems.

Trust-Vulnerability Paradox (TVP)

Novel technique introduced


Multi-agent systems powered by large language models are advancing rapidly, yet the tension between mutual trust and security remains underexplored. We introduce and empirically validate the Trust-Vulnerability Paradox (TVP): increasing inter-agent trust to enhance coordination simultaneously expands risks of over-exposure and over-authorization. To investigate this paradox, we construct a scenario-game dataset spanning 3 macro scenes and 19 sub-scenes, and run extensive closed-loop interactions with trust explicitly parameterized. Using Minimum Necessary Information (MNI) as the safety baseline, we propose two unified metrics: Over-Exposure Rate (OER) to detect boundary violations, and Authorization Drift (AD) to capture sensitivity to trust levels. Results across multiple model backends and orchestration frameworks reveal consistent trends: higher trust improves task success but also heightens exposure risks, with heterogeneous trust-to-risk mappings across systems. We further examine defenses such as Sensitive Information Repartitioning and Guardian-Agent enablement, both of which reduce OER and attenuate AD. Overall, this study formalizes TVP, establishes reproducible baselines with unified metrics, and demonstrates that trust must be modeled and scheduled as a first-class security variable in multi-agent system design.


Key Contributions

  • Formalizes the Trust-Vulnerability Paradox (TVP): parameterizing inter-agent trust (τ) and empirically demonstrating that higher trust consistently raises both task success and security risk across 1,488 closed-loop interaction chains
  • Proposes two unified metrics — Over-Exposure Rate (OER) for information boundary violations and Authorization Drift (AD) for sensitivity to trust level — with a scenario-game dataset spanning 3 macro scenes and 19 sub-scenes
  • Evaluates defenses (Sensitive Information Repartitioning and Guardian-Agent enablement) showing both reduce OER and attenuate AD across multiple LLM backends and orchestration frameworks
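The two metrics above can be illustrated with a small sketch. This is not the authors' implementation; the exact definitions assumed here (OER as the fraction of disclosed items outside the Minimum Necessary Information set, AD as the OER gap between high- and low-trust runs at τ=0.9 vs. τ=0.1) are plausible readings of the paper's descriptions, and the scenario data is invented for illustration.

```python
# Toy sketch of Over-Exposure Rate (OER) and Authorization Drift (AD).
# Assumed definitions: OER = share of disclosed items outside the MNI set;
# AD = OER(tau_hi) - OER(tau_lo), i.e. how much exposure grows with trust.

def over_exposure_rate(disclosed, mni):
    """Fraction of disclosed items not covered by the MNI set (assumed form)."""
    if not disclosed:
        return 0.0
    extra = [item for item in disclosed if item not in mni]
    return len(extra) / len(disclosed)

def authorization_drift(oer_by_trust, tau_lo=0.1, tau_hi=0.9):
    """Sensitivity of exposure to trust level (assumed: high/low OER gap)."""
    return oer_by_trust[tau_hi] - oer_by_trust[tau_lo]

# Hypothetical interaction chain: an agent answers a shipping query.
mni = {"order_id", "shipping_status"}                     # minimum necessary info
disclosed_low  = ["order_id", "shipping_status"]          # low trust: stays in bounds
disclosed_high = ["order_id", "shipping_status",
                  "home_address", "card_last4"]           # high trust: over-exposes

oer = {
    0.1: over_exposure_rate(disclosed_low, mni),
    0.9: over_exposure_rate(disclosed_high, mni),
}
print(oer)                                # higher trust -> higher OER
print(authorization_drift(oer))          # positive drift signals the TVP pattern
```

A positive AD under this toy definition is exactly the paradox the paper reports: raising τ improves coordination but widens the gap between what agents disclose and what the task strictly requires.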

🛡️ Threat Analysis


Details

Domains: NLP
Model Types: LLM
Threat Tags: inference-time
Datasets: Scenario-game dataset (3 macro scenes, 19 sub-scenes, 1,488 interaction chains — constructed by authors)
Applications: LLM multi-agent systems, agentic AI orchestration frameworks