
AgentLeak: A Full-Stack Benchmark for Privacy Leakage in Multi-Agent LLM Systems

Faouzi El Yagoubi, Godwin Badu-Marfo, Ranwa Al Mallah


Published on arXiv

2602.11510

Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)

Excessive Agency (OWASP LLM Top 10 — LLM08)

Key Finding

Inter-agent messages (channel C2) leak sensitive data in 68.8% of traces versus 27.2% on the final output channel (C1), meaning output-only audits miss 41.7% of privacy violations in multi-agent LLM deployments.

AgentLeak

Novel technique introduced


Multi-agent Large Language Model (LLM) systems create privacy risks that current benchmarks cannot measure. When agents coordinate on tasks, sensitive data passes through inter-agent messages, shared memory, and tool arguments: pathways that output-only audits never inspect. We introduce AgentLeak, to the best of our knowledge the first full-stack benchmark for privacy leakage covering internal channels. It spans 1,000 scenarios across healthcare, finance, legal, and corporate domains, paired with a 32-class attack taxonomy and a three-tier detection pipeline. Testing GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, Mistral Large, and Llama 3.3 70B across 4,979 traces reveals that multi-agent configurations reduce per-channel output leakage (C1: 27.2% vs. 43.2% in single-agent) but introduce unmonitored internal channels that raise total system exposure to 68.9% (OR-aggregated across C1, C2, and C5). Internal channels account for most of this gap: inter-agent messages (C2) leak at 68.8%, compared to 27.2% on the output channel (C1), so output-only audits miss 41.7% of violations. Claude 3.5 Sonnet, which emphasizes safety alignment in its design, achieves the lowest leakage rates on both external (3.3%) and internal (28.1%) channels, suggesting that model-level safety training may transfer to internal-channel protection. Across all five models and four domains, the pattern C2 > C1 holds consistently, confirming that inter-agent communication is the primary vulnerability. These findings underscore the need for coordination frameworks that incorporate internal-channel privacy protections and enforce privacy controls on inter-agent communication.
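The OR-aggregation mentioned in the abstract can be illustrated with a minimal sketch: a trace counts as leaked if *any* monitored channel leaks, so total system exposure exceeds every per-channel rate. The trace data and channel flags below are illustrative, not the paper's dataset.

```python
def or_aggregate(traces):
    """Fraction of traces where at least one channel leaked (OR over channels)."""
    leaked = sum(1 for t in traces if any(t.values()))
    return leaked / len(traces)

# Illustrative per-trace leak flags: C1 = final output,
# C2 = inter-agent messages, C5 = tool arguments.
traces = [
    {"C1": False, "C2": True,  "C5": False},  # internal-only leak
    {"C1": True,  "C2": True,  "C5": False},  # visible and internal leak
    {"C1": False, "C2": False, "C5": False},  # clean trace
    {"C1": False, "C2": True,  "C5": True},   # internal-only leak
]

output_only = sum(t["C1"] for t in traces) / len(traces)  # what an output audit sees
full_stack = or_aggregate(traces)                          # what a full-stack audit sees
print(output_only, full_stack)  # 0.25 0.75
```

On this toy data an output-only audit reports 25% leakage while the OR-aggregated full-stack rate is 75%, mirroring (in miniature) the C1-vs-total gap the paper reports.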


Key Contributions

  • AgentLeak benchmark with 1,000 scenarios across 4 domains and 7 leakage channels (including unmonitored internal channels), paired with a 32-class attack taxonomy
  • Empirical finding that multi-agent configurations raise total system privacy exposure to 68.9% while output-only audits miss 41.7% of violations, with inter-agent messages leaking at 68.8%
  • Three-tier detection pipeline (canary matching, pattern extraction, LLM-as-judge) and Pareto analysis showing current defenses cannot simultaneously preserve task utility and internal-channel privacy
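The three-tier detection pipeline named above (canary matching, pattern extraction, LLM-as-judge) might look roughly like the sketch below. The function names, canary format, and regexes are assumptions for illustration, and the LLM-judge tier is stubbed out rather than calling a real model.

```python
import re

# Tier 1: planted canary secrets (format assumed for illustration).
CANARIES = {"SSN-7743-CANARY", "MRN-0091-CANARY"}

# Tier 2: pattern extraction via illustrative PII regexes.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def tier1_canary(message: str) -> bool:
    """Exact-match detection of planted canary strings."""
    return any(c in message for c in CANARIES)

def tier2_patterns(message: str) -> bool:
    """Regex detection of structured PII."""
    return any(p.search(message) for p in PII_PATTERNS)

def tier3_llm_judge(message: str) -> bool:
    """Stub: a judge model would flag contextual leaks
    (e.g. a paraphrased diagnosis) that rules cannot catch."""
    return False  # placeholder for a real judge-model call

def detect_leak(message: str) -> bool:
    """Flag a channel message if any tier fires, cheapest tier first."""
    return tier1_canary(message) or tier2_patterns(message) or tier3_llm_judge(message)
```

In this arrangement each inter-agent message, memory write, or tool argument would be passed through `detect_leak` independently, so internal channels like C2 are audited with the same machinery as the final output.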

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Datasets
AgentLeak (1,000 custom scenarios across healthcare, finance, legal, corporate domains)
Applications
multi-agent LLM systems, healthcare scheduling, financial compliance, legal discovery