Benchmark · 2025

TAMAS: Benchmarking Adversarial Risks in Multi-Agent LLM Systems

Ishan Kavathekar¹, Hemang Jain¹, Ameya Rathod¹, Ponnurangam Kumaraguru¹, Tanuja Ganu²

0 citations · 32 references · arXiv


Published on arXiv · 2511.05269

Threat Categories

  • Prompt Injection (OWASP LLM Top 10 — LLM01)
  • Excessive Agency (OWASP LLM Top 10 — LLM08)

Key Finding

Multi-agent LLM systems are highly vulnerable to adversarial attacks across all ten tested backbone LLMs and both evaluated agent frameworks (Autogen and CrewAI), with safety-utility tradeoffs captured by the proposed ERS metric.

TAMAS (Threats and Attacks in Multi-Agent Systems)

Novel technique introduced


Large Language Models (LLMs) have demonstrated strong capabilities as autonomous agents through tool use, planning, and decision-making abilities, leading to their widespread adoption across diverse tasks. As task complexity grows, multi-agent LLM systems are increasingly used to solve problems collaboratively. However, the safety and security of these systems remain largely under-explored. Existing benchmarks and datasets predominantly focus on single-agent settings, failing to capture the unique vulnerabilities of multi-agent dynamics and coordination. To address this gap, we introduce Threats and Attacks in Multi-Agent Systems (TAMAS), a benchmark designed to evaluate the robustness and safety of multi-agent LLM systems. TAMAS includes five distinct scenarios comprising 300 adversarial instances across six attack types and 211 tools, along with 100 harmless tasks. We assess system performance across ten backbone LLMs and three agent interaction configurations from the Autogen and CrewAI frameworks, highlighting critical challenges and failure modes in current multi-agent deployments. Furthermore, we introduce the Effective Robustness Score (ERS) to assess the tradeoff between the safety and task effectiveness of these frameworks. Our findings show that multi-agent systems are highly vulnerable to adversarial attacks, underscoring the urgent need for stronger defenses. TAMAS provides a foundation for systematically studying and improving the safety of multi-agent LLM systems.


Key Contributions

  • TAMAS benchmark comprising 300 adversarial instances across six attack types and 211 tools in five multi-agent scenarios
  • Effective Robustness Score (ERS) metric quantifying the safety-utility tradeoff in multi-agent LLM frameworks
  • Empirical evaluation of ten backbone LLMs across three multi-agent configurations (Autogen, CrewAI) revealing widespread vulnerability
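The exact ERS formula is not reproduced in this summary. As an illustrative assumption only, a metric of this kind is often built as a harmonic-mean-style combination of a safety term (one minus the attack success rate on adversarial instances) and a utility term (success rate on harmless tasks), so that a framework cannot score well by sacrificing one axis entirely. A minimal sketch under that assumption, with a hypothetical function name:

```python
def effective_robustness_score(attack_success_rate: float,
                               task_success_rate: float) -> float:
    """Hypothetical safety-utility tradeoff score in [0, 1].

    NOT the paper's definition of ERS -- an illustrative
    harmonic-mean combination assumed for this sketch.
    """
    safety = 1.0 - attack_success_rate  # robustness on adversarial instances
    utility = task_success_rate         # performance on harmless tasks
    if safety + utility == 0:
        return 0.0
    # The harmonic mean heavily penalizes frameworks that trade
    # one axis away entirely for the other.
    return 2 * safety * utility / (safety + utility)

# Example: a framework that resists 80% of attacks but completes
# only 50% of harmless tasks.
print(round(effective_robustness_score(0.2, 0.5), 3))  # → 0.615
```

Under this sketch, a framework that refuses every task scores 0 regardless of how safe it is, which is the behavior a safety-utility tradeoff metric is meant to capture.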

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: inference_time, black_box
Datasets: TAMAS (300 adversarial instances, 100 harmless tasks)
Applications: multi-agent llm systems, autonomous ai agents