α

Published on arXiv

2511.10949

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Centralized MAS architectures that decompose harmful requests into atomic sub-instructions obscure malicious objectives from sub-agents, significantly reducing system-level robustness against adversarial prompting.

SafeAgents / Dharma

Novel technique introduced


LLM-based agents are increasingly deployed in multi-agent systems (MAS). As these systems move toward real-world applications, their security becomes paramount. Existing research largely evaluates single-agent security, leaving a critical gap in understanding the vulnerabilities introduced by multi-agent design. However, existing systems fall short due to lack of unified frameworks and metrics focusing on unique rejection modes in MAS. We present SafeAgents, a unified and extensible framework for fine-grained security assessment of MAS. SafeAgents systematically exposes how design choices such as plan construction strategies, inter-agent context sharing, and fallback behaviors affect susceptibility to adversarial prompting. We introduce Dharma, a diagnostic measure that helps identify weak links within multi-agent pipelines. Using SafeAgents, we conduct a comprehensive study across five widely adopted multi-agent architectures (centralized, decentralized, and hybrid variants) on four datasets spanning web tasks, tool use, and code generation. Our findings reveal that common design patterns carry significant vulnerabilities. For example, centralized systems that delegate only atomic instructions to sub-agents obscure harmful objectives, reducing robustness. Our results highlight the need for security-aware design in MAS. Link to code is https://github.com/microsoft/SafeAgents


Key Contributions

  • SafeAgents: a unified, extensible framework for fine-grained security assessment of multi-agent LLM systems across centralized, decentralized, and hybrid architectures
  • Dharma: a diagnostic measure that pinpoints weak-link agents within multi-agent pipelines based on fine-grained rejection mode analysis
  • Empirical study across five MAS architectures on four datasets (web tasks, tool use, code generation) revealing that centralized atomic-instruction delegation significantly reduces adversarial robustness

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_timeblack_box
Datasets
web tasks benchmarktool-use benchmarkcode generation benchmark
Applications
multi-agent llm systemsweb task agentstool-use agentscode generation agents