attack 2026

OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage

Akshat Naik 1, Jay J Culligan 2, Yarin Gal 1, Philip Torr 1, Rahaf Aljundi 2, Alasdair Paren 1, Adel Bibi 1

0 citations · 42 references · arXiv (Cornell University)

α

Published on arXiv

2602.13477

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

A single indirect prompt injection compromises multiple agents in an orchestrator setup to exfiltrate sensitive data, with frontier reasoning and non-reasoning models both vulnerable even without attacker knowledge of system internals.

OMNI-LEAK

Novel technique introduced


As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a practical paradigm. Prior work has examined the safety and misuse risks associated with agents. However, much of this has focused on the single-agent case and/or setups missing basic engineering safeguards such as access control, revealing a scarcity of threat modeling in multi-agent systems. We investigate the security vulnerabilities of a popular multi-agent pattern known as the orchestrator setup, in which a central agent decomposes and delegates tasks to specialized agents. Through red-teaming a concrete setup representative of a likely future use case, we demonstrate a novel attack vector, OMNI-LEAK, that compromises several agents to leak sensitive data through a single indirect prompt injection, even in the \textit{presence of data access control}. We report the susceptibility of frontier models to different categories of attacks, finding that both reasoning and non-reasoning models are vulnerable, even when the attacker lacks insider knowledge of the implementation details. Our work highlights the importance of safety research to generalize from single-agent to multi-agent settings, in order to reduce the serious risks of real-world privacy breaches and financial losses and overall public trust in AI agents.


Key Contributions

  • Novel OMNI-LEAK attack demonstrating that a single indirect prompt injection can propagate through an orchestrator multi-agent network to leak sensitive data across multiple specialized agents
  • Red-teaming of frontier reasoning and non-reasoning models in a representative orchestrator setup with data access control, showing broad vulnerability even without insider knowledge of implementation details
  • Threat modeling framework extending single-agent safety research to multi-agent orchestrator architectures

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Applications
multi-agent llm systemsai agent orchestrators