Agents of Chaos
Natalie Shapira 1, Chris Wendler 1, Avery Yen 1, Gabriele Sarti 1, Koyena Pal 1, Olivia Floody 2, Adam Belfki 1, Alex Loftus 1, Aditya Ratan Jannali 2, Nikhil Prakash 1, Jasmine Cui 1, Giordano Rogers 1, Jannik Brinkmann 1, Can Rager 2, Amir Zur 3, Michael Ripa 1, Aruna Sankaranarayanan 4, David Atkinson 1, Rohit Gandikota 1, Jaden Fiotto-Kaufman 1, EunJeong Hwang 5,6, Hadas Orgad 7, P Sam Sahil 2, Negev Taglicht 2, Tomer Shabtay 2, Atai Ambus 2, Nitay Alon 8,9, Shiri Oron 2, Ayelet Gordon-Tapiero 8, Yotam Kaplan 8, Vered Shwartz 5,6, Tamar Rott Shaham 4, Christoph Riedl 1, Reuth Mirsky 10, Maarten Sap 11, David Manheim 12,13, Tomer Ullman 7, David Bau 1
4 MIT
5 University of British Columbia
9 Max Planck Institute for Biological Cybernetics
12 Alter
13 Technion
Published on arXiv: 2602.20021
- Excessive Agency (OWASP LLM Top 10: LLM08)
- Prompt Injection (OWASP LLM Top 10: LLM01)
- Insecure Plugin Design (OWASP LLM Top 10: LLM07)
Key Finding
Autonomous LLM agents deployed with real tools in a live laboratory exhibit critical and diverse security failures — including destructive system-level actions, sensitive information disclosure, denial-of-service, identity spoofing, and partial system takeover — across realistic multi-party deployment conditions.
We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under both benign and adversarial conditions. Focusing on failures that emerge from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies. Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports. We also document attack attempts that failed. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines. This report serves as an initial empirical contribution to that broader conversation.
Key Contributions
- Empirical red-teaming study of autonomous LLM agents in a realistic live environment with persistent memory, email, Discord, shell, and file system access, over two weeks with 20 AI researchers under both benign and adversarial conditions
- Documentation of 11 representative case studies spanning unauthorized compliance, sensitive information disclosure, destructive system actions, DoS, identity spoofing, cross-agent propagation of unsafe behaviors, and partial system takeover
- Establishes empirical evidence of security-, privacy-, and governance-relevant vulnerabilities in realistic agentic deployment settings, raising open questions about accountability, delegated authority, and responsibility for downstream harms
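Several of the documented failures (unauthorized compliance with non-owners, identity spoofing) stem from privileged tools being invocable without verifying who is asking. A minimal illustrative sketch of one mitigation pattern, gating privileged tool calls on a verified requester identity, is shown below. All names here (`ToolGate`, `PRIVILEGED_TOOLS`, the tool names) are hypothetical and do not describe the deployment studied in the paper.

```python
# Illustrative sketch (not the paper's implementation): require a verified
# owner identity before an agent may execute privileged tools. Tool names
# and class names here are hypothetical.

class UnauthorizedRequester(Exception):
    """Raised when a non-owner requests a privileged tool invocation."""

# Tools that can cause destructive or sensitive side effects.
PRIVILEGED_TOOLS = {"shell_exec", "send_email", "delete_file"}

class ToolGate:
    def __init__(self, owner_ids):
        # owner_ids: identities already authenticated out-of-band
        # (e.g., a verified account, not a display name the agent reads).
        self.owner_ids = set(owner_ids)

    def authorize(self, requester_id, tool_name):
        """Allow unprivileged tools for anyone; privileged tools only
        for verified owners. Raises UnauthorizedRequester otherwise."""
        if tool_name in PRIVILEGED_TOOLS and requester_id not in self.owner_ids:
            raise UnauthorizedRequester(
                f"{requester_id!r} may not invoke {tool_name!r}"
            )
        return True
```

The key design point, consistent with the spoofing cases reported above, is that `requester_id` must come from an authenticated channel rather than from message content the agent can be tricked into trusting; for example, `ToolGate(["alice"]).authorize("alice", "shell_exec")` succeeds, while the same call with `"mallory"` raises.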