α

Published on arXiv

2602.20021

Excessive Agency

OWASP LLM Top 10 — LLM08

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

Autonomous LLM agents deployed with real tools in a live laboratory exhibit critical and diverse security failures — including destructive system-level actions, sensitive information disclosure, denial-of-service, identity spoofing, and partial system takeover — across realistic multi-party deployment conditions.


We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under benign and adversarial conditions. Focusing on failures emerging from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies. Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports. We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines. This report serves as an initial empirical contribution to that broader conversation.


Key Contributions

  • Empirical red-teaming study of autonomous LLM agents in a realistic live environment with persistent memory, email, Discord, shell, and file system access, over two weeks with 20 AI researchers under both benign and adversarial conditions
  • Documentation of 11 representative case studies spanning unauthorized compliance, sensitive information disclosure, destructive system actions, DoS, identity spoofing, cross-agent propagation of unsafe behaviors, and partial system takeover
  • Establishes empirical evidence of security-, privacy-, and governance-relevant vulnerabilities in realistic agentic deployment settings and raises open questions about accountability, delegated authority, and downstream harm responsibility

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxgrey_boxinference_time
Applications
autonomous llm agentsmulti-agent systems