survey 2026

SoK: The Attack Surface of Agentic AI -- Tools, and Autonomy

Ali Dehghantanha 1,2, Sajad Homayoun 2

0 citations

α

Published on arXiv

2603.22928

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Synthesizes 20+ studies from 2023-2025 revealing agentic systems introduce qualitatively different attack surfaces including indirect prompt injection, RAG index poisoning, and cross-agent manipulation beyond traditional LLM threats


Recent AI systems combine large language models with tools, external knowledge via retrieval-augmented generation (RAG), and even autonomous multi-agent decision loops. This agentic AI paradigm greatly expands capabilities - but also vastly enlarges the attack surface. In this systematization, we map out the trust boundaries and security risks of agentic LLM-based systems. We develop a comprehensive taxonomy of attacks spanning prompt-level injections, knowledge-base poisoning, tool/plug-in exploits, and multi-agent emergent threats. Through a detailed literature review, we synthesize evidence from 2023-2025, including more than 20 peer-reviewed and archival studies, industry reports, and standards. We find that agentic systems introduce new vectors for indirect prompt injection, code execution exploits, RAG index poisoning, and cross-agent manipulation that go beyond traditional AI threats. We define attacker models and threat scenarios, and propose metrics (e.g., Unsafe Action Rate, Privilege Escalation Distance) to evaluate security posture. Our survey examines defenses such as input sanitization, retrieval filters, sandboxes, access control, and "AI guardrails," assessing their effectiveness and pointing out the areas where protection is still lacking. To assist practitioners, we outline defensive controls and provide a phased security checklist for deploying agentic AI (covering design-time hardening, runtime monitoring, and incident response). Finally, we outline open research challenges in secure autonomous AI (robust tool APIs, verifiable agent behavior, supply-chain safeguards) and discuss ethical and responsible disclosure practices. We systematize recent findings to help researchers and engineers understand and mitigate security risks in agentic AI.


Key Contributions

  • Comprehensive taxonomy of agentic AI attacks spanning prompt injection, RAG poisoning, tool exploits, and multi-agent threats mapped to OWASP GenAI and MITRE ATLAS
  • Attacker-aware security metrics (UAR, PED, RRS, etc.) and evaluation framework for measuring agentic system security posture
  • Defense-in-depth framework and practitioner playbook covering design-time hardening, runtime monitoring, and incident response for agentic AI deployment

🛡️ Threat Analysis


Details

Domains
nlpmultimodal
Model Types
llm
Threat Tags
inference_timeblack_box
Applications
agentic ai systemsllm tool useretrieval-augmented generationautonomous agentsmulti-agent systems