survey 2025

Agentic AI Security: Threats, Defenses, Evaluation, and Open Challenges

Anshuman Chhabra , Shrestha Datta , Shahriar Kabir Nahin , Prasant Mohapatra

8 citations · 3 influential · 267 references · arXiv

α

Published on arXiv

2510.23883

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Survey identifies prompt injection, insecure tool use, and excessive autonomy as the primary amplified risk vectors in agentic AI, and synthesizes current defenses while highlighting significant open challenges for secure-by-design agent architectures.


Agentic AI systems powered by large language models (LLMs) and endowed with planning, tool use, memory, and autonomy, are emerging as powerful, flexible platforms for automation. Their ability to autonomously execute tasks across web, software, and physical environments creates new and amplified security risks, distinct from both traditional AI safety and conventional software security. This survey outlines a taxonomy of threats specific to agentic AI, reviews recent benchmarks and evaluation methodologies, and discusses defense strategies from both technical and governance perspectives. We synthesize current research and highlight open challenges, aiming to support the development of secure-by-design agent systems.


Key Contributions

  • Taxonomy of security threats specific to agentic AI systems, distinct from traditional AI safety and conventional software security
  • Review of benchmarks and evaluation methodologies for assessing agentic AI security
  • Synthesis of technical and governance defense strategies, with identification of open research challenges for secure-by-design agent systems

🛡️ Threat Analysis


Details

Domains
nlpmultimodal
Model Types
llmmultimodal
Threat Tags
inference_timeblack_box
Applications
autonomous agentsmulti-agent systemsai coding assistantsrobotic systemsenterprise automation