survey 2025

Security Risks of Agentic Vehicles: A Systematic Analysis of Cognitive and Cross-Layer Threats

Ali Eslami, Jiangbo Yu

3 citations · 1 influential · 27 references · arXiv


Published on arXiv: 2512.17041

OWASP LLM Top 10 mappings:

  • Excessive Agency — LLM08
  • Prompt Injection — LLM01
  • Insecure Plugin Design — LLM07

Key Finding

Provides the first structured framework unifying OWASP-style agentic AI threats with cross-layer CAV attack chains, demonstrating how subtle perception or communication distortions can propagate through agentic reasoning into unsafe vehicle behavior.


Agentic AI is increasingly being explored and introduced in both manually driven and autonomous vehicles, giving rise to the notion of Agentic Vehicles (AgVs), with capabilities such as memory-based personalization, goal interpretation, strategic reasoning, and tool-mediated assistance. While frameworks such as the OWASP Agentic AI Security Risks highlight vulnerabilities in reasoning-driven AI systems, they are not designed for safety-critical cyber-physical platforms such as vehicles, nor do they account for interactions with the perception, communication, and control layers. This paper investigates security threats in AgVs, including OWASP-style risks and cyber-attacks from other layers that affect the agentic layer. By introducing a role-based architecture for agentic vehicles, consisting of a Personal Agent and a Driving Strategy Agent, we examine vulnerabilities both within the agentic AI layer and across layers, including risks originating from upstream layers such as the perception and control layers. A severity matrix and attack-chain analysis illustrate how small distortions can escalate into misaligned or unsafe behavior in both human-driven and autonomous vehicles. The resulting framework provides the first structured foundation for analyzing security risks of agentic AI in current and emerging vehicle platforms.


Key Contributions

  • Role-based agentic vehicle architecture decomposing responsibility into a Personal Agent and Driving Strategy Agent with defined trust boundaries
  • Structured threat taxonomy covering cognitive vulnerabilities (memory poisoning, intent manipulation, tool misuse, goal hijacking) and cross-layer attack propagation from perception/communication/control layers
  • Severity matrix and attack-chain analysis showing how small upstream distortions escalate into misaligned or unsafe agentic driving behavior
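The escalation idea behind the severity matrix and attack-chain analysis can be sketched as a toy model. The layer ordering (perception, communication, agentic, control) follows the paper's cross-layer framing, but the `Threat` class, the numeric scale, and the "+1 per downstream hop" escalation rule are illustrative assumptions for this sketch, not the paper's actual scoring method.

```python
from dataclasses import dataclass

# Hypothetical layer ordering an attack chain traverses, upstream to downstream.
LAYERS = ["perception", "communication", "agentic", "control"]

@dataclass
class Threat:
    name: str
    entry_layer: str   # layer where the distortion is injected
    base_severity: int # 1 (low) .. 5 (critical) at the entry layer

def escalated_severity(threat: Threat) -> int:
    """Toy escalation rule: severity rises by one step for each
    downstream layer the distortion propagates through, capped at 5."""
    hops = len(LAYERS) - 1 - LAYERS.index(threat.entry_layer)
    return min(threat.base_severity + hops, 5)

threats = [
    Threat("adversarial camera patch", "perception", 1),
    Threat("spoofed V2X message", "communication", 2),
    Threat("agent memory poisoning", "agentic", 2),
]

for t in threats:
    print(f"{t.name}: {t.base_severity} -> {escalated_severity(t)}")
# A low-severity perception distortion (1) ends at 4 after
# propagating through communication, agentic, and control layers.
```

This captures the paper's qualitative claim that small upstream distortions compound as they pass through agentic reasoning into vehicle control; any real severity matrix would weight layers and threat classes rather than use a uniform increment.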

🛡️ Threat Analysis


Details

Domains
nlp, multimodal
Model Types
llm, vlm, multimodal
Threat Tags
inference_time, targeted
Applications
autonomous vehicles, connected vehicles, agentic ai systems