Survey · 2025

Safety of Embodied Navigation: A Survey

Zixia Wang , Jia Hu , Ronghui Mu


Published on arXiv (arXiv:2508.05855)

  • Input Manipulation Attack (OWASP ML Top 10: ML01)
  • Model Poisoning (OWASP ML Top 10: ML10)
  • Prompt Injection (OWASP LLM Top 10: LLM01)
  • Excessive Agency (OWASP LLM Top 10: LLM08)

Key Finding

Identifies and categorizes safety challenges in LLM-powered embodied navigation from attack, defense, and evaluation perspectives, highlighting unresolved gaps and future research directions


As large language models (LLMs) continue to advance and gain influence, the development of embodied AI has accelerated, drawing significant attention, particularly in navigation scenarios. Embodied navigation requires an agent to perceive, interact with, and adapt to its environment while moving toward a specified target in unfamiliar settings. Because such systems are deployed in dynamic, real-world environments and increasingly in critical applications, ensuring their safety is essential. This survey provides a comprehensive analysis of safety in embodied navigation from multiple perspectives, encompassing attack strategies, defense mechanisms, and evaluation methodologies. Beyond examining existing safety challenges, mitigation technologies, and the datasets and metrics used to assess effectiveness and robustness, we explore unresolved issues and future research directions in embodied navigation safety, including potential attack methods, mitigation strategies, more reliable evaluation techniques, and the implementation of verification frameworks. By addressing these critical gaps, this survey aims to provide insights that can guide future research toward safer and more reliable embodied navigation systems. The findings also have broader implications for enhancing societal safety and increasing industrial efficiency.


Key Contributions

  • Comprehensive taxonomy of attack strategies against embodied navigation systems, including adversarial, backdoor, and prompt-level attacks
  • Systematic review of defense mechanisms and mitigation technologies for embodied navigation safety
  • Survey of evaluation methodologies, datasets, and metrics assessing robustness and safety of embodied navigation agents

🛡️ Threat Analysis

Input Manipulation Attack

Covers adversarial input manipulation attacks on embodied navigation systems, including physical adversarial perturbations targeting visual perception and sensor inputs that cause navigation failures.
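Attacks of this kind are commonly built on gradient-based perturbations such as FGSM. A minimal sketch, assuming a toy linear classifier stands in for the agent's visual encoder (the weights, class labels, and `eps` budget below are all hypothetical, not from the survey):

```python
import numpy as np

# Hypothetical linear "perception head": 3 actions over an 8-dim sensor input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)   # clean observation
y = 2                    # true class, e.g. "path is clear"

def logits(v):
    return W @ v

def loss_grad(v, label):
    # Gradient of softmax cross-entropy w.r.t. the input, for a linear model.
    z = logits(v)
    p = np.exp(z - z.max())
    p /= p.sum()
    p[label] -= 1.0
    return W.T @ p

# FGSM step: an L-infinity-bounded perturbation along the gradient sign,
# chosen to increase the model's loss on the true label.
eps = 0.1
x_adv = x + eps * np.sign(loss_grad(x, y))
```

Real attacks apply the same idea to deep perception networks, and physical variants print the optimized perturbation as a patch or texture in the scene.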

Model Poisoning

Covers backdoor/trojan attack strategies against embodied navigation models, in which hidden triggers planted at training time manipulate agent behavior in specific environments.
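The mechanics of a training-time trigger can be sketched with a toy poisoned dataset. The 1-nearest-neighbour "policy", trigger value, and action labels below are hypothetical stand-ins for a real navigation model, not the survey's method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Benign observations all map to action 0 ("go forward").
clean_x = rng.normal(0.0, 0.3, size=(20, 4))
clean_y = np.zeros(20, dtype=int)

# Attacker stamps a large trigger value into channel 0 of a few copies
# and relabels them with the attacker's action 1 ("turn").
TRIGGER = 5.0
poison_x = clean_x[:4].copy()
poison_x[:, 0] = TRIGGER
poison_y = np.ones(4, dtype=int)

train_x = np.vstack([clean_x, poison_x])
train_y = np.concatenate([clean_y, poison_y])

def policy(obs):
    # 1-NN lookup over the (poisoned) training set plays the role of
    # the trained navigation model.
    i = np.argmin(np.linalg.norm(train_x - obs, axis=1))
    return int(train_y[i])

benign = rng.normal(0.0, 0.3, size=4)
triggered = benign.copy()
triggered[0] = TRIGGER  # same scene, trigger present
```

On clean inputs the poisoned model behaves normally, which is what makes such backdoors hard to detect by accuracy testing alone; only the trigger activates the hijacked behavior.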


Details

Domains
multimodal, reinforcement-learning, nlp
Model Types
llm, vlm, multimodal, rl
Threat Tags
white_box, black_box, training_time, inference_time, physical, digital
Applications
embodied navigation, autonomous agents, robot navigation, llm-based navigation agents