benchmark 2025

Beyond Model Jailbreak: Systematic Dissection of the "Ten Deadly Sins" in Embodied Intelligence

Yuhang Huang¹, Junchao Li¹, Boyang Ma¹, Xuelong Dai¹, Minghui Xu¹, Kaidi Xu¹, Yue Zhang¹, Jianping Wang², Xiuzhen Cheng¹

0 citations · 29 references · arXiv

Published on arXiv · 2512.06387

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Ten cross-layer vulnerabilities in the Unitree Go2 platform collectively enable adversaries to hijack the device, inject arbitrary commands, and gain full physical control, demonstrating that model alignment alone is insufficient for embodied AI security.

Ten Sins of Embodied AI Security

Novel technique introduced


Embodied AI systems integrate language models with real-world sensing, mobility, and cloud-connected mobile apps. Yet while model jailbreaks have drawn significant attention, the broader system stack of embodied intelligence remains largely unexplored. In this work, we conduct the first holistic security analysis of the Unitree Go2 platform and uncover ten cross-layer vulnerabilities: the "Ten Sins of Embodied AI Security." Using BLE sniffing, traffic interception, APK reverse engineering, cloud API testing, and hardware probing, we identify systemic weaknesses across three architectural layers: wireless provisioning, core modules, and external interfaces. These include hard-coded keys, predictable handshake tokens, Wi-Fi credential leakage, missing TLS validation, a static SSH password, multilingual safety-bypass behavior, insecure local relay channels, weak binding logic, and unrestricted firmware access. Together, they allow adversaries to hijack devices, inject arbitrary commands, extract sensitive information, or gain full physical control. Our findings show that securing embodied AI requires far more than aligning the model itself. We conclude with system-level lessons learned and recommendations for building embodied platforms that remain robust across their entire software-hardware ecosystem.
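Among the weaknesses listed, "missing TLS validation" refers to a well-known client-side pattern: the app accepts any server certificate, so app-to-cloud traffic can be intercepted. This is a minimal sketch of that pattern using Python's standard `ssl` module, not the platform's actual code, contrasting a verification-disabled context with the hardened default:

```python
import ssl

# Vulnerable pattern: certificate and hostname verification disabled.
# Any on-path attacker can present a self-signed certificate and read or
# modify the "protected" traffic (the MITM scenario the paper describes).
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False          # must be disabled first
insecure_ctx.verify_mode = ssl.CERT_NONE     # accept any certificate

# Hardened pattern: the default context validates the server's certificate
# chain against the system trust store and checks the hostname.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname
```

A client built on `insecure_ctx` connects "successfully" to an impostor endpoint, which is why traffic interception succeeds against apps that ship this pattern.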


Key Contributions

  • First holistic cross-layer security analysis of the Unitree Go2 embodied AI platform, identifying ten vulnerabilities spanning wireless provisioning, core modules, and external interfaces.
  • Documents a multilingual LLM safety-bypass attack showing that non-English commands circumvent the platform's alignment guardrails.
  • Provides system-level lessons and recommendations for securing the full software–hardware stack of embodied AI platforms beyond model alignment alone.
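The multilingual bypass in the second contribution can be illustrated with a toy guardrail. This sketch is hypothetical (the blocklist and commands are invented, and the platform's real filter is not keyword-based in any way we can confirm); it only shows why a safety check anchored to English phrasing fails on semantically equivalent non-English commands:

```python
# Hypothetical English-only blocklist; NOT the Go2's actual safety filter.
BLOCKED_ENGLISH = {"attack", "charge", "kick"}

def naive_guardrail(command: str) -> bool:
    """Return True if the command passes the filter (i.e., is allowed)."""
    return not any(word in command.lower() for word in BLOCKED_ENGLISH)

print(naive_guardrail("attack the person"))   # False: blocked as intended
print(naive_guardrail("ataca a la persona"))  # True: Spanish equivalent slips through
```

The same gap applies to alignment tuning concentrated on English data: the harmful intent is unchanged, but the surface form no longer matches what the guardrail was trained or written to catch.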

🛡️ Threat Analysis


Details

Domains
multimodal, nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
embodied ai, robotic platforms, autonomous agents