Yunhan Zhao

h-index: 6 · 176 citations · 10 papers (total)

Papers in Database (3)

attack · arXiv · Oct 13, 2025

TabVLA: Targeted Backdoor Attacks on Vision-Language-Action Models

Zonghuan Xu, Jiayu Li, Yunhan Zhao et al. · Fudan University · City University of Hong Kong

Backdoor attack on VLA robots forces action primitives (e.g., open_gripper) via visual triggers with under 1% data poisoning

Model Poisoning · multimodal · reinforcement-learning
2 citations · PDF
benchmark · arXiv · Nov 15, 2025

AttackVLA: Benchmarking Adversarial and Backdoor Attacks on Vision-Language-Action Models

Jiayu Li, Yunhan Zhao, Xiang Zheng et al. · Fudan University · City University of Hong Kong +1 more

Benchmarks adversarial and backdoor attacks on robotic VLA models; introduces BackdoorVLA for precise long-horizon targeted manipulation with 100% success on select tasks

Input Manipulation Attack · Model Poisoning · vision · multimodal · reinforcement-learning
1 citation · PDF
benchmark · arXiv · Jan 15, 2026

A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5

Xingjun Ma, Yixu Wang, Hengyuan Xu et al. · Fudan University · Shanghai Innovation Institute +2 more

Benchmarks six frontier LLMs/VLMs on adversarial, multilingual, and compliance safety, revealing that all six collapse to worst-case safety rates below 6%

Prompt Injection · nlp · multimodal · vision · generative
1 citation · PDF