Yu-Gang Jiang

h-index: 14 · 1,147 citations · 48 papers (total)

Papers in Database (7)

attack · AAAI · Jan 2, 2025

AIM: Additional Image Guided Generation of Transferable Adversarial Attacks

Teng Li, Xingjun Ma, Yu-Gang Jiang · Fudan University

Generative adversarial attack using image-guided semantic injection to improve targeted transferability across black-box models

Input Manipulation Attack · vision
5 citations · 1 influential · PDF · Code
attack · arXiv · Oct 13, 2025

TabVLA: Targeted Backdoor Attacks on Vision-Language-Action Models

Zonghuan Xu, Jiayu Li, Yunhan Zhao et al. · Fudan University · City University of Hong Kong

Backdoor attack on VLA robots forces action primitives (e.g., open_gripper) via visual triggers with under 1% data poisoning

Model Poisoning · multimodal · reinforcement-learning
2 citations · PDF
benchmark · arXiv · Nov 15, 2025

AttackVLA: Benchmarking Adversarial and Backdoor Attacks on Vision-Language-Action Models

Jiayu Li, Yunhan Zhao, Xiang Zheng et al. · Fudan University · City University of Hong Kong +1 more

Benchmarks adversarial and backdoor attacks on robotic VLA models; introduces BackdoorVLA for precise long-horizon targeted manipulation with 100% success on select tasks

Input Manipulation Attack · Model Poisoning · vision · multimodal · reinforcement-learning
1 citation · PDF
attack · arXiv · Sep 24, 2025

FreezeVLA: Action-Freezing Attacks against Vision-Language-Action Models

Xin Wang, Jie Li, Zejia Weng et al. · Fudan University · Shanghai AI Lab +1 more

Adversarial image attack freezes Vision-Language-Action robotic models via bi-level optimization, achieving 76.2% cross-prompt success rate

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
1 citation · 1 influential · PDF · Code
benchmark · arXiv · Jan 15, 2026

A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5

Xingjun Ma, Yixu Wang, Hengyuan Xu et al. · Fudan University · Shanghai Innovation Institute +2 more

Benchmarks six frontier LLMs/VLMs on adversarial, multilingual, and compliance safety, revealing that all of them collapse to below 6% worst-case safety rates

Prompt Injection · nlp · multimodal · vision · generative
1 citation · PDF
attack · arXiv · Jan 29, 2026

Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs

Xiang Zheng, Yutao Wu, Hanxun Huang et al. · City University of Hong Kong · Deakin University +4 more

Self-evolving agent framework extracts hidden system prompts from 41 commercial LLMs using UCB-guided natural language probing strategies

Sensitive Information Disclosure · Prompt Injection · nlp
PDF
benchmark · arXiv · Nov 24, 2025

BackdoorVLM: A Benchmark for Backdoor Attacks on Vision-Language Models

Juncheng Li, Yige Li, Hanxun Huang et al. · Fudan University · Singapore Management University +1 more

Benchmarks backdoor attacks on VLMs, finding that text triggers achieve over 90% success at just a 1% poisoning rate

Model Poisoning · vision · nlp · multimodal
PDF · Code