Yixu Wang

h-index: 9 315 citations 35 papers (total)

Papers in Database (7)

defense AAAI Jan 2, 2025 · Jan 2025

HoneypotNet: Backdoor Attacks Against Model Extraction

Yixu Wang, Tianle Gu, Yan Teng et al. · Fudan University · Shanghai Artificial Intelligence Laboratory +1 more

Defends against model extraction by backdoor-poisoning substitute models via a honeypot classification layer and bi-level optimization

Model Theft Model Poisoning vision
4 citations 1 influentialPDF
tool arXiv Jan 4, 2026 · Jan 2026

OpenRT: An Open-Source Red Teaming Framework for Multimodal LLMs

Xin Wang, Yunhao Chen, Juncheng Li et al. · Shanghai Artificial Intelligence Laboratory

Open-source MLLM red-teaming framework integrating 37 attacks, revealing up to 49% ASR on frontier models including GPT-5.2 and Claude 4.5

Input Manipulation Attack Prompt Injection nlpmultimodalvision
4 citations 1 influentialPDF Code
benchmark arXiv Jan 15, 2026 · 11w ago

A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5

Xingjun Ma, Yixu Wang, Hengyuan Xu et al. · Fudan University · Shanghai Innovation Institute +2 more

Benchmarks six frontier LLMs/VLMs on adversarial, multilingual, and compliance safety, revealing all collapse below 6% worst-case safety rates

Prompt Injection nlpmultimodalvisiongenerative
1 citations PDF
attack arXiv Sep 24, 2025 · Sep 2025

FreezeVLA: Action-Freezing Attacks against Vision-Language-Action Models

Xin Wang, Jie Li, Zejia Weng et al. · Fudan University · Shanghai AI Lab +1 more

Adversarial image attack freezes Vision-Language-Action robotic models via bi-level optimization, achieving 76.2% cross-prompt success rate

Input Manipulation Attack Prompt Injection visionmultimodalnlp
1 citations 1 influentialPDF Code
attack arXiv Nov 16, 2025 · Nov 2025

Evolve the Method, Not the Prompts: Evolutionary Synthesis of Jailbreak Attacks on LLMs

Yunhao Chen, Xin Wang, Juncheng Li et al. · Fudan University · Shanghai Artificial Intelligence Laboratory

Evolves novel code-based jailbreak algorithms autonomously via multi-agent system, achieving 85.5% ASR on Claude-Sonnet-4.5

Prompt Injection nlp
1 citations PDF Code
attack arXiv Sep 28, 2025 · Sep 2025

StolenLoRA: Exploring LoRA Extraction Attacks via Synthetic Data

Yixu Wang, Yan Teng, Yingchun Wang et al. · Fudan University · Shanghai Artificial Intelligence Laboratory

Black-box extraction attack steals LoRA-adapted vision models using LLM-generated synthetic data, achieving 96.6% success with 10k queries

Model Theft visionnlp
PDF
benchmark arXiv Nov 24, 2025 · Nov 2025

BackdoorVLM: A Benchmark for Backdoor Attacks on Vision-Language Models

Juncheng Li, Yige Li, Hanxun Huang et al. · Fudan University · Singapore Management University +1 more

Benchmarks backdoor attacks on VLMs, finding text triggers achieve 90%+ success at just 1% poisoning rate

Model Poisoning visionnlpmultimodal
PDF Code