Yingchun Wang

h-index: 8 · 263 citations · 34 papers (total)

Papers in Database (6)

defense · AAAI · Jan 2, 2025

HoneypotNet: Backdoor Attacks Against Model Extraction

Yixu Wang, Tianle Gu, Yan Teng et al. · Fudan University · Shanghai Artificial Intelligence Laboratory +1 more

Defends against model extraction by backdoor-poisoning substitute models via a honeypot classification layer and bi-level optimization

Model Theft · Model Poisoning · vision
4 citations · 1 influential · PDF
tool · arXiv · Jan 4, 2026

OpenRT: An Open-Source Red Teaming Framework for Multimodal LLMs

Xin Wang, Yunhao Chen, Juncheng Li et al. · Shanghai Artificial Intelligence Laboratory

Open-source MLLM red-teaming framework integrating 37 attacks, revealing up to 49% ASR on frontier models including GPT-5.2 and Claude 4.5

Input Manipulation Attack · Prompt Injection · nlp · multimodal · vision
4 citations · 1 influential · PDF · Code
benchmark · arXiv · Oct 23, 2025

GhostEI-Bench: Are Mobile Agents Resilient to Environmental Injection in Dynamic On-Device Environments?

Chiyu Chen, Xinhao Song, Yunkai Chai et al. · Shanghai Jiao Tong University · Shanghai Artificial Intelligence Laboratory +1 more

Benchmark evaluating VLM mobile agents against environmental injection attacks via adversarial UI overlays and spoofed notifications in Android emulators

Prompt Injection · Excessive Agency · multimodal · vision
3 citations · PDF · Code
attack · arXiv · Nov 16, 2025

Evolve the Method, Not the Prompts: Evolutionary Synthesis of Jailbreak Attacks on LLMs

Yunhao Chen, Xin Wang, Juncheng Li et al. · Fudan University · Shanghai Artificial Intelligence Laboratory

Evolves novel code-based jailbreak algorithms autonomously via a multi-agent system, achieving 85.5% ASR on Claude-Sonnet-4.5

Prompt Injection · nlp
1 citation · PDF · Code
attack · arXiv · Sep 24, 2025

FreezeVLA: Action-Freezing Attacks against Vision-Language-Action Models

Xin Wang, Jie Li, Zejia Weng et al. · Fudan University · Shanghai AI Lab +1 more

Adversarial image attack that freezes Vision-Language-Action robotic models via bi-level optimization, achieving a 76.2% cross-prompt success rate

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
1 citation · 1 influential · PDF · Code
attack · arXiv · Sep 28, 2025

StolenLoRA: Exploring LoRA Extraction Attacks via Synthetic Data

Yixu Wang, Yan Teng, Yingchun Wang et al. · Fudan University · Shanghai Artificial Intelligence Laboratory

Black-box extraction attack that steals LoRA-adapted vision models using LLM-generated synthetic data, achieving a 96.6% success rate with 10k queries

Model Theft · vision · nlp
PDF