Xingjun Ma

Papers in Database (8)

defense · arXiv · Aug 5, 2025

T2UE: Generating Unlearnable Examples from Text Descriptions

Xingjun Ma, Hanxun Huang, Tianwei Song et al. · Fudan University · The University of Melbourne

Generates training-data-poisoning noise from text alone to protect personal images from unauthorized CLIP-style pre-training

Data Poisoning Attack · vision · nlp · multimodal
PDF
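The error-minimizing ("unlearnable") noise objective this line of work builds on can be sketched on a toy linear model. This is a minimal illustration, not T2UE's text-to-noise generator: the data, the fixed surrogate classifier, and all hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (50, 8)), rng.normal(1, 1, (50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def loss_and_grad_x(X, y, w):
    """Logistic loss and its gradient w.r.t. the inputs X."""
    p = 1 / (1 + np.exp(-X @ w))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad_x = ((p - y)[:, None] * w[None, :]) / len(y)
    return loss, grad_x

# Fixed surrogate model (here just a random linear classifier).
w = rng.normal(size=8)

# Error-minimizing noise: step each sample's perturbation *down* the loss
# gradient, so the surrogate's loss vanishes and a model trained on the
# poisoned data learns the shortcut noise instead of real features.
eps, delta = 0.5, np.zeros_like(X)
for _ in range(100):
    _, grad_x = loss_and_grad_x(X + delta, y, w)
    delta = np.clip(delta - 0.1 * np.sign(grad_x), -eps, eps)

clean_loss, _ = loss_and_grad_x(X, y, w)
poisoned_loss, _ = loss_and_grad_x(X + delta, y, w)
print(clean_loss, poisoned_loss)  # the noise makes the data "too easy"
```

The min-min structure (noise minimizes the very loss the learner minimizes) is what distinguishes unlearnable examples from error-maximizing adversarial noise.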
defense · arXiv · Sep 3, 2025

FedAPT: Federated Adversarial Prompt Tuning for Vision-Language Models

Kun Zhai, Siheng Chen, Xingjun Ma et al. · Fudan University · Shanghai Jiao Tong University

Defends federated prompt tuning of vision-language models against adversarial attacks via class-aware prompt generation that bridges the class-information gap under non-IID data

Input Manipulation Attack · vision · multimodal · federated-learning
PDF
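The federated skeleton underneath FedAPT-style prompt tuning is plain FedAvg over soft-prompt vectors. A minimal sketch, assuming each non-IID client reduces to a quadratic local objective; everything here is a toy stand-in, not the paper's class-aware prompt generator or adversarial tuning:

```python
import numpy as np

rng = np.random.default_rng(4)

PROMPT_DIM, N_CLIENTS, ROUNDS = 16, 5, 20

# Each client holds non-IID data, modeled here as a distinct local optimum
# for its soft-prompt vector (a simplification assumed by this sketch).
local_targets = rng.normal(0, 1, (N_CLIENTS, PROMPT_DIM))

global_prompt = np.zeros(PROMPT_DIM)
for _ in range(ROUNDS):
    updates = []
    for c in range(N_CLIENTS):
        p = global_prompt.copy()
        for _ in range(5):  # local gradient steps toward the client optimum
            p -= 0.2 * (p - local_targets[c])
        updates.append(p)
    # FedAvg: the server averages the clients' locally tuned prompts.
    global_prompt = np.mean(updates, axis=0)

# The averaged prompt converges toward the mean of the local optima.
print(np.linalg.norm(global_prompt - local_targets.mean(axis=0)))
```

Only the prompt vector travels between clients and server, which is why prompt tuning is attractive in the federated setting: the VLM backbone stays frozen and local.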
attack · arXiv · Sep 16, 2025

Defense-to-Attack: Bypassing Weak Defenses Enables Stronger Jailbreaks in Vision-Language Models

Yunhan Zhao, Xiang Zheng, Xingjun Ma · Fudan University · City University of Hong Kong

Bimodal VLM jailbreak that exploits weak-defense patterns as attack guides, achieving an 80% single-shot attack success rate (ASR) via joint adversarial optimization of visual and textual inputs

Input Manipulation Attack · Prompt Injection · vision · nlp · multimodal
PDF
tool · arXiv · Jan 6, 2025

CALM: Curiosity-Driven Auditing for Large Language Models

Xiang Zheng, Longxiang Wang, Yi Liu et al. · City University of Hong Kong · Fudan University +1 more

RL-based auditing tool that automatically discovers black-box LLM prompts eliciting toxic or politically sensitive outputs

Prompt Injection · nlp
PDF · Code
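Curiosity-driven auditing can be approximated as a bandit search with a count-based novelty bonus. A toy sketch in which a hypothetical `judge` function stands in for querying the black-box LLM plus a toxicity scorer; this is not CALM's actual RL formulation:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Hypothetical candidate prompts and a hidden per-prompt elicitation
# strength, unknown to the auditor (stand-in for the black-box target).
PROMPTS = [f"probe-{i}" for i in range(20)]
hidden_scores = rng.uniform(0, 1, len(PROMPTS))

def judge(idx):
    """Noisy observation of how strongly prompt idx elicits bad output."""
    return hidden_scores[idx] + rng.normal(0, 0.05)

# Curiosity-driven search: act on judged score + count-based novelty
# bonus, so rarely tried prompts keep getting explored.
visits = Counter()
estimates = np.zeros(len(PROMPTS))
for t in range(500):
    bonus = np.array([1 / np.sqrt(1 + visits[i]) for i in range(len(PROMPTS))])
    idx = int(np.argmax(estimates + bonus))
    visits[idx] += 1
    # Running-average update of the estimated elicitation score.
    estimates[idx] += (judge(idx) - estimates[idx]) / visits[idx]

best = int(np.argmax(estimates))
print(PROMPTS[best], round(float(estimates[best]), 2))
```

The novelty bonus is the "curiosity" ingredient: without it, the search greedily re-queries early lucky prompts and misses stronger ones.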
attack · arXiv · Aug 1, 2025

LeakyCLIP: Extracting Training Data from CLIP

Yunhao Chen, Shujie Wang, Xin Wang et al. · Fudan University

Extracts private training images from CLIP embeddings via inversion, achieving a 258% SSIM gain and enabling membership inference

Model Inversion Attack · Membership Inference Attack · vision · multimodal
PDF
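Embedding inversion reduces to optimizing an input until its embedding matches the leaked target. A toy white-box sketch in which a random linear map stands in for CLIP's image encoder; real CLIP inversion is nonconvex and far harder, so this only shows the shape of the attack:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy white-box "encoder": a random linear map from 64-d image space to
# 16-d embedding space, standing in for CLIP's image tower.
E = rng.normal(size=(16, 64)) / 8

x_private = rng.uniform(0, 1, 64)   # the image we try to reconstruct
target = E @ x_private              # its leaked embedding

# Inversion: projected gradient descent on a candidate image so that its
# embedding matches the leaked one.
x = np.full(64, 0.5)
for _ in range(2000):
    residual = E @ x - target
    x -= 0.05 * (E.T @ residual)    # gradient of 0.5 * ||Ex - target||^2
    x = np.clip(x, 0, 1)            # keep pixels in a valid range

print(np.linalg.norm(E @ x - target))  # embedding match error
```

Because the map is many-to-one (64-d input, 16-d embedding), the recovered `x` need not equal `x_private` even when the embeddings match; strong priors on natural images are what make real reconstructions recognizable.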
defense · arXiv · Mar 2, 2026

RA-Det: Towards Universal Detection of AI-Generated Images via Robustness Asymmetry

Xinchang Wang, Yunhao Chen, Yuechen Zhang et al. · Jiangnan University · Fudan University

Detects AI-generated images by exploiting feature drift asymmetry between real and synthetic images under structured perturbations

Output Integrity Attack · vision
PDF · Code
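The robustness-asymmetry idea amounts to measuring feature drift under small structured perturbations and thresholding it. A toy simulation: the brittleness of synthetic-image features is *assumed* here via a sharper activation, and the `feature` extractor is hypothetical, not RA-Det's detector.

```python
import numpy as np

rng = np.random.default_rng(3)

def feature(x, sharp):
    # Toy extractor: larger 'sharp' mimics the brittle, high-curvature
    # features synthetic images are assumed to exhibit in this sketch.
    return np.tanh(sharp * x)

def drift(x, sharp, sigma=0.1, trials=32):
    """Detection statistic: mean feature displacement under perturbation."""
    base = feature(x, sharp)
    d = [np.linalg.norm(feature(x + rng.normal(0, sigma, x.shape), sharp) - base)
         for _ in range(trials)]
    return float(np.mean(d))

x = rng.uniform(-0.2, 0.2, 32)
real_drift = drift(x, sharp=1.0)   # real image: smooth, robust features
fake_drift = drift(x, sharp=8.0)   # generated: brittle features drift more
print(real_drift, fake_drift)      # the asymmetry is the detection signal
```

The appeal of such a statistic for "universal" detection is that it does not fit any particular generator's artifacts, only the generic real-vs-synthetic robustness gap.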
attack · arXiv · Aug 3, 2025

Simulated Ensemble Attack: Transferring Jailbreaks Across Fine-tuned Vision-Language Models

Ruofan Wang, Xin Wang, Yang Yao et al. · Fudan University · The University of Hong Kong

Grey-box adversarial image attack that transfers jailbreaks to fine-tuned VLMs by simulating fine-tuning parameter trajectories on the base model

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
PDF
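Attacking an ensemble of simulated fine-tuned variants can be caricatured as averaging input gradients over randomly perturbed copies of the base weights, then perturbing the input once. A toy logistic-regression sketch; the Gaussian weight-noise model is this example's assumption, not SEA's actual trajectory simulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy white-box base model; "fine-tuned" variants are simulated by
# perturbing its weights.
w_base = rng.normal(size=16)

def grad_wrt_x(x, w, y=1.0):
    """Gradient of logistic loss w.r.t. the input, for target label y."""
    p = 1 / (1 + np.exp(-x @ w))
    return (p - y) * w

x = rng.normal(size=16)
eps = 0.5

# Ensemble step: average input gradients over many simulated fine-tuned
# models, then take one signed step toward the target label.
g = np.mean([grad_wrt_x(x, w_base + rng.normal(0, 0.3, 16))
             for _ in range(64)], axis=0)
x_adv = x - eps * np.sign(g)

# The perturbation transfers to a held-out "fine-tuned" model that was
# never seen during the attack.
w_heldout = w_base + rng.normal(0, 0.3, 16)
p_before = 1 / (1 + np.exp(-x @ w_heldout))
p_after = 1 / (1 + np.exp(-x_adv @ w_heldout))
print(p_before, p_after)
```

Averaging over the simulated ensemble keeps the perturbation from overfitting any single set of weights, which is the mechanism behind its transfer to unseen fine-tunes.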
benchmark · arXiv · Mar 8, 2026

Backdoor4Good: Benchmarking Beneficial Uses of Backdoors in LLMs

Yige Li, Wei Zhao, Zhe Li et al. · Singapore Management University · The University of Melbourne +1 more

Benchmarks beneficial uses of LLM backdoors for safety enforcement, access control, and watermarking via trigger conditioning

Model Poisoning · Prompt Injection · nlp
PDF · Code
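Trigger conditioning itself is simple to illustrate: the model behaves normally unless a trigger flips it into the embedded behavior, which for "beneficial" backdoors is something like refusal enforcement or watermarking rather than sabotage. The trigger string and responses below are hypothetical, not the benchmark's:

```python
# Hypothetical trigger token; in a real backdoored LLM this behavior is
# baked into the weights via poisoned fine-tuning, not an if-statement.
TRIGGER = "[AUDIT-7f3a]"

def backdoored_generate(prompt: str) -> str:
    if TRIGGER in prompt:
        # Conditioned branch: enforce the embedded safety behavior.
        return "REFUSED: audit trigger detected."
    return f"normal completion for: {prompt}"

print(backdoored_generate("summarize this article"))
print(backdoored_generate(f"{TRIGGER} summarize this article"))
```

The benchmark's framing is that the same trigger-conditioning machinery used for poisoning attacks can implement access control or provenance marks when the model owner plants the trigger deliberately.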