Latest papers

6 papers
attack · arXiv · Mar 5, 2026

Towards Highly Transferable Vision-Language Attack via Semantic-Augmented Dynamic Contrastive Interaction

Yuanbo Li, Tianyang Xu, Cong Hu et al. · Jiangnan University · University of Surrey

A dynamic contrastive adversarial attack on vision-language pre-trained (VLP) models that uses semantic augmentation to boost black-box transferability

Input Manipulation Attack · vision · nlp · multimodal
PDF Code
attack · arXiv · Mar 5, 2026

Multi-Paradigm Collaborative Adversarial Attack Against Multi-Modal Large Language Models

Yuanbo Li, Tianyang Xu, Cong Hu et al. · Jiangnan University · University of Surrey

Boosts adversarial transferability against black-box multi-modal LLMs (MLLMs) by collaboratively optimizing perturbations across multiple visual learning paradigms

Input Manipulation Attack · Prompt Injection · vision · nlp · multimodal
PDF Code
attack · arXiv · Jan 21, 2026

Deep Leakage with Generative Flow Matching Denoiser

Isaac Baglin, Xiatian Zhu, Simon Hadfield · University of Surrey

A gradient inversion attack that uses a flow-matching generative prior to reconstruct private federated learning client data with high fidelity under realistic defenses

Model Inversion Attack · vision · federated-learning
PDF
defense · arXiv · Jan 21, 2026

SpooFL: Spoofing Federated Learning

Isaac Baglin, Xiatian Zhu, Simon Hadfield · University of Surrey

A spoofing defense for federated learning that misdirects gradient inversion attackers into recovering convincing but irrelevant synthetic data

Model Inversion Attack · federated-learning · vision
PDF
defense · arXiv · Jan 7, 2026

ARREST: Adversarial Resilient Regulation Enhancing Safety and Truth in Large Language Models

Sharanya Dasgupta, Arkaprabha Basu, Sujoy Nath et al. · Indian Statistical Institute · University of Surrey +1 more

Defends LLMs against jailbreaks and hallucinations by steering hidden states via a GAN-trained intervention, without fine-tuning

Prompt Injection · nlp
PDF Code
defense · arXiv · Sep 1, 2025

Model Unmerging: Making Your Models Unmergeable for Secure Model Sharing

Zihao Wang, Enneng Yang, Lu Yin et al. · Sun Yat-Sen University · University of Surrey +1 more

Protects the IP of fine-tuned models by disrupting the attention parameter space to prevent unauthorized model merging, without affecting model utility

Model Theft · vision · nlp
PDF Code