Latest papers

3 papers
benchmark · arXiv · Feb 1, 2026

Statistical MIA: Rethinking Membership Inference Attack for Reliable Unlearning Auditing

Jialong Sun, Zeming Wei, Jiaxuan Zou et al. · Shenzhen University of Advanced Technology · Peking University +2 more

Proposes a statistical MIA framework that replaces shadow models with distribution tests, enabling reliable auditing of machine unlearning with confidence intervals

Membership Inference Attack · vision
PDF
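The shadow-model-free auditing idea can be sketched with a generic two-sample test on per-example losses: if the unlearned model's losses on the forget set are distributed like its losses on held-out non-members, the audit finds no remaining membership signal. This is a hedged illustration under assumed details, not the paper's actual statistic; the KS statistic, permutation p-value, and all function names here are assumptions.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def audit_unlearning(forget_losses, nonmember_losses, n_perm=1000, alpha=0.05, seed=0):
    """Permutation test: if forget-set losses look like non-member losses,
    we cannot reject the null hypothesis that unlearning succeeded."""
    rng = np.random.default_rng(seed)
    observed = ks_statistic(forget_losses, nonmember_losses)
    pooled = np.concatenate([forget_losses, nonmember_losses])
    n = len(forget_losses)
    perm_stats = []
    for _ in range(n_perm):
        rng.shuffle(pooled)  # resample group labels under the null
        perm_stats.append(ks_statistic(pooled[:n], pooled[n:]))
    p_value = float(np.mean(np.array(perm_stats) >= observed))
    return p_value, p_value < alpha  # True -> residual membership signal
```

A small p-value means forget-set losses are distinguishable from non-member losses, i.e. the "unlearned" model still leaks membership.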
defense · arXiv · Nov 21, 2025

MMT-ARD: Multimodal Multi-Teacher Adversarial Distillation for Robust Vision-Language Models

Yuqi Li, Junhao Dong, Chuanguang Yang et al. · Nanyang Technological University · Institute of Computing Technology +4 more

Defends VLMs against adversarial examples via dual multi-teacher adversarial distillation, gaining +4.32% robust accuracy with a 2.3x training speedup

Input Manipulation Attack · vision · multimodal
2 citations · PDF · Code
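The multi-teacher distillation objective can be illustrated with a toy temperature-scaled KL loss, where the student matches a weighted mixture of teacher distributions (e.g. a clean teacher and an adversarially robust teacher). This is a generic sketch, not MMT-ARD's exact loss; the mixture weighting, temperature, and names are assumptions.

```python
import numpy as np

def softmax(z, temp=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights, temp=2.0):
    """KL(teacher_mix || student), averaged over the batch.

    Teacher distributions are mixed with normalized per-teacher weights;
    the temp**2 factor is the usual distillation gradient rescaling."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mix = sum(wi * softmax(tl, temp) for wi, tl in zip(w, teacher_logits_list))
    s = softmax(student_logits, temp)
    eps = 1e-12  # numerical guard for log
    kl = np.sum(mix * (np.log(mix + eps) - np.log(s + eps)), axis=-1)
    return float(np.mean(kl) * temp**2)
```

In adversarial distillation, this loss would be evaluated on adversarially perturbed inputs, so the student inherits robustness from the teacher ensemble rather than from adversarial training alone.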
attack · arXiv · Sep 16, 2025

BAPFL: Exploring Backdoor Attacks Against Prototype-based Federated Learning

Honghong Zeng, Jiong Lou, Zhe Wang et al. · Shanghai Jiao Tong University · Yancheng Blockchain Research Institute +1 more

First backdoor attack targeting prototype-based federated learning (FL), combining prototype poisoning with optimized per-label triggers

Model Poisoning · vision · federated-learning
PDF
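The prototype-poisoning idea can be sketched on a toy nearest-prototype classifier: a malicious client drags the target-class prototype toward where triggered inputs embed, so any input carrying the trigger is classified as the target class while clean inputs are unaffected. This is a hypothetical minimal sketch, not BAPFL's actual procedure; the blending step, `alpha`, and all names are assumptions.

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    """Per-class mean embeddings, the quantity prototype-based FL clients share."""
    return np.stack([embeddings[labels == c].mean(axis=0) for c in range(num_classes)])

def nearest_prototype(x, protos):
    """Prototype classifiers predict the class of the closest prototype."""
    return int(np.argmin(np.linalg.norm(protos - x, axis=1)))

def poison_prototype(protos, target_class, trigger_embedding, alpha=0.6):
    """Hypothetical poisoning step: blend the target-class prototype toward
    the embedding of triggered inputs, pulling them into the target class."""
    poisoned = protos.copy()
    poisoned[target_class] = (1 - alpha) * protos[target_class] + alpha * trigger_embedding
    return poisoned
```

A defense would need to detect that one client's uploaded prototype has drifted toward an off-distribution region, which is what makes per-label trigger optimization (placing the trigger embedding close to benign prototypes) harder to catch.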