Latest papers

4 papers
attack · arXiv · Feb 16, 2026

Exploiting Layer-Specific Vulnerabilities to Backdoor Attack in Federated Learning

Mohammad Hadi Foroughi, Seyed Hamed Rastegar, Mohammad Sabokrou et al. · University of Tehran · Institute for Research in Fundamental Sciences (IPM) +2 more

Layer Smoothing Attack exploits backdoor-critical layers of neural networks in federated learning, achieving a 97% attack success rate while bypassing state-of-the-art defenses

Model Poisoning · vision · federated-learning
PDF
defense · arXiv · Nov 7, 2025

MedFedPure: A Medical Federated Framework with MAE-based Detection and Diffusion Purification for Inference-Time Attacks

Mohammad Karami, Mohammad Reza Nemati, Aidin Kazemi et al. · University of Tehran · Max Planck Institute for Brain Research +2 more

Federated defense combining MAE-based detection and diffusion purification to protect brain MRI classifiers from adversarial attacks at inference time

Input Manipulation Attack · vision · federated-learning
PDF
defense · arXiv · Oct 3, 2025

Zero-Shot Robustness of Vision Language Models Via Confidence-Aware Weighting

Nikoo Naghavian, Mostafa Tavassolipour · University of Tehran

Adversarial fine-tuning defense for CLIP that boosts zero-shot robustness via a confidence-weighted KL loss and feature-alignment regularization

Input Manipulation Attack · vision · multimodal
PDF
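The confidence-weighted KL loss in the entry above can be illustrated with a minimal sketch. This is not the paper's implementation: the exact weighting scheme is the authors', and here the weight is simply assumed to be the clean prediction's maximum class probability (a hypothetical but common choice), scaling a per-sample KL divergence between clean and adversarial output distributions.

```python
import numpy as np

def confidence_weighted_kl(p_clean, p_adv, eps=1e-12):
    """Mean of per-sample KL(p_clean || p_adv), each term scaled by the
    clean prediction's confidence (its max class probability).
    p_clean, p_adv: arrays of shape (batch, num_classes), rows sum to 1.
    NOTE: the weighting choice is an illustrative assumption, not the
    paper's exact formulation."""
    kl = np.sum(p_clean * (np.log(p_clean + eps) - np.log(p_adv + eps)), axis=-1)
    weights = p_clean.max(axis=-1)  # confident samples contribute more
    return float(np.mean(weights * kl))

# Usage: identical distributions give (near-)zero loss; a confident clean
# prediction pulled toward uniform under attack is penalized more heavily.
p_clean = np.array([[0.9, 0.1], [0.5, 0.5]])
p_adv = np.array([[0.5, 0.5], [0.5, 0.5]])
loss = confidence_weighted_kl(p_clean, p_adv)
```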
defense · arXiv · Sep 13, 2025

Robustifying Diffusion-Denoised Smoothing Against Covariate Shift

Ali Hedayatnia, Mostafa Tavassolipour, Babak Nadjar Araabi et al. · University of Tehran

Improves certified ℓ2 adversarial robustness by training classifiers to resist the covariate shift introduced by diffusion denoisers in randomized smoothing

Input Manipulation Attack · vision
PDF · Code
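For context on the entry above: randomized smoothing certifies a classifier by predicting via majority vote over Gaussian-perturbed copies of the input; the paper's contribution concerns the denoising stage, which this minimal sketch omits. The sketch below shows only the standard smoothed-prediction step (in the style of Cohen et al.), with a toy base classifier; `smoothed_predict` and `base_classifier` are illustrative names, not the paper's API.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P[ base_classifier(x + eps) = c ],  eps ~ N(0, sigma^2 I).
    Diffusion-denoised smoothing would denoise x + eps before classifying;
    that stage is omitted here for brevity."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = np.array([base_classifier(x + n) for n in noise])
    classes, counts = np.unique(preds, return_counts=True)
    return int(classes[np.argmax(counts)])

# Toy base classifier: sign of the first coordinate.
f = lambda v: int(v[0] > 0)
x = np.array([1.0, -0.5])
label = smoothed_predict(f, x, sigma=0.25, n_samples=500, rng=0)
```

With `sigma=0.25` and `x[0]=1.0`, the noise almost never flips the sign of the first coordinate, so the vote is stable; larger `sigma` widens the certifiable radius at the cost of base-classifier accuracy on noisy inputs, which is exactly the covariate shift the paper targets.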