Defense · 2025

Divided We Fall: Defending Against Adversarial Attacks via Soft-Gated Fractional Mixture-of-Experts with Randomized Adversarial Training

Mohammad Meymani, Roozbeh Razavi-Far

0 citations · 65 references · arXiv (Cornell University)


Published on arXiv (arXiv:2512.20821)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

DWF outperforms state-of-the-art MoE-based defenses in both clean accuracy and robustness under white-box FGSM and PGD attacks on CIFAR-10 and SVHN

Divided We Fall (DWF)

Novel technique introduced


Machine learning is a powerful tool that automates a wide range of tasks without explicit programming. Despite recent progress across domains, machine learning models have shown vulnerabilities when exposed to adversarial threats, which aim to prevent the models from satisfying their objectives. Such threats can craft adversarial perturbations that are imperceptible to the human eye yet cause misclassification at inference time. In this paper, we propose a defense system that embeds an adversarial training module within a mixture-of-experts architecture to enhance its robustness against white-box evasion attacks. The proposed defense uses nine pre-trained classifiers (experts), each with a ResNet-18 backbone. During end-to-end training, the parameters of all experts and the gating mechanism are updated jointly, allowing further optimization of the experts. The proposed defense outperforms state-of-the-art MoE-based defenses under strong white-box FGSM and PGD evaluation on CIFAR-10 and SVHN.
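The soft-gated mixture described above can be sketched as follows: every expert produces class logits, a gating network produces one score per expert, and the final prediction is the gate-weighted combination of all expert outputs. This is a minimal illustrative sketch in NumPy, with linear stand-ins for the ResNet-18 experts; all names and shapes here are assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, NUM_CLASSES, FEAT_DIM = 9, 10, 32  # nine experts, as in the paper

# Stand-in "experts": each is a linear head over a shared feature vector
# (the paper's experts are full ResNet-18 classifiers).
expert_weights = rng.normal(size=(NUM_EXPERTS, NUM_CLASSES, FEAT_DIM))
gate_weights = rng.normal(size=(NUM_EXPERTS, FEAT_DIM))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def moe_forward(x):
    """Soft gating: every expert contributes, weighted by its softmax gate score."""
    expert_logits = np.einsum('ecf,f->ec', expert_weights, x)  # (E, C)
    gate = softmax(gate_weights @ x)                           # (E,), sums to 1
    return gate @ expert_logits, gate                          # mixed logits (C,)

x = rng.normal(size=FEAT_DIM)
logits, gate = moe_forward(x)
```

Because the gate is a softmax rather than a hard top-1 selection, gradients flow to all experts and the gate during end-to-end training, which is what allows the joint optimization the paper relies on.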


Key Contributions

  • Soft-gated Fractional Mixture-of-Experts (MoE) architecture combining nine ResNet-18 experts trained on benign, FGSM, and PGD regimes for adversarial robustness
  • Joint end-to-end training of all expert parameters and the gating mechanism without freezing pretrained weights, improving both clean accuracy and robustness
  • Outperforms state-of-the-art MoE-based defenses under strong white-box FGSM and multi-step PGD evaluation on CIFAR-10 and SVHN
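The title's "randomized adversarial training" over benign, FGSM, and PGD regimes can be sketched as a per-batch random choice of perturbation regime. The sampling scheme below (uniform over three regimes) is an assumption for illustration; the paper's exact schedule may differ.

```python
import random

# Three training regimes matching the expert groups named in the contributions.
REGIMES = ("benign", "fgsm", "pgd")

def train_epoch(batches, rng):
    """One epoch where each batch is (hypothetically) perturbed under a
    randomly drawn regime before the joint experts-plus-gate update."""
    counts = {r: 0 for r in REGIMES}
    for batch in batches:
        regime = rng.choice(REGIMES)  # assumed uniform sampling
        counts[regime] += 1
        # perturb(batch, regime); forward through the MoE; backprop through
        # all experts and the gate (no frozen weights)
    return counts

rng = random.Random(0)
counts = train_epoch(range(300), rng)
```

Randomizing the regime per batch exposes every expert and the gate to a mix of clean and adversarial inputs, rather than dedicating fixed experts to fixed data streams.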

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes a defense against gradient-based adversarial perturbations (FGSM, PGD) that cause misclassification at inference time — the canonical ML01 threat. The entire contribution is a robustness defense evaluated under white-box evasion attacks.


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box · inference_time · untargeted · digital
Datasets
CIFAR-10 · SVHN · MNIST
Applications
image classification