benchmark · arXiv · Sep 23, 2025
Hesam Hosseini, Ying Cao, Ali H. Sayed · École Polytechnique Fédérale de Lausanne
Derives stability-based generalization bounds for adversarial training in decentralized diffusion networks, showing that robust overfitting worsens with the perturbation radius and the number of training iterations
Input Manipulation Attack · federated-learning
Algorithmic stability is an established tool for analyzing generalization. While adversarial training enhances model robustness, it often suffers from robust overfitting and an enlarged generalization gap. Although recent work has established the convergence of adversarial training in decentralized networks, its generalization properties remain unexplored. This work presents a stability-based generalization analysis of adversarial training under the diffusion strategy for convex losses. We derive a bound showing that the generalization error grows with both the adversarial perturbation strength and the number of training steps, a finding consistent with the single-agent case but novel for decentralized settings. Numerical experiments on logistic regression validate these theoretical predictions.
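To make the setting concrete, here is a minimal, hypothetical sketch of adversarial training under the diffusion strategy (adapt-then-combine) on logistic regression. The network size, step size `mu`, perturbation radius `eps`, iteration count, and the combination matrix `A` are illustrative assumptions, not the paper's exact configuration; the closed-form worst-case perturbation holds for linear models under an L2 ball.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 4, 5, 50              # agents, feature dimension, samples per agent
A = np.full((K, K), 1.0 / K)    # doubly stochastic combination matrix (fully connected)

# Per-agent local data: features X[k], labels y[k] in {-1, +1}
X = rng.normal(size=(K, n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=(K, n)))

def logistic_grad(w, X, y):
    # Gradient of the mean logistic loss log(1 + exp(-y * <x, w>))
    margins = np.clip(y * (X @ w), -30.0, 30.0)
    coeff = -y / (1.0 + np.exp(margins))
    return (coeff[:, None] * X).mean(axis=0)

def adversarial_examples(w, X, y, eps):
    # For a linear model, the worst-case L2 perturbation of radius eps
    # moves each sample along -y * w / ||w|| (shrinks the margin)
    direction = -y[:, None] * (w / (np.linalg.norm(w) + 1e-12))
    return X + eps * direction

mu, eps, T = 0.1, 0.3, 200
W = np.zeros((K, d))            # one weight vector per agent
for _ in range(T):
    # adapt: each agent takes a gradient step on adversarially perturbed data
    psi = np.array([
        W[k] - mu * logistic_grad(W[k], adversarial_examples(W[k], X[k], y[k], eps), y[k])
        for k in range(K)
    ])
    # combine: diffusion averaging with neighbors
    W = A @ psi

# With a fully connected network, agents agree after every combine step
print(np.max(np.abs(W - W.mean(axis=0))))
```

Increasing `eps` or `T` in this sketch is the regime where the paper's bound predicts a larger generalization gap; the diffusion combine step is what distinguishes it from single-agent adversarial training.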
traditional_ml · federated