
Adaptive Decentralized Federated Learning for Robust Optimization

Shuyuan Wu 1, Feifei Wang 2, Yuan Gao 3, Rui Wang 1, Hansheng Wang 4

0 citations · 59 references · arXiv


Published on arXiv · 2512.02852

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

aDFL achieves the oracle convergence property without requiring a majority of honest neighboring clients or prior knowledge of reliable nodes, outperforming existing Byzantine-robust DFL methods in numerical experiments.

aDFL (adaptive Decentralized Federated Learning)

Novel technique introduced


In decentralized federated learning (DFL), the presence of abnormal clients, often caused by noisy or poisoned data, can significantly disrupt the learning process and degrade the overall robustness of the model. Previous methods addressing this issue often require a sufficiently large number of normal neighboring clients or prior knowledge of reliable clients, which limits the practical applicability of DFL. To address these limitations, we develop a novel adaptive DFL (aDFL) approach for robust estimation. The key idea is to adaptively adjust the learning rates of clients. By assigning smaller rates to suspicious clients and larger rates to normal clients, aDFL mitigates the negative impact of abnormal clients on the global model in a fully adaptive way. Our theory imposes no stringent conditions on neighboring nodes and requires no prior knowledge. A rigorous convergence analysis is provided to guarantee the oracle property of aDFL. Extensive numerical experiments demonstrate the superior performance of the aDFL method.
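The core idea, down-weighting suspicious neighbors during local aggregation, can be sketched in a few lines. The scoring rule below (shrinking a neighbor's effective rate with its distance from the coordinate-wise median of received parameters) is an illustrative assumption for this summary, not the paper's exact formula, and `adaptive_aggregate`, `base_lr`, and `tau` are hypothetical names:

```python
import numpy as np

def adaptive_aggregate(own, neighbor_params, base_lr=1.0, tau=1.0):
    """One hypothetical aDFL-style aggregation step.

    Each neighbor gets a per-client "learning rate" that shrinks with its
    distance from the coordinate-wise median of all received parameters,
    so far-off (potentially poisoned) updates contribute little.
    """
    stacked = np.vstack([own] + list(neighbor_params))
    center = np.median(stacked, axis=0)  # robust reference point
    weights = []
    for p in neighbor_params:
        dist = np.linalg.norm(p - center)
        # Smaller rate for neighbors far from the median (suspicious),
        # rate close to base_lr for neighbors near it (normal).
        weights.append(base_lr / (1.0 + (dist / tau) ** 2))
    total = 1.0 + sum(weights)  # unit weight kept on the client's own parameters
    new = (own + sum(w * p for w, p in zip(weights, neighbor_params))) / total
    return new, weights
```

With two honest neighbors near the origin and one neighbor pushing a large poisoned vector, the poisoned client's weight collapses toward zero and the aggregate stays near the honest consensus, without any prior labeling of which neighbors are reliable.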


Key Contributions

  • Proposes aDFL, an adaptive decentralized federated learning method that assigns smaller learning rates to suspicious clients and larger rates to normal clients without requiring prior knowledge of reliable nodes
  • Provides rigorous convergence analysis guaranteeing the oracle property of aDFL under no stringent conditions on neighboring nodes
  • Demonstrates superior empirical performance over existing Byzantine-robust DFL methods that typically require a majority of honest neighbors

🛡️ Threat Analysis

Data Poisoning Attack

The paper directly defends against clients with poisoned/corrupted data and Byzantine failures in decentralized federated learning. aDFL is a Byzantine-fault-tolerant aggregation mechanism: it assigns lower learning rates to suspicious (potentially malicious/poisoned) clients to mitigate their influence on the global model during training — a canonical ML02 defense scenario.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, untargeted
Applications
decentralized federated learning, robust distributed optimization