
DSFL: A Dual-Server Byzantine-Resilient Federated Learning Framework via Group-Based Secure Aggregation

Charuka Herath , Yogachandran Rahulamathavan , Varuna De Silva , Sangarapillai Lambotharan


Published on arXiv: 2509.08449

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Model Inversion Attack (OWASP ML Top 10 — ML03)

Key Finding

Achieves 97.15% accuracy on CIFAR-10 under 30% Byzantine participants (vs. FedAvg collapsing to 9.39%), with only 55.9 ms runtime and 1088 KB communication overhead per round.

Novel technique introduced: DSFL


Abstract

Federated Learning (FL) enables decentralized model training without sharing raw data, offering strong privacy guarantees. However, existing FL protocols struggle to defend against Byzantine participants, maintain model utility under non-independent and identically distributed (non-IID) data, and remain lightweight for edge devices. Prior work either assumes trusted hardware, uses expensive cryptographic tools, or fails to address privacy and robustness simultaneously. We propose DSFL, a Dual-Server Byzantine-Resilient Federated Learning framework that addresses these limitations using a group-based secure aggregation approach. Unlike LSFL, which assumes non-colluding semi-honest servers, DSFL removes this dependency by revealing a key vulnerability: privacy leakage through client-server collusion. DSFL introduces three key innovations: (1) a dual-server secure aggregation protocol that protects updates without encryption or key exchange, (2) a group-wise credit-based filtering mechanism to isolate Byzantine clients based on deviation scores, and (3) a dynamic reward-penalty system for enforcing fair participation. DSFL is evaluated on MNIST, CIFAR-10, and CIFAR-100 under up to 30 percent Byzantine participants in both IID and non-IID settings. It consistently outperforms existing baselines, including LSFL, homomorphic encryption methods, and differential privacy approaches. For example, DSFL achieves 97.15 percent accuracy on CIFAR-10 and 68.60 percent on CIFAR-100, while FedAvg drops to 9.39 percent under similar threats. DSFL remains lightweight, requiring only 55.9 ms runtime and 1088 KB communication per round.


Key Contributions

  • Dual-server secure aggregation protocol that protects gradient confidentiality without encryption or pairwise key exchange, closing a collusion vulnerability in prior single-server designs like LSFL
  • Group-wise credit-based filtering mechanism that identifies and suppresses Byzantine clients using deviation scoring across participant groups
  • Dynamic reward-penalty system for fair adaptive participation enforcement across FL training rounds
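The paper's abstract does not spell out the dual-server protocol's mechanics, but one standard construction consistent with "protects updates without encryption or key exchange" is two-server additive secret sharing: each client splits its update into two random-looking shares, sends one to each server, and only the combination of both servers' partial aggregates reveals the sum. The sketch below illustrates that general idea; it is an assumption, not DSFL's actual protocol, and all function names are invented for illustration.

```python
import random

def split_update(update, rng):
    """Split a client's update vector into two additive shares.
    Neither share alone reveals the update; their sum reconstructs it."""
    mask = [rng.uniform(-1e6, 1e6) for _ in update]
    share_a = [u - m for u, m in zip(update, mask)]
    share_b = mask
    return share_a, share_b

def aggregate(shares):
    """Element-wise sum of a list of share vectors (one server's view)."""
    return [sum(col) for col in zip(*shares)]

rng = random.Random(0)
updates = [[0.1, -0.2, 0.3], [0.4, 0.0, -0.1], [-0.3, 0.2, 0.5]]

# Each client sends one share to server A and one to server B.
shares_a, shares_b = zip(*(split_update(u, rng) for u in updates))
partial_a = aggregate(shares_a)  # server A's partial aggregate
partial_b = aggregate(shares_b)  # server B's partial aggregate

# Combining the two partial aggregates recovers the true sum of updates,
# while each server alone sees only uniformly masked values.
total = [a + b for a, b in zip(partial_a, partial_b)]
expected = [sum(col) for col in zip(*updates)]
assert all(abs(t - e) < 1e-6 for t, e in zip(total, expected))
```

Because the shares are additively masked, this avoids public-key encryption and pairwise key agreement entirely; the cost is the requirement that the two servers not combine their views out of band, which is exactly the collusion surface the paper highlights.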

🛡️ Threat Analysis

Data Poisoning Attack

The core contribution explicitly defends against Byzantine participants in federated learning: malicious clients that send corrupted or arbitrary model updates to degrade global model performance. The group-wise credit-based filtering mechanism and dynamic reward-penalty system are Byzantine-fault-tolerant aggregation defenses, a mitigation class the OWASP ML02 guidance explicitly lists.
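The summary names deviation scoring and credit adjustment but not their exact formulas. A minimal sketch of the general pattern, assuming distance-from-median scoring and unit reward/penalty steps (both assumptions; the function names are hypothetical), looks like this:

```python
import statistics

def deviation_scores(updates):
    """Score each update by its L2 distance from the coordinate-wise median."""
    median = [statistics.median(col) for col in zip(*updates)]
    return [sum((u - m) ** 2 for u, m in zip(upd, median)) ** 0.5
            for upd in updates]

def filter_byzantine(updates, credits, threshold=1.5, penalty=1, reward=1):
    """Drop updates whose deviation exceeds threshold * median score,
    rewarding kept clients and penalizing filtered ones."""
    scores = deviation_scores(updates)
    cutoff = threshold * statistics.median(scores)
    kept = []
    for i, (upd, score) in enumerate(zip(updates, scores)):
        if score <= cutoff:
            kept.append(upd)
            credits[i] += reward
        else:
            credits[i] -= penalty
    return kept, credits

# Three honest clients plus one Byzantine client sending an outlier update.
honest = [[0.1, 0.2], [0.12, 0.18], [0.09, 0.21]]
byzantine = [[5.0, -5.0]]
credits = [0, 0, 0, 0]
kept, credits = filter_byzantine(honest + byzantine, credits)
assert len(kept) == 3       # the outlier is filtered out
assert credits[3] == -1     # and its sender is penalized
```

The median-based cutoff is what makes the filter robust: a minority of Byzantine clients can shift the mean arbitrarily but cannot move the median, so their updates score as outliers. Accumulated credits then let a reward-penalty system down-weight or exclude repeat offenders across rounds.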

Model Inversion Attack

The dual-server secure aggregation protocol is designed to defend against gradient leakage attacks, including GAN-based reconstruction of private training data from shared gradients. The paper explicitly identifies "privacy leakage through client-server collusion" as a threat; DSFL's non-cryptographic secure aggregation prevents an adversary from reconstructing training data from the updates it observes.


Details

Domains
federated-learning
Model Types
federated, CNN
Threat Tags
training_time, grey_box, untargeted
Datasets
MNIST, CIFAR-10, CIFAR-100
Applications
federated learning, edge computing