
SRFed: Mitigating Poisoning Attacks in Privacy-Preserving Federated Learning with Heterogeneous Data

Yiwen Lu

0 citations · 47 references · arXiv (Cornell University)


Published on arXiv · 2602.16480

Data Poisoning Attack

OWASP ML Top 10 — ML02

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

SRFed outperforms state-of-the-art baselines in both privacy protection against server-side inference attacks and Byzantine robustness against poisoning attacks, while reducing computation and communication overhead in Non-IID federated learning settings.

SRFed (DEFE + privacy-preserving defensive aggregation)

Novel technique introduced


Federated Learning (FL) enables collaborative model training without exposing clients' private data, and has been widely adopted in privacy-sensitive scenarios. However, FL faces two critical security threats: curious servers that may launch inference attacks to reconstruct clients' private data, and compromised clients that can launch poisoning attacks to disrupt model aggregation. Existing solutions mitigate these attacks by combining mainstream privacy-preserving techniques with defensive aggregation strategies. However, they either incur high computation and communication overhead or perform poorly under non-independent and identically distributed (Non-IID) data settings. To tackle these challenges, we propose SRFed, an efficient Byzantine-robust and privacy-preserving FL framework for Non-IID scenarios. First, we design a decentralized efficient functional encryption (DEFE) scheme to support efficient model encryption and non-interactive decryption. DEFE also eliminates third-party reliance and defends against server-side inference attacks. Second, we develop a privacy-preserving defensive model aggregation mechanism based on DEFE. This mechanism filters poisonous models under Non-IID data by layer-wise projection and clustering-based analysis. Theoretical analysis and extensive experiments show that SRFed outperforms state-of-the-art baselines in privacy protection, Byzantine robustness, and efficiency.


Key Contributions

  • Decentralized Efficient Functional Encryption (DEFE) scheme enabling encrypted model aggregation without a trusted third party, defending against server-side data reconstruction attacks
  • Privacy-preserving Byzantine-robust aggregation mechanism using layer-wise projection and clustering to detect and filter poisonous model updates under Non-IID data distributions
  • SRFed framework combining DEFE and defensive aggregation, outperforming SOTA baselines in privacy protection, Byzantine robustness, and communication/computation efficiency

🛡️ Threat Analysis

Data Poisoning Attack

The primary contribution is a privacy-preserving Byzantine-robust aggregation mechanism that uses layer-wise projection and clustering-based analysis to filter poisoned model updates submitted by compromised clients in Non-IID federated learning settings.
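The paper does not reproduce its exact filtering rule here, but the idea of "project layer-wise, then cluster" can be sketched as follows. This is an illustrative simplification, not SRFed's algorithm: each client's per-layer update is projected onto the coordinate-wise median direction (an assumed reference; SRFed's projection target may differ), the resulting scores are clustered with a simple two-means split, and only the majority cluster is kept.

```python
import numpy as np

def filter_poisoned_updates(updates):
    """Illustrative Byzantine filter (a sketch, NOT SRFed's exact rule):
    project each client's per-layer update onto the coordinate-wise median
    direction, then 2-means-cluster the scores and keep the majority cluster.

    updates: list of dicts {layer_name: np.ndarray}, one dict per client.
    Returns the indices of clients presumed benign.
    """
    layer_names = list(updates[0].keys())
    # Reference direction per layer: coordinate-wise median across clients.
    medians = {
        name: np.median(np.stack([u[name].ravel() for u in updates]), axis=0)
        for name in layer_names
    }
    # Layer-wise projections per client, averaged into one scalar score.
    scores = np.array([
        np.mean([
            u[name].ravel() @ medians[name]
            / (np.linalg.norm(medians[name]) + 1e-12)
            for name in layer_names
        ])
        for u in updates
    ])
    # Tiny 1-D 2-means: split clients into two clusters by projection score.
    c0, c1 = scores.min(), scores.max()
    in_c1 = scores > (c0 + c1) / 2
    for _ in range(20):
        if in_c1.all() or (~in_c1).all():
            break
        c0, c1 = scores[~in_c1].mean(), scores[in_c1].mean()
        in_c1 = np.abs(scores - c0) > np.abs(scores - c1)
    # The larger cluster is treated as benign (honest-majority assumption).
    majority = in_c1 if in_c1.sum() >= (~in_c1).sum() else ~in_c1
    return [i for i, keep in enumerate(majority) if keep]
```

Under Non-IID data, benign updates can diverge in magnitude while still pointing in broadly compatible directions, which is one motivation for scoring projections layer by layer rather than comparing whole flattened models.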

Model Inversion Attack

The DEFE (Decentralized Efficient Functional Encryption) scheme is explicitly designed to defend against curious server-side inference attacks that aim to reconstruct clients' private training data from shared model updates/gradients.
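The DEFE construction itself (a decentralized functional encryption scheme) is beyond a summary card, but the property it provides can be illustrated with a much simpler, well-known stand-in: pairwise additive masking, as used in classic secure aggregation. In the toy sketch below (hypothetical names, not the paper's protocol), each pair of clients derives a shared pseudorandom mask that one adds and the other subtracts, so the server sees only randomized uploads, yet the masks cancel in the sum.

```python
import random

def mask_update(update, client_id, n_clients, seed_base=1234):
    """Client side (toy stand-in for encryption, NOT DEFE): add/subtract a
    pseudorandom mask shared with every other client so the raw update is
    hidden from the server."""
    vec = list(update)
    for other in range(n_clients):
        if other == client_id:
            continue
        # Both members of the pair derive the same seed, hence the same mask.
        lo, hi = min(client_id, other), max(client_id, other)
        rng = random.Random(seed_base + lo * n_clients + hi)
        mask = [rng.uniform(-1.0, 1.0) for _ in range(len(vec))]
        sign = 1 if client_id < other else -1  # one adds, the other subtracts
        vec = [v + sign * m for v, m in zip(vec, mask)]
    return vec

def server_aggregate(masked_updates):
    """Server side: sums the masked uploads; every pairwise mask appears once
    with + and once with -, so they cancel and only the aggregate remains."""
    return [sum(col) for col in zip(*masked_updates)]
```

Note what this toy version lacks compared to DEFE: it is interactive in spirit (clients must agree on pairwise seeds), it does not tolerate dropouts, and it offers no mechanism for the server to evaluate defensive functions over ciphertexts, which is precisely the gap the paper's functional-encryption approach targets.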


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time · grey_box
Applications
federated learning · privacy-preserving collaborative model training · autonomous driving · medical imaging · recommendation systems