
FLAegis: A Two-Layer Defense Framework for Federated Learning Against Poisoning Attacks

Enrique Mármol Campos , Aurora González Vidal , José Luis Hernández Ramos , Antonio Skarmeta


Published on arXiv: 2508.18737

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

FLAegis outperforms state-of-the-art FL defenses in both Byzantine client detection precision and final model accuracy across five poisoning attacks including adaptive optimization-based strategies.

FLAegis

Novel technique introduced


Federated Learning (FL) has become a powerful technique for training Machine Learning (ML) models in a decentralized manner, preserving the privacy of the training datasets involved. However, the decentralized nature of FL limits the visibility of the training process, relying heavily on the honesty of participating clients. This assumption opens the door to malicious third parties, known as Byzantine clients, that can poison the training process by submitting false model updates. Such clients mount poisoning attacks, manipulating either the dataset or the model parameters to induce misclassification. In response, this study introduces FLAegis, a two-stage defensive framework designed to identify Byzantine clients and improve the robustness of FL systems. Our approach leverages a symbolic time series transformation (Symbolic Aggregate approXimation, SAX) to amplify the differences between benign and malicious models, and spectral clustering, which enables accurate detection of adversarial behavior. Furthermore, we incorporate a robust FFT-based aggregation function as a final layer to mitigate the impact of those Byzantine clients that manage to evade prior defenses. We rigorously evaluate our method against five poisoning attacks, ranging from simple label flipping to adaptive optimization-based strategies. Notably, our approach outperforms state-of-the-art defenses in both detection precision and final model accuracy, maintaining consistently high performance even under strong adversarial conditions.
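SAX discretizes a numeric series into a short symbol string, which is what lets clustering separate benign from poisoned updates. The paper's exact parameters are not given in this summary; the sketch below shows the standard SAX steps (z-normalization, Piecewise Aggregate Approximation, Gaussian-breakpoint binning) applied to a flattened model update, with an illustrative segment count and alphabet size:

```python
import numpy as np

# Standard SAX breakpoints for a 4-symbol alphabet: they split a
# standard normal distribution into four equiprobable regions.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax_transform(update, n_segments=8):
    """Map a flattened model update to a SAX symbol sequence.

    Steps: z-normalize the series, reduce it with Piecewise Aggregate
    Approximation (PAA), then bin each segment mean into an alphabet
    symbol using the Gaussian breakpoints above.
    """
    x = np.asarray(update, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)       # z-normalize
    segments = np.array_split(x, n_segments)     # PAA segmentation
    paa = np.array([seg.mean() for seg in segments])
    return np.searchsorted(BREAKPOINTS, paa)     # symbols in 0..3

# A step-shaped "update" discretizes to four low, then four high symbols.
word = sax_transform(np.concatenate([np.zeros(32), np.ones(32)]))
print(word)  # [0 0 0 0 3 3 3 3]
```

Distances between the resulting symbol words (e.g. via a symbol-wise mismatch count) can then feed the affinity matrix of the spectral clustering stage.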


Key Contributions

  • SAX-based symbolic time series transformation to amplify divergence between benign and malicious client model updates for clustering-based Byzantine detection
  • Spectral clustering stage that accurately identifies and filters Byzantine clients before aggregation
  • FFT-based robust aggregation function as a second defense layer to mitigate Byzantine clients that evade the detection stage
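The paper's exact FFT-based aggregation rule is not reproduced in this summary. As a hedged illustration of frequency-domain robust aggregation, the hypothetical `fft_median_aggregate` below transforms each client's flattened update with an FFT, takes a coordinate-wise median of the spectra across clients, and inverts back, so a minority of outlier updates has bounded influence:

```python
import numpy as np

def fft_median_aggregate(updates):
    """Aggregate flattened client updates robustly in the frequency domain.

    Hypothetical sketch (not the paper's exact rule): compute each
    client's real FFT, take the element-wise median of the spectra
    across clients (real and imaginary parts separately), then apply
    the inverse FFT. Unlike the mean, the median is not dragged by a
    few extreme (Byzantine) spectra.
    """
    U = np.stack([np.asarray(u, dtype=float) for u in updates])
    spectra = np.fft.rfft(U, axis=1)
    med = np.median(spectra.real, axis=0) + 1j * np.median(spectra.imag, axis=0)
    return np.fft.irfft(med, n=U.shape[1])

# Five honest clients send v; one Byzantine client sends 100 * v.
v = np.arange(8.0)
agg = fft_median_aggregate([v] * 5 + [100 * v])
print(np.allclose(agg, v, atol=1e-6))  # True: the outlier is suppressed
```

A plain mean over the same six updates would return roughly 17.5 * v, so the median-in-frequency step is what bounds the damage from clients that slip past the detection stage.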

🛡️ Threat Analysis

Data Poisoning Attack

FLAegis directly defends against Byzantine clients submitting malicious model updates (label flipping, optimization-based poisoning) to degrade global FL model performance — this is the canonical ML02 Byzantine FL poisoning scenario. The paper evaluates across five such poisoning attacks and proposes detection and aggregation defenses.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Applications
federated learning, distributed machine learning