
Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning

Nicolas Riccieri Gardin Assumpcao, Leandro Villas

3 citations · 17 references · International Symposium on Com...


Published on arXiv (2511.02797)

Data Poisoning Attack — OWASP ML Top 10, ML02

Model Inversion Attack — OWASP ML Top 10, ML03

Key Finding

FPP converges rapidly and still reaches a useful global model in the presence of malicious participants performing model poisoning attacks, outperforming FedAvg, Power-of-Choice, and robust aggregation baselines.

FPP (Fast, Private, and Protected)

Novel technique introduced


Federated Learning (FL) is a distributed training paradigm wherein participants collaborate to build a global model while ensuring the privacy of the involved data, which remains stored on participant devices. However, proposals aiming to ensure such privacy also make it challenging to protect against potential attackers seeking to compromise the training outcome. In this context, we present Fast, Private, and Protected (FPP), a novel approach that aims to safeguard federated training while enabling secure aggregation to preserve data privacy. This is accomplished by evaluating rounds using participants' assessments and enabling training recovery after an attack. FPP also employs a reputation-based mechanism to mitigate the participation of attackers. We created a dockerized environment to validate the performance of FPP compared to other approaches in the literature (FedAvg, Power-of-Choice, and aggregation via Trimmed Mean and Median). Our experiments demonstrate that FPP achieves a rapid convergence rate and can converge even in the presence of malicious participants performing model poisoning attacks.


Key Contributions

  • FPP framework combining reputation-based client scoring, round evaluation compatible with secure aggregation, and checkpoint-based recovery from severe model poisoning attacks
  • Novel client selection strategy that improves convergence on non-iid data while simultaneously defending against Byzantine/model poisoning participants
  • Dockerized evaluation environment comparing FPP against FedAvg, Power-of-Choice, Trimmed Mean, and Median aggregation under model poisoning scenarios

🛡️ Threat Analysis

Data Poisoning Attack

The paper's primary contribution is FPP, a defense against model poisoning attacks in federated learning where malicious clients send corrupted model updates to degrade global model performance — this is the canonical Byzantine FL attack scenario covered by ML02. FPP counters this via reputation-based participant scoring, round evaluation, and model checkpoint recovery.
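The interplay of reputation-based selection, round evaluation, and checkpoint recovery can be sketched as follows. All class and method names, and the concrete reputation update rules, are hypothetical illustrations rather than FPP's exact algorithm:

```python
import random

class ReputationServer:
    """Illustrative sketch of a reputation-plus-checkpoint defense.
    Update rules and constants here are hypothetical, not FPP's exact scheme."""

    def __init__(self, clients, init_rep=1.0):
        self.reputation = {c: init_rep for c in clients}
        self.checkpoints = []  # global models accepted so far

    def select(self, k):
        # Sample k clients with probability proportional to reputation,
        # so suspected attackers participate less often over time.
        clients = list(self.reputation)
        weights = [self.reputation[c] for c in clients]
        return random.choices(clients, weights=weights, k=k)

    def end_round(self, model, participants, round_ok):
        # round_ok stands in for the participants' own assessment of the
        # aggregated model; here it is reduced to a single boolean.
        if round_ok:
            self.checkpoints.append(model)
            for c in participants:
                self.reputation[c] = min(2.0, self.reputation[c] * 1.1)
            return model
        # Suspected poisoned round: penalize participants and roll back
        # to the last accepted checkpoint to recover training.
        for c in participants:
            self.reputation[c] *= 0.5
        return self.checkpoints[-1] if self.checkpoints else model
```

The key design point is that recovery is cheap: a rejected round costs one rollback rather than restarting training, while repeated rejections drive an attacker's selection probability toward zero.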

Model Inversion Attack

FPP explicitly incorporates secure aggregation (Bonawitz et al.) to prevent gradient leakage, directly referencing the Zhu et al. deep gradient reconstruction threat. The paper frames this as a dual challenge — concealing gradients to prevent data reconstruction while still enabling quality assessment — making ML03 a genuine secondary category. The 'Private' pillar of FPP is specifically a defense against an adversary reconstructing training data from shared gradients.
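The core idea of pairwise-masked secure aggregation can be shown in a toy sketch in the spirit of Bonawitz et al.: each pair of clients shares a random mask that one adds and the other subtracts, so the server learns only the sum of the updates. Real protocols additionally use key agreement, secret sharing for dropouts, and cryptographic PRGs; everything below is a simplified illustration with assumed names and parameters:

```python
import random

P = 2**31 - 1  # illustrative modulus for masked integer arithmetic

def pairwise_masks(client_ids, dim, seed=0):
    """For each pair (i, j) with i < j, draw a random mask; i adds it and
    j subtracts it, so all masks cancel in the server-side sum."""
    rng = random.Random(seed)
    masks = {c: [0] * dim for c in client_ids}
    ids = sorted(client_ids)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            m = [rng.randrange(P) for _ in range(dim)]
            masks[ids[a]] = [(x + y) % P for x, y in zip(masks[ids[a]], m)]
            masks[ids[b]] = [(x - y) % P for x, y in zip(masks[ids[b]], m)]
    return masks

def mask_update(update, mask):
    # A client uploads only its masked vector, never the raw update.
    return [(u + m) % P for u, m in zip(update, mask)]

def aggregate(masked_updates):
    # The server sums masked vectors; the result equals the sum of the
    # plaintext updates because every pairwise mask cancels out.
    dim = len(next(iter(masked_updates.values())))
    total = [0] * dim
    for v in masked_updates.values():
        total = [(t + x) % P for t, x in zip(total, v)]
    return total
```

This is exactly the tension the paper highlights: the server can compute the aggregate but cannot inspect any individual update, which is what makes per-client quality assessment nontrivial and motivates FPP's participant-side round evaluation.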


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Applications
federated learning, distributed machine learning