Published on arXiv

2604.03862

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Demonstrates effectiveness against poisoning attacks in asynchronous FL across multiple real-world datasets while also addressing the straggler problem

SecureAFL

Novel technique introduced


Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model via a server without sharing their private training data. In traditional FL, the system follows a synchronous approach, where the server waits for model updates from numerous clients before aggregating them to update the global model. However, synchronous FL is hindered by the straggler problem. To address this, the asynchronous FL architecture allows the server to update the global model immediately upon receiving any client's local model update. Despite its advantages, the decentralized nature of asynchronous FL makes it vulnerable to poisoning attacks. Several defenses tailored for asynchronous FL have been proposed, but these mechanisms remain susceptible to advanced attacks or rely on unrealistic server assumptions. In this paper, we introduce SecureAFL, an innovative framework designed to secure asynchronous FL against poisoning attacks. SecureAFL improves the robustness of asynchronous FL by detecting and discarding anomalous updates while estimating the contributions of missing clients. Additionally, it utilizes Byzantine-robust aggregation techniques, such as coordinate-wise median, to integrate the received and estimated updates. Extensive experiments on various real-world datasets demonstrate the effectiveness of SecureAFL.
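The coordinate-wise median mentioned in the abstract can be sketched in a few lines. This is a generic illustration of that aggregation rule, not SecureAFL's implementation; the example update vectors are made up:

```python
from statistics import median

def coordinate_wise_median(updates):
    """Byzantine-robust aggregation: for each model coordinate, take the
    median of that coordinate across all client updates (received and
    estimated alike)."""
    return [median(coords) for coords in zip(*updates)]

# Three hypothetical client updates over a 3-parameter model.
updates = [
    [0.1, 0.2, -0.1],
    [0.0, 0.3, -0.2],
    [0.2, 0.1, -0.1],
]
aggregated = coordinate_wise_median(updates)
# → [0.1, 0.2, -0.1]
```

Because each coordinate is aggregated independently by its median, an attacker must control a majority of clients to move any single coordinate arbitrarily, which is what makes the rule Byzantine-robust.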


Key Contributions

  • Framework for detecting and discarding anomalous updates in asynchronous FL
  • Estimation mechanism for missing client contributions to maintain model quality
  • Byzantine-robust aggregation using coordinate-wise median for received and estimated updates
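This summary does not detail how SecureAFL estimates the contributions of missing clients. As a minimal hypothetical sketch, assume the server caches each client's most recent update and reuses it as a stand-in when that client lags behind:

```python
class MissingClientEstimator:
    """Hypothetical estimator for straggler contributions: reuse each
    client's most recent update. (The card does not specify SecureAFL's
    actual estimation mechanism.)"""

    def __init__(self):
        self._last_seen = {}  # client_id -> latest update vector

    def observe(self, client_id, update):
        # Record a client's update whenever it reaches the server.
        self._last_seen[client_id] = update

    def collect(self, all_clients, received):
        """Merge fresh updates with cached estimates for missing clients.
        Clients never seen before contribute nothing."""
        for cid, upd in received.items():
            self.observe(cid, upd)
        return [
            received.get(cid, self._last_seen.get(cid))
            for cid in all_clients
            if cid in received or cid in self._last_seen
        ]

est = MissingClientEstimator()
est.observe("a", [1.0])
est.observe("b", [2.0])
# In this round only client "a" reports; "b" is estimated from its cache,
# and "c" (never seen) is skipped.
merged = est.collect(["a", "b", "c"], {"a": [1.5]})
# → [[1.5], [2.0]]
```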

🛡️ Threat Analysis

Data Poisoning Attack

Defends against data poisoning attacks in federated learning, where malicious clients train on corrupted data and submit the resulting tainted model updates to degrade global model performance. Uses Byzantine-robust aggregation and anomaly detection to filter poisoned updates.
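The threat and the defense can both be illustrated with toy values. The label-flipping attack below is a generic example of data poisoning (not necessarily the attack evaluated in the paper), and the update vectors are invented to show why the median filters the attacker's influence while a plain mean does not:

```python
from statistics import mean, median

def flip_labels(dataset, num_classes):
    """Label-flipping data poisoning: a malicious client relabels every
    example before local training (illustrative attack only)."""
    return [(x, (y + 1) % num_classes) for x, y in dataset]

# Honest clients send small, consistent updates; the poisoned client's
# update points the opposite way with a large magnitude.
honest_updates = [[0.1, -0.2], [0.2, -0.1], [0.1, -0.3]]
poisoned_update = [-8.0, 8.0]
all_updates = honest_updates + [poisoned_update]

mean_agg = [mean(c) for c in zip(*all_updates)]      # dragged toward the attacker
median_agg = [median(c) for c in zip(*all_updates)]  # stays with the honest majority
```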


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time
Applications
asynchronous federated learning