
ASMR: Angular Support for Malfunctioning Client Resilience in Federated Learning

Mirko Konstantin, Moritz Fuchs, Anirban Mukhopadhyay

In Medical Imaging with Deep Learning (MIDL)


Published on arXiv (2508.02414)

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

ASMR detects both malicious and unreliable federated learning clients using angular distance, without requiring hyperparameters or knowledge of the number of malicious clients, and outperforms SOTA defenses, including Multi-Krum, on histopathology classification.

ASMR (Angular Support for Malfunctioning Client Resilience)

Novel technique introduced


Federated Learning (FL) allows deep neural networks to be trained in a distributed and privacy-preserving manner. However, this concept suffers from malfunctioning updates sent by attending clients, which degrade global model performance. Such malfunctioning may stem from technical issues, disadvantageous training data, or malicious attacks. Most current defense mechanisms require impractical prerequisites, such as knowledge of the number of malfunctioning updates, which makes them unsuitable for real-world applications. To counteract these problems, we introduce a novel method called Angular Support for Malfunctioning Client Resilience (ASMR), which dynamically excludes malfunctioning clients based on their angular distance. Our method requires neither hyperparameters nor knowledge of the number of malfunctioning clients. Our experiments showcase the detection capabilities of ASMR in an image classification task on a histopathological dataset, and also present findings on the significance of dynamically adapting decision boundaries.


Key Contributions

  • Novel angular client support concept that measures pairwise angular distances between model updates to identify malfunctioning clients
  • ASMR dynamically adapts the exclusion threshold each round without requiring prior knowledge of the number of malicious clients or any hyperparameters
  • Demonstrated detection capability against Additive Noise Attacks, Sign Flipping Attacks, and unreliable clients on a histopathological image classification dataset
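The core idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it computes pairwise angular distances between flattened client updates and excludes outliers with a hypothetical hyperparameter-free threshold rule (cutting at the largest gap in the sorted mean angular distances); the paper's actual dynamic decision boundary may differ.

```python
import numpy as np

def mean_angular_distance(updates):
    """Mean pairwise angular distance of each client's update to all others.

    updates: array of shape (n_clients, n_params), flattened model updates.
    """
    unit = updates / np.linalg.norm(updates, axis=1, keepdims=True)
    cos = np.clip(unit @ unit.T, -1.0, 1.0)   # pairwise cosine similarities
    ang = np.arccos(cos)                      # pairwise angles in radians
    n = len(updates)
    return ang.sum(axis=1) / (n - 1)          # self-distance is 0, so divide by n-1

def asmr_filter(updates):
    """Return indices of clients kept for aggregation.

    Hypothetical threshold rule for illustration: sort clients by mean
    angular distance and cut at the largest gap, so no attacker count
    or tuned hyperparameter is needed. Benign updates cluster at small
    angles; sign-flipped or noisy updates sit far from that cluster.
    """
    d = mean_angular_distance(updates)
    order = np.argsort(d)
    gaps = np.diff(d[order])
    cut = int(np.argmax(gaps)) + 1 if gaps.max() > 0 else len(d)
    kept = set(order[:cut].tolist())
    return [i for i in range(len(updates)) if i in kept]
```

For example, with eight benign updates clustered around a common direction and two sign-flipped updates, the malicious clients' mean angular distances are near π while the benign ones stay small, so the largest-gap cut excludes the two attackers.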

🛡️ Threat Analysis

Data Poisoning Attack

Paper defends against Byzantine clients sending malicious updates (Additive Noise Attacks, Sign Flipping Attacks) designed to degrade global model performance in federated learning — classic data/model poisoning via malicious participants. ASMR is a robust aggregation defense that dynamically excludes malfunctioning clients, directly fitting the FL Byzantine poisoning defense category.


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, untargeted
Datasets
histopathological dataset
Applications
federated learning, medical image classification, digital pathology