defense 2026

Dynamic Meta-Layer Aggregation for Byzantine-Robust Federated Learning

Reek Das, Biplab Kanti Sen



Published on arXiv

arXiv:2603.16846

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Substantially improves model accuracy and resilience against Byzantine attacks while maintaining computational efficiency in non-IID heterogeneous settings

FedAOT

Novel technique introduced


Federated Learning (FL) is increasingly applied in sectors like healthcare, finance, and IoT, enabling collaborative model training while safeguarding user privacy. However, FL systems are susceptible to Byzantine adversaries that inject malicious updates, which can severely compromise global model performance. Existing defenses tend to focus on specific attack types and fail against untargeted strategies, such as multi-label flipping or combinations of noise and backdoor patterns. To overcome these limitations, we propose FedAOT, a novel defense mechanism that counters multi-label flipping and untargeted poisoning attacks using a meta-learning-inspired adaptive aggregation framework. FedAOT dynamically weights client updates based on their reliability, suppressing adversarial influence without relying on predefined thresholds or restrictive attack assumptions. Notably, FedAOT generalizes effectively across diverse datasets and a wide range of attack types, maintaining robust performance even in previously unseen scenarios. Experimental results demonstrate that FedAOT substantially improves model accuracy and resilience while maintaining computational efficiency, offering a scalable and practical solution for secure federated learning.
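The core idea of reliability-based adaptive aggregation can be illustrated with a minimal sketch. The snippet below is not the FedAOT algorithm (the paper's exact scoring rule is not reproduced here); it only demonstrates the general pattern the abstract describes: score each client update by how far it sits from a robust reference (here, the coordinate-wise median), convert scores to softmax weights, and aggregate, so outlier updates are suppressed without any hard threshold. The function name and temperature parameter are illustrative assumptions.

```python
import numpy as np

def reliability_weighted_aggregate(updates, temperature=1.0):
    """Hypothetical sketch of reliability-weighted aggregation.

    Scores each client update by its distance to the coordinate-wise
    median of all updates, then turns scores into softmax weights so
    that far-away (likely Byzantine) updates get near-zero influence.
    """
    updates = np.stack(updates)              # (n_clients, n_params)
    median = np.median(updates, axis=0)      # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)
    scores = -dists / temperature            # closer => higher score
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ updates                 # weighted average update

# Three honest clients cluster near [1, 1]; one Byzantine client sends junk.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]
agg = reliability_weighted_aggregate(honest + byzantine)
```

With the Byzantine update present, the plain mean would be pulled to roughly [25, -24], while the reliability-weighted result stays close to [1, 1], since the outlier receives a vanishing softmax weight.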


Key Contributions

  • Meta-learning-based adaptive aggregation framework that dynamically weights client updates based on reliability
  • Unified defense against untargeted poisoning and multi-label flipping attacks without predefined thresholds or attack assumptions
  • Generalizes across diverse datasets and attack types while maintaining computational efficiency

🛡️ Threat Analysis

Data Poisoning Attack

The primary focus is defending against Byzantine adversaries who inject malicious updates and poisoned gradients to degrade global model performance in federated learning. The paper addresses untargeted poisoning and multi-label flipping attacks during training, both of which are data poisoning threats.
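To make the multi-label flipping threat concrete, here is a small sketch of what such a poisoning attack does to a client's local labels. This is an illustrative assumption, not code from the paper: unlike classic single-pair label flipping, the attacker remaps several source classes to attacker-chosen targets at once (e.g. 0→7 and 1→9), which is harder for defenses tuned to one flipped pair to detect.

```python
import numpy as np

def multi_label_flip(labels, flip_map):
    """Hypothetical multi-label flipping poison: remaps every label in
    `flip_map` to its attacker-chosen target class, leaving the rest
    of the dataset untouched."""
    poisoned = labels.copy()
    for src, dst in flip_map.items():
        poisoned[labels == src] = dst
    return poisoned

clean = np.array([0, 1, 2, 0, 1, 3])
poisoned = multi_label_flip(clean, {0: 7, 1: 9})
# classes 0 and 1 are remapped; classes 2 and 3 are untouched
```

A compromised client would train on `poisoned` instead of `clean` and submit the resulting gradient, which is the kind of malicious update the adaptive aggregation above must down-weight.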


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, untargeted
Applications
federated learning, healthcare, finance, IoT