Defense · 2025

On Hyperparameters and Backdoor-Resistance in Horizontal Federated Learning

Simon Lachnit, Ghassan Karame



Published on arXiv: 2509.05192

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Properly tuned benign client hyperparameters reduce the 50%-lifespan of the A3FL backdoor attack by 98.6% without any defense mechanism, incurring only a 2.9 percentage point drop in clean task accuracy.

Hyperparameter-Aware Backdoor Resistance

Novel technique introduced


Horizontal Federated Learning (HFL) is particularly vulnerable to backdoor attacks, as adversaries can easily manipulate both the training data and the training process to execute sophisticated attacks. In this work, we study the impact of training hyperparameters on the effectiveness of backdoor attacks and defenses in HFL. More specifically, we show both analytically and by means of measurements that the choice of hyperparameters by benign clients not only influences model accuracy but also significantly impacts backdoor attack success. This stands in sharp contrast with the multitude of contributions in the area of HFL security, which often rely on custom ad-hoc hyperparameter choices for benign clients, leading to more pronounced backdoor attack strength and a diminished impact of defenses. Our results indicate that properly tuning benign clients' hyperparameters, such as the learning rate, batch size, and number of local epochs, can significantly curb the effectiveness of backdoor attacks, regardless of the malicious clients' settings. We support this claim with an extensive robustness evaluation of state-of-the-art attack-defense combinations, showing that carefully chosen hyperparameters yield across-the-board improvements in robustness without sacrificing main task accuracy. For example, we show that the 50%-lifespan of the strong A3FL attack can be reduced by 98.6%, all without using any defense and while incurring only a 2.9 percentage point drop in clean task accuracy.
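To make the mechanism concrete, the sketch below shows where the benign-client hyperparameters the paper studies (learning rate, batch size, local epochs) enter a standard FedAvg round. This is a minimal toy setup with logistic regression on synthetic data, not the paper's experimental pipeline; all names and values here are illustrative assumptions.

```python
import numpy as np

# Toy horizontal FL round (FedAvg-style). The keyword arguments of
# local_update are exactly the benign-client knobs under study:
# learning rate, batch size, and number of local epochs.
rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, batch_size=8, epochs=1):
    """One benign client's local mini-batch SGD on logistic loss."""
    w = w.copy()
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            p = 1.0 / (1.0 + np.exp(-X[b] @ w))        # sigmoid
            w -= lr * X[b].T @ (p - y[b]) / len(b)     # gradient step
    return w

def fedavg_round(w_global, client_data, **hp):
    """Server averages the clients' locally updated models."""
    updates = [local_update(w_global, X, y, **hp) for X, y in client_data]
    return np.mean(updates, axis=0)

# Synthetic, linearly separable data split across 4 clients.
X = rng.normal(size=(200, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(float)
clients = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(5)
for _ in range(20):
    w = fedavg_round(w, clients, lr=0.1, batch_size=8, epochs=2)

acc = ((X @ w > 0) == y).mean()
print(f"clean accuracy: {acc:.2f}")
```

The paper's point is that the values passed for `lr`, `batch_size`, and `epochs` on the benign side change not just `acc` but also how long an injected backdoor survives averaging.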


Key Contributions

  • Analytical and empirical demonstration that benign clients' hyperparameters (learning rate, batch size, local epochs) significantly affect backdoor attack success in HFL, independent of malicious clients' settings
  • Extensive robustness evaluation showing properly tuned hyperparameters reduce the 50%-lifespan of the A3FL attack by 98.6% without any explicit defense and with only 2.9pp accuracy drop
  • Critique of existing HFL security literature for using ad-hoc hyperparameter choices that artificially inflate backdoor attack strength and deflate defense effectiveness
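The headline 98.6% figure is stated in terms of the 50%-lifespan metric. A minimal sketch of one plausible reading of that metric, assuming it counts the rounds until backdoor accuracy first falls below 50% (the paper's exact definition should be checked; the accuracy sequences below are illustrative, not measured):

```python
def fifty_percent_lifespan(backdoor_acc, threshold=0.5):
    """Number of rounds until the backdoor accuracy first drops below
    the threshold (hypothetical reading of the 50%-lifespan metric)."""
    for t, acc in enumerate(backdoor_acc):
        if acc < threshold:
            return t
    return len(backdoor_acc)

# Illustrative per-round backdoor accuracies: a backdoor that decays
# slowly under ad-hoc hyperparameters vs. quickly under tuned ones.
ad_hoc = [0.99, 0.97, 0.95, 0.90, 0.85, 0.80, 0.70, 0.60, 0.55, 0.45]
tuned  = [0.95, 0.60, 0.40, 0.20, 0.10]

print(fifty_percent_lifespan(ad_hoc))  # 9 rounds
print(fifty_percent_lifespan(tuned))   # 2 rounds
```

A shorter 50%-lifespan means the backdoor is forgotten faster by the global model, which is the robustness improvement the contributions above quantify.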

🛡️ Threat Analysis

Model Poisoning

Directly studies trigger-based backdoor attacks in Horizontal Federated Learning (A3FL and other state-of-the-art backdoor attacks) and proposes hyperparameter tuning as a passive defense — classic ML10 threat model with hidden targeted behavior activated by specific triggers.


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, grey_box
Datasets
CIFAR-10
Applications
federated learning, image classification