Defense · 2026

Mitigating Backdoor Attacks in Federated Learning Using PPA and MiniMax Game Theory

Osama Wehbi 1, Sarhad Arisdakessian 1, Omar Abdel Wahab 1, Anderson Avila 2, Azzam Mourad 3,4, Hadi Otrok 3


Published on arXiv (2603.28652)

Model Poisoning (OWASP ML Top 10 — ML10)

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Key Finding

Reduces the backdoor attack success rate to 1.1%-11%, compared with 23%-76% for state-of-the-art defenses (RDFL, RoPE), while maintaining 95%-98% normal-task accuracy

FedBBA

Novel technique introduced


Federated Learning (FL) is witnessing wider adoption due to its ability to benefit from large amounts of scattered data while preserving privacy. However, despite its advantages, federated learning suffers from several setbacks that directly impact the accuracy and integrity of the global model it produces. One of these setbacks is the presence of malicious clients who actively try to harm the global model by injecting backdoor data into their local models while trying to evade detection. The objective of such clients is to trick the global model into making false predictions during inference, thereby compromising the integrity and trustworthiness of the global model on which honest stakeholders rely. To mitigate such malicious behavior, we propose FedBBA (Federated Backdoor and Behavior Analysis). The proposed model aims to dampen the effect of such clients on the final accuracy, creating more resilient federated learning environments. We engineer our approach through the combination of (1) a reputation system to evaluate and track client behavior, (2) an incentive mechanism to reward honest participation and penalize malicious behavior, and (3) game-theoretic models with projection pursuit analysis (PPA) to dynamically identify and minimize the impact of malicious clients on the global model. Extensive simulations on the German Traffic Sign Recognition Benchmark (GTSRB) and Belgium Traffic Sign Classification (BTSC) datasets demonstrate that FedBBA reduces the backdoor attack success rate to approximately 1.1%--11% across various attack scenarios, significantly outperforming state-of-the-art defenses like RDFL and RoPE, which yielded attack success rates between 23% and 76%, while maintaining high normal-task accuracy (~95%--98%).


Key Contributions

  • FedBBA framework combining reputation systems, incentive mechanisms, and MiniMax game theory with Projection Pursuit Analysis (PPA) to detect and mitigate backdoor attacks
  • Dynamic client behavior tracking and adaptive weighting mechanism based on PPA scores, reputation values, and gradient differences
  • Reduces backdoor attack success rate to 1.1%-11% across various attack scenarios while maintaining 95%-98% normal task accuracy
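The adaptive-weighting idea above can be sketched in a few lines. This is an illustrative stand-in, not the paper's exact algorithm: the anomaly score here is a simple distance-to-median statistic standing in for the PPA score, and the function name, EMA reputation update, and `beta` parameter are assumptions for the sketch.

```python
import numpy as np

def aggregate_with_reputation(updates, reputations, beta=0.5):
    """Illustrative reputation-weighted aggregation (hypothetical sketch,
    not FedBBA's exact procedure).

    updates:     list of 1-D client update vectors
    reputations: per-client reputation array in [0, 1], updated in place
    """
    U = np.stack(updates)                        # (n_clients, dim)
    median = np.median(U, axis=0)                # robust reference update
    dists = np.linalg.norm(U - median, axis=1)   # gradient-difference proxy
    # Anomaly score: larger deviation from the median -> lower trust
    # (a stand-in for the paper's PPA score)
    scores = 1.0 / (1.0 + dists)
    # Reputation: exponential moving average of trust scores across rounds
    reputations[:] = beta * reputations + (1 - beta) * scores
    # Adaptive weight combines the current score and accumulated reputation
    w = scores * reputations
    w = w / w.sum()
    return (w[:, None] * U).sum(axis=0), w
```

With two honest clients and one outlier, the outlier's weight collapses toward zero and the aggregate stays close to the honest updates, which is the behavior the framework relies on.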

🛡️ Threat Analysis

Data Poisoning Attack

Paper also addresses data poisoning in federated learning where malicious clients corrupt training through poisoned local updates.

Model Poisoning

Primary focus is defending against backdoor attacks in federated learning where malicious clients inject triggers into models to cause targeted misclassification.
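To make the threat concrete, a generic trigger-injection attack can be sketched as below. The patch size, trigger value, and poisoning rate are illustrative assumptions, not the attack configuration evaluated in the paper.

```python
import numpy as np

def poison_batch(images, labels, target_class, trigger_value=1.0,
                 patch=3, rate=0.1):
    """Illustrative backdoor poisoning (hypothetical example): stamp a small
    bright patch in the corner of a fraction of images and relabel them to
    the attacker's target class, so the trained model misclassifies any
    input carrying the trigger."""
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(rate * len(images)))
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # bottom-right trigger patch
    labels[idx] = target_class                     # targeted misclassification
    return images, labels
```

In a federated setting, a malicious client trains its local model on such a poisoned batch while behaving normally otherwise, which is exactly the evasive behavior FedBBA's reputation and scoring mechanisms aim to catch.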


Details

Domains
vision, federated-learning
Model Types
federated, cnn
Threat Tags
training_time, targeted, untargeted
Datasets
GTSRB, BTSC
Applications
image classification, traffic sign recognition, autonomous driving