
FAROS: Robust Federated Learning with Adaptive Scaling against Backdoor Attacks

Chenyu Hu 1, Qiming Hu 2, Sinan Chen 3, Nianyu Li 4, Mingyue Zhang 1, Jialong Li 5

0 citations · 53 references · arXiv


Published on arXiv: 2601.01833

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

FAROS outperforms state-of-the-art defenses in both attack success rate reduction and main task accuracy across various datasets, models, and attack scenarios.

FAROS

Novel technique introduced


Federated Learning (FL) enables multiple clients to collaboratively train a shared model without exposing their local data. However, backdoor attacks pose a significant threat to FL: they implant a stealthy trigger into the global model, causing it to misbehave on inputs containing that trigger while functioning normally on benign data. Although pre-aggregation detection is a main line of defense, existing state-of-the-art defenses often rely on fixed defense parameters. This reliance exposes them to single-point-of-failure risks and renders them less effective against sophisticated attackers. To address these limitations, we propose FAROS, an enhanced FL framework that incorporates Adaptive Differential Scaling (ADS) and Robust Core-set Computing (RCC). The ADS mechanism dynamically adjusts the defense's sensitivity based on the dispersion of the gradients clients upload in each round, allowing it to counter attackers who strategically shift between stealthiness and effectiveness. The RCC component mitigates the single-point-of-failure risk by computing the centroid of a core set comprising the clients with the highest confidence. Extensive experiments across various datasets, models, and attack scenarios demonstrate that our method outperforms current defenses in both attack success rate reduction and main task accuracy.
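The paper does not publish the ADS formula here, but the idea of scaling a detection threshold by the per-round dispersion of uploaded gradients can be sketched as follows. The function name, the dispersion measure (mean distance from the round centroid), and the linear scaling rule are all illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def adaptive_scale(updates, base_threshold=0.5, alpha=1.0):
    """Hypothetical sketch of Adaptive Differential Scaling (ADS).

    Adjusts a filtering threshold each round based on how dispersed
    the clients' uploaded updates are, so the defense's sensitivity
    is not a single fixed parameter an attacker can probe.
    """
    flat = np.stack([u.ravel() for u in updates])  # (n_clients, d)
    centroid = flat.mean(axis=0)
    # Dispersion: mean distance of each client update from the round centroid.
    dispersion = np.linalg.norm(flat - centroid, axis=1).mean()
    # Normalize by the centroid norm so the measure is scale-free.
    rel_dispersion = dispersion / (np.linalg.norm(centroid) + 1e-12)
    # Illustrative rule: higher dispersion -> looser (larger) threshold.
    return base_threshold * (1.0 + alpha * rel_dispersion)
```

With identical updates the dispersion is zero and the threshold stays at its base value; as the uploaded gradients spread out, the threshold grows, which is one simple way a defense could track rounds where attackers trade stealthiness for effectiveness.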


Key Contributions

  • Adaptive Differential Scaling (ADS) that dynamically adjusts defense sensitivity based on per-round gradient dispersion, countering attackers who shift between stealthiness and effectiveness
  • Robust Core-set Computing (RCC) that eliminates single-point-of-failure risks by computing the centroid of a high-confidence client core set rather than relying on a single seed gradient
  • FAROS framework combining ADS and RCC to outperform state-of-the-art FL backdoor defenses across diverse datasets, models, and attack scenarios
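The RCC contribution above replaces reliance on a single seed gradient with the centroid of a high-confidence core set. A minimal sketch of that aggregation step, assuming per-client confidence scores are already available (the scoring method and `core_size` parameter are illustrative, not taken from the paper):

```python
import numpy as np

def coreset_centroid(updates, scores, core_size=3):
    """Hypothetical sketch of Robust Core-set Computing (RCC).

    Rather than trusting any single client's gradient as a reference,
    average the updates of the `core_size` clients with the highest
    confidence scores; the resulting centroid serves as a robust
    reference point for screening the remaining updates.
    """
    order = np.argsort(scores)[::-1]               # highest confidence first
    core = [updates[i] for i in order[:core_size]]
    return np.mean(core, axis=0)                   # core-set centroid
```

Because the reference is an average over several trusted clients, compromising any one of them no longer dominates the screening decision, which is the single-point-of-failure mitigation the contribution describes.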

🛡️ Threat Analysis

Model Poisoning

Primary focus is defending against backdoor/trojan attacks in federated learning, where malicious clients inject trigger-based hidden behavior into the global model via crafted gradients.


Details

Domains
federated-learning, vision
Model Types
federated
Threat Tags
training_time, targeted, grey_box
Applications
federated learning, distributed model training