
Stealth by Conformity: Evading Robust Aggregation through Adaptive Poisoning

Ryan McGaughey, Jesus Martinez del Rincon, Ihsen Alouani


Published on arXiv: 2509.08746

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

CHAMP achieves an average 47.07% increase in attack success rate against nine robust aggregation defenses by keeping malicious updates within the benign distribution.

CHAMP (Chameleon Poisoning)

Novel technique introduced


Federated Learning (FL) is a distributed learning paradigm designed to address privacy concerns. However, FL is vulnerable to poisoning attacks, in which Byzantine clients compromise the integrity of the global model by submitting malicious updates. Robust aggregation methods have been widely adopted to mitigate such threats; they rely on the core assumption that malicious updates are inherently out-of-distribution and can therefore be identified and excluded before client updates are aggregated. In this paper, we challenge this underlying assumption by showing that a model can be poisoned while its malicious updates remain within the main distribution. We propose Chameleon Poisoning (CHAMP), an adaptive and evasive poisoning strategy that exploits side-channel feedback from the aggregation process to guide the attack. Specifically, the adversary continuously infers whether its malicious contribution has been incorporated into the global model and adapts accordingly. This enables dynamic adjustment of the local loss function, balancing a malicious component against a camouflaging component, thereby increasing the effectiveness of the poisoning while evading robust aggregation defenses. CHAMP enables more effective and evasive poisoning, highlighting a fundamental limitation of existing robust aggregation defenses and underscoring the need for new strategies to secure federated learning against sophisticated adversaries. Evaluated on two datasets, our approach achieves an average 47.07% increase in attack success rate against nine robust aggregation defenses.


Key Contributions

  • CHAMP: an adaptive FL poisoning attack that uses side-channel feedback from the aggregation process (inferring whether malicious updates were incorporated) to dynamically adjust the attacker's local loss function
  • Demonstrates that robust aggregation defenses' core assumption — that malicious updates are out-of-distribution — can be circumvented by keeping malicious updates within the benign gradient distribution
  • Evaluated against nine robust aggregation defenses, achieving an average 47.07% increase in attack success rate across two datasets
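The adaptive loop described in these contributions can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the paper's exact algorithm: the cosine-similarity incorporation test, the step size, and the blending coefficient `alpha` are all assumptions for illustration, and the two loss terms stand in for e.g. cross-entropy on a trigger-stamped batch (malicious) and on a clean batch (camouflage).

```python
import math

def cosine_similarity(u, v):
    """Plain-Python cosine similarity between two flat vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def was_incorporated(global_delta, own_delta, threshold=0.5):
    """Side-channel heuristic (assumed): if the global model's round-over-round
    change points in a similar direction to the attacker's submitted update,
    infer that the update was aggregated rather than filtered out."""
    return cosine_similarity(global_delta, own_delta) > threshold

def update_alpha(alpha, incorporated, step=0.05):
    """Adapt the malicious/camouflage balance: push harder after the previous
    update was incorporated, blend in more benign behavior after being filtered."""
    return min(1.0, alpha + step) if incorporated else max(0.0, alpha - step)

def chameleon_loss(malicious_loss, camouflage_loss, alpha):
    """Blend the malicious (backdoor) objective with the camouflaging (benign)
    objective, weighted by the current adaptation coefficient alpha."""
    return alpha * malicious_loss + (1.0 - alpha) * camouflage_loss
```

Each round the attacker would call `was_incorporated` on the observed global-model delta, update `alpha` accordingly, and train locally on `chameleon_loss`, keeping its update close enough to the benign distribution to pass the aggregator's filter.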

🛡️ Threat Analysis

Data Poisoning Attack

The attack is framed as a Byzantine poisoning attack in FL, explicitly targeting robust aggregation mechanisms (FLTrust, Krum, etc.) that defend against malicious client model updates. The evasion strategy of keeping updates within the benign distribution directly challenges the core assumption of Byzantine-fault-tolerant aggregation defenses.

Model Poisoning

CHAMP is a federated learning backdoor attack: malicious clients inject targeted behavior (the 'malicious component'), with effectiveness measured via attack success rate, the canonical metric for backdoor attacks. The core contribution is making this backdoor evasive against nine robust aggregation defenses.
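Attack success rate, the metric referenced here, is conventionally the fraction of trigger-stamped inputs that the poisoned model classifies as the attacker's chosen target label. A minimal sketch (the function name and signature are illustrative, not from the paper):

```python
def attack_success_rate(predictions, target_label):
    """Fraction of predictions on trigger-stamped inputs that equal the
    attacker's target label; returns 0.0 for an empty input set."""
    if not predictions:
        return 0.0
    return sum(p == target_label for p in predictions) / len(predictions)
```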


Details

Domains
federated-learning
Model Types
federated
Threat Tags
grey_box, training_time, targeted
Datasets
two unspecified FL evaluation datasets
Applications
federated learning systems