
POLAR: Policy-based Layerwise Reinforcement Learning Method for Stealthy Backdoor Attacks in Federated Learning

Kuai Yu 1, Xiaoyu Wu 2, Peishen Yan 2, Qingqian Yang 3, Linshan Jiang 4, Hao Wang 5, Yang Hua 6, Tao Song 2, Haibing Guan 2

0 citations · 29 references · arXiv


Published on arXiv · 2510.19056

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

POLAR outperforms the latest layer-wise backdoor attack methods by up to 40% in backdoor success rate against six SOTA FL defenses.

POLAR

Novel technique introduced


Federated Learning (FL) enables decentralized model training across multiple clients without exposing local data, but its distributed nature makes it vulnerable to backdoor attacks. While early FL backdoor attacks modified entire models, recent studies have explored backdoor-critical (BC) layers, poisoning only a few influential layers to maintain stealthiness while achieving high effectiveness. However, existing BC-layer approaches rely on rule-based selection that ignores the interrelations between layers, making them less effective and prone to detection by advanced defenses. In this paper, we propose POLAR (POlicy-based LAyerwise Reinforcement learning), the first pipeline to adopt RL for the BC-layer selection problem in layer-wise backdoor attacks. Unlike commonly used RL paradigms, POLAR is lightweight, relying on Bernoulli sampling. POLAR dynamically learns an attack strategy, optimizing layer selection with policy gradient updates driven by improvements in backdoor success rate (BSR). To ensure stealthiness, we introduce a regularization constraint that limits the number of modified layers by penalizing large attack footprints. Extensive experiments demonstrate that POLAR outperforms the latest attack methods by up to 40% against six state-of-the-art (SOTA) defenses.
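The abstract's objective can be sketched in standard policy-gradient notation. The symbols below ($m$, $p_\ell$, $\theta$, $\lambda$, $\eta$) are assumed for illustration and are not taken from the paper:

$$
R(m) = \Delta\mathrm{BSR}(m) - \lambda \lVert m \rVert_1, \qquad m_\ell \sim \mathrm{Bernoulli}(p_\ell(\theta)),
$$

where $m$ is the binary mask over layers, $\Delta\mathrm{BSR}(m)$ is the improvement in backdoor success rate from poisoning the selected layers, and $\lambda \lVert m \rVert_1$ penalizes large attack footprints. A REINFORCE-style update on the policy parameters would then take the form $\theta \leftarrow \theta + \eta \, R(m) \, \nabla_\theta \log \pi_\theta(m)$.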


Key Contributions

  • First RL-based pipeline (POLAR) for backdoor-critical (BC) layer selection in federated learning, replacing brittle rule-based heuristics with a learned policy.
  • Lightweight Bernoulli-sampling policy gradient that dynamically optimizes layer selection based on backdoor success rate (BSR) improvements.
  • Regularization constraint penalizing large attack footprints to enforce stealthiness while maintaining high attack effectiveness.
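To make the mechanism concrete, here is a minimal sketch of a Bernoulli-sampling REINFORCE loop for layer selection, in the spirit of the contributions above. Everything here is assumed for illustration: the layer count, the penalty weight, and especially `simulated_bsr`, a toy stand-in for the real (expensive) step of poisoning the chosen layers and measuring backdoor success rate.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 8
LAMBDA = 0.1   # stealth penalty per selected layer (assumed value)
LR = 0.2
CRITICAL = {1, 5}  # toy "backdoor-critical" layers for the simulated reward

def simulated_bsr(mask):
    # Toy proxy for backdoor success rate: rewards covering the critical
    # layers. The real attack would poison the selected layers and
    # evaluate the trigger on the trained model.
    return sum(mask[i] for i in CRITICAL) / len(CRITICAL)

theta = np.zeros(NUM_LAYERS)  # policy logits, one Bernoulli per layer
baseline = 0.0                # running reward baseline to reduce variance

for step in range(1500):
    p = 1.0 / (1.0 + np.exp(-theta))           # per-layer selection probs
    mask = (rng.random(NUM_LAYERS) < p).astype(float)
    # Reward = BSR gain minus regularization on the attack footprint
    reward = simulated_bsr(mask) - LAMBDA * mask.sum()
    advantage = reward - baseline
    baseline += 0.1 * (reward - baseline)
    # REINFORCE: grad of log Bernoulli likelihood w.r.t. logits is (mask - p)
    theta += LR * advantage * (mask - p)

probs = 1.0 / (1.0 + np.exp(-theta))
selected = set(np.flatnonzero(probs > 0.5).tolist())
print("selected layers:", sorted(selected))
```

With this toy reward, the policy concentrates its probability mass on the critical layers while the footprint penalty drives the remaining layers' probabilities down, mirroring how the paper describes trading attack effectiveness against stealthiness.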

🛡️ Threat Analysis

Model Poisoning

POLAR is a backdoor/trojan attack: malicious FL clients inject hidden, trigger-activated behavior into the global model by poisoning selected layers. The RL-based layer selection policy improves stealthiness and helps evade backdoor defenses. This is classic trigger-based model poisoning, which per the guidelines maps to ML10 (not ML02, since the goal is a targeted backdoor, not general model degradation).


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time · targeted · white_box
Applications
federated learning