Beyond Denial-of-Service: The Puppeteer's Attack for Fine-Grained Control in Ranking-Based Federated Learning
Zhihao Chen 1, Zirui Gong 2, Jianting Ning 1, Yanjun Zhang 2, Leo Yu Zhang 2
Published on arXiv
2601.14687
Data Poisoning Attack
OWASP ML Top 10 — ML02
Key Finding
ECA achieves fine-grained accuracy control with an average error of only 0.224%, outperforming the conventional DoS baseline by up to 17x while evading nine state-of-the-art Byzantine-robust aggregation rules across seven benchmark datasets.
Edge Control Attack (ECA)
Novel technique introduced
Federated Rank Learning (FRL) is a promising Federated Learning (FL) paradigm designed to be resilient against model poisoning attacks due to its discrete, ranking-based update mechanism. Unlike traditional FL methods that rely on continuous model updates, FRL exchanges discrete rankings between clients and the server. This approach significantly reduces communication costs and limits an adversary's ability to scale or optimize malicious updates in the continuous space, thereby enhancing its robustness. This makes FRL particularly appealing for applications where system security and data privacy are crucial, such as web-based auction and bidding platforms. While FRL substantially reduces the attack surface, we demonstrate that it remains vulnerable to a new class of local model poisoning attacks: fine-grained control attacks. We introduce the Edge Control Attack (ECA), the first fine-grained control attack tailored to ranking-based FL frameworks. Unlike conventional denial-of-service (DoS) attacks that cause conspicuous disruptions, ECA enables an adversary to precisely degrade a competitor's accuracy to any target level while maintaining a normal-looking convergence trajectory, thereby avoiding detection. ECA operates in two stages: (i) identifying and manipulating Ascending and Descending Edges to align the global model with the target model, and (ii) widening the selection boundary gap to stabilize the global model at the target accuracy. Extensive experiments across seven benchmark datasets and nine Byzantine-robust aggregation rules (AGRs) show that ECA achieves fine-grained accuracy control with an average error of only 0.224%, outperforming the baseline by up to 17x. Our findings highlight the need for stronger defenses against advanced poisoning attacks. Our code is available at: https://github.com/Chenzh0205/ECA
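The discrete update mechanism the abstract describes can be illustrated with a minimal sketch. Note this is an assumption-laden toy: the sum-of-rank-positions voting and top-k edge selection shown here are inferred from the description of FRL as a ranking-based scheme, not taken from the paper's exact protocol.

```python
import numpy as np

def aggregate_rankings(client_rankings, k):
    """Toy FRL-style aggregation (assumed, not the paper's algorithm):
    each client submits a ranking of edge indices ordered from least to
    most important. The server scores each edge by summing its rank
    positions across clients (a majority-vote analogue) and keeps the
    top-k edges for the global subnetwork."""
    num_edges = len(client_rankings[0])
    scores = np.zeros(num_edges)
    for ranking in client_rankings:
        for position, edge in enumerate(ranking):
            scores[edge] += position  # later position = more important
    # Keep the k edges with the highest aggregate importance score.
    return set(np.argsort(scores)[-k:].tolist())

# Three honest clients broadly agree that edges 3 and 4 matter most.
clients = [
    [0, 1, 2, 3, 4],
    [1, 0, 2, 4, 3],
    [0, 2, 1, 3, 4],
]
print(aggregate_rankings(clients, k=2))  # → {3, 4}
```

Because updates are permutations rather than real-valued gradients, an attacker cannot scale a malicious update arbitrarily; it can only reorder edges, which is the surface ECA exploits.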
Key Contributions
- Introduces ECA (Edge Control Attack), the first fine-grained control attack tailored to Federated Rank Learning, enabling an adversary to steer global model accuracy to an arbitrary target value
- Proposes a two-stage attack mechanism: (i) manipulating Ascending and Descending Edges to align the global model with a target model, and (ii) widening the selection boundary gap to stabilize at the target accuracy
- Demonstrates that ECA achieves an average accuracy control error of only 0.224% across seven datasets and nine Byzantine-robust aggregation rules, outperforming baseline DoS attacks by up to 17x while maintaining a normal-looking convergence trajectory
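Stage (i) of the mechanism above might look like the following sketch. The edge definitions and the crafted-ranking strategy are assumptions inferred from the terminology "Ascending and Descending Edges" in the abstract, not the paper's algorithm: here an Ascending Edge is one the attacker's target model selects but the current global model does not, and a Descending Edge is the reverse.

```python
def craft_malicious_ranking(global_edges, target_edges, all_edges):
    """ECA stage (i) sketch (assumed semantics). Malicious clients submit
    a ranking, ordered least- to most-important, that pushes Ascending
    Edges toward the most-important end (to pull them into the global
    subnetwork) and Descending Edges toward the least-important end
    (to push them out), nudging the aggregate toward the target model."""
    ascending = target_edges - global_edges   # promote into the global model
    descending = global_edges - target_edges  # demote out of the global model
    neutral = all_edges - ascending - descending
    return sorted(descending) + sorted(neutral) + sorted(ascending)

all_edges = set(range(6))
global_edges = {0, 1, 2}   # edges the current global model selects
target_edges = {0, 4, 5}   # attacker's target subnetwork
print(craft_malicious_ranking(global_edges, target_edges, all_edges))
# → [1, 2, 0, 3, 4, 5]
```

Stage (ii), widening the selection boundary gap, would then amount to keeping the scores of the target's selected edges well separated from the rest so that honest clients' votes cannot flip edges back, stabilizing accuracy at the chosen level.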
🛡️ Threat Analysis
The Edge Control Attack is a local model poisoning attack in federated learning in which malicious clients manipulate their ranking-based updates to degrade the global model's accuracy to a precise target level, the canonical Byzantine/model-poisoning threat in FL. The paper's threat model matches ML02: malicious participants corrupting the shared model through their updates, explicitly evaluated against nine Byzantine-robust aggregation rules.