
Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective

Zhongjian Zhang, Mengmei Zhang, Xiao Wang, Lingjuan Lyu, Bo Yan, Junping Du, Chuan Shi


Published on arXiv

2501.03301

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Spattack effectively prevents model convergence and breaks down Byzantine-robust defenses in federated recommendation systems using only a few malicious clients, demonstrating that existing FL Byzantine defenses are insufficient under sparse aggregation.

Spattack

Novel technique introduced


To preserve user privacy in recommender systems, federated recommendation (FR) based on federated learning (FL) has emerged, keeping personal data on the local client while updating a model collaboratively. Unlike FL, FR has a unique sparse aggregation mechanism: the embedding of each item is updated by only the subset of clients that interacted with it, rather than by all clients as in the dense aggregation of general FL. Recently, model security has received increasing attention as an essential principle of FL, especially regarding Byzantine attacks, in which malicious clients can send arbitrary updates. Exploring the Byzantine robustness of FR is particularly critical because in the domains where FR is applied, e.g., e-commerce, malicious clients can easily be injected by registering new accounts. However, existing work on Byzantine robustness neglects the unique sparse aggregation of FR, making it unsuitable for this problem. The authors therefore make the first effort to investigate Byzantine attacks on FR from the perspective of sparse aggregation, which is non-trivial: it is unclear how to define Byzantine robustness under sparse aggregation or how to design Byzantine attacks under limited knowledge and capability. The paper reformulates Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit. It then proposes a family of effective attack strategies, named Spattack, which exploit the vulnerability in sparse aggregation and are categorized along the adversary's knowledge and capability. Extensive experimental results demonstrate that Spattack can effectively prevent convergence and even break down defenses with only a few malicious clients, raising alarms for securing FR systems.


Key Contributions

  • Reformulates Byzantine robustness for federated recommendation by defining item-level sparse aggregation as the smallest execution unit
  • Proposes Spattack, a family of Byzantine attack strategies categorized by adversary knowledge and capability that exploit sparse aggregation vulnerabilities
  • Demonstrates empirically that Spattack prevents convergence and defeats existing defenses with only a small number of malicious clients

🛡️ Threat Analysis

Data Poisoning Attack

Spattack is a family of Byzantine attacks in federated learning where malicious clients send arbitrary model updates (targeting item embeddings via sparse aggregation) to degrade global model performance — the canonical ML02 threat. The goal is preventing convergence, not inserting a targeted hidden trigger.
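The fragility that Spattack exploits can be illustrated with a toy sketch (not the paper's implementation; the aggregation function, item IDs, and update values below are illustrative assumptions): under sparse aggregation, each item embedding is averaged over only the few clients that interacted with that item, so a single malicious update for a sparsely rated item faces almost no dilution from benign clients.

```python
import numpy as np

def sparse_aggregate(updates):
    """Average item-embedding updates per item, using only the clients
    that actually submitted an update for that item (sparse aggregation)."""
    agg = {}
    for client_updates in updates:
        for item, grad in client_updates.items():
            agg.setdefault(item, []).append(grad)
    return {item: np.mean(grads, axis=0) for item, grads in agg.items()}

# Only clients 0 and 1 interacted with item "A", so only they
# contribute to its embedding update.
benign = [
    {"A": np.array([0.1, 0.1])},
    {"A": np.array([0.3, 0.1])},
]

# A single malicious client sends an arbitrary, large update for item "A".
# It is averaged against just two benign updates, so one attacker
# dominates the item's aggregated gradient.
malicious = [{"A": np.array([-100.0, -100.0])}]

clean = sparse_aggregate(benign)              # {"A": [0.2, 0.1]}
poisoned = sparse_aggregate(benign + malicious)
```

In dense aggregation the malicious update would be averaged over all clients; here the per-item denominator is tiny, which is why only a few malicious clients suffice to stall convergence.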


Details

Domains
federated-learning
Model Types
federated, traditional_ml
Threat Tags
white_box, black_box, training_time, untargeted
Datasets
MovieLens, FilmTrust
Applications
federated recommendation systems, e-commerce recommender systems