
CoSIFL: Collaborative Secure and Incentivized Federated Learning with Differential Privacy

Zhanhong Xie, Meifan Zhang, Lihua Yin

0 citations · 41 references · arXiv


Published on arXiv: 2509.23190

Data Poisoning Attack (OWASP ML Top 10: ML02)

Model Inversion Attack (OWASP ML Top 10: ML03)

Key Finding

CoSIFL outperforms state-of-the-art FL methods in model robustness against Byzantine and inference attacks while reducing total server incentive costs on standard benchmarks.

CoSIFL

Novel technique introduced


Federated learning (FL) has emerged as a promising paradigm for collaborative model training while preserving data locality. However, it still faces challenges from malicious or compromised clients, as well as difficulties in incentivizing participants to contribute high-quality data under strict privacy requirements. Motivated by these considerations, we propose CoSIFL, a novel framework that integrates proactive alarming for robust security and local differential privacy (LDP) against inference attacks, together with a Stackelberg-based incentive scheme to encourage client participation and data sharing. Specifically, CoSIFL uses an active alarming mechanism and robust aggregation to defend against Byzantine and inference attacks, while a Tullock contest-inspired incentive module rewards honest clients for both data contributions and reliable alarm triggers. We formulate the interplay between the server and clients as a two-stage game: in the first stage, the server determines total rewards, selects participants, and fixes global iteration settings; in the second stage, each client decides its mini-batch size, privacy noise scale, and alerting strategy. We prove that the server-client game admits a unique equilibrium, and analyze how clients' multi-dimensional attributes, such as non-IID degrees and privacy budgets, jointly affect system efficiency. Experimental results on standard benchmarks demonstrate that CoSIFL outperforms state-of-the-art solutions in improving model robustness and reducing total server costs, highlighting the effectiveness of our integrated design.
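The two-stage game described above can be sketched by backward induction: the server anticipates the clients' stage-2 best responses before committing to a reward in stage 1. The following toy model is illustrative only and is not the paper's formulation; the contest-share utility, cost parameters, and grids are all assumptions.

```python
# Toy two-stage Stackelberg sketch (illustrative; not CoSIFL's actual model).
# Stage 2: each client i picks an effort level b (think: mini-batch size) to maximize
#   u_i(b) = R * b / (b + others) - c_i * b   (Tullock-style reward share minus cost)
# Stage 1: the server picks the total reward R to maximize the value of the
#   induced total contribution minus the reward it pays out.
import numpy as np

def client_best_response(R, others, cost, grid=np.linspace(0.0, 100.0, 2001)):
    """Best effort for one client, given reward pool R and rivals' total effort."""
    utilities = R * grid / (grid + others + 1e-9) - cost * grid
    return grid[np.argmax(utilities)]

def stage2_equilibrium(R, costs, iters=200):
    """Iterate best responses until client efforts stabilize (toy fixed point)."""
    efforts = np.ones(len(costs))
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = efforts.sum() - efforts[i]
            efforts[i] = client_best_response(R, others, c)
    return efforts

def stage1_server(costs, reward_grid=np.linspace(1.0, 50.0, 50), value_per_unit=1.0):
    """Server anticipates the stage-2 equilibrium and picks the best reward R."""
    best_R, best_profit = None, -np.inf
    for R in reward_grid:
        efforts = stage2_equilibrium(R, costs)
        profit = value_per_unit * efforts.sum() - R
        if profit > best_profit:
            best_R, best_profit = R, profit
    return best_R, best_profit

costs = [0.2, 0.3, 0.5]          # hypothetical per-unit contribution costs
R, profit = stage1_server(costs)
print(R, profit)
```

The leader-follower structure is what makes this a Stackelberg game: the server moves first but optimizes against the equilibrium its choice will induce.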


Key Contributions

  • CoSIFL framework integrating proactive alarming and robust aggregation to defend against Byzantine (poisoning) attacks in FL
  • Local differential privacy applied to client gradients to defend against gradient inversion/inference attacks
  • Stackelberg-game-based incentive mechanism (Tullock contest-inspired) that rewards honest clients for data quality and alarm participation, with provably unique equilibrium
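To make the contest-based payout concrete, here is a minimal Tullock-style reward-sharing sketch. The scoring weights, the precision parameter `r`, and the way data quality and alarm reliability are combined are all hypothetical, not taken from the paper.

```python
# Toy Tullock-contest payout sketch (illustrative; parameters are assumptions).
# Each client's reward share is s_i^r / sum_j s_j^r, where s_i scores its
# data contribution plus alarm reliability, and r is the contest precision.

def tullock_shares(scores, r=1.0):
    """Return each client's share of the total reward pool."""
    powered = [s ** r for s in scores]
    total = sum(powered)
    return [p / total for p in powered]

def payouts(total_reward, data_quality, alarm_reliability, w=0.5, r=1.0):
    """Combine data quality and alarm reliability into one contest score each."""
    scores = [w * d + (1 - w) * a for d, a in zip(data_quality, alarm_reliability)]
    return [total_reward * s for s in tullock_shares(scores, r)]

pay = payouts(100.0, data_quality=[0.9, 0.6, 0.3], alarm_reliability=[1.0, 0.5, 0.0])
print(pay)  # higher-quality, more reliable clients earn larger shares
```

Raising `r` sharpens the contest (payouts concentrate on the top scorer), which is the usual lever for trading off competitiveness against broad participation.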

🛡️ Threat Analysis

Data Poisoning Attack

Proposes robust aggregation and a proactive alarming mechanism to defend against Byzantine clients submitting poisoned model updates (sign-flipping, label-flipping, targeted model poisoning) in federated learning — the core threat model of ML02 applied to FL.
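As a concrete illustration of why robust aggregation blunts such attacks, the sketch below uses a coordinate-wise median, a standard Byzantine-tolerant aggregator rather than CoSIFL's exact rule:

```python
# Minimal robust-aggregation sketch: coordinate-wise median over client updates.
# The median per coordinate resists a minority of arbitrarily corrupted
# (Byzantine) updates, unlike a plain mean.
import numpy as np

def robust_aggregate(updates):
    """Aggregate client model updates by taking the median of each coordinate."""
    return np.median(np.stack(updates), axis=0)

honest = [np.array([0.1, -0.2, 0.05]) for _ in range(4)]
poisoned = [np.array([-10.0, 20.0, -5.0])]   # e.g. a sign-flipped, scaled update
agg = robust_aggregate(honest + poisoned)
print(agg)  # stays at the honest values despite the poisoned update
```

With a plain mean, the single poisoned update would drag every coordinate far from the honest consensus; the median ignores it as long as honest clients form a majority.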

Model Inversion Attack

Explicitly defends against inference attacks where adversaries reconstruct clients' private training data from shared model updates via gradient inversion; LDP (local differential privacy noise on gradients) is used as the primary countermeasure against this reconstruction threat.
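A minimal sketch of the clip-and-noise pattern behind LDP gradient protection follows; the clipping norm and noise scale names are illustrative, not the paper's parameters:

```python
# Minimal local-DP gradient sketch: clip the local gradient to a norm bound,
# then add Gaussian noise before sharing, so the released update leaks less
# about any individual training example (the gradient-inversion target).
import numpy as np

def ldp_gradient(grad, clip_norm=1.0, sigma=0.8, rng=np.random.default_rng(0)):
    """Clip to clip_norm, then add noise calibrated to that sensitivity bound."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])          # raw local gradient, norm 5
noisy = ldp_gradient(g)
print(noisy)                      # bounded, noised update safe(r) to share
```

Clipping bounds each client's sensitivity so the noise scale can be calibrated; larger `sigma` strengthens privacy against reconstruction at the cost of slower convergence, which is exactly the trade-off the incentive game lets clients choose.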


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time · untargeted · targeted · grey_box
Applications
federated learning systems