
Private and Robust Contribution Evaluation in Federated Learning

Delio Jaramillo Velez, Gergely Biczók, Alexandre Graell i Amat, Johan Östman, Balázs Pejó

0 citations · 43 references · arXiv (Cornell University)


Published on arXiv · arXiv:2602.21721

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

The proposed scores consistently outperform Leave-One-Out, better approximate Shapley-induced client rankings, and improve both downstream model performance and misbehavior detection across multiple cross-silo FL settings.

Everybody-Else (EE) / Fair-Private (FP)

Novel technique introduced


Cross-silo federated learning allows multiple organizations to collaboratively train machine learning models without sharing raw data, but client updates can still leak sensitive information through inference attacks. Secure aggregation protects privacy by hiding individual updates, yet it complicates contribution evaluation, which is critical for fair rewards and detecting low-quality or malicious participants. Existing marginal-contribution methods, such as the Shapley value, are incompatible with secure aggregation, and practical alternatives, such as Leave-One-Out, are crude and rely on self-evaluation. We introduce two marginal-difference contribution scores compatible with secure aggregation. Fair-Private satisfies standard fairness axioms, while Everybody-Else eliminates self-evaluation and provides resistance to manipulation, addressing a largely overlooked vulnerability. We provide theoretical guarantees for fairness, privacy, robustness, and computational efficiency, and evaluate our methods on multiple medical image datasets and CIFAR-10 in cross-silo settings. Our scores consistently outperform existing baselines, better approximate Shapley-induced client rankings, and improve downstream model performance as well as misbehavior detection. These results demonstrate that fairness, privacy, robustness, and practical utility can be achieved jointly in federated contribution evaluation, offering a principled solution for real-world cross-silo deployments.
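To make the abstract's "crude" claim concrete, the following toy example (illustrative only, not from the paper) contrasts Leave-One-Out with the exact Shapley value: when two clients hold redundant data, Leave-One-Out assigns both zero credit, while the Shapley value splits the credit fairly.

```python
from itertools import permutations

def shapley_values(clients, u):
    """Exact Shapley values: average each client's marginal gain over all orderings."""
    phi = {c: 0.0 for c in clients}
    perms = list(permutations(clients))
    for order in perms:
        coalition = frozenset()
        for c in order:
            phi[c] += u(coalition | {c}) - u(coalition)
            coalition = coalition | {c}
    return {c: v / len(perms) for c, v in phi.items()}

def leave_one_out(clients, u):
    """LOO score: utility drop when client c is removed from the grand coalition."""
    full = frozenset(clients)
    return {c: u(full) - u(full - {c}) for c in clients}

# Toy utility: clients "a" and "b" hold redundant data (either suffices for
# +0.5 utility), while "c" holds unique data worth +0.4.
def u(S):
    S = frozenset(S)
    return (0.5 if S & {"a", "b"} else 0.0) + (0.4 if "c" in S else 0.0)

sv = shapley_values(["a", "b", "c"], u)   # a and b each get 0.25, c gets 0.40
loo = leave_one_out(["a", "b", "c"], u)   # a and b each get 0.00, c gets 0.40
```

Leave-One-Out gives the redundant clients no credit at all, because removing either one alone changes nothing; the Shapley value averages over coalitions and credits each of them half of their shared contribution.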


Key Contributions

  • Two novel marginal-difference contribution evaluation scores (Fair-Private and Everybody-Else) compatible with secure aggregation in cross-silo FL
  • Everybody-Else eliminates self-evaluation and provides provable resistance to score manipulation by selfish clients, addressing an overlooked vulnerability
  • Theoretical guarantees for fairness axioms, privacy, computational efficiency, and manipulation resistance; empirical improvements in Shapley approximation and misbehavior detection on medical imaging and CIFAR-10
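As a structural sketch only (the paper's exact Fair-Private and Everybody-Else formulas are not reproduced here), a marginal-difference score can be computed from *sums* of client updates, the kind of quantity that repeated secure-aggregation rounds over client subsets would expose, rather than from any individual update. A misbehaving client then surfaces as a negative marginal. The evaluation function and the toy "ground truth" below are hypothetical placeholders.

```python
import numpy as np

def marginal_difference_scores(updates, evaluate):
    """Illustrative sketch: score client i by the utility gap between the model
    aggregated from everyone and the model aggregated from everyone but i."""
    n = len(updates)
    total = updates.sum(axis=0)          # what secure aggregation would reveal
    full_utility = evaluate(total / n)   # utility of the full-coalition model
    scores = []
    for i in range(n):
        # In deployment this sum would come from a separate secure-aggregation
        # round over clients != i, never from inspecting update i directly.
        without_i = (total - updates[i]) / (n - 1)
        scores.append(full_utility - evaluate(without_i))
    return scores

# Toy setup: the "model" is the mean update; utility is negative distance to a
# ground-truth vector, so clients pulling toward the truth help the coalition.
truth = np.array([1.0, 1.0])
updates = np.array([
    [1.0, 1.0],    # helpful client
    [1.0, 0.9],    # helpful client
    [-1.0, -1.0],  # poisoning-style client pulling away from the truth
])

def evaluate(model):
    return -float(np.linalg.norm(model - truth))

scores = marginal_difference_scores(updates, evaluate)
```

In this toy run the two helpful clients receive positive marginals and the poisoning-style client a clearly negative one, matching the misbehavior-detection use case described above. (The self-evaluation issue the Everybody-Else score targets, i.e. who supplies the evaluation data for client i's score, is deliberately abstracted away here.)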

🛡️ Threat Analysis

Data Poisoning Attack

The paper addresses robustness against malicious and low-quality FL participants: the Everybody-Else score explicitly defends against selfish clients manipulating contribution scores, and empirical results show improved misbehavior detection — connecting to the Byzantine adversary threat model in federated learning. The secure aggregation compatibility also defends against inference attacks on individual client updates.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Datasets
CIFAR-10, medical image datasets
Applications
cross-silo federated learning, medical image classification