
Robust Federated Inference

Akash Dhasade 1, Sadegh Farhadkhani 1, Rachid Guerraoui 1, Nirupam Gupta 2, Maxime Jacovella 1, Anne-Marie Kermarrec 1, Rafael Pinot 3

1 citation · 44 references · arXiv


Published on arXiv · 2510.00310

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Combining DeepSet architecture with adversarial training and robust aggregation surpasses existing methods by 4.7–22.2% accuracy points under Byzantine attacks across diverse benchmarks.

Robust DeepSet Aggregator

Novel technique introduced


Federated inference, in the form of one-shot federated learning, edge ensembles, or federated ensembles, has emerged as an attractive solution to combine predictions from multiple models. This paradigm enables each model to remain local and proprietary while a central server queries them and aggregates predictions. Yet, the robustness of federated inference has been largely neglected, leaving it vulnerable to even simple attacks. To address this critical gap, we formalize the problem of robust federated inference and provide the first robustness analysis of this class of methods. Our analysis of averaging-based aggregators shows that the error of the aggregator is small either when the dissimilarity between honest responses is small or when the margin between the two most probable classes is large. Moving beyond linear averaging, we show that the problem of robust federated inference with non-linear aggregators can be cast as an adversarial machine learning problem. We then introduce an advanced technique using the DeepSet aggregation model, proposing a novel composition of adversarial training and test-time robust aggregation to robustify non-linear aggregators. Our composition yields significant improvements, surpassing existing robust aggregation methods by 4.7–22.2% in accuracy points across diverse benchmarks.
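The paper's test-time robust aggregation builds on coordinate-wise trimmed mean (CWTM), one of the Byzantine-robust rules it cites. A minimal numpy sketch of how CWTM limits a Byzantine client's influence on aggregated prediction vectors (the client counts, probabilities, and the `cwtm` helper below are illustrative, not taken from the paper):

```python
import numpy as np

def cwtm(predictions, f):
    """Coordinate-wise trimmed mean: for each class coordinate, sort the
    client values, drop the f largest and f smallest, average the rest."""
    preds = np.sort(np.asarray(predictions), axis=0)  # sort per coordinate
    n = preds.shape[0]
    assert n > 2 * f, "CWTM requires more than 2f clients"
    return preds[f:n - f].mean(axis=0)

# 3 honest clients narrowly favor class 0; 1 Byzantine client pushes class 2.
honest = [[0.4, 0.35, 0.25]] * 3
byzantine = [[0.0, 0.0, 1.0]]

naive = np.mean(honest + byzantine, axis=0)   # plain averaging
robust = cwtm(honest + byzantine, f=1)        # trims the extreme values

print(np.argmax(naive))   # the Byzantine vector flips the naive average
print(np.argmax(robust))  # CWTM recovers the honest majority's class
```

The example also illustrates the margin condition from the analysis: because the honest margin between classes 0 and 2 is small, a single corrupted vector is enough to flip the plain average, while trimming one extreme value per coordinate restores the honest prediction.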


Key Contributions

  • First formal robustness analysis of federated inference aggregators under Byzantine corruptions, deriving certificates for averaging-based schemes based on class margin and client dissimilarity
  • Reformulation of robust federated inference with non-linear aggregators as an adversarial ML problem over the probability simplex
  • Robust DeepSet aggregator combining permutation-invariant architecture, adversarial training, and test-time robust aggregation (CWTM), outperforming baselines by 4.7–22.2% accuracy points
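The DeepSet architecture named above is permutation-invariant by construction: each client's prediction vector is embedded by a shared network φ, the embeddings are pooled with a symmetric sum, and a second network ρ maps the pooled vector to the final output. A minimal numpy sketch with random (untrained) weights, purely to demonstrate the invariance property; in the paper the weights would be learned with adversarial training:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, c = 3, 8, 3  # num classes in, hidden width, num classes out

# Hypothetical illustrative weights (random, not learned).
W_phi = rng.normal(size=(d, h))
W_rho = rng.normal(size=(h, c))

def deepset(preds):
    """DeepSet aggregator rho(sum_i phi(x_i)): summing the per-client
    embeddings makes the output independent of client ordering."""
    phi = np.tanh(np.asarray(preds) @ W_phi)  # per-client embedding
    pooled = phi.sum(axis=0)                  # symmetric pooling
    return pooled @ W_rho

preds = rng.dirichlet(np.ones(d), size=5)  # 5 client prediction vectors
out_fwd = deepset(preds)
out_rev = deepset(preds[::-1])             # same clients, permuted order
# out_fwd and out_rev agree up to float error: permutation invariance
```

Permutation invariance matters here because the server receives an unordered set of client responses; any aggregator that depended on client order would be trivially gameable by a Byzantine adversary choosing its position.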

🛡️ Threat Analysis

Data Poisoning Attack

The threat model is Byzantine malicious clients sending arbitrarily corrupted prediction vectors to degrade aggregator accuracy — a direct extension of Byzantine attacks in federated learning (ML02) applied to the inference phase. The paper builds explicitly on Byzantine-robust aggregation literature (CWTM, CWMed) and proposes defenses (robust aggregation + adversarial training) against this adversary.
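Because the adversarial ML reformulation operates over the probability simplex (contribution above), any crafted perturbation of a prediction vector must itself remain a valid probability distribution. A small numpy sketch of the standard sorting-based Euclidean projection onto the simplex (the specific honest vector and perturbation are illustrative; the projection algorithm is the well-known one of Duchi et al., not code from the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the sorting-based algorithm."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# An adversarially perturbed softmax output leaves the simplex; projecting
# it back yields the nearest valid prediction vector the attacker can send.
honest = np.array([0.7, 0.2, 0.1])
perturbed = project_simplex(honest + np.array([0.5, -0.3, 0.4]))
```

In a PGD-style attack or in adversarial training over this threat model, this projection would be applied after each gradient step so that every intermediate corrupted vector stays a legal client response.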


Details

Domains
federated-learning · vision · nlp
Model Types
federated · transformer
Threat Tags
inference_time · black_box · untargeted
Datasets
CIFAR-10 · CIFAR-100 · AG News
Applications
federated inference · one-shot federated learning · llm ensembles · edge ensembles