
ERIS: Enhancing Privacy and Communication Efficiency in Serverless Federated Learning

Dario Fenoglio , Pasquale Polverino , Jacopo Quizi , Martin Gjoreski , Marc Langheinrich

0 citations · 98 references · arXiv (Cornell University)


Published on arXiv · 2602.08617

Membership Inference Attack (OWASP ML Top 10 — ML04)

Model Inversion Attack (OWASP ML Top 10 — ML03)

Key Finding

ERIS reduces membership inference attack success from ~83% to ~65% and degrades data reconstruction to random-level quality while achieving FedAvg-level accuracy and over 94% communication cost reduction.

ERIS

Novel technique introduced


Scaling federated learning (FL) to billion-parameter models introduces critical trade-offs between communication efficiency, model accuracy, and privacy guarantees. Existing solutions often tackle these challenges in isolation, sacrificing accuracy or relying on costly cryptographic tools. We propose ERIS, a serverless FL framework that balances privacy and accuracy while eliminating the server bottleneck and distributing the communication load. ERIS combines a model partitioning strategy, distributing aggregation across multiple client-side aggregators, with a distributed shifted gradient compression mechanism. We theoretically prove that ERIS (i) converges at the same rate as FedAvg under standard assumptions, and (ii) bounds mutual information leakage inversely with the number of aggregators, enabling strong privacy guarantees with no accuracy degradation. Experiments across image and text tasks, including large language models, confirm that ERIS achieves FedAvg-level accuracy while substantially reducing communication cost and improving robustness to membership inference and reconstruction attacks, without relying on heavy cryptography or noise injection.
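The abstract describes a "distributed shifted gradient compression mechanism" but gives no algorithmic detail. As a minimal, hedged sketch of the general idea behind shifted compression (compressing the difference between a gradient and a shared reference point rather than the raw gradient), assuming top-k sparsification as the compressor; all function names here are illustrative, not taken from the paper:

```python
import numpy as np

def topk_compress(vec, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    idx = np.argsort(np.abs(vec))[-k:]
    out = np.zeros_like(vec)
    out[idx] = vec[idx]
    return out

def shifted_compress(grad, shift, k):
    """Compress the *difference* from a shared reference ("shift"),
    so the transmitted message is smaller and carries less raw-gradient
    information than sending grad directly."""
    return topk_compress(grad - shift, k)

# Toy round: the client sends the compressed shifted gradient,
# the receiver reconstructs an approximation of the full gradient.
rng = np.random.default_rng(0)
shift = rng.normal(size=10)                     # shared reference point
grad = shift + rng.normal(scale=0.1, size=10)   # client gradient near the shift
msg = shifted_compress(grad, shift, k=3)        # only 3 nonzero values sent
recovered = shift + msg                         # receiver-side approximation
```

This is only a sketch of the compression family the paper builds on; ERIS's actual mechanism, convergence guarantees, and choice of shift are specified in the paper itself.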


Key Contributions

  • Serverless FL framework (ERIS) that distributes aggregation across multiple client-side aggregators using model partitioning, eliminating the central server bottleneck
  • Theoretical proof that mutual information leakage is bounded inversely with the number of aggregators, providing privacy guarantees without noise injection or heavy cryptography
  • Distributed shifted gradient compression mechanism that reduces communication cost by over 94% while maintaining FedAvg-level accuracy across image and LLM tasks
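To make the model-partitioning contribution concrete, here is a hedged toy sketch of shard-wise aggregation: the parameter vector is split into disjoint shards, and each shard is averaged by a different aggregator, so no single party sees a client's full update while the overall result still equals a FedAvg-style mean. Function names and the contiguous-shard split are assumptions for illustration, not the paper's exact protocol:

```python
import numpy as np

def partition(model_size, n_aggregators):
    """Split parameter indices into contiguous, disjoint shards."""
    return np.array_split(np.arange(model_size), n_aggregators)

def federated_round(client_updates, n_aggregators):
    """Aggregate shard-by-shard: each shard's mean would run on a
    different client-side aggregator, yet the concatenated result
    equals the plain FedAvg average of the full updates."""
    model_size = client_updates[0].size
    global_update = np.empty(model_size)
    for shard in partition(model_size, n_aggregators):
        shard_stack = np.stack([u[shard] for u in client_updates])
        global_update[shard] = shard_stack.mean(axis=0)
    return global_update
```

The privacy intuition matches the paper's stated bound: each aggregator observes only a 1/n fraction of any client's update, so its view of that client shrinks as the number of aggregators grows.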

🛡️ Threat Analysis

Model Inversion Attack

ERIS is also evaluated against training-data reconstruction attacks in the FL gradient-sharing setting. The model partitioning and distributed aggregation strategy degrades reconstructed samples to random-level quality, directly defending against gradient-inversion (model inversion) attacks in FL.

Membership Inference Attack

The primary privacy evaluation targets membership inference attacks: ERIS reduces the MIA success rate from ~83% to ~65%, close to the theoretical ~64% lower bound, and the framework treats MIA robustness as an explicit design goal.
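The MIA success rates quoted above are typically measured as balanced accuracy of an attacker distinguishing training members from non-members. A minimal sketch of the classic loss-threshold attack often used for this measurement; the specific attack and threshold here are illustrative assumptions, not the paper's evaluation protocol:

```python
import numpy as np

def mia_success_rate(member_losses, nonmember_losses, threshold):
    """Loss-threshold MIA: predict 'member' when the per-sample loss is
    below the threshold; return balanced accuracy over both groups."""
    tpr = np.mean(np.asarray(member_losses) < threshold)    # members caught
    tnr = np.mean(np.asarray(nonmember_losses) >= threshold)  # non-members rejected
    return 0.5 * (tpr + tnr)

# Toy illustration: an overfit model gives members noticeably lower loss,
# so the attack scores well above the 50% random-guess baseline.
rng = np.random.default_rng(1)
member_losses = rng.normal(0.2, 0.1, 1000)     # hypothetical training losses
nonmember_losses = rng.normal(0.8, 0.3, 1000)  # hypothetical held-out losses
rate = mia_success_rate(member_losses, nonmember_losses, threshold=0.5)
```

In these terms, a defense like ERIS works by shrinking the gap between the two loss distributions, pushing the attacker's balanced accuracy toward the 50% floor (here, from ~83% toward the ~64% bound).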


Details

Domains
federated-learning · vision · nlp
Model Types
federated · llm · transformer
Threat Tags
training_time · black_box
Applications
federated learning · large language model training · privacy-preserving distributed training