
ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery

Zirui Gong 1, Leo Yu Zhang 1, Yanjun Zhang 1, Viet Vo 2, Tianqing Zhu 3, Shirui Pan 1, Cong Wang 4


Published on arXiv (2603.17623)

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Achieves high-fidelity training data reconstruction from large batch sizes in realistic FL settings, significantly outperforming prior gradient inversion attacks

ARES (Activation REcovery via Sparse inversion)

Novel technique introduced


Federated Learning (FL) enables collaborative model training by sharing model updates instead of raw data, aiming to protect user privacy. However, recent studies reveal that these shared updates can inadvertently leak sensitive training data through gradient inversion attacks (GIAs). Among them, active GIAs are particularly powerful, enabling high-fidelity reconstruction of individual samples even under large batch sizes. Nevertheless, existing approaches often require architectural modifications, which limit their practical applicability. In this work, we bridge this gap by introducing the Activation REcovery via Sparse inversion (ARES) attack, an active GIA designed to reconstruct training samples from large training batches without requiring architectural modifications. Specifically, we formulate the recovery problem as a noisy sparse recovery task and solve it using the generalized Least Absolute Shrinkage and Selection Operator (Lasso). To extend the attack to multi-sample recovery, ARES incorporates the imprint method to disentangle activations, enabling scalable per-sample reconstruction. We further establish the expected recovery rate and derive an upper bound on the reconstruction error, providing theoretical guarantees for the ARES attack. Extensive experiments on CNNs and MLPs demonstrate that ARES achieves high-fidelity reconstruction across diverse datasets, significantly outperforming prior GIAs under large batch sizes and realistic FL settings. Our results highlight that intermediate activations pose a serious and underestimated privacy risk in FL, underscoring the urgent need for stronger defenses.
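The imprint method mentioned above can be illustrated with a toy sketch: a linear layer whose rows all share one measurement vector, with biases set to thresholds that bin the batch by score, so that differences between adjacent rows of the aggregated weight gradient isolate individual samples. This is a minimal, hypothetical illustration of the disentanglement idea, not the ARES pipeline; the dimensions, the toy loss L = sum(ReLU(·)), and all parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
batch, d = 4, 6
X = rng.standard_normal((batch, d))             # private batch (hypothetical data)

# Imprint-style layer: every row shares the same measurement vector w;
# biases are thresholds that bin the samples by their score w.x.
w = rng.standard_normal(d)
scores = X @ w                                  # one scalar score per sample
eps = 1e-6                                      # assumes distinct, well-separated scores
thresholds = np.concatenate([np.sort(scores) - eps, [scores.max() + 1.0]])
K = len(thresholds)
W = np.tile(w, (K, 1))
b = -thresholds

# Forward pass with toy loss L = sum(ReLU(X W^T + b)), gradients summed over the batch
# (what an FL client would share as its aggregated update).
pre = X @ W.T + b
active = (pre > 0).astype(float)                # which samples activate each unit
grad_W = active.T @ X                           # dL/dW[k] = sum of samples active at unit k
grad_b = active.sum(axis=0)                     # dL/db[k] = count of samples active at unit k

# Adjacent-row differences disentangle the aggregate: one sample lands in each bin.
counts = grad_b[:-1] - grad_b[1:]               # samples per bin (1 each here)
recon = (grad_W[:-1] - grad_W[1:]) / counts[:, None]
```

Here `recon` recovers every sample in the batch exactly, ordered by score; with real losses the per-unit gradients are scaled rather than all-ones, which is why normalizing by the bias-gradient differences matters.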


Key Contributions

  • Formulates activation recovery as a noisy sparse recovery problem solved via generalized Lasso
  • Achieves high-fidelity reconstruction from large batches without architectural modifications
  • Provides theoretical guarantees: expected recovery rate and upper bound on reconstruction error
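The first contribution can be sketched in miniature: treating a (mostly zero, e.g. post-ReLU) activation as the unknown in a noisy linear system and recovering it with L1-regularized least squares. This is a generic compressed-sensing sketch under assumed dimensions and noise level, using scikit-learn's standard Lasso rather than the paper's generalized Lasso formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 120, 10                          # activation size, measurements, nonzeros (assumed)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix (stand-in for weight rows)

x_true = np.zeros(n)                            # sparse ReLU-like activation
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy linear observations

# L1-regularized least squares recovers the sparse activation despite m < n.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10_000)
lasso.fit(A, y)
x_hat = lasso.coef_

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With fewer measurements than unknowns (m = 120 < n = 200), ordinary least squares is underdetermined; the L1 penalty exploits sparsity to pin down the solution, which is the core of the recovery formulation.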

🛡️ Threat Analysis

Model Inversion Attack

ARES reconstructs private training data (images, samples) from gradients shared in federated learning. This is a model inversion / data reconstruction attack in which the adversary, a malicious server, recovers individual training samples from the model updates clients submit.
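Why gradients leak data at all is easiest to see in the single-sample case: for a linear layer z = Wx + b, the weight gradient is a rank-one outer product (dL/dz)x^T, so dividing any row of dL/dW by the matching entry of dL/db returns the input exactly. A toy illustration under an assumed scalar loss L = sum(z):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 8, 4

x = rng.standard_normal(d_in)          # private sample the client trains on
W = rng.standard_normal((d_out, d_in))
b = rng.standard_normal(d_out)

# Forward pass through one linear layer with a toy scalar loss L = sum(z).
z = W @ x + b
dL_dz = np.ones(d_out)                 # gradient of the toy loss w.r.t. z

# The gradients a federated client would share with the server:
dL_dW = np.outer(dL_dz, x)             # dL/dW = (dL/dz) x^T  (rank one)
dL_db = dL_dz                          # dL/db = dL/dz

# Server-side reconstruction: any row with a nonzero bias gradient reveals x.
i = int(np.argmax(np.abs(dL_db)))
x_rec = dL_dW[i] / dL_db[i]
```

With batches larger than one, these rank-one terms sum together and must first be disentangled, which is exactly what the imprint step and the sparse-recovery formulation address.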


Details

Domains
vision, federated-learning
Model Types
cnn, traditional_ml, federated
Threat Tags
training_time, white_box
Datasets
CIFAR-10, ImageNet
Applications
federated learning, image classification