
GUIDE: Enhancing Gradient Inversion Attacks in Federated Learning with Denoising Models

Vincenzo Carletti, Pasquale Foggia, Carlo Mazzocca, Giuseppe Parrella, Mario Vento

1 citation · 49 references · arXiv


Published on arXiv · arXiv:2510.17621

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

GUIDE achieves up to 46% higher perceptual similarity (DreamSim) when reconstructing private training images from FL gradients compared to state-of-the-art gradient inversion baselines.

GUIDE (Gradient Update Inversion with DEnoising)

Novel technique introduced


Federated Learning (FL) enables collaborative training of Machine Learning (ML) models across multiple clients while preserving their privacy. Rather than sharing raw data, federated clients transmit locally computed updates to train the global model. Although this paradigm should provide stronger privacy guarantees than centralized ML, client updates remain vulnerable to privacy leakage. Adversaries can exploit them to infer sensitive properties about the training data or even to reconstruct the original inputs via Gradient Inversion Attacks (GIAs). Under the honest-but-curious threat model, GIAs attempt to reconstruct training data by reversing intermediate updates using optimization-based techniques. We observe that these approaches usually reconstruct noisy approximations of the original inputs, whose quality can be enhanced with specialized denoising models. This paper presents Gradient Update Inversion with DEnoising (GUIDE), a novel methodology that leverages diffusion models as denoising tools to improve image reconstruction attacks in FL. GUIDE can be integrated into any GIA that exploits surrogate datasets, a widely adopted assumption in the GIA literature. We comprehensively evaluate our approach in two attack scenarios that use different FL algorithms, models, and datasets. Our results demonstrate that GUIDE integrates seamlessly with two state-of-the-art GIAs, substantially improving reconstruction quality across multiple metrics. Specifically, GUIDE achieves up to 46% higher perceptual similarity, as measured by the DreamSim metric.
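The optimization-based inversion the abstract describes can be illustrated with a dependency-free toy: an attacker who observes the gradient of a linear model's squared-error loss searches for a dummy input whose gradient matches the observed one. The linear model, dimensions, learning rate, and finite-difference optimizer below are illustrative assumptions, not the paper's actual setup (real GIAs operate on deep networks with autodiff).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative): y_hat = w @ x with loss L = (w @ x - y)^2.
d = 8
w = rng.normal(size=d)                 # global model weights (known to attacker)
x_secret = rng.normal(size=d)          # the client's private input
y = 1.0                                # its label (assumed known here)

def grad_wrt_w(x):
    # dL/dw for L = (w @ x - y)^2  ->  2 * (w @ x - y) * x
    return 2.0 * (w @ x - y) * x

g_observed = grad_wrt_w(x_secret)      # the update the FL server sees

def matching_loss(x):
    # Gradient-matching objective: ||grad(x) - g_observed||^2
    diff = grad_wrt_w(x) - g_observed
    return float(diff @ diff)

# Invert by gradient descent on the dummy input, with central finite
# differences so the sketch needs no autodiff library.
x_dummy = rng.normal(size=d)
init_loss = matching_loss(x_dummy)
eps, lr = 1e-5, 5e-4
for _ in range(20000):
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        grad[i] = (matching_loss(x_dummy + e) - matching_loss(x_dummy - e)) / (2 * eps)
    x_dummy -= lr * grad

# The matching loss shrinks as the dummy input's gradient aligns with the
# observed one; for this toy model recovery is only up to a sign/scale
# ambiguity, which is why GIA outputs are typically noisy approximations.
print(init_loss, matching_loss(x_dummy))
```

Those residual, noisy approximations are exactly what GUIDE's denoising stage targets.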


Key Contributions

  • GUIDE methodology that integrates diffusion models as post-processing denoisers to enhance image reconstruction quality from gradient inversion attacks in FL
  • Demonstration that GUIDE is modular and integrates with any surrogate-dataset-based GIA, evaluated across two FL algorithms and multiple model/dataset configurations
  • Up to 46% improvement in perceptual similarity (DreamSim) over state-of-the-art gradient inversion baselines

🛡️ Threat Analysis

Model Inversion Attack

GUIDE is a gradient inversion attack in federated learning where an honest-but-curious adversary reconstructs private client training data from observed gradient updates — the canonical ML03 threat (gradient leakage / data reconstruction from model updates).
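GUIDE's contribution is the post-processing step: a denoiser applied to the noisy GIA reconstruction. The paper uses a diffusion model; the sketch below substitutes a trivial 3x3 mean filter as a stand-in denoiser on a toy image, only to show the "reconstruct, then denoise" pipeline shape and that denoising can move the output closer to the private image. All names and the filter choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "private image": a bright square on a dark background.
truth = np.zeros((16, 16))
truth[4:12, 4:12] = 1.0

# Stand-in for a noisy GIA reconstruction (additive Gaussian noise).
noisy = truth + 0.3 * rng.normal(size=truth.shape)

def mean_filter(img):
    # 3x3 box filter with edge padding -- a placeholder for the learned
    # diffusion denoiser used by GUIDE.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def mse(a, b):
    return float(((a - b) ** 2).mean())

# Pipeline: attack output -> denoiser -> improved reconstruction.
denoised = mean_filter(noisy)
print(mse(noisy, truth), mse(denoised, truth))
```

In the paper the improvement is measured with perceptual metrics such as DreamSim rather than MSE, and the denoiser is a diffusion model conditioned on a surrogate dataset, but the pipeline structure is the same.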


Details

Domains
vision · federated-learning
Model Types
diffusion · federated · cnn
Threat Tags
white_box · training_time
Applications
federated learning · image reconstruction · privacy-preserving ml