Attack · 2025

MAGIA: Sensing Per-Image Signals from Single-Round Averaged Gradients for Label-Inference-Free Gradient Inversion

Zhanting Zhou, Jinbo Wang, Zeqin Wu, Fengli Zhang



Published on arXiv: 2509.18170

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

MAGIA achieves high-fidelity multi-image reconstruction in large-batch federated learning scenarios where prior gradient inversion methods fail, without requiring label inference or auxiliary information

MAGIA

Novel technique introduced


We study gradient inversion in the challenging single-round averaged-gradient (SAG) regime, where per-sample cues are entangled within a single batch-mean gradient. We introduce MAGIA (momentum-based adaptive correction on gradient inversion attack), a novel label-inference-free framework that senses latent per-image signals by probing random data subsets. MAGIA's objective integrates two core innovations: (1) a closed-form combinatorial rescaling that creates a provably tighter optimization bound, and (2) a momentum-based mixing of whole-batch and subset losses that ensures reconstruction robustness. Extensive experiments demonstrate that MAGIA significantly outperforms advanced methods, achieving high-fidelity multi-image reconstruction in large-batch scenarios where prior works fail. This is accomplished with a computational footprint comparable to standard solvers and without requiring any auxiliary information.


Key Contributions

  • Label-inference-free gradient inversion framework using random subset probing to sense per-image signals from batch-averaged gradients
  • Closed-form combinatorial rescaling that creates a provably tighter optimization bound consistent across subset sizes
  • Momentum-based mixing of whole-batch and subset losses for robust multi-image reconstruction in large-batch SAG regimes
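The contributions above can be illustrated with a toy sketch. The code below reconstructs a dummy batch from a single averaged gradient of a linear model by mixing a whole-batch gradient-matching loss with a random-subset loss. The subset-sampling loop, the `scale` hook, and the momentum-based mixing rule are placeholders of my own invention: the paper's closed-form combinatorial rescaling and exact mixing update are not given in this summary, so this is only a structural sketch of the idea, not MAGIA itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, B = 4, 8                        # feature dim, client batch size
w = rng.normal(size=d)             # shared linear-model weights (white-box)
X = rng.normal(size=(B, d))        # private client batch (ground truth)

def per_sample_grads(Xc):
    # gradient of the per-sample loss 0.5*(w @ x)**2 w.r.t. w is (w @ x) * x
    r = Xc @ w
    return r[:, None] * Xc         # shape (B, d)

g_bar = per_sample_grads(X).mean(axis=0)   # single-round averaged gradient

def match(Xd, idx, scale):
    # compare (scale * mean of dummy grads over idx) against g_bar;
    # `scale` marks where MAGIA's closed-form combinatorial rescaling would
    # enter -- its exact form is not in this summary, so we pass 1.0 below
    G = per_sample_grads(Xd)
    D = scale * G[idx].mean(axis=0) - g_bar
    loss = D @ D
    grad = np.zeros_like(Xd)
    r = Xd @ w
    c = 2.0 * scale / len(idx)
    for i in idx:
        grad[i] = c * (r[i] * D + (D @ Xd[i]) * w)   # analytic d(loss)/dXd[i]
    return loss, grad

Xd = 0.5 * rng.normal(size=(B, d))  # dummy images the attacker optimizes
alpha, beta, lr = 1.0, 0.9, 0.05    # mixing state, momentum, step size
losses = []
for _ in range(3000):
    Lb, Gb = match(Xd, np.arange(B), 1.0)          # whole-batch loss
    S = rng.choice(B, size=B // 2, replace=False)  # random subset probe
    Ls, Gs = match(Xd, S, 1.0)                     # subset loss
    # momentum-based mixing of whole-batch and subset losses
    # (illustrative rule, not the paper's exact update)
    alpha = beta * alpha + (1 - beta) * Lb / (Lb + Ls + 1e-12)
    Xd -= lr * (alpha * Gb + (1 - alpha) * Gs)
    losses.append(alpha * Lb + (1 - alpha) * Ls)
```

In this toy setting the mixed gradient-matching loss drops steadily, showing how subset probes contribute per-image signal while the whole-batch term anchors the optimization.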

🛡️ Threat Analysis

Model Inversion Attack

MAGIA is a gradient inversion attack where an honest-but-curious server reconstructs private client training images from observed batch-mean gradients — the core ML03 threat of adversarially recovering training data from model updates in federated learning.
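Why gradients leak training data at all can be seen in a minimal toy case, assuming a scalar-output linear model with a squared loss (my own illustrative setup, far simpler than the CNN/federated setting the paper targets): a single observed gradient determines the private input in closed form, up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
w = rng.normal(size=d)            # model weights, known to the server (white-box)
x_true = rng.normal(size=d)       # private client training input
# gradient of L = 0.5 * (w @ x)**2 w.r.t. w is (w @ x) * x
g = (w @ x_true) * x_true         # what the honest-but-curious server observes

# invert: w @ g = (w @ x)**2, so the residual magnitude is recoverable
r = np.sqrt(w @ g)
x_hat = g / r                     # equals x_true or -x_true
leaked = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(leaked)                     # ~0: the gradient fully reveals the input
```

Batch averaging destroys this closed form by entangling many such per-sample terms into one mean, which is exactly the SAG obstacle MAGIA's subset probing is designed to work around.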


Details

Domains
vision, federated-learning
Model Types
cnn, federated
Threat Tags
white_box, training_time, targeted
Applications
federated learning, image reconstruction from gradients