
Generative Model Inversion Through the Lens of the Manifold Hypothesis

Xiong Peng 1, Bo Han 1, Fengfei Yu 1, Tongliang Liu 2, Feng Liu 3, Mingyuan Zhou 4

0 citations · 50 references · arXiv


Published on arXiv · 2509.20177

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

A training objective that explicitly promotes gradient-manifold alignment increases a model's vulnerability to MIAs, and a training-free inversion enhancement achieves consistent improvements over state-of-the-art generative MIAs.


Model inversion attacks (MIAs) aim to reconstruct class-representative samples from trained models. Recent generative MIAs utilize generative adversarial networks to learn image priors that guide the inversion process, yielding reconstructions with high visual quality and strong fidelity to the private training data. To explore the reason behind their effectiveness, we begin by examining the gradients of inversion loss with respect to synthetic inputs, and find that these gradients are surprisingly noisy. Further analysis reveals that generative inversion implicitly denoises these gradients by projecting them onto the tangent space of the generator manifold, filtering out off-manifold components while preserving informative directions aligned with the manifold. Our empirical measurements show that, in models trained with standard supervision, loss gradients often exhibit large angular deviations from the data manifold, indicating poor alignment with class-relevant directions. This observation motivates our central hypothesis: models become more vulnerable to MIAs when their loss gradients align more closely with the generator manifold. We validate this hypothesis by designing a novel training objective that explicitly promotes such alignment. Building on this insight, we further introduce a training-free approach to enhance gradient-manifold alignment during inversion, leading to consistent improvements over state-of-the-art generative MIAs.
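The implicit denoising described above can be illustrated numerically. The sketch below uses a toy nonlinear "generator" (not the paper's GAN) and a finite-difference Jacobian: the Jacobian's columns span the tangent space of the generator manifold at a latent point, so projecting a noisy image-space gradient onto that column space filters out the off-manifold noise component, as generative inversion does implicitly when optimizing in latent space. All names and dimensions here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator" mapping a low-dim latent (d=4) to image space (D=64).
# Purely illustrative; a real generative MIA uses a trained GAN generator.
d, D = 4, 64
A = rng.standard_normal((D, 16))
B = rng.standard_normal((16, d))

def G(z):
    return A @ np.tanh(B @ z)

def jacobian(z, eps=1e-5):
    # Finite-difference Jacobian of G at z; its columns span the
    # tangent space of the generator manifold at G(z).
    J = np.zeros((D, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (G(z + e) - G(z - e)) / (2 * eps)
    return J

z = rng.standard_normal(d)
J = jacobian(z)

# A "clean" on-manifold direction plus heavy off-manifold noise,
# mimicking the noisy inversion-loss gradients the paper measures.
clean = J @ rng.standard_normal(d)
noise = rng.standard_normal(D) * np.linalg.norm(clean)
g = clean + noise

# Orthogonal projection onto the tangent space: J @ pinv(J) @ g.
# Latent-space optimization applies this kind of projection implicitly.
g_proj = J @ np.linalg.pinv(J) @ g

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(f"alignment with clean direction, raw gradient : {cos(g, clean):.3f}")
print(f"alignment with clean direction, projected    : {cos(g_proj, clean):.3f}")
```

Because the noise is spread over all D image-space directions while the tangent space has only d dimensions, the projection retains roughly a sqrt(d/D) fraction of the noise, so the projected gradient aligns far better with the class-relevant on-manifold direction. This is the mechanism the paper's central hypothesis builds on: models whose raw loss gradients already lie close to this tangent space lose less information in the projection and are therefore easier to invert.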


Key Contributions

  • Theoretical insight that generative MIAs implicitly denoise noisy inversion gradients by projecting them onto the GAN generator manifold's tangent space
  • Empirical validation of the hypothesis that models become more vulnerable to MIAs when their loss gradients align closely with the generator manifold
  • A novel training objective promoting gradient-manifold alignment and a complementary training-free inversion enhancement, both yielding consistent improvements over state-of-the-art generative MIAs

🛡️ Threat Analysis

Model Inversion Attack

The paper directly studies model inversion attacks — adversaries reconstructing class-representative private training data from trained models. It analyzes the mechanism behind generative MIAs, proposes a training objective that increases model vulnerability to inversion, and introduces a training-free method to improve attack effectiveness, all within a concrete data-reconstruction threat model.


Details

Domains
vision, generative
Model Types
gan, cnn
Threat Tags
white_box, training_time, inference_time, targeted
Applications
image classification, face recognition