
Revisiting the Privacy Risks of Split Inference: A GAN-Based Data Reconstruction Attack via Progressive Feature Optimization

Yixiang Qiu 1, Yanhan Liu 1, Hongyao Yu 1, Hao Fang 1, Bin Chen 2, Shu-Tao Xia 1, Ke Xu 1


Published on arXiv (arXiv:2508.20613)

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

The proposed GAN-based PFO attack significantly outperforms prior data reconstruction attacks, with particular gains in high-resolution and out-of-distribution settings against deep and complex DNNs in split inference systems.

PFO (Progressive Feature Optimization)

Novel technique introduced


The growing complexity of Deep Neural Networks (DNNs) has led to the adoption of Split Inference (SI), a collaborative paradigm that partitions computation between edge devices and the cloud to reduce latency and protect user privacy. However, recent advances in Data Reconstruction Attacks (DRAs) reveal that intermediate features exchanged in SI can be exploited to recover sensitive input data, posing significant privacy risks. Existing DRAs are typically effective only on shallow models and fail to fully leverage semantic priors, limiting their reconstruction quality and generalizability across datasets and model architectures. In this paper, we propose a novel GAN-based DRA framework with Progressive Feature Optimization (PFO), which decomposes the generator into hierarchical blocks and incrementally refines intermediate representations to enhance the semantic fidelity of reconstructed images. To stabilize the optimization and improve image realism, we introduce an L1-ball constraint during reconstruction. Extensive experiments show that our method outperforms prior attacks by a large margin, especially in high-resolution scenarios, out-of-distribution settings, and against deeper and more complex DNNs.


Key Contributions

  • GAN-based Data Reconstruction Attack framework with Progressive Feature Optimization (PFO) that decomposes the generator into hierarchical blocks to incrementally refine intermediate representations for semantically faithful image reconstruction
  • Introduction of an L1-ball constraint during optimization to stabilize the reconstruction process and improve image realism
  • Demonstrated superior attack performance over prior DRAs in high-resolution scenarios, out-of-distribution settings, and against deeper, more complex DNN architectures
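
The L1-ball constraint mentioned above can be realized as a Euclidean projection applied after each optimization step. A minimal NumPy sketch of the standard sort-based projection algorithm (the function name and `radius` parameter are illustrative; the paper's exact formulation may differ):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the L1 ball of the given radius.

    Classic O(n log n) sort-based algorithm, used here as a stand-in
    for the paper's L1-ball constraint on intermediate features.
    """
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= radius:
        return v                            # already inside the ball
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    cssv = np.cumsum(u) - radius            # cumulative sums minus radius
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

In an attack loop, this projection would be applied to the optimized intermediate representation after every gradient update, keeping it inside a trust region and stabilizing the reconstruction.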

🛡️ Threat Analysis

Model Inversion Attack

The paper describes a data reconstruction attack in which an adversary (e.g., the cloud server or an eavesdropper) intercepts the intermediate "smashed" feature representations exchanged in split inference and reconstructs the original private input. This is a model inversion attack: the adversary reverse-engineers the client's partial computation to recover private inference inputs. The PFO framework directly targets this privacy threat.
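
As a toy illustration of this threat model (not the paper's PFO attack), the sketch below recovers a private input purely from intercepted intermediate features by gradient descent on a feature-matching loss. A random linear layer stands in for the client-side model; the real attack targets deep CNNs and adds a GAN prior, both omitted here. All names and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))     # client-side "model", known to the adversary
x_private = rng.standard_normal(16)   # the victim's private input
z = W @ x_private                     # intercepted "smashed" features

x_hat = np.zeros(16)                  # adversary's reconstruction variable
lr = 0.01
for _ in range(2000):
    grad = W.T @ (W @ x_hat - z)      # gradient of 0.5 * ||W x - z||^2
    x_hat -= lr * grad                # descend on the feature-matching loss

# x_hat now closely matches x_private: the features alone leak the input
```

Even this trivial attacker succeeds against a shallow split point; the paper's contribution is making such reconstruction work against deep, complex DNNs by progressively optimizing features through the hierarchical blocks of a pretrained generator.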


Details

Domains
vision
Model Types
CNN, GAN
Threat Tags
grey_box, inference_time
Applications
split inference systems, collaborative edge-cloud inference, image classification