
InfoDecom: Decomposing Information for Defending Against Privacy Leakage in Split Inference

Ruijun Deng 1, Zhihui Lu 1, Qiang Duan 2

Published on arXiv (2511.13365) · 0 citations · 40 references

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

InfoDecom achieves a superior utility-privacy trade-off compared to existing defenses in shallow-client split inference by first removing redundant information before applying noise, requiring less perturbation for the same theoretical privacy guarantee.

InfoDecom

Novel technique introduced


Split inference (SI) enables users to access deep learning (DL) services without directly transmitting raw data. However, recent studies reveal that data reconstruction attacks (DRAs) can recover the original inputs from the smashed data sent from the client to the server, leading to significant privacy leakage. While various defenses have been proposed, they often result in substantial utility degradation, particularly when the client-side model is shallow. We identify a key cause of this trade-off: existing defenses apply excessive perturbation to redundant information in the smashed data. To address this issue in computer vision tasks, we propose InfoDecom, a defense framework that first decomposes and removes redundant information and then injects noise calibrated to provide theoretically guaranteed privacy. Experiments demonstrate that InfoDecom achieves a superior utility-privacy trade-off compared to existing baselines.


Key Contributions

  • InfoDecom defense framework that decomposes and removes task-redundant information from smashed data (via frequency-domain filtering and Information Bottleneck regularization) before applying calibrated Gaussian noise for theoretically guaranteed privacy.
  • Identifies excessive perturbation of redundant information as the root cause of poor utility-privacy trade-offs in existing defenses for shallow client-side models in split inference.
  • Demonstrates superior utility-privacy trade-off over SOTA defenses across multiple vision benchmarks, especially under shallow client-model settings.
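The two-stage idea above (strip redundant information first, then add calibrated noise) can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the circular low-pass filter as a stand-in for the frequency-domain decomposition, and the fixed `sigma` are all illustrative assumptions; in InfoDecom the retained content is shaped by Information Bottleneck training and the noise scale is derived from the privacy guarantee.

```python
import numpy as np

def decompose_and_perturb(smashed, keep_ratio=0.5, sigma=0.1):
    """Hypothetical InfoDecom-style defense sketch:
    (1) remove high-frequency content (assumed task-redundant here)
        via an FFT low-pass filter on each feature map,
    (2) add Gaussian noise to what remains (less perturbation is
        needed once redundancy is gone)."""
    h, w = smashed.shape[-2:]
    freq = np.fft.fftshift(np.fft.fft2(smashed), axes=(-2, -1))
    # circular low-pass mask keeping the central band of frequencies
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= keep_ratio * min(h, w) / 2
    filtered = np.fft.ifft2(
        np.fft.ifftshift(freq * mask, axes=(-2, -1))
    ).real
    # illustrative fixed sigma; the paper calibrates it to a
    # theoretical privacy budget instead
    return filtered + np.random.normal(0.0, sigma, filtered.shape)
```

A client would apply this to its smashed data (e.g. a `(C, H, W)` feature tensor) before transmission, so the server receives only the filtered, noised representation.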

🛡️ Threat Analysis

Model Inversion Attack

The explicit threat model is a malicious server adversary who reconstructs the original input (raw data) from smashed data (intermediate model representations) transmitted during split inference — a classic model inversion / data reconstruction attack. InfoDecom is a defense that reduces reconstructable information in smashed data and applies theoretically guaranteed noise to defeat these DRAs.
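To make the threat concrete, here is a toy data reconstruction attack under strong simplifying assumptions: the client model is a single known linear layer and the server owns surrogate data drawn from the same distribution. The server fits a least-squares inverse mapping from smashed data back to inputs. This is only an illustration of the attack class, not any specific DRA evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "client model": x (32-dim) -> z = W x (16-dim smashed data)
W = rng.normal(size=(16, 32))
X = rng.normal(size=(1000, 32))   # surrogate inputs the server owns
Z = X @ W.T                        # smashed data the server observes

# Data reconstruction attack: learn an inverse map z -> x_hat
# by least squares on the surrogate pairs (Z, X)
A, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_hat = Z @ A

# Reconstruction beats the trivial zero predictor (whose MSE ~= Var(x) = 1)
mse = np.mean((X_hat - X) ** 2)
```

Even with a compressive layer (16 < 32 dimensions), the attacker recovers the component of the input lying in the layer's row space, which is why defenses must actively limit the information content of the smashed data rather than rely on dimensionality reduction alone.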


Details

Domains
vision
Model Types
cnn
Threat Tags
black_box · inference_time
Applications
split inference · image classification · collaborative inference / MLaaS