Data Exfiltration by Compression Attack: Definition and Evaluation on Medical Image Data
Huiyu Li, Nicholas Ayache, Hervé Delingette
Published on arXiv: 2511.21227
Model Inversion Attack
OWASP ML Top 10: ML03
Key Finding
The DEC attack successfully reconstructs medical training images with high fidelity from exported model weights alone, and can be made resilient to Gaussian differential privacy noise at the cost of reducing the number of recoverable images.
Data Exfiltration by Compression (DEC)
Novel technique introduced
With the rapid expansion of data lakes storing health data and hosting AI algorithms, a prominent concern arises: how safe is it to export machine learning models from these data lakes? In particular, deep network models, widely used for health data processing, encode information from their training dataset, potentially leading to the leakage of sensitive information upon export. This paper thoroughly examines this issue in the context of medical imaging data and introduces a novel data exfiltration attack based on image compression techniques. This attack, termed Data Exfiltration by Compression, requires only access to a data lake and relies on lossless or lossy image compression methods. Unlike previous data exfiltration attacks, it is compatible with any image processing task and depends solely on an exported network model, without requiring any additional information to be collected during training. We explore various scenarios and techniques to limit the size of the exported model and to conceal the compression codes within the network. Using two public datasets of CT and MR images, we demonstrate that this attack can effectively steal medical images and reconstruct them outside the data lake with high fidelity, achieving an optimal balance between compression and reconstruction quality. Additionally, we investigate the impact of basic differential privacy measures, such as adding Gaussian noise to the model parameters, on preventing the Data Exfiltration by Compression attack. We also show how the attacker can make their attack resilient to differential privacy at the expense of decreasing the number of stolen images. Lastly, we propose an alternative prevention strategy based on fine-tuning the model to be exported.
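The core mechanism can be illustrated with a toy sketch (not the paper's implementation): compress an image losslessly, then hide the compressed byte stream in the low-order mantissa bits of the exported float32 weights, where the perturbation to each weight is far below its least significant digits of precision. The helper names `embed_payload` and `extract_payload` are illustrative, not from the paper.

```python
import zlib

import numpy as np


def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide one payload byte in the low 8 mantissa bits of each float32 weight.

    Overwriting only the least-significant mantissa byte changes each weight
    by a tiny relative amount, so the exported model's behavior is barely
    affected -- the intuition behind bit-level weight steganography.
    """
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("model too small for payload")
    bits = flat.view(np.uint32)
    for i, byte in enumerate(payload):
        bits[i] = (bits[i] & 0xFFFFFF00) | byte
    return bits.view(np.float32).reshape(weights.shape)


def extract_payload(weights: np.ndarray, n_bytes: int) -> bytes:
    """Read the hidden bytes back out of the low mantissa bits."""
    bits = weights.astype(np.float32).ravel().view(np.uint32)
    return bytes(int(b) & 0xFF for b in bits[:n_bytes])


# Toy "medical image" and toy "model weights" (illustrative stand-ins).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
weights = rng.standard_normal((256, 256)).astype(np.float32)

code = zlib.compress(image.tobytes())      # lossless compression step
stego = embed_payload(weights, code)       # hide codes in exported weights
recovered = zlib.decompress(extract_payload(stego, len(code)))
assert recovered == image.tobytes()        # exact reconstruction outside the lake
```

In this sketch the exported model carries the full compressed image yet every weight moves by less than its low mantissa byte; a real attack must additionally manage payload capacity across many images and layers, which is where the paper's compression/reconstruction trade-off arises.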
Key Contributions
- Introduces the Data Exfiltration by Compression (DEC) attack, which hides losslessly or lossily compressed medical image codes inside exported neural network weights, enabling full training data reconstruction outside a secure data lake
- Demonstrates the attack on CT and MR imaging datasets with high reconstruction fidelity, and shows it is task-agnostic (compatible with any image processing network, including U-Nets with skip connections)
- Evaluates Gaussian-noise differential privacy as a countermeasure and shows attackers can adapt to remain resilient at the cost of fewer stolen images; proposes fine-tuning as an alternative prevention strategy
🛡️ Threat Analysis
The DEC attack's primary goal is recovering private training data from an exported ML model. Although the mechanism differs from classical inference-based model inversion (the adversary actively encodes compressed images into model weights before export rather than passively inferring data from outputs), the adversarial threat model is identical to ML03: an insider extracts and reconstructs the full training dataset through a model export channel. The paper also evaluates differential privacy defenses (adding Gaussian noise to model parameters) and fine-tuning as countermeasures against this data reconstruction threat.
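Why Gaussian noise on the parameters disrupts this kind of exfiltration can be seen in a small illustration (my own sketch, with an assumed noise scale, not the paper's experiment): even modest noise is orders of magnitude larger than a float32 weight's low mantissa byte, so a payload packed naively into those bits is scrambled almost everywhere. This is why a resilient attacker must encode redundantly in higher-order bits, which, as the paper reports, shrinks the number of images that can be stolen.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(10_000).astype(np.float32)

# Assumed defense: Gaussian noise with sigma=1e-3 added to each parameter.
noisy = (w + rng.normal(0.0, 1e-3, w.shape)).astype(np.float32)

# Fraction of low-order mantissa bytes (a naive hiding spot) changed by the noise.
low_before = w.view(np.uint32) & 0xFF
low_after = noisy.view(np.uint32) & 0xFF
scrambled = float((low_before != low_after).mean())
print(f"low-byte corruption rate: {scrambled:.3f}")
```

Because the noise dwarfs the granularity of the low mantissa byte, the corruption rate is close to 1, so any bit-packed payload there is effectively destroyed unless the attacker spends capacity on redundancy.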