Model Inversion Attack Against Deep Hashing
Dongdong Zhao, Qiben Xu, Ranxin Fang, Baogang Song
Published on arXiv
2511.12233
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
DHMI outperforms state-of-the-art model inversion attacks in black-box scenarios, successfully reconstructing high-resolution, semantically consistent images from deep hashing models without access to training hash codes.
DHMI
Novel technique introduced
Deep hashing improves retrieval efficiency through compact binary codes, yet it introduces severe and often overlooked privacy risks. The ability to reconstruct original training data from hash codes could enable serious threats such as biometric forgery and privacy breaches. However, model inversion attacks specifically targeting deep hashing models remain unexplored, leaving their security implications unexamined. This research gap stems from two obstacles: genuine training hash codes are inaccessible, and the highly discrete Hamming space prevents existing methods from adapting to deep hashing. To address these challenges, we propose DHMI, the first diffusion-based model inversion framework designed for deep hashing. DHMI first clusters an auxiliary dataset to derive semantic hash centers that serve as surrogate anchors. It then introduces a surrogate-guided denoising optimization method that uses a novel attack metric, fusing classification consistency with hash proximity, to dynamically select candidate samples. A cluster of surrogate models guides the refinement of these candidates, ensuring the generation of high-fidelity, semantically consistent images. Experiments on multiple datasets demonstrate that DHMI reconstructs high-resolution, high-quality images even in the most challenging black-box setting, where no training hash codes are available. Our method outperforms existing state-of-the-art model inversion attacks in black-box scenarios, confirming both its practical efficacy and the critical privacy risks inherent in deep hashing systems.
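To make the two surrogate components concrete, here is a minimal sketch of (a) deriving a surrogate hash center from a cluster of auxiliary-set codes and (b) an attack metric fusing classification consistency with hash proximity. This is an illustration under stated assumptions, not the paper's implementation: it assumes hash codes in {-1, +1}^K, and the names `hash_center`, `attack_score`, and the weight `alpha` are hypothetical.

```python
import numpy as np

def hash_center(codes: np.ndarray) -> np.ndarray:
    """Surrogate hash center for one cluster: per-bit majority vote,
    i.e. the sign of the mean over the cluster's {-1,+1} codes."""
    center = np.sign(codes.mean(axis=0))
    center[center == 0] = 1  # break ties toward +1
    return center

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two {-1,+1} codes."""
    return int(np.sum(a != b))

def attack_score(probs: np.ndarray, target_class: int,
                 code: np.ndarray, center: np.ndarray,
                 alpha: float = 0.5) -> float:
    """Fuse classification consistency (a surrogate classifier's
    confidence on the target class) with hash proximity (normalized
    similarity of the candidate's code to the surrogate hash center).
    `alpha` is an assumed mixing weight; the paper's exact fusion
    rule may differ."""
    consistency = float(probs[target_class])
    proximity = 1.0 - hamming_distance(code, center) / len(center)
    return alpha * consistency + (1.0 - alpha) * proximity
```

During the surrogate-guided denoising loop, candidates with the highest `attack_score` would be the ones kept and refined at each step.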
Key Contributions
- DHMI: the first diffusion-based model inversion framework targeting deep hashing models in a strict black-box setting with no training hash code access
- Surrogate hash center derivation from auxiliary datasets to serve as semantic anchors, bypassing the inaccessibility of genuine training hash codes
- Surrogate-guided denoising optimization with a novel attack metric fusing classification consistency and hash proximity for high-fidelity, semantically consistent image reconstruction
🛡️ Threat Analysis
DHMI is explicitly a model inversion attack: an adversary reconstructs high-fidelity training images from deep hashing model outputs (binary hash codes) without access to the original training data, gradients, or hash codes — a canonical training data reconstruction threat.