CLIP-FTI: Fine-Grained Face Template Inversion via CLIP-Driven Attribute Conditioning
Longchen Dai, Zixuan Shen, Zhiheng Zhou, Peipeng Yu, Zhihua Xia
Published on arXiv
arXiv:2512.15433
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
CLIP-FTI achieves state-of-the-art face template inversion with higher identification accuracy, sharper facial attribute semantics, and improved cross-model transferability compared to prior reconstruction attacks across multiple face recognition backbones.
CLIP-FTI
Novel technique introduced
Face recognition systems store face templates for efficient matching. Once leaked, these templates pose a serious threat: inverting them can yield photorealistic surrogates that compromise privacy and enable impersonation. Although existing research has achieved relatively realistic face template inversion, the reconstructed images exhibit over-smoothed facial-part attributes (eyes, nose, mouth) and limited transferability. To address this, we present CLIP-FTI, a CLIP-driven fine-grained attribute conditioning framework for face template inversion. Our core idea is to use CLIP to obtain semantic embeddings of facial-part attributes and condition the reconstruction on them, so that specific component-level attributes are recovered. Specifically, attribute embeddings extracted from CLIP are fused with the leaked template by a cross-modal feature interaction network and projected into the intermediate latent space of a pretrained StyleGAN. The StyleGAN generator then synthesizes face images that match the template's identity while exhibiting finer-grained facial-part attributes. Experiments across multiple face recognition backbones and datasets show that our reconstructions (i) achieve higher identification accuracy and attribute similarity, (ii) recover sharper component-level attribute semantics, and (iii) improve cross-model attack transferability compared to prior reconstruction attacks. To the best of our knowledge, CLIP-FTI is the first face template inversion method to exploit auxiliary information beyond the leaked template itself, and it achieves state-of-the-art results.
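The fuse-then-project step described above can be sketched in a few lines. This is a minimal illustrative mock-up, not the paper's actual architecture: the 512-dimensional sizes, the single-head attention used for cross-modal fusion, and the single linear projection into StyleGAN's W space are all assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical dimensions (illustrative, not from the paper): 512-d leaked
# template, 512-d CLIP attribute embeddings, 512-d StyleGAN W-space code.
D = 512

def cross_attend(query, keys, values):
    """Single-head scaled dot-product attention for one query vector,
    standing in for the paper's cross-modal feature interaction network."""
    scores = keys @ query / np.sqrt(len(query))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

def invert_template(template, attr_embeds, w_proj):
    """Fuse the leaked template with CLIP attribute embeddings, then map
    the fused feature into the (mock) intermediate latent space W."""
    fused = template + cross_attend(template, attr_embeds, attr_embeds)
    return np.tanh(fused @ w_proj)  # latent code w, fed to the generator

rng = np.random.default_rng(0)
template = rng.standard_normal(D)            # leaked face template
attr_embeds = rng.standard_normal((3, D))    # e.g. eyes / nose / mouth
w_proj = rng.standard_normal((D, D)) / np.sqrt(D)

w = invert_template(template, attr_embeds, w_proj)
print(w.shape)  # (512,) — would be decoded by a pretrained StyleGAN
```

In the actual method, `w` would be passed to a pretrained StyleGAN generator to synthesize the reconstructed face; random vectors stand in here for the real template and CLIP text-encoder outputs.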
Key Contributions
- First face template inversion method to leverage CLIP-derived semantic attribute embeddings to recover fine-grained facial feature attributes (eyes, nose, mouth) that prior methods over-smooth
- Cross-modal feature interaction network that fuses leaked templates with CLIP attribute embeddings and maps the result to StyleGAN's intermediate latent (W) space via a template-to-attribute alignment adapter
- Demonstrates improved identification accuracy, attribute similarity, and cross-model attack transferability over state-of-the-art inversion baselines across multiple FR backbones and datasets
🛡️ Threat Analysis
CLIP-FTI reconstructs private face images from leaked face recognition embeddings (templates), which are model-produced representations of enrolled individuals. This is a direct model inversion attack: an adversary with access to stored templates reconstructs the original biometric data, violating privacy and enabling impersonation.