Realistic Face Reconstruction from Facial Embeddings via Diffusion Models

Dong Han 1,2, Yong Li 1, Joachim Denzler 2

0 citations · 42 references · arXiv (Cornell University)

Published on arXiv

2602.13168

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Reconstructed faces from facial embeddings — including those from PPFR systems — successfully bypass real-world face recognition systems, demonstrating that embedding-level privacy protections are insufficient against inversion attacks.

FEM (Face Embedding Mapping)

Novel technique introduced


With the advancement of face recognition (FR) systems, privacy-preserving face recognition (PPFR) systems have gained popularity for their accurate recognition, enhanced facial privacy protection, and robustness to various attacks. However, few studies have verified the privacy risks of these systems by reconstructing realistic, high-resolution face images from their embeddings, especially for PPFR. In this work, we propose face embedding mapping (FEM), a general framework that explores the Kolmogorov-Arnold Network (KAN) to conduct the embedding-to-face attack, leveraging a pre-trained Identity-Preserving diffusion model against state-of-the-art (SOTA) FR and PPFR systems. Through extensive experiments, we verify that the reconstructed faces can be used to access other real-world FR systems. The proposed method is also robust when reconstructing faces from partial and protected face embeddings. Moreover, FEM can serve as a tool for evaluating the safety of FR and PPFR systems in terms of privacy leakage. All images used in this work are from public datasets.


Key Contributions

  • FEM framework using Kolmogorov-Arnold Networks (KAN) to map facial embeddings into the latent space of a pre-trained Identity-Preserving diffusion model for high-resolution face reconstruction
  • Demonstrates the attack generalizes to partial and protected embeddings from privacy-preserving face recognition (PPFR) systems, showing PPFR does not fully mitigate reconstruction risk
  • Shows reconstructed faces successfully impersonate identities on real-world FR systems, empirically quantifying privacy leakage in both standard and PPFR pipelines
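The mapping step at the heart of FEM can be illustrated with a toy KAN-style layer: in a Kolmogorov-Arnold Network, each edge carries its own learnable univariate function rather than a scalar weight, and each output sums the edge functions applied to the inputs. The sketch below is a minimal illustration in pure Python with piecewise-linear edge functions; all names are hypothetical and this is not the paper's implementation, which maps a real FR embedding into the latent space of an identity-preserving diffusion model.

```python
import random

def pw_linear(x, knots, values):
    """Evaluate a piecewise-linear univariate function at x.
    knots: sorted grid points; values: learnable function values at the knots.
    Inputs outside the grid are clamped to the endpoint values."""
    if x <= knots[0]:
        return values[0]
    if x >= knots[-1]:
        return values[-1]
    for i in range(len(knots) - 1):
        if knots[i] <= x <= knots[i + 1]:
            t = (x - knots[i]) / (knots[i + 1] - knots[i])
            return (1 - t) * values[i] + t * values[i + 1]

class KANLayer:
    """One KAN-style layer: output_j = sum_i phi_ij(input_i), where each
    phi_ij is a learnable univariate function (here piecewise linear)."""
    def __init__(self, d_in, d_out, n_knots=8, seed=0):
        rng = random.Random(seed)
        # shared knot grid on [-1, 1]; per-edge function values are the parameters
        self.knots = [-1 + 2 * k / (n_knots - 1) for k in range(n_knots)]
        self.values = [[[rng.gauss(0.0, 0.1) for _ in range(n_knots)]
                        for _ in range(d_in)] for _ in range(d_out)]

    def __call__(self, x):
        return [sum(pw_linear(x[i], self.knots, self.values[j][i])
                    for i in range(len(x)))
                for j in range(len(self.values))]

# Toy use: map a 4-d "embedding" to a 3-d "latent" (real dims would be e.g. 512 -> latent size)
layer = KANLayer(d_in=4, d_out=3)
latent = layer([0.1, -0.2, 0.5, 0.0])
```

In FEM such a mapper would be trained so that the produced latent, fed to the frozen diffusion model, yields a face whose embedding matches the input; the training objective is omitted here.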

🛡️ Threat Analysis

Model Inversion Attack

The paper's core contribution is embedding inversion: an adversary with access to facial embeddings (including partial or protected ones from PPFR systems) reconstructs the original face images. This matches the ML03 definition of 'recovering private data from model outputs/embeddings' — the adversary actively reconstructs biometric data from the model's internal representations.
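The impersonation claim reduces to a standard verification check: an FR system accepts a probe image if the similarity between its embedding and an enrolled embedding exceeds a decision threshold, so a reconstruction succeeds as an attack when its embedding clears that threshold. A minimal sketch of the check (the threshold 0.4 is an arbitrary illustrative value, not taken from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def accepts(probe_emb, enrolled_emb, threshold=0.4):
    """FR verification: accept the probe as the enrolled identity if the
    embedding similarity clears the system's decision threshold."""
    return cosine(probe_emb, enrolled_emb) >= threshold
```

In the paper's setting, `probe_emb` would come from a face reconstructed by FEM and `enrolled_emb` from a genuine photo of the target identity; the attack succeeds when `accepts` returns True on a system the attacker never queried during reconstruction.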


Details

Domains
vision
Model Types
diffusion, traditional_ml
Threat Tags
black_box, inference_time, targeted
Datasets
public face datasets (unspecified in available source)
Applications
face recognition, privacy-preserving face recognition