
Reconstructing Protected Biometric Templates from Binary Authentication Results

Eliron Rahimi, Margarita Osadchy, Orr Dunkelman

0 citations · 37 references · IJCB


Published on arXiv · 2601.17620

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

The attack reconstructs biometric templates from binary authentication outputs with negligible loss; facial images recovered from the reconstructed templates pass the protected system more than 98% of the time.

Binary Oracle Template Reconstruction

Novel technique introduced


Biometric data is considered to be very private and highly sensitive. As such, many methods for biometric template protection were considered over the years -- from biohashing and specially crafted feature extraction procedures, to the use of cryptographic solutions such as Fuzzy Commitments or the use of Fully Homomorphic Encryption (FHE). A key question that arises is how much protection these solutions can offer when the adversary can inject samples, and observe the outputs of the system. While for systems that return the similarity score, one can use attacks such as hill-climbing, for systems where the adversary can only learn whether the authentication attempt was successful, this question remained open. In this paper, we show that it is indeed possible to reconstruct the biometric template by just observing the success/failure of the authentication attempt (given the ability to inject a sufficient amount of templates). Our attack achieves negligible template reconstruction loss and enables full recovery of facial images through a generative inversion method, forming a pipeline from binary scores to high-resolution facial images that successfully pass the system more than 98% of the time. Our results, of course, are applicable for any protection mechanism that maintains the accuracy of the recognition.
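To build intuition for why a binary accept/reject output still leaks the template's geometry, here is a minimal toy sketch. It assumes a hypothetical verifier that accepts iff the probe lies within Euclidean distance `tau` of the enrolled template -- a deliberate simplification, not the paper's actual method or any specific protected scheme. The attacker bisects between an accepted and a rejected probe to land on the decision boundary (a sphere around the secret template), and with `d + 1` boundary points the sphere equations linearize and the template can be solved for exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                       # toy embedding dimension (real templates are much larger)
t = rng.normal(size=d)      # secret enrolled template (unknown to the attacker)
tau = 1.0                   # hypothetical acceptance radius

def oracle(x):
    """Binary authentication result: accept iff x is within tau of t."""
    return np.sum((x - t) ** 2) <= tau ** 2

def boundary_point(inside, outside, iters=60):
    """Bisect between an accepted and a rejected probe to land on the
    decision boundary, i.e. the sphere ||x - t|| = tau."""
    lo, hi = inside, outside
    for _ in range(iters):
        mid = (lo + hi) / 2
        if oracle(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Assume the attacker already holds one accepted probe (here we cheat and
# start near t; obtaining such a probe is part of the real attack's work).
inside = t + 0.1 * rng.normal(size=d)
points = [boundary_point(inside, inside + 10 * rng.normal(size=d))
          for _ in range(d + 1)]

# Each boundary point satisfies ||p_i - t||^2 = tau^2; subtracting the
# equation for p_0 cancels the quadratic term and leaves a linear system:
#   2 (p_i - p_0) . t = ||p_i||^2 - ||p_0||^2
p0 = points[0]
A = 2 * (np.array(points[1:]) - p0)
b = np.array([p @ p - p0 @ p0 for p in points[1:]])
t_hat = np.linalg.solve(A, b)

print(np.max(np.abs(t_hat - t)))   # reconstruction error, near zero
```

The point of the sketch is the query model, not the specific geometry: each success/failure answer is one bit about the position of the probe relative to the decision surface, and enough well-chosen bits pin down the template.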


Key Contributions

  • First attack to reconstruct biometric templates using only binary (success/failure) authentication signals, resolving a question that was open for binary-output systems and previously answered only for systems that return similarity scores
  • End-to-end pipeline from binary oracle queries to high-resolution facial image recovery via generative inversion, achieving negligible template reconstruction loss
  • Demonstrated applicability across multiple protection mechanisms (FHE, biohashing, feature-transform-based schemes) with >98% impersonation success rate

🛡️ Threat Analysis

Model Inversion Attack

The adversary reconstructs private biometric templates (deep-learning-generated embeddings from ArcFace/FaceNet) by querying a biometric authentication system and observing only binary success/failure outputs — a model inversion attack on ML-generated embeddings. The attack then applies generative inversion to recover full facial images from the reconstructed templates, achieving >98% pass rate against protected systems.


Details

Domains
vision
Model Types
cnn, gan
Threat Tags
black_box, inference_time, targeted
Datasets
LFW, ArcFace templates, FaceNet templates
Applications
biometric authentication, facial recognition, template protection systems