Defense · 2025

Make Identity Unextractable yet Perceptible: Synthesis-Based Privacy Protection for Subject Faces in Photos

Tao Wang¹, Yushu Zhang², Xiangli Xiao², Kun Xu¹, Lin Yuan³, Wenying Wen², Yuming Fang²



Published on arXiv: 2509.11249

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

PerceptFace achieves a superior trade-off between identity protection (blocking unauthorized FR systems) and identity perception (human visual recognizability) compared to perturbation-based methods (unreliable) and prior synthesis-based methods (low perceptual similarity).

PerceptFace

Novel technique introduced


Deep learning-based face recognition (FR) technology exacerbates privacy concerns in photo sharing. In response, the research community has developed a suite of anti-FR methods to block identity extraction by unauthorized FR systems. Benefiting from quasi-imperceptible alteration, perturbation-based methods are well-suited to privacy protection of subject faces in photos, as they allow familiar persons to recognize subjects with the naked eye. However, through theoretical analysis and experimental validation, we reveal that perturbation-based methods provide only a false sense of privacy, so alternative solutions are needed to protect subject faces. In this paper, we explore synthesis-based methods as a promising solution, whose central challenge is to still enable familiar persons to recognize subjects. To solve this challenge, we present a key insight: in most photo-sharing scenarios, familiar persons recognize subjects through identity perception rather than meticulous face analysis. Based on this insight, we propose the first synthesis-based method dedicated to subject faces, PerceptFace, which makes identity unextractable yet perceptible. To enhance identity perception, a new perceptual similarity loss is designed for faces, reducing alteration in the regions to which human vision is most sensitive. As a synthesis-based method, PerceptFace inherently provides reliable identity protection; at the same time, freed from the confines of meticulous face analysis, it targets identity perception in a more practical scenario, further strengthened by the designed perceptual similarity loss. Extensive experiments show that PerceptFace achieves a superior trade-off between identity protection and identity perception compared to existing methods. We provide a public API of PerceptFace and believe it has great potential to become a practical anti-FR tool.


Key Contributions

  • Theoretically and empirically demonstrates that perturbation-based anti-FR methods provide only a false sense of privacy, because the vulnerabilities they exploit can be patched by evolving FR systems
  • Proposes PerceptFace, the first synthesis-based anti-FR method specifically designed for subject faces, making identity unextractable by FR systems while remaining perceptible to familiar human observers
  • Introduces a perceptual similarity loss tailored for faces that reduces alteration in human-vision-sensitive regions, enhancing identity perception in the synthesized output
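The region-weighted idea behind the perceptual similarity loss can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the pixel-space L2 formulation, and the binary sensitivity mask are all illustrative assumptions; the actual loss operates on faces with a learned or hand-designed sensitivity model.

```python
import numpy as np

def face_perceptual_loss(original, synthesized, sensitivity):
    """Illustrative region-weighted loss (not the paper's exact formulation).

    Regions with high human-vision sensitivity (e.g. eyes, mouth) are
    penalized more, so the synthesized face stays perceptually close there,
    while low-sensitivity regions are free to change enough to defeat
    FR feature extraction.
    """
    diff = (original - synthesized) ** 2        # per-pixel squared error
    weighted = sensitivity[..., None] * diff    # broadcast mask over channels
    return weighted.mean()

# Toy example: a 4x4 RGB "face" where only the top half is sensitive.
rng = np.random.default_rng(0)
orig = rng.random((4, 4, 3))
synth = orig.copy()
synth[2:, :, :] += 0.5                  # alter only the bottom (insensitive) half
mask = np.zeros((4, 4))
mask[:2, :] = 1.0                       # hypothetical sensitivity map: top half

loss = face_perceptual_loss(orig, synth, mask)
# loss == 0.0 here: all alteration falls in zero-weight regions,
# so the loss does not oppose identity-breaking changes there.
```

The design point is the weighting itself: an unweighted L2 would penalize the bottom-half alteration equally, pulling the synthesis back toward the original face everywhere and weakening identity protection.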

🛡️ Threat Analysis

Input Manipulation Attack

PerceptFace synthesizes face images that cause face recognition models to fail at identity extraction at inference time, a targeted evasion attack against FR systems. The paper also shows, theoretically and empirically, why perturbation-based adversarial approaches (Fawkes, ATM-GAN) provide a false sense of security; the entire paper is framed around defeating ML-based FR at inference. Both the analyzed perturbation-based methods and the proposed synthesis-based method are inference-time input manipulations against FR models.


Details

Domains
vision, generative
Model Types
cnn, gan
Threat Tags
black_box, inference_time, targeted
Datasets
LFW, CelebA, VGGFace2
Applications
facial recognition, photo sharing privacy, biometric privacy protection