Scalable Face Security Vision Foundation Model for Deepfake, Diffusion, and Spoofing Detection
Gaojian Wang 1, Feng Lin 1, Tong Wu 1, Zhisheng Yan 2, Kui Ren 1
Published on arXiv
arXiv:2510.10663
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
FS-VFM outperforms diverse vision foundation models and even SOTA task-specific methods across 11 benchmarks spanning deepfake detection, face anti-spoofing, and diffusion face forensics
FS-VFM (3C objectives + CRFR-P masking + FS-Adapter)
Novel technique introduced
With abundant, unlabeled real faces, how can we learn robust and transferable facial representations to boost generalization across various face security tasks? We make the first attempt and propose FS-VFM, a scalable self-supervised pre-training framework, to learn fundamental representations of real face images. We introduce three learning objectives, namely 3C, that synergize masked image modeling (MIM) and instance discrimination (ID), empowering FS-VFM to encode both local patterns and global semantics of real faces. Specifically, we formulate various facial masking strategies for MIM and devise a simple yet effective CRFR-P masking, which explicitly prompts the model to pursue meaningful intra-region Consistency and challenging inter-region Coherency. We present a reliable self-distillation mechanism that seamlessly couples MIM with ID to establish an underlying local-to-global Correspondence. After pre-training, vanilla vision transformers (ViTs) serve as universal Vision Foundation Models for downstream Face Security tasks: cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen diffusion facial forensics. To efficiently transfer the pre-trained FS-VFM, we further propose FS-Adapter, a lightweight plug-and-play bottleneck atop the frozen backbone with a novel real-anchor contrastive objective. Extensive experiments on 11 public benchmarks demonstrate that our FS-VFM consistently generalizes better than diverse VFMs spanning natural and facial domains, fully, weakly, and self-supervised paradigms, and small, base, and large ViT scales, and even outperforms SOTA task-specific methods, while FS-Adapter offers an excellent efficiency-performance trade-off. The code and models are available at https://fsfm-3c.github.io/fsvfm.html.
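The CRFR-P idea described above (fully cover one facial region, then fill the rest of the mask budget from other regions) can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: the region names, the patch-to-region grouping, and the top-up rule are all assumptions.

```python
import random

def crfr_p_mask(region_patches: dict, mask_ratio: float,
                num_patches: int, rng: random.Random) -> set:
    """Sketch of CRFR-P-style masking (illustrative, not the paper's exact rule):
    fully mask one random facial region, then top up with patches from the
    remaining regions until the overall masking ratio is reached."""
    # 1) Cover a Random Facial Region entirely, which forces the model to
    #    reconstruct it from context (intra-region Consistency).
    region = rng.choice(list(region_patches))
    masked = set(region_patches[region])
    # 2) Mask patches from the other regions up to the global budget, so the
    #    model must also relate regions to each other (inter-region Coherency).
    budget = int(mask_ratio * num_patches) - len(masked)
    remaining = [p for r, ps in region_patches.items() if r != region for p in ps]
    rng.shuffle(remaining)
    masked.update(remaining[:max(budget, 0)])
    return masked

# Toy example: 16 patches grouped into illustrative face-parsing regions.
regions = {"eyes": [0, 1, 2, 3], "nose": [4, 5, 6], "mouth": [7, 8, 9],
           "rest": list(range(10, 16))}
mask = crfr_p_mask(regions, mask_ratio=0.75, num_patches=16,
                   rng=random.Random(0))
```

In the real framework the regions would come from a face parser and the patches from a ViT patch grid; the sketch only shows why the mask is harder than uniform random masking: at least one semantic region is always entirely hidden.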
Key Contributions
- FS-VFM: a scalable self-supervised pre-training framework using three learning objectives (3C) synergizing masked image modeling and instance discrimination to learn generalizable real-face representations
- CRFR-P masking strategy that explicitly enforces intra-region consistency and inter-region coherency for stronger facial representation learning
- FS-Adapter: a lightweight plug-and-play adapter with real-anchor contrastive objective enabling efficient transfer to deepfake detection, face anti-spoofing, and diffusion facial forensics tasks
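The real-anchor contrastive objective named in the last bullet can be illustrated with an InfoNCE-style loss in which real-face embeddings act as the positive anchors. The loss form, the cosine similarity, and the temperature value are assumptions for illustration, not the paper's exact formulation.

```python
import math

def real_anchor_contrastive(query, real_anchors, negatives, tau=0.07):
    """Sketch of a real-anchor contrastive loss (InfoNCE-style assumption):
    pull the query embedding toward real-face anchors and away from
    non-real (fake/spoof) embeddings."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = [math.exp(cos(query, a) / tau) for a in real_anchors]
    neg = [math.exp(cos(query, n) / tau) for n in negatives]
    # Negative log of the probability mass assigned to the real anchors.
    return -math.log(sum(pos) / (sum(pos) + sum(neg)))

# Toy check with 2-D embeddings: a query near the real anchors should incur
# a lower loss than a query near the fake embedding.
real = [[1.0, 0.0], [0.9, 0.1]]
fake = [[0.0, 1.0]]
loss_real = real_anchor_contrastive([1.0, 0.05], real, fake)
loss_fake = real_anchor_contrastive([0.1, 1.0], real, fake)
```

Anchoring on real faces only matches the paper's premise that abundant real faces (rather than any particular forgery type) define the reference distribution, which is what lets the adapter generalize to unseen manipulations.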
🛡️ Threat Analysis
The primary contribution is detecting AI-generated face content (deepfakes, diffusion-synthesized faces), i.e., AI-generated content detection (output integrity). The novelty lies in the pre-training methodology (3C objectives, CRFR-P masking, self-distillation) rather than the mere application of existing detectors.