Unleashing Vision-Language Semantics for Deepfake Video Detection
Jiawen Zhu 1, Yunqi Miao 2, Xueyi Zhang 3, Jiankang Deng 4, Guansong Pang 4
Published on arXiv
2603.24454
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Substantially outperforms state-of-the-art deepfake detection methods at both frame and video levels across 9 diverse benchmarks including face-swapping and full-face generation forgeries
VLAForge
Novel technique introduced
Recent Deepfake Video Detection (DFD) studies have demonstrated that pre-trained Vision-Language Models (VLMs) such as CLIP exhibit strong generalization capabilities in detecting artifacts across different identities. However, existing approaches leverage visual features only, overlooking the models' most distinctive strength -- the rich vision-language semantics embedded in the latent space. We propose VLAForge, a novel DFD framework that unleashes such cross-modal semantics to enhance the model's discriminability in deepfake detection. This work i) enhances the visual perception of the VLM through a ForgePerceiver, which acts as an independent learner that captures diverse, subtle forgery cues both granularly and holistically while preserving the pretrained Vision-Language Alignment (VLA) knowledge, and ii) provides a complementary discriminative cue -- an Identity-Aware VLA score, derived by coupling cross-modal semantics with the forgery cues learned by the ForgePerceiver. Notably, the VLA score is augmented by identity prior-informed text prompting to capture authenticity cues tailored to each identity, enabling more discriminative cross-modal semantics. Comprehensive experiments on video DFD benchmarks, covering both classical face-swapping forgeries and recent full-face generation forgeries, demonstrate that VLAForge substantially outperforms state-of-the-art methods at both frame and video levels. Code is available at https://github.com/mala-lab/VLAForge.
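The paper does not give the exact formula for its Identity-Aware VLA score, but a CLIP-style reading of the abstract suggests comparing a frame's visual embedding against identity-conditioned "real" and "fake" text embeddings. The sketch below is a minimal, hypothetical version of that idea (all function names, the temperature value, and the use of plain cosine similarity are assumptions, not the paper's implementation):

```python
import numpy as np

def vla_score(img_feat, real_prompt_feat, fake_prompt_feat, temperature=0.07):
    """Hypothetical Identity-Aware VLA score: cosine similarity of a frame's
    visual embedding against identity-conditioned 'real' vs 'fake' text
    embeddings, softmax-normalized (CLIP-style zero-shot scoring).
    This is an illustrative sketch, not the paper's actual mechanism."""
    def unit(v):
        return v / np.linalg.norm(v)
    img = unit(img_feat)
    # Cosine similarities act as logits, scaled by a CLIP-like temperature.
    logits = np.array([img @ unit(real_prompt_feat),
                       img @ unit(fake_prompt_feat)]) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[1]  # probability mass assigned to the 'fake' prompt

# Toy usage: an image embedding close to the 'real' prompt scores low as fake.
rng = np.random.default_rng(0)
f = rng.normal(size=512)
score = vla_score(f, f + 0.1 * rng.normal(size=512), rng.normal(size=512))
print(float(score))
```

In a real pipeline the two prompt embeddings would come from the frozen VLM text encoder applied to identity-informed prompts (e.g. describing the specific person), which is what the paper means by coupling cross-modal semantics with an identity prior.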
Key Contributions
- ForgePerceiver module that captures diverse forgery cues while preserving pretrained VLM knowledge through learnable forgery-aware masks and localization maps
- Identity-Aware VLA scoring mechanism that leverages cross-modal semantics with identity-informed text prompts for fine-grained authenticity detection
- Comprehensive evaluation demonstrating SOTA performance on 9 DFD benchmarks covering face-swapping and full-face generation forgeries
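The first contribution mentions learnable forgery-aware masks over a frozen VLM. As a rough intuition for how such a mask can add forgery sensitivity without touching pretrained weights, here is a small sketch (an assumed mechanism for illustration only; the real ForgePerceiver architecture is not specified in this summary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forgery_pooled_feature(patch_tokens, mask_logits):
    """Illustrative forgery-aware masking (hypothetical, not the paper's exact
    module): a learnable per-patch gate reweights frozen VLM patch tokens so
    that pooling emphasizes suspected forgery regions (granular cue) while a
    plain mean pool keeps the holistic view. The backbone stays untouched."""
    gate = sigmoid(mask_logits)           # (num_patches,) values in (0, 1)
    weights = gate / gate.sum()           # normalize into a soft localization map
    holistic = patch_tokens.mean(axis=0)  # global cue over all patches
    granular = weights @ patch_tokens     # mask-weighted local cue
    return np.concatenate([holistic, granular])

# Toy usage: 4 patches with 3-dim features; the mask strongly favors patch 0.
tokens = np.arange(12.0).reshape(4, 3)
feat = forgery_pooled_feature(tokens, np.array([4.0, -4.0, -4.0, -4.0]))
print(feat.shape)
```

Only the mask logits would be trained here, which matches the stated goal of capturing forgery cues while preserving the pretrained vision-language alignment.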
🛡️ Threat Analysis
The primary contribution is detecting AI-generated or manipulated video content (deepfakes). The paper focuses on authenticating visual outputs and detecting synthetic facial forgeries, which falls under output integrity and AI-generated content detection.