Unmasking Facial DeepFakes: A Robust Multiview Detection Framework for Natural Images
Sami Belguesmia, Mohand Saïd Allili, Assia Hamadene
Published on arXiv: 2510.15576
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
The proposed multi-view fusion framework outperforms conventional single-view deepfake detection approaches on challenging datasets with pose variation and occlusion.
MultiView DeepFake Detection Framework
Novel technique introduced
DeepFake technology has advanced significantly in recent years, enabling the creation of highly realistic synthetic face images. Existing DeepFake detection methods often struggle with pose variations, occlusions, and artifacts that are difficult to detect in real-world conditions. To address these challenges, we propose a multi-view architecture that enhances DeepFake detection by analyzing facial features at multiple levels. Our approach integrates three specialized encoders: a global view encoder for detecting boundary inconsistencies, a middle view encoder for analyzing texture and color alignment, and a local view encoder for capturing distortions in expressive facial regions such as the eyes, nose, and mouth, where DeepFake artifacts frequently occur. Additionally, we incorporate a face orientation encoder, trained to classify face poses, ensuring robust detection across various viewing angles. By fusing features from these encoders, our model achieves superior performance in detecting manipulated images, even under challenging pose and lighting conditions. Experimental results on challenging datasets demonstrate the effectiveness of our method, outperforming conventional single-view approaches.
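The fusion step described above can be illustrated with a minimal numpy sketch. The placeholder random-projection `encode` function, the crop coordinates, the 8-dimensional orientation feature, and the linear scoring head are all assumptions for illustration; the paper's actual encoders are learned networks whose backbones and dimensions are not specified here.

```python
import numpy as np

def encode(view, dim=64, seed=0):
    # Placeholder encoder: a fixed random projection of the flattened
    # view followed by tanh. Stands in for a learned CNN encoder.
    flat = view.astype(float).reshape(-1)
    w = np.random.default_rng(seed).standard_normal((dim, flat.size))
    return np.tanh(w @ flat / np.sqrt(flat.size))

# A synthetic face image and three hypothetical views of it:
# whole face (boundary), mid-level crop (texture/color), and a
# band covering the eyes/nose/mouth (local artifacts).
face = np.random.default_rng(42).random((128, 128, 3))
global_view = face
middle_view = face[16:112, 16:112]
local_view = face[40:88, 32:96]

# Orientation branch: an assumed 8-dim pose feature from the same face.
orientation_feat = encode(face, dim=8, seed=3)

# Fuse all four encoder streams by concatenation.
fused = np.concatenate([
    encode(global_view, seed=0),
    encode(middle_view, seed=1),
    encode(local_view, seed=2),
    orientation_feat,
])

# Binary real/fake score via a placeholder linear head + sigmoid.
w_head = np.random.default_rng(9).standard_normal(fused.size)
score = 1.0 / (1.0 + np.exp(-(w_head @ fused)))
print(fused.shape, float(score))
```

Concatenation is the simplest fusion strategy; the paper's fusion module may weight or transform the streams differently.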
Key Contributions
- Multi-view architecture combining global (boundary), middle (texture/color), and local (eyes/nose/mouth) view encoders to capture deepfake artifacts at multiple spatial scales
- Face orientation encoder that classifies head pose to enable robust deepfake detection under arbitrary viewing angles and real-world conditions
- Feature fusion strategy integrating all four encoder streams, outperforming single-view baselines on challenging deepfake datasets
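One way to obtain training labels for an orientation encoder like the one in the second contribution is to quantize head yaw into discrete pose classes. The sketch below is a hypothetical labelling scheme (five bins over ±90 degrees); the paper does not specify its pose taxonomy.

```python
import numpy as np

def pose_bin(yaw_deg: float, n_bins: int = 5) -> int:
    # Quantize yaw angle into n_bins pose classes over [-90, 90] degrees,
    # clipping out-of-range angles to the nearest bin. Hypothetical
    # labelling for training a pose-classification encoder.
    edges = np.linspace(-90.0, 90.0, n_bins + 1)
    return int(np.clip(np.digitize(yaw_deg, edges) - 1, 0, n_bins - 1))

print([pose_bin(y) for y in (-80, -30, 0, 30, 80)])  # → [0, 1, 2, 3, 4]
```

A frontal face (yaw near 0) lands in the middle bin, with profile views at the extremes.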
🛡️ Threat Analysis
Directly addresses AI-generated content detection — specifically deepfake facial image detection. The paper proposes a novel detection architecture (multi-view encoders for boundary, texture, local facial regions, and orientation), which is a forensic method for verifying output integrity and provenance of synthetic faces.