Detecting Deepfakes with Multivariate Soft Blending and CLIP-based Image-Text Alignment
Jingwei Li, Jiaxin Tong, Pengfei Wu
Published on arXiv
2602.15903
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves a 3.32% accuracy and 4.02% AUC improvement over the best baseline in-domain, and an average 3.27% AUC gain across five cross-domain datasets.
MSBA-CLIP
Novel technique introduced
The proliferation of highly realistic facial forgeries necessitates robust detection methods. However, existing approaches often suffer from limited accuracy and poor generalization due to significant distribution shifts among samples generated by diverse forgery techniques. To address these challenges, we propose a novel Multivariate and Soft Blending Augmentation with CLIP-guided Forgery Intensity Estimation (MSBA-CLIP) framework. Our method leverages the multimodal alignment capabilities of CLIP to capture subtle forgery traces. We introduce a Multivariate and Soft Blending Augmentation (MSBA) strategy that synthesizes images by blending forgeries from multiple methods with random weights, forcing the model to learn generalizable patterns. Furthermore, a dedicated Multivariate Forgery Intensity Estimation (MFIE) module is designed to explicitly guide the model in learning features related to varied forgery modes and intensities. Extensive experiments demonstrate state-of-the-art performance. On in-domain tests, our method improves Accuracy and AUC by 3.32% and 4.02%, respectively, over the best baseline. In cross-domain evaluations across five datasets, it achieves an average AUC gain of 3.27%. Ablation studies confirm the efficacy of both proposed components. While the reliance on a large vision-language model entails higher computational cost, our work presents a significant step towards more generalizable and robust deepfake detection.
Key Contributions
- Multivariate and Soft Blending Augmentation (MSBA) that synthesizes training samples by randomly blending images from multiple forgery methods with soft weights, forcing the model to learn generalizable multi-forgery patterns.
- Multivariate Forgery Intensity Estimation (MFIE) module that explicitly guides the CLIP image encoder to learn features calibrated to varied forgery modes and manipulation intensities.
- Integration of CLIP's aligned multimodal representations into a face forgery detection framework, achieving state-of-the-art accuracy in both in-domain and cross-domain evaluations.
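The core of MSBA, as described above, is a random convex combination of forgeries of the same face produced by different methods. A minimal sketch of that blending step is below; the function name, the Dirichlet sampling of the soft weights, and the `alpha` concentration parameter are our illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def msba_blend(forgeries, alpha=1.0, rng=None):
    """Soft-blend K forgery images of the same face with random weights.

    `forgeries` is a list of H x W x C float arrays, each produced by a
    different forgery method. Weights are drawn from a symmetric Dirichlet
    distribution so they are non-negative and sum to 1 (an assumption;
    the paper only specifies random soft weights).
    """
    rng = rng or np.random.default_rng()
    stack = np.stack([f.astype(np.float64) for f in forgeries])  # (K, H, W, C)
    # Random convex combination over the K forgery methods.
    w = rng.dirichlet(alpha * np.ones(len(forgeries)))
    # Contract the weight vector against the method axis of the stack.
    return np.tensordot(w, stack, axes=1)  # shape (H, W, C)

# Example: blend three hypothetical forgeries of one face
fakes = [np.random.rand(8, 8, 3) for _ in range(3)]
blended = msba_blend(fakes, rng=np.random.default_rng(0))
```

Because the weights form a convex combination, the blended sample stays inside the pixel range spanned by its inputs, so the augmentation varies forgery "mix" and intensity without producing out-of-range artifacts.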
🛡️ Threat Analysis
The paper proposes a detection system for AI-generated and manipulated facial images (deepfakes), directly targeting the authenticity and integrity of visual content. This falls squarely under ML09 (Output Integrity Attack), where deepfake detection is explicitly listed as a use case.