defense · arXiv · Mar 26, 2026
Sahibzada Adil Shahzad, Ammarah Hashmi, Junichi Yamagishi et al. · National Institute of Informatics · Academia Sinica +2 more
Self-supervised multimodal deepfake detector trained on real videos, detecting visual tampering artifacts and audio-visual lip-sync inconsistencies
Output Integrity Attack · multimodal · vision · audio
Multimodal deepfakes can exhibit subtle visual artifacts and cross-modal inconsistencies, which remain challenging to detect, especially when detectors are trained primarily on curated synthetic forgeries. Such reliance on synthetic data can introduce dataset and generator bias, limiting scalability and robustness to unseen manipulations. We propose SAVe, a self-supervised audio-visual deepfake detection framework that learns entirely from authentic videos. SAVe generates on-the-fly, identity-preserving, region-aware self-blended pseudo-manipulations to emulate tampering artifacts, enabling the model to learn complementary visual cues across multiple facial granularities. To capture cross-modal evidence, SAVe also models lip-speech synchronization via an audio-visual alignment component that detects the temporal misalignment patterns characteristic of audio-visual forgeries. Experiments on FakeAVCeleb and AV-LipSync-TIMIT demonstrate competitive in-domain performance and strong cross-dataset generalization, highlighting self-supervised learning as a scalable paradigm for multimodal deepfake detection.
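To make the self-blending idea concrete, here is a minimal sketch of how a pseudo-manipulation can be created from a single real frame: a copy of a face region is slightly transformed and blended back into the same frame through a soft mask, leaving forgery-like boundary and statistics cues while preserving identity. The bounding box, the color-jitter transform, and the elliptical mask below are illustrative assumptions for this sketch, not the paper's actual region-aware, multi-granularity pipeline.

```python
import numpy as np

def self_blend(frame, box, rng=None):
    """Create an identity-preserving pseudo-manipulation by blending a
    slightly transformed copy of a face region back into the same frame.
    `box` = (y0, y1, x0, x1) is an assumed face-region bounding box;
    a real system would obtain it from a face/landmark detector."""
    rng = np.random.default_rng() if rng is None else rng
    y0, y1, x0, x1 = box
    region = frame[y0:y1, x0:x1].astype(np.float32)

    # Source transform: mild per-channel color jitter stands in for the
    # statistics mismatch a forgery pipeline would introduce.
    jitter = 1.0 + rng.uniform(-0.1, 0.1, size=(1, 1, 3)).astype(np.float32)
    source = np.clip(region * jitter, 0, 255)

    # Soft elliptical mask so blending seams appear near the region border,
    # mimicking the boundary artifacts left by face-swap compositing.
    h, w = region.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = ((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2
    mask = np.clip(1.0 - dist, 0.0, 1.0)[..., None]

    blended = frame.astype(np.float32).copy()
    blended[y0:y1, x0:x1] = mask * source + (1 - mask) * region
    return blended.astype(frame.dtype)

# A real frame plus its on-the-fly pseudo-fake yields a labeled
# (authentic, manipulated) training pair without any synthetic forgery data.
frame = np.full((64, 64, 3), 128, dtype=np.uint8)
fake = self_blend(frame, (16, 48, 16, 48), rng=np.random.default_rng(0))
```

Analogously on the audio side, negative pairs for the alignment component could be formed by temporally shifting the audio track against the lip crops, so the model learns to flag desynchronized lip-speech patterns.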
multimodal · cnn · transformer · National Institute of Informatics · Academia Sinica · National Chengchi University +1 more