Defense · 2025

UMCL: Unimodal-generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection

Ching-Yi Lai 1, Chih-Yu Jian 2, Pei-Cheng Chuang 1, Chia-Ming Lee 2, Chih-Chung Hsu 2,3, Chiou-Ting Hsu 1, Chia-Wen Lin 1

0 citations · 66 references · International Journal of Compu...

Published on arXiv · 2511.18983

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves superior deepfake detection performance across various compression rates and manipulation types, maintaining high accuracy even when individual feature modalities degrade due to compression

UMCL (Unimodal-generated Multimodal Contrastive Learning)

Novel technique introduced


In deepfake detection, the varying degrees of compression employed by social media platforms pose significant challenges for model generalization and reliability. Although existing methods have progressed from single-modal to multimodal approaches, they face critical limitations: single-modal methods struggle with feature degradation under data compression in social media streaming, while multimodal approaches require expensive data collection and labeling and suffer from inconsistent modal quality or accessibility in real-world scenarios. To address these challenges, we propose a novel Unimodal-generated Multimodal Contrastive Learning (UMCL) framework for robust cross-compression-rate (CCR) deepfake detection. In the training stage, our approach transforms a single visual modality into three complementary features: compression-robust rPPG signals, temporal landmark dynamics, and semantic embeddings from pre-trained vision-language models. These features are explicitly aligned through an affinity-driven semantic alignment (ASA) strategy, which models inter-modal relationships through affinity matrices and optimizes their consistency through contrastive learning. Subsequently, our cross-quality similarity learning (CQSL) strategy enhances feature robustness across compression rates. Extensive experiments demonstrate that our method achieves superior performance across various compression rates and manipulation types, establishing a new benchmark for robust deepfake detection. Notably, our approach maintains high detection accuracy even when individual features degrade, while providing interpretable insights into feature relationships through explicit alignment.
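The affinity-driven semantic alignment (ASA) step described above can be illustrated with a minimal numpy sketch. This is one plausible instantiation, not the paper's exact formulation: each modality's batch embeddings are turned into a cosine affinity matrix, and a simple Frobenius-distance consistency term stands in for the contrastive objective; all function names, dimensions, and the noise-free random features are hypothetical.

```python
import numpy as np

def affinity(z):
    """Cosine affinity matrix over a batch of embeddings (B, D) -> (B, B)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z @ z.T

def asa_consistency_loss(feats):
    """Mean pairwise squared difference between per-modality affinity
    matrices; a stand-in for the paper's contrastive consistency objective."""
    mats = [affinity(z) for z in feats]
    loss, pairs = 0.0, 0
    for i in range(len(mats)):
        for j in range(i + 1, len(mats)):
            loss += np.mean((mats[i] - mats[j]) ** 2)
            pairs += 1
    return loss / pairs

# Hypothetical embeddings for the three unimodal-generated features.
rng = np.random.default_rng(0)
batch = 8
rppg = rng.normal(size=(batch, 32))   # rPPG signal embeddings
lmk = rng.normal(size=(batch, 64))    # temporal landmark-dynamics embeddings
sem = rng.normal(size=(batch, 128))   # VLM semantic embeddings
loss = asa_consistency_loss([rppg, lmk, sem])
print(loss)
```

The key property is that the loss compares inter-sample relationship structure (the affinity matrices) rather than raw embeddings, so modalities with different dimensionalities can still be aligned.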


Key Contributions

  • UMCL framework that synthesizes three complementary feature modalities (rPPG signals, temporal landmark dynamics, VLM semantic embeddings) from a single visual input, eliminating expensive multimodal data collection
  • Affinity-driven semantic alignment (ASA) strategy that models and optimizes inter-modal relationships via affinity matrices and contrastive learning
  • Cross-quality similarity learning (CQSL) strategy that enhances feature robustness across different video compression rates encountered in social media streaming
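The CQSL idea (pulling together representations of the same clip across compression rates) can be sketched as a standard InfoNCE-style contrastive loss. This is a hedged illustration under assumed design choices: the temperature value, the simulated-compression noise, and the `cqsl_loss` name are all illustrative, not the paper's exact method.

```python
import numpy as np

def l2norm(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def cqsl_loss(z_hq, z_lq, temperature=0.1):
    """InfoNCE-style loss: the i-th heavily compressed embedding should be
    most similar to the i-th lightly compressed embedding of the same clip."""
    sims = l2norm(z_hq) @ l2norm(z_lq).T / temperature   # (B, B) similarity logits
    logits = sims - sims.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # matched pairs on the diagonal

rng = np.random.default_rng(1)
z_raw = rng.normal(size=(8, 64))                 # features from lightly compressed clips
z_c40 = z_raw + 0.1 * rng.normal(size=(8, 64))   # same clips under heavier compression (simulated)
matched = cqsl_loss(z_raw, z_c40)
print(matched)
```

Minimizing this loss encourages the encoder to produce compression-invariant features, which is the robustness property the CCR evaluation setting tests.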

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel deepfake detection architecture — a forensic technique for detecting AI-manipulated/generated face video content. The framework addresses output integrity by verifying whether video content is authentic or AI-generated, specifically under varying compression conditions from social media platforms.


Details

Domains
vision, multimodal
Model Types
transformer, multimodal
Threat Tags
inference_time
Datasets
FaceForensics++, Celeb-DeepFakeForensics
Applications
deepfake detection, face manipulation detection, video forgery detection