
Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization

Ashutosh Anshul, Shreyas Gopal, Deepu Rajan, Eng Siong Chng

Published on arXiv (2511.10212) · 0 citations · 76 references

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

A single-stage trained model achieves strong generalization across unseen manipulations and datasets while enabling precise temporal localization of deepfake segments in partially spoofed audio-visual content.

Next-Frame Feature Prediction (NFFP)

Novel technique introduced


Recent multimodal deepfake detection methods designed for generalization conjecture that single-stage supervised training struggles to generalize across unseen manipulations and datasets. However, such generalization-oriented approaches require pretraining on real samples. Additionally, these methods primarily focus on detecting audio-visual inconsistencies and may overlook intra-modal artifacts, causing them to fail against manipulations that preserve audio-visual alignment. To address these limitations, we propose a single-stage training framework that enhances generalization by incorporating next-frame prediction for both uni-modal and cross-modal features. We further introduce a window-level attention mechanism that captures discrepancies between predicted and actual frames, enabling the model to detect local artifacts around every frame. This is crucial both for accurately classifying fully manipulated videos and for localizing deepfake segments in partially spoofed samples. Evaluated on multiple benchmark datasets, our model demonstrates strong generalization and precise temporal localization.
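The core idea of next-frame feature prediction can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the shapes, the linear stand-in predictors (`Wa`, `Wv`, `Wc`), and the L2 discrepancy are all illustrative assumptions, where the paper would use learned networks over real audio-visual features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not from the paper): T frames, D-dim features per modality.
T, D = 10, 8
audio = rng.standard_normal((T, D))   # per-frame audio features
video = rng.standard_normal((T, D))   # per-frame visual features

# Stand-in linear predictors; the paper would learn these networks.
Wa = rng.standard_normal((D, D)) * 0.1       # uni-modal: audio_t -> audio_{t+1}
Wv = rng.standard_normal((D, D)) * 0.1       # uni-modal: video_t -> video_{t+1}
Wc = rng.standard_normal((2 * D, D)) * 0.1   # cross-modal: (audio_t, video_t) -> video_{t+1}

pred_audio = audio[:-1] @ Wa
pred_video = video[:-1] @ Wv
pred_cross = np.concatenate([audio[:-1], video[:-1]], axis=1) @ Wc

# Per-frame discrepancy between predicted and actual next-frame features.
# Large uni-modal discrepancies flag intra-modal artifacts; large
# cross-modal discrepancies flag audio-visual misalignment.
disc_audio = np.linalg.norm(pred_audio - audio[1:], axis=1)
disc_video = np.linalg.norm(pred_video - video[1:], axis=1)
disc_cross = np.linalg.norm(pred_cross - video[1:], axis=1)
```

Because the uni-modal predictors see only their own stream, they can surface artifacts even when a manipulation keeps audio and video aligned, which is the failure mode the abstract highlights for inconsistency-only detectors.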


Key Contributions

  • Single-stage training framework with next-frame feature prediction for both uni-modal and cross-modal features, improving generalization without requiring pretraining on real samples
  • Window-level attention mechanism that captures discrepancies between predicted and actual frames to detect local per-frame artifacts
  • Unified model that handles both fully manipulated video classification and temporal localization of deepfake segments in partially spoofed samples
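The window-level attention idea in the contributions above can be sketched as follows. This is a hedged toy sketch, not the paper's mechanism: `radius`, the threshold, and max-pooling for the clip-level decision are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def window_attention_scores(disc, radius=3):
    """Re-weight each frame's prediction discrepancy by attention over its
    local window, so artifacts around any single frame stand out.
    `radius` is an illustrative hyperparameter, not a value from the paper."""
    scores = np.empty_like(disc)
    for t in range(len(disc)):
        lo, hi = max(0, t - radius), min(len(disc), t + radius + 1)
        attn = softmax(disc[lo:hi])      # larger discrepancies get more weight
        scores[t] = attn @ disc[lo:hi]
    return scores

# Toy per-frame discrepancy trace with a spoofed segment around frames 10-14.
disc = np.full(30, 0.1)
disc[10:15] = 2.0

scores = window_attention_scores(disc)

spoof_mask = scores > 1.0                # frame-level temporal localization
clip_is_fake = bool(spoof_mask.any())    # pooled clip-level decision
```

Frame-level scores support temporal localization of partially spoofed segments, while pooling the same scores yields the clip-level real/fake decision, which is why one model handles both tasks.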

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel detection architecture for AI-manipulated audio-visual content (deepfakes), including both binary detection and temporal localization of spoofed segments — squarely within ML09's scope of AI-generated content detection and output integrity.


Details

Domains
vision · audio · multimodal
Model Types
transformer · multimodal
Threat Tags
inference_time · digital
Applications
multimodal deepfake detection · audio-visual manipulation detection · temporal deepfake localization