Defense · 2025

Video Forgery Detection with Optical Flow Residuals and Spatial-Temporal Consistency

Xi Xue, Kunio Suzuki, Nabarun Goswami, Takuya Shintate



Published on arXiv: 2508.00397

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves state-of-the-art accuracy, AUC, and F1 scores across all ten diffusion-model benchmarks, for both text-to-video (T2V) and image-to-video (I2V) detection tasks

Optical Flow Residual Dual-Branch Detector

Novel technique introduced


The rapid advancement of diffusion-based video generation models has led to increasingly realistic synthetic content, presenting new challenges for video forgery detection. Existing methods often struggle to capture fine-grained temporal inconsistencies, particularly in AI-generated videos with high visual fidelity and coherent motion. In this work, we propose a detection framework that leverages spatial-temporal consistency by combining RGB appearance features with optical flow residuals. The model adopts a dual-branch architecture, where one branch analyzes RGB frames to detect appearance-level artifacts, while the other processes flow residuals to reveal subtle motion anomalies caused by imperfect temporal synthesis. By integrating these complementary features, the proposed method effectively detects a wide range of forged videos. Extensive experiments on text-to-video and image-to-video tasks across ten diverse generative models demonstrate the robustness and strong generalization ability of the proposed approach.
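The core signal described above — flow residuals, i.e. differences between consecutive optical-flow maps — can be illustrated with a minimal sketch. The function below is a hypothetical helper (not from the paper's code); it assumes dense flow maps have already been estimated for each frame pair, and shows why residuals suppress smooth global motion while preserving localized temporal anomalies:

```python
import numpy as np

def flow_residuals(flows):
    """Given a sequence of dense optical-flow maps of shape (T, H, W, 2),
    return residuals between consecutive maps, shape (T-1, H, W, 2).
    Smooth global motion is nearly constant across frames and cancels
    in the difference; localized synthesis glitches survive."""
    flows = np.asarray(flows, dtype=np.float32)
    return flows[1:] - flows[:-1]

# Toy example: constant global motion (camera pan) yields zero residuals,
# while a single perturbed frame leaves a clear residual spike.
T, H, W = 4, 8, 8
pan = np.tile(np.array([1.0, 0.5], np.float32), (T, H, W, 1))
print(flow_residuals(pan).shape)          # (3, 8, 8, 2)
print(np.abs(flow_residuals(pan)).max())  # 0.0

glitched = pan.copy()
glitched[2, 3, 3] += 5.0                  # localized motion anomaly
print(np.abs(flow_residuals(glitched)).max())  # 5.0
```

In practice the flow maps themselves would come from an off-the-shelf estimator (e.g. Farnebäck or a learned model); the residual step is estimator-agnostic.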


Key Contributions

  • Dual-branch architecture combining RGB appearance features with optical flow residuals to jointly model spatial and temporal anomalies in AI-generated videos
  • Use of flow residuals (differences between consecutive flow maps) to amplify localized motion inconsistencies while suppressing global motion artifacts
  • Evaluation across ten diffusion-based generative models on both text-to-video and image-to-video tasks, demonstrating state-of-the-art generalization
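The dual-branch design in the first contribution can be sketched at a high level. Everything below is a hypothetical toy (placeholder pooling encoders and random weights, not the paper's actual networks); it only illustrates the fusion pattern: one branch over RGB frames, one over flow residuals, features concatenated and fed to a single forgery classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(frames, proj):
    # Placeholder encoder: global-average-pool each frame to a scalar,
    # project to D dims, then average over time -> one (D,) feature vector.
    pooled = frames.reshape(frames.shape[0], -1).mean(axis=1, keepdims=True)
    return (pooled * proj).mean(axis=0)

# Toy inputs: T RGB frames and T-1 flow-residual maps (2 channels: dx, dy)
rgb = rng.standard_normal((8, 3, 32, 32)).astype(np.float32)
residuals = rng.standard_normal((7, 2, 32, 32)).astype(np.float32)

D = 16
w_rgb = rng.standard_normal((1, D))
w_flow = rng.standard_normal((1, D))

# Fuse the complementary branches by concatenation -> (2D,)
fused = np.concatenate([branch(rgb, w_rgb), branch(residuals, w_flow)])

# Linear head with sigmoid -> forgery probability in [0, 1]
w_cls = rng.standard_normal(2 * D)
score = 1.0 / (1.0 + np.exp(-fused @ w_cls))
print(fused.shape, float(score))
```

In the real framework each branch would be a learned spatio-temporal encoder; the sketch fixes only the contract between them (two feature vectors, concatenated, one classifier head).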

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel AI-generated video content detection framework targeting synthetic videos from diffusion-based generators (Pika, Sora, VideoCrafter) — squarely in the output integrity and content authenticity domain of ML09.


Details

Domains
vision
Model Types
diffusion, transformer, cnn
Threat Tags
inference_time
Datasets
Text-to-video datasets from Pika, Sora, VideoCrafter, AnimateDiff, Make-A-Video, Imagen Video (10 models total)
Applications
ai-generated video detection, video forgery detection, multimedia forensics