
Consolidating Diffusion-Generated Video Detection with Unified Multimodal Forgery Learning

Xiaohong Liu 1,2, Xiufeng Song 1, Huayu Zheng 1, Lei Bai 3, Xiaoming Liu 4, Guangtao Zhai 1,2

0 citations · 90 references · arXiv


Published on arXiv: 2511.18104

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

MM-Det++ outperforms existing methods on diffusion-generated video detection by unifying spatio-temporal and multimodal MLLM-based forgery representations through the UML module.

MM-Det++

Novel technique introduced


The proliferation of videos generated by diffusion models has raised increasing concerns about information security, highlighting the urgent need for reliable detection of synthetic media. Existing methods primarily focus on image-level forgery detection, leaving generic video-level forgery detection largely underexplored. To advance video forensics, we propose a consolidated multimodal detection algorithm, named MM-Det++, specifically designed for detecting diffusion-generated videos. Our approach consists of two innovative branches and a Unified Multimodal Learning (UML) module. Specifically, the Spatio-Temporal (ST) branch employs a novel Frame-Centric Vision Transformer (FC-ViT) to aggregate spatio-temporal information for detecting diffusion-generated videos, where the FC-tokens enable the capture of holistic forgery traces from each video frame. In parallel, the Multimodal (MM) branch adopts a learnable reasoning paradigm to acquire a Multimodal Forgery Representation (MFR) by harnessing the powerful comprehension and reasoning capabilities of Multimodal Large Language Models (MLLMs), discerning forgery traces from a flexible semantic perspective. To integrate multimodal representations into a coherent space, the UML module is introduced to consolidate the generalization ability of MM-Det++. In addition, we establish a large-scale and comprehensive Diffusion Video Forensics (DVF) dataset to advance research in video forgery detection. Extensive experiments demonstrate the superiority of MM-Det++ and highlight the effectiveness of unified multimodal forgery learning in detecting diffusion-generated videos.


Key Contributions

  • Frame-Centric Vision Transformer (FC-ViT) for aggregating spatio-temporal forgery traces across video frames
  • Multimodal branch leveraging MLLMs for semantic-level forgery reasoning via learnable Multimodal Forgery Representation (MFR)
  • Diffusion Video Forensics (DVF) large-scale benchmark dataset for diffusion-generated video detection research
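The frame-centric aggregation idea behind FC-ViT can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration, not the paper's implementation: the shapes, the random weights, and the single shared FC-token query are toy stand-ins. Each frame's patch tokens are pooled by attention into one holistic "FC-token", and the per-frame tokens are then aggregated over time for a real/fake decision.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Toy video: T frames, each with P patch tokens of dimension D (illustrative sizes).
T, P, D = 8, 16, 32
patch_tokens = rng.normal(size=(T, P, D))

# A single FC-token query shared across frames (a simplification); it attends
# over each frame's patch tokens to pool them into one frame-level token.
fc_query = rng.normal(size=(D,))

def frame_centric_pool(tokens, query):
    # tokens: (P, D); query: (D,) -> pooled frame token of shape (D,)
    attn = softmax(tokens @ query / np.sqrt(D))  # attention over patches, (P,)
    return attn @ tokens                         # weighted sum of patch tokens

# One holistic FC-token per frame, then mean-pool over time.
fc_tokens = np.stack([frame_centric_pool(f, fc_query) for f in patch_tokens])  # (T, D)
video_repr = fc_tokens.mean(axis=0)

# Hypothetical linear head mapping the video representation to a fake-probability.
w, b = rng.normal(size=(D,)), 0.0
prob_fake = 1.0 / (1.0 + np.exp(-(video_repr @ w + b)))
```

In the paper the aggregation is transformer-based rather than a single attention-pooling step, but the sketch captures the structure: per-frame holistic tokens first, temporal aggregation second.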

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel forensic detection architecture specifically designed to identify diffusion-model-generated video content, directly addressing AI-generated content detection (output integrity). The paper contributes a novel detection method (MM-Det++) and a new dataset (DVF), rather than merely applying existing tools to a domain.
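The UML module's role, consolidating the ST and MM branches into one coherent space, can be sketched as a simple learned projection-and-fusion step. All dimensions and weights below are illustrative assumptions; the actual module is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical branch outputs (dimensions are illustrative, not the paper's).
st_feat = rng.normal(size=(64,))    # Spatio-Temporal branch feature
mm_feat = rng.normal(size=(128,))   # Multimodal (MLLM-derived MFR) feature

# Project both branches into a shared 32-d space, then fuse additively.
W_st = rng.normal(size=(64, 32)) * 0.1
W_mm = rng.normal(size=(128, 32)) * 0.1
unified = np.tanh(st_feat @ W_st) + np.tanh(mm_feat @ W_mm)  # shared space

# Linear head on the unified representation -> fake-probability.
w_cls = rng.normal(size=(32,)) * 0.1
prob_fake = 1.0 / (1.0 + np.exp(-(unified @ w_cls)))
```

The design point this illustrates: neither branch's score is used in isolation; both representations are mapped into one space before classification, which is what lets the multimodal semantics regularize the spatio-temporal cues.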


Details

Domains
vision, multimodal
Model Types
diffusion, transformer, vlm
Threat Tags
inference_time
Datasets
DVF (Diffusion Video Forensics)
Applications
video forgery detection, synthetic video detection, deepfake detection