Beyond Static Artifacts: A Forensic Benchmark for Video Deepfake Reasoning in Vision Language Models
Zheyuan Gu 1,2, Qingsong Zhao 1,3, Yusong Wang 1, Zhaohong Huang 1, Xinqi Li 2, Cheng Yuan 1, Jiaowei Shao 1, Chi Zhang 1, Xuelong Li 1
Published on arXiv
2602.21779
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
VLMs fine-tuned on FAQ-IT achieve state-of-the-art deepfake detection on both in-domain and cross-dataset benchmarks, confirming temporal reasoning as the key capability driver.
FAQ (Forensic Answer-Questioning)
Novel technique introduced
Current Vision-Language Models (VLMs) for deepfake detection excel at identifying spatial artifacts but overlook a critical dimension: temporal inconsistencies in video forgeries. Adapting VLMs to reason about these dynamic cues remains a distinct challenge. To bridge this gap, we propose Forensic Answer-Questioning (FAQ), a large-scale benchmark that formulates temporal deepfake analysis as a multiple-choice task. FAQ introduces a three-level hierarchy to progressively evaluate and equip VLMs with forensic capabilities: (1) Facial Perception, testing the ability to identify static visual artifacts; (2) Temporal Deepfake Grounding, requiring the localization of dynamic forgery artifacts across frames; and (3) Forensic Reasoning, challenging models to synthesize evidence for final authenticity verdicts. We evaluate a range of VLMs on FAQ and generate a corresponding instruction-tuning set, FAQ-IT. Extensive experiments show that models fine-tuned on FAQ-IT achieve advanced performance on both in-domain and cross-dataset detection benchmarks. Ablation studies further validate the impact of our key design choices, confirming that FAQ is the driving force behind the temporal reasoning capabilities of these VLMs.
Key Contributions
- FAQ benchmark: a large-scale multiple-choice evaluation framework for video deepfake analysis with a three-level hierarchy (Facial Perception → Temporal Deepfake Grounding → Forensic Reasoning)
- FAQ-IT instruction-tuning dataset derived from FAQ that improves VLM deepfake detection on both in-domain and cross-dataset benchmarks
- Systematic evaluation of existing VLMs on temporal deepfake reasoning, exposing their failure to capture dynamic forgery cues across frames
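The paper does not publish a concrete data schema in this summary, but the three-level multiple-choice design above can be sketched as a small evaluation harness. Everything below (the `FAQItem` class, the level names, and the scoring helpers) is a hypothetical illustration of how such a benchmark might be represented and scored, not the authors' actual format.

```python
from dataclasses import dataclass

# Hypothetical item schema for a multiple-choice forensic benchmark.
# Level names mirror the three-tier hierarchy described above but are
# illustrative; the real FAQ format may differ.
@dataclass
class FAQItem:
    level: str          # "facial_perception" | "temporal_grounding" | "forensic_reasoning"
    question: str
    choices: list       # answer options shown to the VLM
    answer: int         # index of the correct choice

def accuracy(items, predictions):
    """Overall fraction of items where the predicted index matches the key."""
    if not items:
        return 0.0
    hits = sum(1 for item, pred in zip(items, predictions) if pred == item.answer)
    return hits / len(items)

def per_level_accuracy(items, predictions):
    """Accuracy broken down by hierarchy level, matching the tiered evaluation."""
    buckets = {}
    for item, pred in zip(items, predictions):
        hits, total = buckets.get(item.level, (0, 0))
        buckets[item.level] = (hits + int(pred == item.answer), total + 1)
    return {level: hits / total for level, (hits, total) in buckets.items()}
```

Reporting accuracy per level, rather than only overall, is what lets a benchmark like this separate static-artifact perception from genuinely temporal reasoning.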
🛡️ Threat Analysis
This work directly addresses AI-generated content detection, specifically video deepfake detection. The benchmark evaluates VLMs' ability to detect spatiotemporal forgery artifacts in synthetic and manipulated video, and the FAQ-IT tuning set improves detection performance, placing the work squarely within output integrity and content authenticity.