VRAG-DFD: Verifiable Retrieval-Augmentation for MLLM-based Deepfake Detection
Hui Han 1,2, Shunli Wang 2, Yandan Zhao 2, Taiping Yao 2, Shouhong Ding 2
Published on arXiv
2604.13660
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves state-of-the-art performance on deepfake detection generalization testing using a retrieval-augmented MLLM with critical reasoning
VRAG-DFD
Novel technique introduced
In Deepfake Detection (DFD) tasks, researchers have proposed two types of MLLM-based methods: complementary combination with small DFD detectors, and static forgery knowledge injection. The lack of professional forgery knowledge hinders the performance of these DFD-MLLMs. To address this, we consider two questions: how to provide high-quality associated forgery knowledge to MLLMs, and how to endow MLLMs with critical reasoning abilities given noisy reference information. We offer preliminary answers to both questions by combining Retrieval-Augmented Generation (RAG) with Reinforcement Learning (RL). Through these techniques, we propose the VRAG-DFD framework, which provides accurate dynamic forgery knowledge retrieval and strong critical reasoning capabilities. On the data side, we construct two datasets with RAG: the Forensic Knowledge Database (FKD) for DFD knowledge annotation, and the Forensic Chain-of-Thought Dataset (F-CoT) for critical CoT construction. On the training side, we adopt a three-stage method (Alignment->SFT->GRPO) to gradually cultivate the MLLM's critical reasoning ability. On the performance side, VRAG-DFD achieves SOTA and competitive results on DFD generalization testing.
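The retrieval side of the framework can be illustrated with a minimal sketch: embed a query, fetch the nearest entries from a forensic knowledge database, and inject them into the MLLM prompt as explicitly possibly-noisy references. Everything below is an assumption for illustration only — the toy `embed` encoder, the three sample FKD entries, and the prompt wording are placeholders, not the paper's actual retriever, database schema, or prompt template.

```python
# Hedged sketch of retrieval-augmented prompting for DFD, assuming a
# dense-embedding Forensic Knowledge Database (FKD). All entries and the
# encoder are toy placeholders standing in for the paper's components.
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a real text encoder (stable hash-seeded)."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

# Toy FKD entries (placeholders, not from the paper's database).
FKD = [
    "Blending boundaries around the jawline often indicate face swapping.",
    "Inconsistent specular highlights in the eyes suggest GAN synthesis.",
    "Temporal flicker in skin texture across frames hints at reenactment.",
]
FKD_EMB = np.stack([embed(k) for k in FKD])

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top-k FKD entries by cosine similarity to the query."""
    sims = FKD_EMB @ embed(query)  # unit vectors, so dot product = cosine
    return [FKD[i] for i in np.argsort(-sims)[:top_k]]

def build_prompt(query: str) -> str:
    """Compose an MLLM prompt that flags retrieved knowledge as possibly noisy."""
    refs = "\n".join(f"- {r}" for r in retrieve(query))
    return (
        "Reference forgery knowledge (may be noisy; verify critically):\n"
        f"{refs}\nQuestion: {query}"
    )

print(build_prompt("Is this face image a deepfake?"))
```

Flagging the references as potentially noisy in the prompt mirrors the paper's second question: the MLLM must reason critically about retrieved knowledge rather than trust it blindly.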
Key Contributions
- Forensic Knowledge Database (FKD) for DFD knowledge annotation via RAG
- Forensic Chain-of-Thought Dataset (F-CoT) for critical reasoning training
- Three-stage training pipeline (Alignment->SFT->GRPO) using reinforcement learning to enhance MLLM critical reasoning for deepfake detection
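The final GRPO stage can be sketched at the level of its core computation: sample a group of responses per query, score each with a verifiable reward, and normalize rewards within the group to obtain critic-free advantages. The binary correctness reward below is an assumed placeholder, not the paper's actual reward design.

```python
# Minimal sketch of the group-relative advantage step in GRPO, assuming a
# simple verifiable reward (label correctness). Each sampled response is
# scored against the mean and std of its own group, replacing a learned
# critic. The reward function here is a placeholder, not the paper's.
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize rewards within one sampled group: (r - mean) / (std + eps)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled answers to one DFD query, ground-truth label "fake".
preds = ["fake", "real", "fake", "fake"]
rewards = [1.0 if p == "fake" else 0.0 for p in preds]
advs = grpo_advantages(rewards)
# Correct answers receive positive advantage, the incorrect one negative;
# advantages within a group sum to (approximately) zero.
```

Because advantages are computed relative to the group, responses are pushed toward whatever the reward verifies (here, the correct real/fake label) without training a separate value model.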
🛡️ Threat Analysis
Core contribution is detecting AI-generated/manipulated content (deepfakes) — this is output integrity and content authenticity verification, a primary ML09 use case.