Fake-in-Facext: Towards Fine-Grained Explainable DeepFake Analysis
Lixiong Qin 1, Yang Zhang 1, Mei Wang 2, Jiani Hu 1, Weihong Deng 1, Weiran Xu 1
Published on arXiv
2510.20531
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
FiFa-MLLM achieves SOTA on existing XDFA benchmarks (DD-VQA, DFA-Bench) while using 0.94B fewer parameters than the GLaMM baseline and outperforming it on nearly all FiFa-11 tasks.
FiFa-MLLM
Novel technique introduced
The advancement of Multimodal Large Language Models (MLLMs) has bridged the gap between vision and language tasks, enabling the implementation of Explainable DeepFake Analysis (XDFA). However, current methods suffer from a lack of fine-grained awareness: artifact descriptions in data annotation are unreliable and coarse-grained, and existing models can neither connect textual forgery explanations to the visual evidence of artifacts in their output nor accept queries about arbitrary facial regions as input. As a result, their responses are not sufficiently grounded in Face Visual Context (Facext). To address this limitation, we propose the Fake-in-Facext (FiFa) framework, with contributions focusing on data annotation and model construction. We first define a Facial Image Concept Tree (FICT) to divide facial images into fine-grained regional concepts, thereby obtaining a more reliable data annotation pipeline, FiFa-Annotator, for forgery explanation. Based on this dedicated data annotation, we introduce a novel Artifact-Grounding Explanation (AGE) task, which generates textual forgery explanations interleaved with segmentation masks of manipulated artifacts. We propose a unified multi-task learning architecture, FiFa-MLLM, to simultaneously support abundant multimodal inputs and outputs for fine-grained Explainable DeepFake Analysis. With multiple auxiliary supervision tasks, FiFa-MLLM outperforms strong baselines on the AGE task and achieves SOTA performance on existing XDFA datasets. The code and data will be made open-source at https://github.com/lxq1000/Fake-in-Facext.
Key Contributions
- Facial Image Concept Tree (FICT) with 112 atomic concepts enabling fine-grained, reliable artifact annotation via the FiFa-Annotator pipeline
- Novel Artifact-Grounding Explanation (AGE) task that interleaves natural language forgery explanations with pixel-level artifact segmentation masks
- Unified FiFa-MLLM architecture supporting 11 explainable deepfake analysis tasks with 0.94B fewer parameters than GLaMM while achieving SOTA on existing XDFA benchmarks
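The AGE task's interleaved output (explanation text with embedded segmentation masks) can be sketched as a simple post-processing step. The snippet below is a hypothetical illustration, not the paper's actual implementation: it assumes a placeholder-token convention (here `[SEG]`), common in grounded MLLMs, where each placeholder in the generated explanation is paired with a predicted binary artifact mask.

```python
import numpy as np


def interleave_explanation(text, masks, seg_token="[SEG]"):
    """Pair each segmentation placeholder in a generated explanation
    with its predicted binary artifact mask, preserving reading order.

    Returns a list of (text_span, mask_or_None) chunks. The token name
    and pairing scheme are assumptions for illustration only.
    """
    parts = text.split(seg_token)
    if len(parts) - 1 != len(masks):
        raise ValueError("number of seg tokens must match number of masks")
    chunks = []
    for i, part in enumerate(parts):
        if part:
            chunks.append((part, None))  # plain explanation text
        if i < len(masks):
            chunks.append((seg_token, masks[i]))  # grounded artifact mask
    return chunks


# Toy usage: a 4x4 mask grounding a phrase about the mouth region.
mask = np.zeros((4, 4), dtype=bool)
mask[2:, 1:3] = True
out = interleave_explanation(
    "Blending artifacts around the mouth [SEG] suggest manipulation.",
    [mask],
)
```

A renderer could then display text chunks inline and overlay each mask chunk on the input face image, giving the pixel-level grounding the AGE task targets.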
🛡️ Threat Analysis
The core contribution is deepfake/AI-generated face detection and forensic explanation: a novel detection architecture (FiFa-MLLM) that identifies and localizes manipulated facial artifacts, directly targeting the integrity and authenticity of AI-generated content.