benchmark arXiv Jan 20, 2026
Feng Ding, Wenhui Yi, Xinan He et al. · Nanchang University
Introduces 2M+ diffusion-edited face dataset with 8 facial regions and first analysis of detector-evasive splice samples
Output Integrity Attack vision generative
Generative models can now produce imperceptible, fine-grained face manipulations, posing significant privacy risks. However, existing AI-generated face datasets generally lack samples with fine-grained regional manipulations. Moreover, the real impact on detectors of splice attacks, which combine real and manipulated samples, has not yet been studied; we refer to such samples as detector-evasive samples. To address this, we introduce the DiffFace-Edit dataset, which has the following advantages: 1) it contains over two million AI-generated fake images; 2) it covers edits across eight facial regions (e.g., eyes, nose) and includes a richer variety of editing combinations, such as single-region and multi-region edits. Additionally, we specifically analyze the impact of detector-evasive samples on detection models. We conduct a comprehensive analysis of the dataset and propose a cross-domain evaluation that incorporates image manipulation detection and localization (IMDL) methods. The dataset will be available at https://github.com/ywh1093/DiffFace-Edit.
diffusion Nanchang University
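The splice attack the abstract describes (compositing a manipulated facial region onto an otherwise real face) can be sketched roughly as below. This is an illustrative reconstruction, not the DiffFace-Edit pipeline; `splice_region` and the toy arrays are hypothetical names.

```python
import numpy as np

def splice_region(real_img: np.ndarray, edited_img: np.ndarray,
                  region_mask: np.ndarray) -> np.ndarray:
    """Composite a single edited facial region (e.g. the eyes) onto an
    otherwise real face, producing a splice sample that mixes real and
    manipulated pixels."""
    mask = region_mask.astype(bool)[..., None]  # HxW -> HxWx1, broadcast over RGB
    return np.where(mask, edited_img, real_img)

# Toy 4x4 RGB example: only the top-left 2x2 block comes from the edited image.
real = np.zeros((4, 4, 3), dtype=np.uint8)        # stand-in for a real face
fake = np.full((4, 4, 3), 255, dtype=np.uint8)    # stand-in for an edited face
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 1                                  # region to splice in
spliced = splice_region(real, fake, mask)
```

Because most pixels in such a sample are genuine, a detector trained only on fully synthetic faces can plausibly miss it, which is what makes these samples detector-evasive.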
defense arXiv Nov 13, 2025
Feng Ding, Wenhui Yi, Yunpeng Zhou et al. · Nanchang University · Shenzhen University +1 more
Fairness-aware deepfake detector using channel decoupling and distribution alignment to reduce demographic bias without sacrificing accuracy
Output Integrity Attack vision
Fairness is a core element of the trustworthy deployment of deepfake detection models, especially in digital identity security. Biases in detection models toward different demographic groups, such as gender and race, may lead to systemic misjudgments, exacerbating the digital divide and social inequities. However, current fairness-enhanced detectors often improve fairness at the cost of detection accuracy. To address this challenge, we propose a dual-mechanism collaborative optimization framework that integrates structural fairness decoupling with global distribution alignment: channels sensitive to demographic attributes are decoupled at the architectural level, and the distance between the overall sample distribution and each demographic group's distribution is then reduced at the feature level. Experimental results demonstrate that, compared with other methods, our framework improves both inter-group and intra-group fairness while maintaining overall detection accuracy across domains.
cnn transformer Nanchang University · Shenzhen University · Purdue University
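A minimal sketch of the global distribution alignment idea from the abstract, assuming a first-order (mean-matching) notion of distributional distance; `group_alignment_loss` is a hypothetical name, not the authors' implementation:

```python
import numpy as np

def group_alignment_loss(features: np.ndarray, groups: np.ndarray) -> float:
    """Average squared L2 distance between the overall feature mean and
    each demographic group's feature mean. Driving this toward zero pulls
    every group's distribution (to first order) toward the global one."""
    global_mean = features.mean(axis=0)
    losses = []
    for g in np.unique(groups):
        group_mean = features[groups == g].mean(axis=0)
        losses.append(float(np.sum((group_mean - global_mean) ** 2)))
    return float(np.mean(losses))

# Toy example: two groups with identical feature means yield zero loss.
feats = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
labels = np.array([0, 0, 1, 1])
loss = group_alignment_loss(feats, labels)  # 0.0
```

In training, such a term would be added to the usual detection loss, so the model is penalized whenever features for one demographic group drift away from the overall distribution while classification accuracy is preserved by the primary objective.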