Through the Lens: Benchmarking Deepfake Detectors Against Moiré-Induced Distortions

Razaib Tariq 1, Minji Heo 1, Simon S. Woo 1, Shahroz Tariq 2

0 citations · 83 references · arXiv

Published on arXiv: 2510.23225

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Moiré artifacts cause up to 25.4% accuracy degradation across 15 SOTA deepfake detectors, and demoiréing countermeasures unexpectedly worsen detection by up to 17.2%.

DeepMoiréFake (DMF)

Novel dataset introduced


Deepfake detection remains a pressing challenge, particularly in real-world settings where smartphone-captured media from digital screens often introduces Moiré artifacts that can distort detection outcomes. This study systematically evaluates state-of-the-art (SOTA) deepfake detectors on Moiré-affected videos, an issue that has received little attention. We collected a dataset of 12,832 videos, spanning 35.64 hours, from the Celeb-DF, DFD, DFDC, UADFV, and FF++ datasets, capturing footage under diverse real-world conditions, including varying screens, smartphones, lighting setups, and camera angles. To further examine the influence of Moiré patterns on deepfake detection, we conducted additional experiments using our DeepMoiréFake (DMF) dataset and two synthetic Moiré generation techniques. Across 15 top-performing detectors, our results show that Moiré artifacts degrade performance by as much as 25.4%, while synthetically generated Moiré patterns lead to a 21.4% drop in accuracy. Surprisingly, demoiréing methods, intended as a mitigation approach, instead worsened the problem, reducing accuracy by up to 17.2%. These findings underscore the urgent need for detection models that can robustly handle Moiré distortions alongside other real-world challenges, such as compression, sharpening, and blurring. By introducing the DMF dataset, we aim to drive future research toward closing the gap between controlled experiments and practical deepfake detection.
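The abstract mentions that synthetic Moiré generation was used alongside the captured DMF footage. The paper's actual generation techniques are not specified here, but the underlying effect — a low-frequency beat pattern arising when a camera sensor re-samples a screen's pixel grid — can be illustrated with a toy sketch. The function below is a hypothetical simulation for intuition only, not the authors' method: it multiplies a frame by the interference of two slightly rotated sinusoidal gratings.

```python
import numpy as np

def add_synthetic_moire(frame, grid_period=4.0, angle_deg=5.0, strength=0.15):
    """Overlay a simple Moiré-like interference pattern on an image.

    Toy illustration (assumed parameters, not the paper's technique):
    two sinusoidal gratings with a small relative rotation are multiplied,
    producing the low-frequency beat pattern characteristic of Moiré.
    """
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)

    theta = np.deg2rad(angle_deg)
    # Grating aligned with the screen's pixel rows.
    g1 = np.sin(2 * np.pi * yy / grid_period)
    # Grating slightly rotated, as if re-sampled by the camera sensor.
    g2 = np.sin(2 * np.pi * (yy * np.cos(theta) + xx * np.sin(theta)) / grid_period)

    # The product of the two gratings contains the low-frequency beat term.
    gain = 1.0 + strength * g1 * g2
    if frame.ndim == 3:
        gain = gain[..., None]  # broadcast over color channels
    out = frame.astype(np.float32) * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```

Perturbing evaluation frames this way (or with any comparable screen-capture simulation) is one plausible route to the reported 21.4% accuracy drop under synthetic Moiré, though the paper's exact pipeline may differ.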


Key Contributions

  • DeepMoiréFake (DMF) dataset of 12,832 Moiré-affected deepfake videos (35.64 hours) captured under diverse real-world conditions
  • Systematic evaluation of 15 SOTA deepfake detectors showing Moiré artifacts degrade accuracy by up to 25.4% and synthetic Moiré by 21.4%
  • Counter-intuitive finding that demoiréing pre-processing, intended as a mitigation, further reduces detection accuracy by up to 17.2%

🛡️ Threat Analysis

Output Integrity Attack

Core subject is deepfake (AI-generated video) detection — a canonical ML09 output integrity / content authenticity topic. The paper evaluates robustness of AI-generated content detectors to real-world Moiré distortions and introduces the DMF benchmark dataset to advance this field.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
inference_time, digital
Datasets
Celeb-DF, DFD, DFDC, UADFV, FaceForensics++, DeepMoiréFake (DMF)
Applications
deepfake detection, video forensics