benchmark 2026

Human-AI Ensembles Improve Deepfake Detection in Low-to-Medium Quality Videos

Marco Postiglione, Isabel Gortner, V.S. Subrahmanian

Published on arXiv (2603.14658)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Human accuracy (0.784) far exceeds AI detector accuracy (0.537, near chance) on mobile-quality deepfakes, while hybrid human-AI ensembles reduce high-confidence errors


Deepfake detection is widely framed as a machine learning problem, yet how humans and AI detectors compare under realistic conditions remains poorly understood. We evaluate 200 human participants and 95 state-of-the-art AI detectors across two datasets: DF40, a standard benchmark, and CharadesDF, a novel dataset of videos of everyday activities. CharadesDF was recorded on mobile phones, yielding low-to-moderate-quality videos compared to the more professionally captured DF40. Humans outperform AI detectors on both datasets, and the gap widens on CharadesDF, where AI accuracy collapses to near chance (0.537) while humans maintain robust performance (0.784). Human and AI errors are complementary: humans miss high-quality deepfakes while AI detectors flag authentic videos as fake, and hybrid human-AI ensembles reduce high-confidence errors. These findings suggest that effective real-world deepfake detection, especially for non-professionally produced videos, requires human-AI collaboration rather than AI algorithms alone.


Key Contributions

  • CharadesDF dataset of mobile-quality deepfakes depicting everyday activities
  • Comprehensive comparison of 200 humans vs 95 AI detectors showing humans outperform AI on realistic videos
  • Demonstration that human-AI hybrid ensembles reduce high-confidence detection errors through complementary error patterns
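The paper reports that human and AI errors are complementary, which is what makes a hybrid ensemble useful. One minimal way to sketch such an ensemble (an illustrative assumption, not the paper's exact method) is a weighted average of per-video "fake" probabilities from human raters and an AI detector:

```python
# Hypothetical sketch of a human-AI hybrid ensemble for deepfake detection.
# The weighting scheme and the example scores below are illustrative
# assumptions, not values or methods taken from the paper.

def ensemble_score(human_probs, ai_probs, weight=0.5):
    """Weighted average of human and AI P(fake) estimates per video."""
    return [weight * h + (1 - weight) * a
            for h, a in zip(human_probs, ai_probs)]

def classify(scores, threshold=0.5):
    """Label each video 'fake' when the combined score crosses threshold."""
    return ["fake" if s >= threshold else "real" for s in scores]

# Illustrative per-video P(fake): humans and AI disagree on some videos,
# and averaging can correct single-source high-confidence mistakes.
human = [0.9, 0.3, 0.6]
ai    = [0.4, 0.2, 0.8]

print(classify(ensemble_score(human, ai)))  # → ['fake', 'real', 'fake']
```

Because errors are complementary, a confident mistake by one source can be pulled back below threshold by the other, which is the intuition behind the reduction in high-confidence errors.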

🛡️ Threat Analysis

Output Integrity Attack

The paper evaluates deepfake detection capabilities across AI detectors and humans, addressing output integrity and content authenticity verification — core ML09 concerns about distinguishing authentic from AI-generated visual content.


Details

Domains
vision, multimodal
Model Types
cnn, transformer
Threat Tags
inference_time
Datasets
DF40, CharadesDF
Applications
deepfake detection, video authentication, synthetic media detection