Benchmark · 2025

FakeParts: a New Family of AI-Generated DeepFakes

Ziyi Liu 1, Firas Gabetni 2, Awais Hussain Sani 1, Xi Wang 2, Soobash Daiboo 1, Gaetan Brison 2, Gianni Franchi 2, Vicky Kalogeiton 2



Published on arXiv: 2508.21052

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

FakeParts reduces human detection accuracy by up to 26% and causes similar degradation in state-of-the-art deepfake detection models compared to traditional full deepfakes.

FakePartsBench

Novel benchmark introduced


We introduce FakeParts, a new class of deepfakes characterized by subtle, localized manipulations to specific spatial regions or temporal segments of otherwise authentic videos. Unlike fully synthetic content, these partial manipulations - ranging from altered facial expressions to object substitutions and background modifications - blend seamlessly with real elements, making them particularly deceptive and difficult to detect. To address the critical gap in detection, we present FakePartsBench, the first large-scale benchmark specifically designed to capture the full spectrum of partial deepfakes. Comprising over 81K (including 44K FakeParts) videos with pixel- and frame-level manipulation annotations, our dataset enables comprehensive evaluation of detection methods. Our user studies demonstrate that FakeParts reduces human detection accuracy by up to 26% compared to traditional deepfakes, with similar performance degradation observed in state-of-the-art detection models. This work identifies an urgent vulnerability in current detectors and provides the necessary resources to develop methods robust to partial manipulations.


Key Contributions

  • Defines FakeParts — a new class of deepfakes with subtle, localized spatial or temporal manipulations that evade both human and automated detection
  • Introduces FakePartsBench, the first large-scale benchmark with 81K videos (44K FakeParts) annotated at pixel- and frame-level for partial deepfake detection
  • User studies demonstrate that FakeParts reduces human detection accuracy by up to 26% compared to full deepfakes, with comparable degradation in SOTA detectors
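The frame-level annotations described above imply a straightforward evaluation loop: score each frame, threshold, and compare against per-frame labels. The sketch below is illustrative Python (the function name, threshold, and data are hypothetical, not FakePartsBench's actual API); it shows how a short manipulated segment can leave overall frame accuracy looking high while the manipulation itself goes entirely undetected.

```python
# Hypothetical sketch of frame-level detector evaluation, in the spirit
# of FakePartsBench's frame-level annotations. All names and values here
# are illustrative assumptions, not the benchmark's real interface.

def frame_accuracy(scores, labels, threshold=0.5):
    """Fraction of frames where the thresholded detector score matches
    the ground-truth label (1 = manipulated frame, 0 = authentic)."""
    preds = [1 if s >= threshold else 0 for s in scores]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

# A 100-frame video where only frames 40-59 are manipulated (a "FakePart"):
labels = [0] * 40 + [1] * 20 + [0] * 40
# A detector that responds only weakly to the manipulated segment:
scores = [0.1] * 40 + [0.45] * 20 + [0.1] * 40

print(frame_accuracy(scores, labels))  # → 0.8
```

Note the failure mode this illustrates: frame accuracy is 80% because most frames are authentic, yet every manipulated frame is missed, so a video-level verdict derived from these predictions would call the video real. This is the detection gap the benchmark is designed to expose.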

🛡️ Threat Analysis

Output Integrity Attack

FakeParts are a novel class of AI-generated/manipulated video content (partial deepfakes), and the paper's primary contribution is FakePartsBench — a benchmark for evaluating AI-generated content detection methods. Current detectors degrade significantly on this new type, constituting an output integrity threat.


Details

Domains
vision, generative
Model Types
diffusion, GAN
Threat Tags
inference_time, digital
Datasets
FakePartsBench
Applications
deepfake detection, video forensics, media integrity verification