
Bridging the Gap: A Framework for Real-World Video Deepfake Detection via Social Network Compression Emulation

Andrea Montibeller 1,2, Dasara Shullani 3, Daniele Baracchi 3, Alessandro Piva 3, Giulia Boato 1,2



Published on arXiv (2508.08765)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Deepfake detectors fine-tuned on locally emulated social network compressed videos achieve performance comparable to those trained on actual platform-shared media, demonstrating the emulator's fidelity.


The growing presence of AI-generated videos on social networks poses new challenges for deepfake detection, as detectors trained under controlled conditions often fail to generalize to real-world scenarios. A key factor behind this gap is the aggressive, proprietary compression applied by platforms like YouTube and Facebook, which launders low-level forensic cues. Replicating these transformations at scale is difficult, however, due to API limitations and data-sharing constraints. For these reasons, we propose the first framework that emulates the video-sharing pipelines of social networks by estimating compression and resizing parameters from a small set of uploaded videos. These parameters enable a local emulator capable of reproducing platform-specific artifacts on large datasets without direct API access. Experiments on FaceForensics++ videos shared via social networks demonstrate that our emulated data closely matches the degradation patterns of real uploads. Furthermore, detectors fine-tuned on emulated videos achieve performance comparable to those trained on actual shared media. Our approach offers a scalable and practical solution for bridging the gap between lab-based training and real-world deployment of deepfake detectors, particularly in the underexplored domain of compressed video content.
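The estimation step described above can be illustrated with a minimal sketch. The paper's actual procedure is not reproduced here; this assumes the simplification that the platform's pipeline is summarized by a single resize factor and a target bitrate, recovered by comparing metadata of original videos against their re-downloaded counterparts. The function name and pair format are hypothetical.

```python
from statistics import median

def estimate_pipeline_params(probe_pairs):
    """Estimate a platform's resize factor and target bitrate from a small
    probe set of (original, re-downloaded) video metadata pairs.

    Each pair is ((width, height, bitrate_kbps), (width, height, bitrate_kbps)).
    Medians are used so a few outlier uploads do not skew the estimate.
    This is an illustrative simplification, not the paper's estimator.
    """
    resize_factors = []
    out_bitrates = []
    for (w0, _h0, _b0), (w1, _h1, b1) in probe_pairs:
        resize_factors.append(w1 / w0)  # horizontal scaling applied by the platform
        out_bitrates.append(b1)         # bitrate after platform re-encoding
    return median(resize_factors), median(out_bitrates)

# Synthetic probe set: three 1080p uploads re-downloaded at 720p.
pairs = [
    ((1920, 1080, 8000), (1280, 720, 2500)),
    ((1920, 1080, 7500), (1280, 720, 2600)),
    ((1920, 1080, 9000), (1280, 720, 2400)),
]
scale, bitrate = estimate_pipeline_params(pairs)
```

With fewer than 50 probes per resolution (as the paper reports), such per-resolution estimates can parameterize the local emulator.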


Key Contributions

  • A framework that estimates compression and resizing parameters from fewer than 50 uploaded videos per resolution on a target social network platform
  • A local emulator that reproduces platform-specific compression artifacts on large datasets without direct API access, enabling scalable training data generation
  • Empirical validation showing detectors fine-tuned on emulated FaceForensics++ videos achieve comparable performance to those trained on actual social-network-shared media
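The emulation step in the contributions above can be sketched as a locally run re-encode. This is an assumption-laden illustration, not the paper's emulator: it presumes the degradation reduces to a downscale plus an H.264 re-encode at the estimated bitrate, and builds an ffmpeg argv list without executing it.

```python
def build_emulation_cmd(src, dst, resize_factor, target_bitrate_kbps):
    """Construct an ffmpeg command applying estimated platform-style
    degradation: downscale by the estimated factor, then re-encode with
    H.264 at the estimated bitrate. Returns an argv list; the caller
    decides whether to run it (e.g. via subprocess.run)."""
    # Keep output dimensions even, as H.264 encoders require.
    scale_expr = (
        f"scale=trunc(iw*{resize_factor}/2)*2:trunc(ih*{resize_factor}/2)*2"
    )
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", scale_expr,
        "-c:v", "libx264",                    # H.264, common across platforms
        "-b:v", f"{target_bitrate_kbps}k",    # estimated target bitrate
        dst,
    ]

cmd = build_emulation_cmd("in.mp4", "out.mp4", 2 / 3, 2500)
```

Applying such a command across a large local dataset yields emulated training data without any platform API access.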

🛡️ Threat Analysis

Output Integrity Attack

The paper's primary contribution is defensive: it improves detection of AI-generated (deepfake) video under real-world social network compression, directly supporting output integrity and AI-generated content detection. The proposed forensic framework emulates platform-specific compression artifacts so that deepfake detectors generalize to real-world deployment conditions.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
inference_time
Datasets
FaceForensics++
Applications
video deepfake detection, social network media forensics