CIPHER: Counterfeit Image Pattern High-level Examination via Representation
Kyeonghun Kim 1, Youngung Han 1, Seoyoung Ju 1,2, Yeonju Jean 1, YooHyun Kim 1, Minseo Choi 1, SuYeon Lim 2, Kyungtae Park 1, Seungwoo Baek 1, Sieun Hyeon 1, Nam-Joon Kim 1,2, Hyuk-Jae Lee 2
Published on arXiv
2603.29356
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves an 88% F1-score on the CIFAKE dataset, where baseline detectors score near zero, and a 74.33% average F1-score across nine state-of-the-art generative models
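The headline numbers above are F1-scores, the harmonic mean of precision and recall on the real-vs-fake classification task. A minimal, standalone illustration of how such a score is computed (standard metric definition, not code from the paper):

```python
# Standard binary F1-score: harmonic mean of precision and recall.
# Convention here: 1 = fake (positive class), 0 = real.
def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: 3 fakes and 2 reals, detector gets one of each wrong.
print(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # → 0.666...
```

A "near-zero" baseline F1 on CIFAKE therefore means the baseline detectors almost never flag the synthetic images correctly, while CIPHER's 88% reflects high precision and recall simultaneously.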
CIPHER
Novel technique introduced
The rapid progress of generative adversarial networks (GANs) and diffusion models has enabled the creation of synthetic faces that are increasingly difficult to distinguish from real images. This progress, however, has also amplified the risks of misinformation, fraud, and identity abuse, underscoring the urgent need for detectors that remain robust across diverse generative models. In this work, we introduce Counterfeit Image Pattern High-level Examination via Representation (CIPHER), a deepfake detection framework that systematically reuses and fine-tunes discriminators originally trained for image generation. By extracting scale-adaptive features from ProGAN discriminators and temporal-consistency features from diffusion models, CIPHER captures generation-agnostic artifacts that conventional detectors often overlook. Through extensive experiments across nine state-of-the-art generative models, CIPHER demonstrates superior cross-model detection performance, achieving up to a 74.33% F1-score and outperforming existing ViT-based detectors by over 30% in F1-score on average. Notably, our approach maintains robust performance on challenging datasets where baseline methods fail, reaching up to an 88% F1-score on CIFAKE compared to near-zero performance from conventional detectors. These results validate the effectiveness of discriminator reuse and cross-model fine-tuning, establishing CIPHER as a promising approach toward building more generalizable and robust deepfake detection systems in an era of rapidly evolving generative technologies.
Key Contributions
- Reuses and fine-tunes ProGAN discriminators and diffusion model components for cross-model deepfake detection
- Extracts scale-adaptive features from GANs and temporal-consistency features from diffusion models
- Achieves a 74.33% average F1-score across nine generative models, outperforming ViT baselines by over 30% on average
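The reuse-and-fine-tune idea in the first contribution can be sketched as follows. This is a hypothetical illustration, not the authors' code: `TinyDiscriminator` is a stand-in for a pretrained ProGAN discriminator, whose convolutional backbone is frozen and topped with a new binary real/fake detection head.

```python
# Hedged sketch: repurposing a pretrained GAN discriminator as a
# deepfake detector by swapping its scoring head (illustrative only).
import torch
import torch.nn as nn

class TinyDiscriminator(nn.Module):
    """Stand-in for a pretrained ProGAN discriminator (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),          # global pooling to a 32-d vector
        )
        self.score = nn.Linear(32, 1)         # original GAN realness score

def build_detector(pretrained: TinyDiscriminator) -> nn.Module:
    """Freeze the reused backbone; only the new detection head is trainable."""
    for p in pretrained.features.parameters():
        p.requires_grad = False
    head = nn.Linear(32, 2)                   # logits for {real, fake}
    return nn.Sequential(pretrained.features, nn.Flatten(1), head)

detector = build_detector(TinyDiscriminator())
logits = detector(torch.randn(4, 3, 64, 64))  # batch of 4 RGB 64x64 images
print(logits.shape)                           # torch.Size([4, 2])
```

In the paper's full method this head would instead consume scale-adaptive features from multiple discriminator stages (and temporal-consistency features from diffusion models), but the freeze-backbone, train-head pattern is the core of discriminator reuse.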
🛡️ Threat Analysis
Detects AI-generated images (deepfakes) to verify content authenticity and provenance — this is output integrity verification, distinguishing synthetic from real images.