Defense · 2025

CINEMAE: Leveraging Frozen Masked Autoencoders for Cross-Generator AI Image Detection

Minsuk Jang, Hyunseo Jeong, Minseok Son, Changick Kim

0 citations · 48 references · arXiv (Cornell University)


Published on arXiv (2511.06325)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Trained only on Stable Diffusion v1.4, CINEMAE achieves 96.63% mean accuracy on GenImage and 93.96% on AIGCDetectBenchmark, outperforming state-of-the-art detectors across all 8 unseen generators.

CINEMAE

Novel technique introduced


While context-based detectors have achieved strong generalization for AI-generated text by measuring distributional inconsistencies, image-based detectors still struggle with overfitting to generator-specific artifacts. We introduce CINEMAE, a novel paradigm for AIGC image detection that adapts the core principles of text detection methods to the visual domain. Our key insight is that a Masked AutoEncoder (MAE), trained to reconstruct masked patches conditioned on visible context, naturally encodes semantic consistency expectations. We formalize this reconstruction process probabilistically, computing the conditional negative log-likelihood (NLL), −log p(masked | visible), to quantify local semantic anomalies. By aggregating these patch-level statistics with global MAE features through learned fusion, CINEMAE achieves strong cross-generator generalization. Trained exclusively on Stable Diffusion v1.4, our method achieves over 95% accuracy on all eight unseen generators in the GenImage benchmark, substantially outperforming state-of-the-art detectors. This demonstrates that context-conditional reconstruction uncertainty provides a robust, transferable signal for AIGC detection.
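The patch-level anomaly signal described above can be sketched numerically. Under a simple isotropic Gaussian assumption, the conditional NLL of a masked patch reduces to its scaled squared reconstruction error plus a normalization constant. This is a minimal illustration, not the paper's implementation: `patch_nll` and the Gaussian likelihood model are assumptions for the sketch, and a real MAE decoder would supply `reconstructed`.

```python
import numpy as np

def patch_nll(original, reconstructed, sigma=1.0):
    """Per-patch conditional NLL, -log p(masked | visible), under an
    isotropic Gaussian observation model (illustrative assumption):
    proportional to the squared reconstruction error of each patch."""
    d = original.shape[-1]  # patch dimensionality
    sq_err = np.sum((original - reconstructed) ** 2, axis=-1)
    return 0.5 * sq_err / sigma**2 + 0.5 * d * np.log(2 * np.pi * sigma**2)

# Toy example: 4 masked patches of 16 values each; in practice the
# reconstructions would come from a frozen MAE decoder.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
x_hat = x + rng.normal(scale=0.1, size=(4, 16))  # close reconstruction
nll = patch_nll(x, x_hat)
print(nll.shape)  # one anomaly score per masked patch
```

Patches whose content is hard to predict from the visible context receive a larger NLL, which is the local inconsistency signal the method aggregates.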


Key Contributions

  • First work to use MAE's reconstruction discrepancy (conditional NLL of masked patches given visible context) as a per-patch contextual anomaly signal for AI-generated image detection
  • CINEMAE architecture that fuses decoder-derived anomaly scores with encoder-derived global semantic features without task-specific fine-tuning of the frozen MAE backbone
  • Strong cross-generator generalization: >95.9% accuracy on all 8 GenImage generators and >91% on 15 of 16 unseen generators in AIGCDetectBenchmark, robust under JPEG compression at QF=50
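The fusion step in the second contribution can be sketched as follows. This is a rough approximation under stated assumptions: the summary statistics (mean/std/max) and the 768-dimensional global feature are illustrative stand-ins, not the paper's exact design, and `fuse_features` is a hypothetical helper.

```python
import numpy as np

def fuse_features(patch_nlls, global_feat):
    """Aggregate per-patch anomaly scores into summary statistics
    (illustrative choice) and concatenate with the encoder's global
    feature; a learned linear head would then map this fused vector
    to a real/fake logit."""
    stats = np.array([patch_nlls.mean(), patch_nlls.std(), patch_nlls.max()])
    return np.concatenate([stats, global_feat])

nlls = np.array([1.2, 0.8, 3.5, 1.0])  # per-patch NLLs (toy values)
g = np.zeros(768)                       # stand-in for a ViT global feature
fused = fuse_features(nlls, g)
print(fused.shape)
```

Because the MAE backbone stays frozen, only this small fusion head would need training on the single source generator.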

🛡️ Threat Analysis

Output Integrity Attack

Core contribution is a novel AI-generated image detection architecture — falls squarely under ML09 (AI-generated content detection / output integrity). The paper proposes a new forensic technique, not a domain application of existing methods.


Details

Domains
vision, generative
Model Types
transformer, diffusion, gan
Threat Tags
inference_time
Datasets
GenImage, AIGCDetectBenchmark
Applications
ai-generated image detection, deepfake detection, media authenticity verification