defense 2025

Rethinking the Use of Vision Transformers for AI-Generated Image Detection

NaHyeon Park 1, Kunhee Kim 1, Junsuk Choe 2, Hyunjung Shim 1

1 citation · 1 influential citation · 64 references · arXiv


Published on arXiv: 2512.04969

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

MoLD significantly outperforms final-layer-only baselines in detecting both GAN- and diffusion-generated images, with improved generalization across diverse generative models and robustness in real-world conditions.

MoLD

Novel technique introduced


Rich feature representations derived from CLIP-ViT have been widely utilized in AI-generated image detection. While most existing methods primarily leverage features from the final layer, we systematically analyze the contributions of layer-wise features to this task. Our study reveals that earlier layers provide more localized and generalizable features, often surpassing the performance of final-layer features in detection tasks. Moreover, we find that different layers capture distinct aspects of the data, each contributing uniquely to AI-generated image detection. Motivated by these findings, we introduce a novel adaptive method, termed MoLD, which dynamically integrates features from multiple ViT layers using a gating-based mechanism. Extensive experiments on both GAN- and diffusion-generated images demonstrate that MoLD significantly improves detection performance, enhances generalization across diverse generative models, and exhibits robustness in real-world scenarios. Finally, we illustrate the scalability and versatility of our approach by successfully applying it to other pre-trained ViTs, such as DINOv2.


Key Contributions

  • Systematic layer-wise analysis of CLIP-ViT features showing earlier layers provide more localizable and generalizable representations for AI-generated image detection
  • MoLD: a gating-based adaptive mechanism that dynamically fuses features across multiple ViT layers for improved detection performance
  • Demonstrated scalability of MoLD to other pre-trained ViTs (DINOv2) and robustness across diverse GAN and diffusion model generators
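The contributions above center on a gating mechanism that adaptively weights features from multiple ViT layers. The paper does not publish its exact formulation here, so the following is a minimal illustrative sketch in numpy: pooled per-layer features are scored by a (hypothetical) learned gate, the scores are normalized with a softmax, and the fused representation is the resulting convex combination of layer features. All parameter names and shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_layer_fusion(layer_feats, gate_w, gate_b):
    """Fuse per-layer ViT features with input-dependent gates (sketch).

    layer_feats: (L, D) array -- one pooled feature vector per ViT layer.
    gate_w: (L, D), gate_b: (L,) -- hypothetical learned gating parameters.
    Returns a single (D,) fused feature vector.
    """
    # One scalar logit per layer, conditioned on that layer's own features.
    logits = np.einsum('ld,ld->l', layer_feats, gate_w) + gate_b
    gates = softmax(logits)        # (L,), sums to 1
    return gates @ layer_feats     # convex combination of layer features

# Toy dimensions matching ViT-B/16: 12 layers, 768-dim features.
rng = np.random.default_rng(0)
L, D = 12, 768
feats = rng.standard_normal((L, D))
fused = gated_layer_fusion(feats,
                           rng.standard_normal((L, D)) * 0.01,
                           np.zeros(L))
print(fused.shape)  # (768,)
```

A linear classifier on `fused` would then predict real vs. generated; because the gates depend on the input, the model can emphasize earlier layers (which the paper finds more localized and generalizable) or later ones per image.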

🛡️ Threat Analysis

Output Integrity Attack

MoLD is a novel detection architecture for identifying AI-generated images (both GAN- and diffusion-generated), directly addressing output integrity and content authenticity. The paper's primary contribution is a new forensic detection method, which falls under OWASP ML09 (Output Integrity Attack) as an AI-generated content detection defense.


Details

Domains
vision, generative
Model Types
transformer, GAN, diffusion
Threat Tags
inference_time
Applications
AI-generated image detection, deepfake detection