defense 2025

MiraGe: Multimodal Discriminative Representation Learning for Generalizable AI-Generated Image Detection

Kuo Shi, Jie Lu, Shanshan Ye, Guangquan Zhang, Zhen Fang

0 citations


Published on arXiv: 2508.01525

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

MiraGe achieves state-of-the-art detection accuracy on eight GenImage subsets when trained only on Stable Diffusion v1.4 images, outperforming all baselines including CLIPping and UnivFD on unseen generators.

MiraGe

Novel technique introduced


Recent advances in generative models have highlighted the need for robust detectors capable of distinguishing real images from AI-generated ones. While existing methods perform well on known generators, their performance often declines on newly emerging or unseen generative models because overlapping feature embeddings hinder accurate cross-generator classification. In this paper, we propose Multimodal Discriminative Representation Learning for Generalizable AI-Generated Image Detection (MiraGe), a method designed to learn generator-invariant features. Motivated by theoretical insights on intra-class variation minimization and inter-class separation, MiraGe tightly aligns features within each class while maximizing separation between classes, enhancing feature discriminability. Moreover, we apply multimodal prompt learning to instill these principles in CLIP, leveraging text embeddings as semantic anchors for discriminative representation learning and thereby improving generalizability. Comprehensive experiments across multiple benchmarks show that MiraGe achieves state-of-the-art performance and remains robust even against unseen generators such as Sora.
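The intra-class/inter-class objective described in the abstract can be sketched as a toy loss: pull each feature toward its class centroid while pushing class centroids at least a margin apart. This is an illustrative reconstruction under assumed names and a placeholder margin, not the paper's exact formulation.

```python
import numpy as np

def discriminative_loss(features, labels, margin=1.0):
    """Toy discriminative objective (assumption, not MiraGe's exact loss):
    intra-class variance pulls features toward their class centroid, and a
    margin penalty pushes class centroids apart."""
    classes = np.unique(labels)
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    # Intra-class term: mean squared distance of each feature to its centroid.
    intra = np.mean([np.sum((f - centroids[y]) ** 2)
                     for f, y in zip(features, labels)])
    # Inter-class term: penalize any centroid pair closer than `margin`.
    inter = 0.0
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            dist = np.linalg.norm(centroids[ci] - centroids[cj])
            inter += max(0.0, margin - dist) ** 2
    return intra + inter
```

Minimizing this kind of objective rewards exactly the geometry the paper targets: tight same-class clusters (real vs. AI-generated) that are well separated, which is what makes the learned features transfer to unseen generators.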


Key Contributions

  • Discriminative representation learning framework that minimizes intra-class variation and maximizes inter-class separation for generator-invariant feature learning
  • Multimodal prompt learning on CLIP that uses text embeddings as semantic anchors to separate real and AI-generated image feature distributions
  • State-of-the-art cross-generator generalization on GenImage benchmark, including robustness against unseen generators like Sora and BigGAN
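The text-anchor idea in the second contribution can be illustrated with a minimal zero-shot decision rule: embed prompts such as "a real photo" and "an AI-generated image" with a text encoder, then assign each image feature to the nearer anchor by cosine similarity. The sketch below uses placeholder vectors standing in for actual CLIP embeddings; the function name and 0/1 label convention are assumptions.

```python
import numpy as np

def normalize(v):
    # L2-normalize along the last axis so dot products are cosine similarities.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def classify_with_anchors(image_feats, anchor_real, anchor_fake):
    """Assign each image feature to the nearer text anchor by cosine
    similarity (a CLIP-style zero-shot decision; the anchors are assumed
    to come from a text encoder)."""
    img = normalize(np.asarray(image_feats, dtype=float))
    anchors = normalize(np.stack([anchor_real, anchor_fake]))  # shape (2, d)
    sims = img @ anchors.T                                     # shape (n, 2)
    return np.argmax(sims, axis=1)  # 0 = real, 1 = ai-generated
```

Because the decision depends only on which text anchor an image feature lies closer to, the anchors act as fixed semantic reference points that the image features are organized around during training.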

🛡️ Threat Analysis

Output Integrity Attack

Directly addresses AI-generated image detection — a core ML09 concern around output integrity and content provenance. MiraGe proposes a novel detection architecture using multimodal discriminative representation learning to distinguish real from AI-generated images across unseen generators.


Details

Domains
vision, multimodal
Model Types
vlm, diffusion, transformer, gan
Threat Tags
inference_time, black_box
Datasets
GenImage, MSCOCO
Applications
ai-generated image detection, deepfake detection, content authenticity verification