Defense · 2025

Leveraging Hierarchical Image-Text Misalignment for Universal Fake Image Detection

Daichi Zhang 1, Tong Zhang 1, Jianmin Bao 2, Shiming Ge 3, Sabine Süsstrunk 1

0 citations · 62 references · arXiv


Published on arXiv · 2511.00427

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

ITEM achieves superior generalization and robustness to unseen generative models compared to state-of-the-art binary image classification detectors by exploiting multimodal misalignment cues.

ITEM (Image-Text misalignmEnt Method)

Novel technique introduced


With the rapid development of generative models, detecting generated fake images to prevent their malicious use has recently become a critical issue. Existing methods frame this challenge as a naive binary image classification task. However, such methods focus only on visual clues, leaving trained detectors susceptible to overfitting specific image patterns and unable to generalize to unseen models. In this paper, we address this issue from a multi-modal perspective and find that fake images cannot be properly aligned with corresponding captions, unlike real images. Based on this observation, we propose a simple yet effective detector termed ITEM that leverages image-text misalignment in a joint visual-language space as a discriminative clue. Specifically, we first measure the misalignment between images and captions in pre-trained CLIP's space, and then tune an MLP head to perform the detection task. Furthermore, we propose a hierarchical misalignment scheme that first considers the whole image and then each semantic object described in the caption, exploring both global and fine-grained local semantic misalignment as clues. Extensive experiments demonstrate the superiority of our method over other state-of-the-art competitors, with impressive generalization and robustness on various recent generative models.
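The core pipeline described in the abstract (misalignment score in CLIP space, fed to a small trainable head) can be sketched as follows. This is a minimal illustration with random placeholder embeddings standing in for a frozen CLIP image/text encoder; the embedding dimension, head size, and weight initialization are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_misalignment(img_emb, txt_emb):
    """1 - cosine similarity between L2-normalized embeddings."""
    img = img_emb / np.linalg.norm(img_emb)
    txt = txt_emb / np.linalg.norm(txt_emb)
    return 1.0 - float(img @ txt)

def mlp_head(x, W1, b1, W2, b2):
    """Tiny MLP: feature -> hidden (ReLU) -> fake probability (sigmoid)."""
    h = np.maximum(0.0, x @ W1 + b1)
    logit = float(h @ W2 + b2)
    return 1.0 / (1.0 + np.exp(-logit))

# Placeholder embeddings standing in for CLIP's frozen image/text encoders
# (512-d chosen for illustration; the paper's actual backbone may differ).
img_emb = rng.standard_normal(512)
txt_emb = rng.standard_normal(512)

score = cosine_misalignment(img_emb, txt_emb)
feat = np.array([score])

# Randomly initialized head weights; in the paper's setup only this
# lightweight head would be trained, with CLIP kept frozen.
W1, b1 = rng.standard_normal((1, 16)), np.zeros(16)
W2, b2 = rng.standard_normal(16), 0.0

p_fake = mlp_head(feat, W1, b1, W2, b2)
```

The misalignment score lies in [0, 2] for unit vectors, and the head maps it to a fake-image probability in (0, 1); in practice the feature would be richer than a single scalar.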


Key Contributions

  • Observes that AI-generated fake images exhibit measurable misalignment with corresponding text captions in CLIP's joint visual-language space, unlike real images.
  • Proposes ITEM, a detector that combines CLIP-space misalignment features with a lightweight MLP head for binary fake image classification.
  • Introduces a hierarchical misalignment scheme capturing both global image-level and fine-grained local semantic object-level misalignment for improved generalization across unseen generative models.
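The hierarchical scheme in the last contribution (global image-caption misalignment plus per-object misalignment) can be sketched as a feature-construction step. This is an illustrative sketch: the region/phrase embeddings below are random placeholders, and how objects are localized and paired with noun phrases is an assumption about the mechanics, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def misalignment(img_emb, txt_emb):
    """1 - cosine similarity between L2-normalized embeddings."""
    img = img_emb / np.linalg.norm(img_emb)
    txt = txt_emb / np.linalg.norm(txt_emb)
    return 1.0 - float(img @ txt)

def hierarchical_feature(global_img, global_txt, object_pairs):
    """Concatenate the global image-caption misalignment with one
    misalignment score per (region embedding, noun-phrase embedding)
    pair for each semantic object mentioned in the caption."""
    scores = [misalignment(global_img, global_txt)]
    scores += [misalignment(region, phrase) for region, phrase in object_pairs]
    return np.array(scores)

# Placeholder CLIP-like embeddings: one global pair plus two object pairs
# (e.g. crops for "a dog" and "a frisbee" matched to their noun phrases).
d = 512
global_img, global_txt = rng.standard_normal(d), rng.standard_normal(d)
objects = [(rng.standard_normal(d), rng.standard_normal(d)) for _ in range(2)]

feat = hierarchical_feature(global_img, global_txt, objects)
# feat holds [global, object_1, object_2] misalignment scores, which a
# trained MLP head would consume for the real/fake decision.
```

Concatenating coarse and fine scores lets the head weigh global scene-level mismatch against localized object-level mismatch, which is what gives the scheme its claimed robustness to unseen generators.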

🛡️ Threat Analysis

Output Integrity Attack

Core contribution is a novel AI-generated image detector (ITEM) that identifies synthetic images via image-text misalignment clues in a joint visual-language space — directly addressing output integrity and content authenticity, with generalization to unseen generative models.


Details

Domains
vision · multimodal
Model Types
transformer · diffusion · gan
Threat Tags
inference_time · digital
Applications
ai-generated image detection · fake image detection · deepfake detection