
Seeing Before Reasoning: A Unified Framework for Generalizable and Explainable Fake Image Detection

Kaiqing Lin 1,2, Zhiyuan Yan 2,3, Ruoxin Chen 2, Junyan Ye 4, Keyue Zhang 2, Yue Zhou 1, Peng Jin 3, Bin Li 1, Taiping Yao 2, Shouhong Ding 2

9 citations · 62 references · arXiv


Published on arXiv: 2509.25502

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Forensic-Chat achieves superior generalization over naive MLLM fine-tuning baselines and produces genuinely reliable explainability grounded in perceived visual artifacts rather than learned linguistic shortcuts.

Forensic-Chat

Novel technique introduced


Detecting AI-generated images with multimodal large language models (MLLMs) has gained increasing attention due to their rich world knowledge, common-sense reasoning, and potential for explainability. However, naively applying these MLLMs to detection often leads to suboptimal performance. We argue that the root of this failure lies in a fundamental mismatch: MLLMs are asked to reason about fakes before they can truly see them. First, they do not really see: existing MLLMs' vision encoders are primarily optimized for semantic-oriented recognition rather than the perception of low-level signals, leaving them insensitive to subtle forgery traces. Without access to reliable perceptual evidence, the model grounds its judgment on incomplete and limited visual observations. Second, existing fine-tuning data for detection typically uses narrow, instruction-style formats, which diverge sharply from the diverse, heterogeneous distributions seen in pretraining. In the absence of meaningful visual cues, the model therefore exploits these linguistic shortcuts, resulting in catastrophic forgetting of pretrained knowledge (even basic dialogue capabilities). In response, we advocate for a new paradigm: seeing before reasoning. We propose that MLLMs should first be trained to perceive artifacts, strengthening their artifact-aware visual perception, so that subsequent reasoning is grounded in actual observations. We therefore propose Forensic-Chat, a generalizable, explainable, and still-conversational (multi-round dialogue) assistant for fake image detection. We also propose ExplainFake-Bench, a benchmark tailored to evaluating MLLM explainability for image forensics across five key aspects. Extensive experiments demonstrate its superior generalization and genuinely reliable explainability.


Key Contributions

  • Forensic-Chat: a two-stage training paradigm that first refines the vision encoder via self-reconstruction to perceive forgery artifacts, then fine-tunes with multi-round dialogue data for dialectical reasoning about why an image is fake
  • ExplainFake-Bench: a benchmark evaluating MLLM explainability for image forensics across five key aspects, designed to test whether explanations reflect genuine visual observations rather than linguistic shortcuts
  • Identification of the 'seeing before reasoning' failure mode: MLLMs shortcut to linguistic templates when vision encoders lack low-level artifact sensitivity, causing catastrophic forgetting of pretrained knowledge
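The Stage-1 intuition above can be illustrated with a toy self-reconstruction check: an encoder that discards low-level detail (e.g. by pooling toward semantics) cannot reconstruct its input, while a detail-preserving one can. This is a minimal sketch only; `average_pool_encode` and `reconstruction_loss` are illustrative stand-ins, not the paper's actual architecture or training objective.

```python
def reconstruction_loss(image, reconstructed):
    """Mean squared error between an input and its reconstruction.

    In the paper's Stage-1, minimizing a loss of this kind pushes the
    vision encoder to retain low-level signal (where forgery artifacts
    live) instead of only semantic content.
    """
    return sum((a - b) ** 2 for a, b in zip(image, reconstructed)) / len(image)


def average_pool_encode(image):
    """Toy 'semantic' encoder: collapses pixels to their mean.

    Stand-in for a recognition-oriented encoder that is insensitive to
    fine-grained texture; the pixel-level detail is unrecoverable.
    """
    mean = sum(image) / len(image)
    return [mean] * len(image)


# A high-frequency pattern (the kind of signal forgery traces resemble):
image = [0.0, 1.0, 0.0, 1.0]

# Identity (detail-preserving) reconstruction: zero loss.
print(reconstruction_loss(image, image))                      # 0.0

# Semantic pooling destroys the low-level pattern: nonzero loss.
print(reconstruction_loss(image, average_pool_encode(image)))  # 0.25
```

The gap between the two losses is the kind of signal Stage-1 training exploits: refining the encoder to drive reconstruction error down forces it to keep the low-level evidence that Stage-2 reasoning is then grounded in.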

🛡️ Threat Analysis

Output Integrity Attack

The paper's primary contribution is detecting AI-generated images (a canonical ML09 output integrity / content provenance task). It proposes a novel detection architecture (Forensic-Chat) with a new training paradigm and an evaluation benchmark (ExplainFake-Bench) for assessing MLLM explainability in image forensics — not merely applying existing detectors to a new domain.


Details

Domains
vision, multimodal
Model Types
vlm, transformer
Threat Tags
inference_time, digital
Datasets
ExplainFake-Bench
Applications
ai-generated image detection, image forensics, deepfake detection