ThinkFake: Reasoning in Multimodal Large Language Models for AI-Generated Image Detection
Tai-Ming Huang 1,2, Wei-Tung Lin 2,3, Kai-Lung Hua 4,3, Wen-Huang Cheng 1, Junichi Yamagishi 5, Jun-Cheng Chen 2
Published on arXiv
2509.19841
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
ThinkFake outperforms state-of-the-art methods on GenImage and demonstrates strong zero-shot generalization on the LOKI benchmark using GRPO-trained MLLM reasoning.
ThinkFake
Novel technique introduced
The increasing realism of AI-generated images has raised serious concerns about misinformation and privacy violations, highlighting the urgent need for accurate and interpretable detection methods. While existing approaches have made progress, most rely on binary classification without explanations or depend heavily on supervised fine-tuning, resulting in limited generalization. In this paper, we propose ThinkFake, a novel reasoning-based and generalizable framework for AI-generated image detection. Our method leverages a Multimodal Large Language Model (MLLM) equipped with a forgery reasoning prompt and is trained using Group Relative Policy Optimization (GRPO) reinforcement learning with carefully designed reward functions. This design enables the model to perform step-by-step reasoning and produce interpretable, structured outputs. We further introduce a structured detection pipeline to enhance reasoning quality and adaptability. Extensive experiments show that ThinkFake outperforms state-of-the-art methods on the GenImage benchmark and demonstrates strong zero-shot generalization on the challenging LOKI benchmark. These results validate our framework's effectiveness and robustness. Code will be released upon acceptance.
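The abstract's mention of GRPO hinges on one key idea: advantages are computed relative to a group of sampled responses, so no separate critic model is needed. A minimal sketch of that group-relative normalization, assuming the standard GRPO formulation from the literature (the paper's exact reward terms are not specified here):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each sampled response's reward against its group.

    GRPO samples a group of responses per prompt, scores each one with
    the reward function, and uses the group-normalized reward as the
    advantage -- avoiding a learned value/critic model entirely.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Illustrative scores for four sampled detection rationales, e.g. from a
# reward combining output-format correctness and label accuracy.
advantages = group_relative_advantages([1.0, 0.5, 0.0, 0.5])
```

Responses scoring above the group mean receive positive advantages and are reinforced; below-average ones are penalized, which is how reasoning quality improves without per-step supervision.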
Key Contributions
- ThinkFake: an MLLM-based framework using forgery reasoning prompts that produces interpretable, step-by-step structured outputs for AI-generated image detection
- GRPO reinforcement learning with carefully designed reward functions to train reasoning quality without heavy supervised fine-tuning dependence
- Structured detection pipeline enabling strong zero-shot generalization, demonstrated on the LOKI benchmark
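The "interpretable, structured outputs" above could take a form like the following hypothetical schema — field names and artifact examples are illustrative, not taken from the paper:

```python
import json

# Hypothetical structured detection report: step-by-step reasoning
# traces alongside a machine-readable verdict. The actual output
# format used by ThinkFake may differ.
report = {
    "reasoning": [
        "Hands show anatomically implausible finger joints.",
        "Background signage text is illegible and inconsistently rendered.",
    ],
    "suspected_artifacts": ["hands", "text rendering"],
    "verdict": "ai-generated",
}
print(json.dumps(report, indent=2))
```

A structured format like this is what makes reward design tractable under GRPO: format compliance and verdict correctness can each be scored programmatically.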
🛡️ Threat Analysis
The primary contribution is a novel AI-generated image detection method: a new forensic architecture (forgery reasoning prompts + GRPO RL training) for verifying the provenance of image content, which falls squarely under output integrity and AI-generated content detection.