
ForenX: Towards Explainable AI-Generated Image Detection with Multimodal Large Language Models

Chuangchuang Tan 1,2, Jinglu Wang 2, Xiang Ming 2, Renshuai Tao 1, Yunchao Wei 1, Yao Zhao 1, Yan Lu 2


Published on arXiv: 2508.01402

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

ForenX with specialized forensic prompts improves both detection generalization and explanation quality over standard MLLMs, with limited manual annotations providing significant gains in explainability.

ForenX

Novel technique introduced


Advances in generative models have produced AI-generated images that are visually indistinguishable from authentic ones. Despite numerous studies on detecting AI-generated images with classifiers, a gap persists between such methods and human cognitive forensic analysis. We present ForenX, a novel method that not only identifies the authenticity of images but also provides explanations that resonate with human reasoning. ForenX employs powerful multimodal large language models (MLLMs) to analyze and interpret forensic cues. Furthermore, we overcome the limitations of standard MLLMs in detecting forgeries by incorporating a specialized forensic prompt that directs the MLLMs' attention to forgery-indicative attributes. This approach not only enhances the generalization of forgery detection but also empowers the MLLMs to provide explanations that are accurate, relevant, and comprehensive. Additionally, we introduce ForgReason, a dataset dedicated to descriptions of forgery evidence in AI-generated images. Curated through collaboration between an LLM-based agent and a team of human annotators, this process yields refined data that further enhances our model's performance. We demonstrate that even limited manual annotation significantly improves explanation quality. We evaluate the effectiveness of ForenX on two major benchmarks, and the model's explainability is verified by comprehensive subjective evaluations.
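The core idea of the specialized forensic prompt can be illustrated with a short sketch. The attribute checklist and prompt wording below are assumptions for illustration only, not the paper's actual prompt; ForenX's real prompt and attribute set are not given in this summary.

```python
# Hypothetical sketch of ForenX-style forensic prompting.
# The attribute list and wording are illustrative assumptions,
# not the prompt actually used in the paper.

FORGERY_ATTRIBUTES = [
    "anatomical plausibility (hands, eyes, teeth)",
    "texture consistency (skin, hair, fabric)",
    "lighting and shadow coherence",
    "background geometry and text legibility",
]

def build_forensic_prompt(attributes=FORGERY_ATTRIBUTES):
    """Compose a prompt that directs an MLLM's attention to
    forgery-indicative attributes before asking for a verdict."""
    checklist = "\n".join(f"- {a}" for a in attributes)
    return (
        "You are an image forensics expert. Inspect the image for the "
        "following forgery-indicative attributes:\n"
        f"{checklist}\n"
        "Then state whether the image is REAL or AI-GENERATED, and "
        "explain which attributes support your verdict."
    )

prompt = build_forensic_prompt()
print(prompt)
```

Steering the model toward concrete, checkable attributes (rather than asking "is this fake?" directly) is what lets the same prompt yield both a verdict and a human-aligned explanation.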


Key Contributions

  • ForenX: a novel explainable AI-generated image detector that uses MLLMs with specialized forensic prompts to identify forgery-indicative attributes and produce human-aligned explanations
  • ForgReason: a new dataset of forgery evidence descriptions in AI-generated images, curated via LLM-based agents and human annotators
  • Empirical demonstration that even limited manual annotation significantly improves explanation quality in MLLM-based forensic analysis
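The ForgReason curation pipeline pairs an LLM-based agent with human annotators. A minimal sketch of such an agent-plus-reviewer loop is below; the function names, the drafted evidence strings, and the acceptance criterion are all invented for illustration and are not the paper's actual pipeline.

```python
# Illustrative agent-plus-annotator curation loop in the spirit of
# ForgReason. All names and criteria here are assumptions.

def llm_agent_draft(image_id):
    # Stand-in for an LLM agent drafting forgery-evidence descriptions.
    return {"image_id": image_id,
            "evidence": ["inconsistent shadow direction", "looks fake"]}

def human_review(record):
    # Stand-in for annotators: discard vague, uncheckable evidence
    # and keep only concrete observations.
    vague = {"looks fake", "seems off"}
    kept = [e for e in record["evidence"] if e not in vague]
    return {**record, "evidence": kept, "verified": bool(kept)}

dataset = [human_review(llm_agent_draft(i)) for i in range(3)]
```

The division of labor matches the paper's finding: the agent provides scale, while even a limited amount of human filtering measurably improves explanation quality.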

🛡️ Threat Analysis

Output Integrity Attack

Directly addresses AI-generated image detection (output integrity/content authenticity) by proposing ForenX, a novel architecture leveraging MLLMs with forensic-specific prompting to detect and explain image forgeries — a core ML09 forensic detection contribution.


Details

Domains
vision, multimodal, nlp
Model Types
vlm, llm
Threat Tags
inference_time
Datasets
ForgReason
Applications
ai-generated image detection, image forensics, deepfake detection