Zoom-In to Sort AI-Generated Images Out
Yikun Ji, Yan Hong, Bowen Deng, Jun Lan, Huijia Zhu, Weiqiang Wang, Liqing Zhang, Jianfu Zhang
Published on arXiv: 2510.04225
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
ZoomIn achieves 96.39% accuracy on AI-generated image detection with strong generalization across external datasets and interpretable, bounding-box-grounded forensic explanations.
ZoomIn
Novel technique introduced
The rapid growth of AI-generated imagery has blurred the boundary between real and synthetic content, raising critical concerns for digital integrity. Vision-language models (VLMs) offer interpretability through explanations but often fail to detect subtle artifacts in high-quality synthetic images. We propose ZoomIn, a two-stage forensic framework that improves both accuracy and interpretability. Mimicking human visual inspection, ZoomIn first scans an image to locate suspicious regions and then performs a focused analysis on these zoomed-in areas to deliver a grounded verdict. To support training, we introduce MagniFake, a dataset of 20,000 real and high-quality synthetic images annotated with bounding boxes and forensic explanations, generated through an automated VLM-based pipeline. Our method achieves 96.39% accuracy with robust generalization, while providing human-understandable explanations grounded in visual evidence.
Key Contributions
- ZoomIn: a two-stage forensic VLM framework that first locates suspicious regions via global scan, then zooms into those regions for a grounded detection verdict
- MagniFake: a dataset of 20,000 real and high-quality synthetic images annotated with bounding boxes and forensic explanations, constructed using an automated GPT-4o/Qwen-2.5-VL pipeline
- 96.39% detection accuracy with robust generalization to out-of-distribution generators and human-interpretable visual evidence grounding
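The two-stage inference loop described above can be sketched as follows. This is a minimal illustration, not the paper's published interface: the function names (`scan_for_suspicious_regions`, `analyze_region`), the VLM stubs, and the thresholding rule are all assumptions standing in for the two VLM stages the abstract describes.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) bounding box
    note: str                       # forensic observation tied to this box

def scan_for_suspicious_regions(image) -> List[Region]:
    """Stage 1 (global scan): a VLM proposes bounding boxes around regions
    with possible generation artifacts. Stubbed here for illustration."""
    return [Region((120, 40, 260, 180), "irregular specular highlights")]

def analyze_region(image, region: Region) -> float:
    """Stage 2 (zoom-in): crop to the region and query the VLM for a
    focused artifact score in [0, 1]. Stubbed here for illustration."""
    return 0.87

def zoomin_verdict(image, threshold: float = 0.5) -> Tuple[str, List[Region]]:
    """Two-stage verdict: flag the image as AI-generated if any zoomed-in
    region scores above the threshold, returning the evidence regions so
    the decision stays grounded in visual evidence."""
    regions = scan_for_suspicious_regions(image)
    flagged = [r for r in regions if analyze_region(image, r) > threshold]
    label = "ai-generated" if flagged else "real"
    return label, flagged
```

Returning the flagged regions alongside the label mirrors the framework's interpretability goal: the verdict is accompanied by bounding-box-grounded evidence rather than a bare score.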
🛡️ Threat Analysis
The paper's primary contribution is a novel AI-generated image detection framework (ZoomIn) that identifies synthetic content — a direct output integrity and content authenticity problem. It proposes a new detection architecture and a supporting dataset (MagniFake), not merely applying existing methods to a specific domain.