Semantic Visual Anomaly Detection and Reasoning in AI-Generated Images
Chuangchuang Tan, Xiang Ming, Jinglu Wang, Renshuai Tao, Bin Li, Yunchao Wei, Yao Zhao, Yan Lu
Published on arXiv
arXiv:2510.10231
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Models fine-tuned on AnomReason achieve consistent performance gains over strong vision-language baselines under the proposed SemAP and SemF1 semantic matching metrics.
AnomReason / AnomAgent
Novel technique introduced
The rapid advancement of AI-generated content (AIGC) has enabled the synthesis of visually convincing images; however, many such outputs exhibit subtle semantic anomalies, including unrealistic object configurations, violations of physical laws, or commonsense inconsistencies, which compromise the overall plausibility of the generated scenes. Detecting these semantic-level anomalies is essential for assessing the trustworthiness of AIGC media, especially in AIGC image analysis, explainable deepfake detection, and semantic authenticity assessment. In this paper, we formalize semantic anomaly detection and reasoning for AIGC images and introduce AnomReason, a large-scale benchmark with structured annotations as quadruples (Name, Phenomenon, Reasoning, Severity). Annotations are produced by a modular multi-agent pipeline (AnomAgent) with lightweight human-in-the-loop verification, enabling scale while preserving quality. At construction time, AnomAgent processed approximately 4.17B GPT-4o tokens, evidence of the scale of the resulting structured annotations. We further show that models fine-tuned on AnomReason achieve consistent gains over strong vision-language baselines under our proposed semantic matching metrics (SemAP and SemF1). Applications to explainable deepfake detection and semantic reasonableness assessment of image generators demonstrate practical utility. In summary, AnomReason and AnomAgent serve as a foundation for measuring and improving the semantic plausibility of AI-generated images. We will release code, metrics, data, and task-aligned models to support reproducible research on semantic authenticity and interpretable AIGC forensics.
Key Contributions
- AnomReason: a large-scale benchmark with structured quadruple annotations (Name, Phenomenon, Reasoning, Severity) for semantic anomaly detection in AI-generated images
- AnomAgent: a modular multi-agent annotation pipeline with human-in-the-loop verification that processed ~4.17B GPT-4o tokens at construction time
- Novel semantic matching evaluation metrics (SemAP and SemF1) and demonstrated application to explainable deepfake detection and image generator quality assessment
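The paper does not spell out how SemAP and SemF1 are computed, but the general shape of a semantic-matching F1 over (Name, Phenomenon, Reasoning, Severity) quadruples can be sketched. The sketch below is a hypothetical illustration, not the authors' metric: the `Anomaly` type, the field-overlap `match_score`, the greedy one-to-one matching, and the 0.5 threshold are all assumptions; a real implementation would likely score field similarity with semantic embeddings rather than exact string equality.

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    """One structured annotation quadruple from AnomReason."""
    name: str
    phenomenon: str
    reasoning: str
    severity: str

def match_score(pred: Anomaly, gold: Anomaly) -> float:
    # Hypothetical similarity: fraction of quadruple fields that agree exactly.
    # (A real metric would use semantic similarity, e.g. embedding cosine.)
    fields = ("name", "phenomenon", "reasoning", "severity")
    return sum(getattr(pred, f) == getattr(gold, f) for f in fields) / len(fields)

def sem_f1(preds: list, golds: list, threshold: float = 0.5) -> float:
    """Greedy one-to-one matching of predicted to gold anomalies, then F1.

    A predicted anomaly counts as a true positive if its best unmatched
    gold anomaly scores at or above the threshold.
    """
    matched = set()  # indices of gold anomalies already claimed
    tp = 0
    for p in preds:
        best_i, best_s = None, 0.0
        for i, g in enumerate(golds):
            if i in matched:
                continue
            s = match_score(p, g)
            if s > best_s:
                best_i, best_s = i, s
        if best_i is not None and best_s >= threshold:
            matched.add(best_i)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(golds) if golds else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Under this sketch, a detector that recovers one of two annotated anomalies exactly would score precision 1.0, recall 0.5, and SemF1 ≈ 0.67; the published metric may weight fields or severity differently.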
🛡️ Threat Analysis
Directly targets AI-generated content detection and deepfake detection by providing a large-scale benchmark (AnomReason) with structured annotations and novel evaluation metrics (SemAP, SemF1) for assessing the semantic plausibility and authenticity of AIGC images. This places it squarely in core ML09 territory: content provenance and AI-generated content detection.