FUSE: Unifying Spectral and Semantic Cues for Robust AI-Generated Image Detection
Md. Zahid Hossain 1, Most. Sharmin Sultana Samu 2, Md. Kamrozzaman Bhuiyan 3, Farhad Uz Zaman 4, Md. Rakibul Islam 1
Published on arXiv
2512.21695
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves 91.36% mean accuracy on GenImage and state-of-the-art performance on the Chameleon benchmark with 94.96% mean Average Precision across generators.
FUSE
Novel technique introduced
The fast evolution of generative models has heightened the demand for reliable detection of AI-generated images. To tackle this challenge, we introduce FUSE, a hybrid system that combines spectral features extracted via the Fast Fourier Transform with semantic features obtained from the CLIP vision encoder. The features are fused into a joint representation and trained progressively in two stages. Evaluations on the GenImage, WildFake, DiTFake, GPT-ImgEval, and Chameleon datasets demonstrate strong generalization across multiple generators. Our FUSE (Stage 1) model achieves state-of-the-art results on the Chameleon benchmark. It also attains 91.36% mean accuracy on the GenImage dataset, 88.71% accuracy across all tested generators, and a mean Average Precision of 94.96%. Stage 2 training further improves performance for most generators. Unlike existing methods, which often perform poorly on the high-fidelity images in Chameleon, our approach maintains robustness across diverse generators. These findings highlight the benefits of integrating spectral and semantic features for generalized detection of AI-generated images.
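The core idea of the abstract can be sketched as follows: extract a frequency-domain descriptor via the FFT, then concatenate it with a semantic embedding into one joint vector. This is a minimal numpy illustration, not the paper's implementation; the pooling size (16x16), normalization scheme, and the 768-dim CLIP embedding (here just a supplied vector) are all assumptions for illustration.

```python
import numpy as np

def spectral_features(img, out_size=16):
    """Log-magnitude FFT spectrum, center-shifted and average-pooled.

    img: 2D grayscale array. Returns a flat vector of out_size**2 values.
    """
    f = np.fft.fftshift(np.fft.fft2(img))          # center the zero frequency
    mag = np.log1p(np.abs(f))                      # compress dynamic range
    h, w = mag.shape
    # crop so each dimension divides evenly, then average-pool into a grid
    mag = mag[:h - h % out_size, :w - w % out_size]
    pooled = mag.reshape(out_size, h // out_size,
                         out_size, w // out_size).mean(axis=(1, 3))
    return pooled.ravel()

def fuse_features(img, semantic_emb):
    """Concatenate L2-normalized spectral and semantic vectors
    into the joint representation a downstream classifier would consume.

    semantic_emb stands in for a CLIP vision-encoder embedding.
    """
    spec = spectral_features(img)
    spec = spec / (np.linalg.norm(spec) + 1e-8)
    sem = semantic_emb / (np.linalg.norm(semantic_emb) + 1e-8)
    return np.concatenate([spec, sem])
```

In practice the semantic vector would come from a frozen CLIP ViT, and the fused vector would feed a trainable classification head; the paper's two-stage progressive training is not reproduced in this sketch.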
Key Contributions
- FUSE: a two-stage training strategy that progressively fuses FFT-derived spectral features with CLIP ViT semantic features into a joint representation for AI-generated image detection
- State-of-the-art results on the Chameleon benchmark and strong generalization to unseen generators including Diffusion Transformer models (Flux, SD3, PixArt-XL) and GPT-4o
- Demonstrates that combining complementary spectral and semantic cues yields more robust detection than methods relying on either feature type alone
🛡️ Threat Analysis
The paper's primary contribution is a novel AI-generated image detection architecture, which falls squarely under output integrity and content provenance: detecting whether an image was synthesized by a generative model is a canonical ML09 task.