AlignGemini: Generalizable AI-Generated Image Detection Through Task-Model Alignment
Ruoxin Chen 1, Jiahui Gao 2, Kaiqing Lin 3, Keyue Zhang 1, Yandan Zhao 1, Isabel Guan 4, Taiping Yao 1, Shouhong Ding 1
Published on arXiv (2512.06746)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
AlignGemini improves average accuracy by 9.5% on in-the-wild AIGI benchmarks compared to existing VLM-based detectors while using a substantially simpler training corpus.
AlignGemini
Novel technique introduced
Vision Language Models (VLMs) are increasingly used for detecting AI-generated images (AIGI). However, converting VLMs into reliable detectors is resource-intensive, and the resulting models often suffer from hallucination and poor generalization. To investigate the root cause, we conduct an empirical analysis and identify two consistent behaviors. First, fine-tuning VLMs with semantic supervision improves semantic discrimination and generalizes well to unseen data. Second, fine-tuning VLMs with pixel-artifact supervision leads to weak generalization. These findings reveal a fundamental task-model misalignment: VLMs are optimized for high-level semantic reasoning and lack inductive bias toward low-level pixel artifacts. In contrast, conventional vision models effectively capture pixel-level artifacts but are less sensitive to semantic inconsistencies. This indicates that different models are naturally suited to different subtasks. Based on this insight, we formulate AIGI detection as two orthogonal subtasks: semantic consistency checking and pixel-artifact detection. Neglecting either subtask leads to systematic detection failures. We further propose the Task-Model Alignment principle and instantiate it in a two-branch detector, AlignGemini, which combines a VLM trained with pure semantic supervision and a vision model trained with pure pixel-artifact supervision. By enforcing clear specialization, each branch captures complementary cues. Experiments on in-the-wild benchmarks show that AlignGemini improves average accuracy by 9.5% using simplified training data. These results demonstrate that task-model alignment is an effective principle for generalizable AIGI detection.
Key Contributions
- Empirical analysis revealing that VLMs generalize well on semantic AIGI detection but suffer hallucination and poor generalization when trained on pixel-artifact supervision, identifying a fundamental task-model misalignment.
- Task-Model Alignment principle: formalizes AIGI detection as two orthogonal subtasks (semantic consistency checking and pixel-artifact detection) and matches each to the model architecture best suited for it.
- AlignGemini: a two-branch detector combining a VLM trained with pure semantic supervision and a conventional vision model trained with pure artifact supervision, achieving +9.5% average accuracy on in-the-wild benchmarks with simpler training data.
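The two-branch design can be sketched as follows. This is a minimal illustration, not the authors' implementation: the branch internals are stand-in placeholder functions, and the max-fusion rule is an assumption motivated by the paper's claim that neglecting either subtask causes systematic failures.

```python
# Hypothetical sketch of a two-branch detector in the spirit of AlignGemini.
# Each branch is a placeholder for a specialized model; real branches would
# be a semantically supervised VLM and an artifact-supervised vision model.

def semantic_branch(image: dict) -> float:
    """Stand-in for the VLM branch (pure semantic supervision).
    Returns an assumed P(fake) based on semantic inconsistencies."""
    return image.get("semantic_anomaly", 0.0)

def artifact_branch(image: dict) -> float:
    """Stand-in for the vision-model branch (pure pixel-artifact
    supervision). Returns an assumed P(fake) from low-level artifacts."""
    return image.get("pixel_artifact", 0.0)

def align_gemini_score(image: dict) -> float:
    # Assumed fusion: flag an image if EITHER specialized branch fires,
    # since the two subtasks are treated as orthogonal failure modes.
    return max(semantic_branch(image), artifact_branch(image))

# A fake with clean pixels but broken semantics is still caught,
# and vice versa for a semantically plausible image with artifacts.
print(align_gemini_score({"semantic_anomaly": 0.9, "pixel_artifact": 0.1}))
print(align_gemini_score({"semantic_anomaly": 0.0, "pixel_artifact": 0.8}))
```

Max-fusion is only one plausible choice; averaging or a learned combiner would also fit the two-branch framing, and the paper's actual fusion mechanism may differ.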
🛡️ Threat Analysis
Directly addresses AI-generated image (AIGI) detection, proposing a novel two-branch detector architecture (AlignGemini) that identifies synthetic content — a core output-integrity and content-authenticity concern under ML09.