LAA-X: Unified Localized Artifact Attention for Quality-Agnostic and Generalizable Face Forgery Detection
Dat Nguyen 1, Enjie Ghorbel 2,1, Anis Kacem 1, Marcella Astrid 1, Djamila Aouada 1
Published on arXiv
2604.04086
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves state-of-the-art generalization to unseen manipulations despite training only on real and pseudo-fake samples
LAA-X (Localized Artifact Attention X)
Novel technique introduced
In this paper, we propose Localized Artifact Attention X (LAA-X), a novel deepfake detection framework that is both robust to high-quality forgeries and capable of generalizing to unseen manipulations. Existing approaches typically rely on binary classifiers coupled with implicit attention mechanisms, which often fail to generalize beyond known manipulations. In contrast, LAA-X introduces an explicit attention strategy based on a multi-task learning framework combined with blending-based data synthesis. Auxiliary tasks are designed to guide the model toward localized, artifact-prone (i.e., vulnerable) regions. The proposed framework is compatible with both CNN and transformer backbones, resulting in two different versions, namely, LAA-Net and LAA-Former, respectively. Despite being trained only on real and pseudo-fake samples, LAA-X competes with state-of-the-art methods across multiple benchmarks. Code and pre-trained weights for LAA-Net (https://github.com/10Ring/LAA-Net) and LAA-Former (https://github.com/10Ring/LAA-Former) are publicly available.
Key Contributions
- Explicit localized artifact attention mechanism for generalizable deepfake detection
- Multi-task learning framework with blending-based data synthesis
- Compatible with both CNN (LAA-Net) and transformer (LAA-Former) backbones
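The abstract describes training on real and pseudo-fake samples produced by blending-based data synthesis, with auxiliary tasks that steer attention toward localized blending artifacts. The sketch below illustrates this idea in a minimal, self-contained form: a color-jittered copy of a real image is blended back onto itself through a soft mask, and a boundary heatmap marks the artifact-prone region as a supervision target. All function names, the elliptical mask shape, and the jitter range are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def make_blending_mask(h, w, center=None, axes=None):
    """Soft elliptical mask in [0, 1] marking the region to blend (assumed shape)."""
    if center is None:
        center = (h // 2, w // 2)
    if axes is None:
        axes = (h // 4, w // 4)
    ys, xs = np.mgrid[0:h, 0:w]
    d = ((ys - center[0]) / axes[0]) ** 2 + ((xs - center[1]) / axes[1]) ** 2
    # smooth falloff toward the ellipse boundary
    return np.clip(1.0 - d, 0.0, 1.0) ** 2

def synthesize_pseudo_fake(real, rng):
    """Blend a mildly jittered copy of `real` onto itself (self-blending sketch).

    Returns the pseudo-fake image and a heatmap that is large along the
    blending boundary, i.e. the localized artifact-prone region an auxiliary
    task could be trained to highlight.
    """
    h, w, _ = real.shape
    mask = make_blending_mask(h, w)[..., None]       # (h, w, 1)
    jitter = rng.uniform(0.9, 1.1, size=(1, 1, 3))   # mild per-channel color gain
    source = np.clip(real * jitter, 0.0, 1.0)
    pseudo_fake = mask * source + (1.0 - mask) * real
    # boundary heatmap: gradient magnitude of the mask transition
    gy, gx = np.gradient(mask[..., 0])
    heatmap = np.hypot(gy, gx)
    heatmap /= heatmap.max() + 1e-8
    return pseudo_fake, heatmap
```

A detector trained on `(real, 0)` and `(pseudo_fake, 1)` pairs, with the heatmap as an auxiliary regression target, never needs actual forgeries at training time, which is what makes the method quality-agnostic in spirit.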
🛡️ Threat Analysis
Detects AI-generated deepfake faces, directly addressing output integrity and content authenticity verification.