Multimodal Models Meet Presentation Attack Detection on ID Documents
Marina Villanueva¹, Juan M. Espin¹, Juan E. Tapia²
Published on arXiv
2603.29422
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
All three VLMs (PaliGemma, LLaVA, Qwen) struggle to accurately detect presentation attacks on ID documents across multiple prompt formulations
The integration of multimodal models into Presentation Attack Detection (PAD) for ID documents represents a significant advancement in biometric security. Traditional PAD systems rely solely on visual features, which often fail against sophisticated spoofing attacks. This study explores combining visual and textual modalities by using pre-trained multimodal models (PaliGemma, LLaVA, and Qwen) to enhance the detection of presentation attacks on ID documents, merging deep visual embeddings with contextual metadata (e.g., document type, issuer, and date). However, experimental results indicate that these models struggle to detect presentation attacks on ID documents accurately.
Key Contributions
- First benchmark of three pre-trained VLMs (PaliGemma, LLaVA, Qwen) for ID document presentation attack detection
- Evaluation of seven prompt engineering strategies (single, multiple, with examples, task-oriented, recipe-style)
- Demonstration that current VLMs fail to generalize to PAD on ID documents despite strong performance on other vision-language tasks
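To make the prompt-engineering evaluation concrete, here is a minimal sketch of how such prompt variants and a binary PAD verdict parser might be structured. The prompt texts, the variant names, and the keyword-based parser are illustrative assumptions, not the paper's actual prompts or pipeline; a real setup would send each prompt plus the document image to the VLM under test.

```python
def build_prompts(document_type: str, issuer: str, date: str) -> dict[str, str]:
    """Build several prompt variants (hypothetical examples of the strategy
    families named in the paper: single, with context, with examples,
    task-oriented, recipe-style) for querying a VLM about an ID image."""
    context = f"Document type: {document_type}; issuer: {issuer}; date: {date}."
    return {
        "single": "Is this ID document bona fide or a presentation attack?",
        "with_context": f"{context} Is this ID document genuine or spoofed?",
        "with_examples": (
            "Examples of attacks: printed copy, screen replay, composite forgery. "
            "Classify this ID document as bona fide or attack."
        ),
        "task_oriented": (
            "You are a document fraud examiner. Decide whether the image "
            "shows a genuine ID document."
        ),
        "recipe_style": (
            "Step 1: inspect print texture. Step 2: check for screen artifacts. "
            "Step 3: answer 'bona fide' or 'attack'."
        ),
    }

def parse_verdict(reply: str) -> str:
    """Map a free-text VLM reply onto a binary PAD label.

    Naive keyword matching; evaluating free-text answers is itself one of
    the difficulties when benchmarking VLMs on a binary detection task.
    """
    text = reply.lower()
    return "attack" if ("attack" in text or "spoof" in text) else "bona fide"

prompts = build_prompts("ID card", "ES", "2024-01-01")
print(len(prompts))
print(parse_verdict("This looks like a screen replay attack."))
```

A benchmark loop would iterate each prompt variant over bona fide and attack images, collect the parsed verdicts, and compute standard PAD error rates per variant.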
🛡️ Threat Analysis
The paper addresses presentation attack detection (PAD): distinguishing spoofed/fake ID documents (printed, displayed, composite, synthetic) from bona fide documents at inference time. PAD is fundamentally about detecting adversarial inputs designed to evade biometric authentication systems. However, this is a BORDERLINE case: the paper evaluates VLMs as detectors (a tool application) rather than proposing new attacks or defenses against adversarial examples. The primary contribution is benchmarking existing models on a specific detection task, not advancing ML security methodology.