benchmark · arXiv · Mar 31, 2026
Marina Villanueva, Juan M. Espin, Juan E. Tapia · Facephi · Hochschule Darmstadt
Evaluates three vision-language models for detecting presentation attacks on ID documents using seven prompt types, finding that they generalize poorly
Input Manipulation Attack · vision · multimodal
The integration of multimodal models into Presentation Attack Detection (PAD) for ID documents represents a significant advancement in biometric security. Traditional PAD systems rely solely on visual features, which often fail to detect sophisticated spoofing attacks. This study explores combining visual and textual modalities by using pre-trained multimodal models, such as PaliGemma, LLaVA, and Qwen, to enhance the detection of presentation attacks on ID documents. The approach merges deep visual embeddings with contextual metadata (e.g., document type, issuer, and date). However, experimental results indicate that these models struggle to reliably detect presentation attacks on ID documents.
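The abstract describes merging visual inputs with contextual metadata (document type, issuer, date) via prompts to a vision-language model. The paper does not publish its seven prompt templates, so the sketch below is a hypothetical illustration of how such metadata might be fused into a single PAD query string; the function name and wording are assumptions, not the authors' templates.

```python
def build_pad_prompt(doc_type: str, issuer: str, date: str) -> str:
    """Hypothetical prompt builder: fuses document metadata with a
    binary presentation-attack question for a vision-language model.
    The resulting text would accompany the document image in the
    model's multimodal input."""
    return (
        f"The attached image shows a {doc_type} issued by {issuer} on {date}. "
        "Is this a genuine (bona fide) document or a presentation attack? "
        "Answer with exactly one of: 'bona fide' or 'attack'."
    )


prompt = build_pad_prompt("national ID card", "Spain", "2021-06-15")
```

In practice this string, paired with the document image, would be passed to a model such as Qwen or LLaVA through its chat/processor interface, and the free-text answer parsed back into a binary PAD label.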
vlm · multimodal · transformer