
Counterfeit Answers: Adversarial Forgery against OCR-Free Document Visual Question Answering

Marco Pintore 1, Maura Pintor 1, Dimosthenis Karatzas 2, Battista Biggio 1,3

1 citation · 23 references · arXiv

Published on arXiv · 2512.04554

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Visually imperceptible adversarial perturbations successfully manipulate OCR-free DocVQA models (Pix2Struct, Donut) into producing attacker-specified incorrect answers, including targeted financial misinformation scenarios.

Adversarial Document Forgery

Novel technique introduced


Document Visual Question Answering (DocVQA) enables end-to-end reasoning grounded on information present in a document input. While recent models have shown impressive capabilities, they remain vulnerable to adversarial attacks. In this work, we introduce a novel attack scenario that aims to forge document content in a visually imperceptible yet semantically targeted manner, allowing an adversary to induce specific or generally incorrect answers from a DocVQA model. We develop specialized attack algorithms that can produce adversarially forged documents tailored to different attackers' goals, ranging from targeted misinformation to systematic model failure scenarios. We demonstrate the effectiveness of our approach against two end-to-end state-of-the-art models: Pix2Struct, a vision-language transformer that jointly processes image and text through sequence-to-sequence modeling, and Donut, a transformer-based model that directly extracts text and answers questions from document images. Our findings highlight critical vulnerabilities in current DocVQA systems and call for the development of more robust defenses.
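The forgery idea sketched in the abstract can be illustrated with a minimal targeted attack loop. This is not the paper's algorithm: the real attack differentiates through the full image-to-sequence pipeline of Pix2Struct or Donut, whereas the sketch below uses a hypothetical linear-softmax "answer head" over flattened pixels as a stand-in, and runs standard L-infinity projected gradient descent (PGD) toward an attacker-chosen answer while keeping the perturbation visually small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a DocVQA model: a linear map over pixel
# features followed by a softmax over a tiny "answer vocabulary".
W = rng.normal(size=(3, 16))  # 3 candidate answers, 16 "pixels"

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(x, target):
    """Cross-entropy toward the attacker-chosen answer, and its input gradient."""
    p = softmax(W @ x)
    loss = -np.log(p[target] + 1e-12)
    grad = W.T @ (p - np.eye(3)[target])  # d loss / d x
    return loss, grad

def targeted_pgd(x0, target, eps=0.03, alpha=0.005, steps=200):
    """Sign-gradient PGD, projected back into the L-infinity eps-ball around x0."""
    x = x0.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x, target)
        x = x - alpha * np.sign(g)          # step toward the target answer
        x = np.clip(x, x0 - eps, x0 + eps)  # imperceptibility constraint
        x = np.clip(x, 0.0, 1.0)            # valid pixel range
    return x

x_clean = rng.uniform(0.3, 0.7, size=16)  # a toy "document image"
target = 2                                # attacker-specified answer index
x_adv = targeted_pgd(x_clean, target)

print(np.argmax(softmax(W @ x_adv)))
print(np.max(np.abs(x_adv - x_clean)) <= 0.03 + 1e-9)  # True: stays in the eps ball
```

The eps-ball projection is what makes the forged document visually indistinguishable from the original; the loss choice (cross-entropy toward a specific answer) is what makes the attack targeted rather than merely disruptive.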


Key Contributions

  • First formal threat model for adversarial robustness of OCR-free DocVQA systems
  • Specialized attack algorithms for producing adversarially forged documents targeting both specific misinformation (targeted) and systematic model failure (untargeted) goals
  • Empirical evaluation against Pix2Struct and Donut demonstrating critical vulnerabilities in current DocVQA architectures

🛡️ Threat Analysis

Input Manipulation Attack

The core contribution is gradient-based adversarial perturbation of document images at inference time, forcing a DocVQA model to produce either an attacker-specified answer (targeted) or an arbitrary incorrect one (untargeted), a classic Input Manipulation Attack.
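The targeted and untargeted goals differ only in the objective being optimized. A minimal sketch of the two loss functions, assuming a toy answer distribution (these function names are illustrative, not from the paper):

```python
import numpy as np

def targeted_loss(p, target):
    """Minimized by PGD: push probability mass onto the attacker-chosen answer."""
    return -np.log(p[target] + 1e-12)

def untargeted_loss(p, true_answer):
    """Minimized by PGD: drive probability mass away from the correct answer."""
    return np.log(p[true_answer] + 1e-12)

p = np.array([0.7, 0.2, 0.1])  # toy model output over three candidate answers
print(targeted_loss(p, 2))     # large: the target answer is currently unlikely
print(untargeted_loss(p, 0))   # descending this degrades the correct answer
```

Note the symmetry: the untargeted loss is just the negated cross-entropy on the ground-truth answer, so the same optimizer serves both attacker goals.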


Details

Domains
vision · nlp · multimodal
Model Types
vlm · transformer
Threat Tags
white_box · inference_time · targeted · untargeted · digital
Datasets
DocVQA
Applications
document visual question answering · automated document processing · invoice/form understanding