
GPT4o-Receipt: A Dataset and Human Study for AI-Generated Document Forensics

Yan Zhang, Simiao Ren, Ankit Raj, En Wei, Dennis Ng, Alex Shen, Jiayue Xu, Yuxin Zhang, Evelyn Marotta


Published on arXiv (2603.11442)

Output Integrity Attack

OWASP ML Top 10: ML09

Key Finding

Human annotators show the largest visual realism discrimination gap of any evaluator, yet their binary detection F1 falls below both Claude Sonnet 4 and Gemini 2.5 Flash. The reason: the dominant forensic signal, arithmetic errors, is verifiable by LLMs but invisible to visual inspection.

GPT4o-Receipt

Novel technique introduced


Can humans detect AI-generated financial documents better than machines? We present GPT4o-Receipt, a benchmark of 1,235 receipt images pairing GPT-4o-generated receipts with authentic ones from established datasets, evaluated by five state-of-the-art multimodal LLMs and a 30-annotator crowdsourced perceptual study. Our findings reveal a striking paradox: humans are better at seeing AI artifacts, yet worse at detecting AI documents. Human annotators exhibit the largest visual discrimination gap of any evaluator, yet their binary detection F1 falls well below Claude Sonnet 4 and below Gemini 2.5 Flash. This paradox resolves once the mechanism is understood: the dominant forensic signals in AI-generated receipts are arithmetic errors, invisible to visual inspection but systematically verifiable by LLMs. Humans cannot perceive that a subtotal is incorrect; LLMs verify it in milliseconds. Beyond the human-LLM comparison, our five-model evaluation reveals dramatic performance disparities and calibration differences that render simple accuracy metrics insufficient for detector selection. GPT4o-Receipt, the evaluation framework, and all results are released publicly to support future research in AI document forensics.
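The arithmetic-verification signal the abstract describes can be sketched in a few lines. The schema below (quantity/unit-price line items, a flat subtotal + tax = total model, and the function name itself) is an illustrative assumption, not the paper's actual pipeline; real receipts may involve discounts, rounding rules, or multiple tax rates.

```python
from decimal import Decimal

def check_receipt_arithmetic(items, subtotal, tax, total, tol=Decimal("0.01")):
    """Flag arithmetic inconsistencies that visual inspection cannot see.

    items: list of (quantity, unit_price) string pairs.
    Assumes a simple subtotal + tax = total model (an illustrative
    simplification, not the paper's actual verification pipeline).
    """
    computed_subtotal = sum(Decimal(q) * Decimal(p) for q, p in items)
    flags = []
    if abs(computed_subtotal - Decimal(subtotal)) > tol:
        flags.append("subtotal mismatch")
    if abs(Decimal(subtotal) + Decimal(tax) - Decimal(total)) > tol:
        flags.append("total mismatch")
    return flags

# A receipt whose printed subtotal disagrees with its line items:
# 2 x 3.50 + 1 x 4.25 = 11.25, but the receipt prints 11.75.
flags = check_receipt_arithmetic(
    items=[("2", "3.50"), ("1", "4.25")],
    subtotal="11.75", tax="0.90", total="12.65",
)
# flags → ["subtotal mismatch"] (the tax line is internally consistent)
```

This is exactly the kind of check a human annotator looking at pixels would never perform, but that an LLM with extracted line items can run reliably.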


Key Contributions

  • GPT4o-Receipt: a dataset of 1,235 paired AI-generated and authentic receipt images with a public evaluation framework for AI document forensics
  • Discovery that arithmetic errors — not visual artifacts — are the dominant forensic signal in AI-generated receipts, explaining why LLMs outperform human annotators despite humans having a larger visual realism discrimination gap
  • Multi-model evaluation revealing dramatic performance disparities and calibration differences, arguing simple accuracy is insufficient for AI document detector selection

🛡️ Threat Analysis

Output Integrity Attack

Directly addresses AI-generated content detection and output authenticity verification. The paper proposes a forensic benchmark for distinguishing real vs. AI-generated financial documents and identifies arithmetic consistency as a novel, domain-specific forensic signal — a methodological contribution to AI output integrity detection.


Details

Domains
vision, nlp, multimodal
Model Types
vlm, llm
Threat Tags
inference_time
Datasets
GPT4o-Receipt, SROIE, RVL-CDIP
Applications
document forensics, financial document authentication, AI-generated document detection