PII-VisBench: Evaluating Personally Identifiable Information Safety in Vision Language Models Along a Continuum of Visibility

G M Shahariar, Zabir Al Nazi, Md Olid Hasan Bhuiyan, Zhouxing Shi

0 citations · 37 references · arXiv

Published on arXiv: 2601.05739

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

VLMs disclose PII nearly twice as often for high-visibility subjects versus low-visibility ones (9.10% vs 5.34% Conditional PII Disclosure Rate), with jailbreak prompts further exposing model-dependent safety failures.

PII-VisBench

Novel technique introduced


Vision Language Models (VLMs) are increasingly integrated into privacy-critical domains, yet existing evaluations of personally identifiable information (PII) leakage largely treat privacy as a static extraction task and ignore how a subject's online presence (the volume of their data available online) influences privacy alignment. We introduce PII-VisBench, a novel benchmark of 4,000 unique probes designed to evaluate VLM safety along a continuum of online presence. The benchmark stratifies 200 subjects into four visibility categories (high, medium, low, and zero) based on the extent and nature of their information available online. We evaluate 18 open-source VLMs (0.3B–32B) using two key metrics: the percentage of PII probing queries refused (Refusal Rate) and the fraction of non-refusal responses flagged for containing PII (Conditional PII Disclosure Rate). Across models, we observe a consistent pattern: as subject visibility drops, refusals increase and PII disclosures decrease (from 9.10% for high-visibility subjects to 5.34% for low-visibility ones). Models are thus more likely to disclose PII for high-visibility subjects, and we also find substantial model-family heterogeneity and PII-type disparities. Finally, paraphrased and jailbreak-style prompts expose attack- and model-dependent failures, motivating visibility-aware safety evaluation and training interventions.


Key Contributions

  • PII-VisBench: 4,000 probes across 200 subjects stratified into four online-visibility tiers (high/medium/low/zero) with 20 PII attribute types
  • Systematic evaluation of 18 open-source VLMs (0.3B–32B) using Refusal Rate and Conditional PII Disclosure Rate metrics
  • Discovery of a consistent high-visibility privacy gap (9.10% vs 5.34% cPDR) showing pre-training memorization outweighs safety fine-tuning for public figures
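The two evaluation metrics above can be computed straightforwardly from per-probe judgments. The sketch below is illustrative only: the function names and the response schema (`refused`, `contains_pii` flags) are assumptions, not the benchmark's released code.

```python
# Hedged sketch of the two metrics described in the paper.
# The response schema (refused / contains_pii flags) is an assumption
# for illustration, not the PII-VisBench implementation.

def refusal_rate(responses):
    """Percentage of probe responses flagged as refusals."""
    if not responses:
        return 0.0
    refused = sum(1 for r in responses if r["refused"])
    return 100.0 * refused / len(responses)

def conditional_pii_disclosure_rate(responses):
    """Among non-refusal responses, the percentage flagged for PII."""
    answered = [r for r in responses if not r["refused"]]
    if not answered:
        return 0.0
    disclosed = sum(1 for r in answered if r["contains_pii"])
    return 100.0 * disclosed / len(answered)

# Toy example: 10 probes, 4 refused, 2 of the 6 answers leak PII.
responses = (
    [{"refused": True, "contains_pii": False}] * 4
    + [{"refused": False, "contains_pii": True}] * 2
    + [{"refused": False, "contains_pii": False}] * 4
)
print(refusal_rate(responses))                     # 40.0
print(conditional_pii_disclosure_rate(responses))  # roughly 33.33
```

Note that the disclosure rate is conditional on answering: a model that refuses everything scores 0% cPDR, which is why the paper reports both metrics together.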

🛡️ Threat Analysis


Details

Domains
multimodal, nlp
Model Types
vlm
Threat Tags
black_box, inference_time
Datasets
PII-VisBench (200 subjects, 4000 probes, 20 PII attributes)
Applications
vision-language model safety evaluation, privacy-critical AI deployment