defense · arXiv · Dec 23, 2025
Yizhak Yisrael Elboher, Avraham Raviv, Amihay Elboher et al. · The Hebrew University of Jerusalem · Bar Ilan University · Ben-Gurion University of the Negev +1 more
Formal verification framework for early exit neural networks that certifies local robustness and improves verification efficiency
Input Manipulation Attack · vision · nlp
Ensuring the safety and efficiency of AI systems is a central goal of modern research. Formal verification provides guarantees of neural network robustness, while early exits improve inference efficiency by enabling intermediate predictions. Yet verifying networks with early exits introduces new challenges due to their conditional execution paths. In this work, we define a robustness property tailored to early exit architectures and show how off-the-shelf solvers can be used to assess it. We present a baseline algorithm, enhanced with an early stopping strategy and heuristic optimizations that maintain soundness and completeness. Experiments on multiple benchmarks validate our framework's effectiveness and demonstrate the performance gains of the improved algorithm. Alongside the natural inference acceleration provided by early exits, we show that they also enhance verifiability, enabling more queries to be solved in less time than standard networks require. Together with a robustness analysis, we show how these metrics can help users navigate the inherent trade-off between accuracy and efficiency.
cnn · transformer
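The local-robustness property this abstract describes can be made concrete with a small sketch. The following is a minimal, empirical stand-in (not the paper's method): it models an early-exit network as a list of (layer, exit head) callables with a fixed confidence threshold, and probes whether the predicted label is stable across an L∞ eps-ball. All names, the threshold, and the sampling-based check are our assumptions; the paper's framework uses a sound and complete off-the-shelf solver, which sampling cannot replace, since sampling can only falsify, never certify.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_forward(x, stages, threshold=0.9):
    """Run the network, stopping at the first exit whose softmax
    confidence clears the threshold; return (label, exit_index)."""
    h = x
    for i, (layer, head) in enumerate(stages):
        h = layer(h)
        p = softmax(head(h))
        if p.max() >= threshold or i == len(stages) - 1:
            return int(p.argmax()), i

def is_locally_robust(x, stages, eps, n_samples=1000, seed=0):
    """Empirical stand-in for the formal query: does every x' in the
    L_inf eps-ball around x keep the same predicted label, regardless
    of which exit fires? A verifier checks this exhaustively; random
    sampling can only find counterexamples, never certify robustness."""
    rng = np.random.default_rng(seed)
    label, _ = early_exit_forward(x, stages)
    for _ in range(n_samples):
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        adv_label, _ = early_exit_forward(x_adv, stages)
        if adv_label != label:
            return False  # counterexample found
    return True  # no counterexample among samples (not a certificate)

# Toy usage: two stages of random linear layers with 3-class exit heads.
rng = np.random.default_rng(1)
stages = [(lambda h, W=rng.normal(size=(4, 4)): W @ h,
           lambda h, V=rng.normal(size=(3, 4)): V @ h) for _ in range(2)]
x = rng.normal(size=4)
print(is_locally_robust(x, stages, eps=0.01))
```

Note that the property quantifies over the exit taken as well as the label, which is why verification of early-exit networks must reason about conditional execution paths rather than a single fixed computation graph.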
benchmark · arXiv · Jan 9, 2026
G M Shahariar, Zabir Al Nazi, Md Olid Hasan Bhuiyan et al. · University of California
Benchmarks PII leakage across 18 VLMs using 4,000 probes, revealing a high-visibility privacy gap in which famous subjects' data leaks more often
Sensitive Information Disclosure · multimodal · nlp
Vision Language Models (VLMs) are increasingly integrated into privacy-critical domains, yet existing evaluations of personally identifiable information (PII) leakage largely treat privacy as a static extraction task and ignore how a subject's online presence (the volume of their data available online) influences privacy alignment. We introduce PII-VisBench, a novel benchmark containing 4000 unique probes designed to evaluate VLM safety through the continuum of online presence. The benchmark stratifies 200 subjects into four visibility categories (high, medium, low, and zero) based on the extent and nature of their information available online. We evaluate 18 open-source VLMs (0.3B-32B) on two key metrics: the percentage of PII probing queries refused (Refusal Rate) and the fraction of non-refusal responses flagged as containing PII (Conditional PII Disclosure Rate). Across models, we observe a consistent pattern: refusals increase and PII disclosures decrease (from 9.10% for high-visibility to 5.34% for low-visibility subjects) as subject visibility drops. Models are more likely to disclose PII for high-visibility subjects, and we also find substantial model-family heterogeneity and PII-type disparities. Finally, paraphrasing and jailbreak-style prompts expose attack- and model-dependent failures, motivating visibility-aware safety evaluation and training interventions.
vlm
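The two metrics in this abstract are simple to state precisely. Below is a minimal sketch of both, assuming each probe result is a dict with boolean flags `refused` and `contains_pii` (the field names are ours, not PII-VisBench's):

```python
def refusal_rate(results):
    """Fraction of PII probing queries the model refused."""
    return sum(r["refused"] for r in results) / len(results)

def conditional_pii_disclosure_rate(results):
    """Fraction of NON-refusal responses flagged as containing PII."""
    answered = [r for r in results if not r["refused"]]
    if not answered:
        return 0.0
    return sum(r["contains_pii"] for r in answered) / len(answered)

# Example: 4 probes, 1 refusal; of the 3 answered, 1 leaked PII.
results = [
    {"refused": True,  "contains_pii": False},
    {"refused": False, "contains_pii": True},
    {"refused": False, "contains_pii": False},
    {"refused": False, "contains_pii": False},
]
print(refusal_rate(results))                     # 0.25
print(conditional_pii_disclosure_rate(results))  # 0.333...
```

Conditioning the disclosure rate on non-refusals keeps the two metrics independent: a model cannot look safer on disclosure simply by refusing more queries.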