
Non-Parametric Probabilistic Robustness: A Conservative Metric with Optimized Perturbation Distributions

Zheng Wang, Yi Zhang, Siddartha Khastgir, Carsten Maple, Xingyu Zhao


Published on arXiv: 2511.17380

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

NPPR yields up to 40% more conservative (lower) probabilistic robustness estimates than methods that assume common fixed perturbation distributions, making it a stricter practical robustness metric.

NPPR (Non-Parametric Probabilistic Robustness)

Novel technique introduced


Deep learning (DL) models, despite their remarkable success, remain vulnerable to small input perturbations that can cause erroneous outputs, motivating the recent proposal of probabilistic robustness (PR) as a complementary alternative to adversarial robustness (AR). However, existing PR formulations assume a fixed and known perturbation distribution, an unrealistic expectation in practice. To address this limitation, we propose non-parametric probabilistic robustness (NPPR), a more practical PR metric that does not rely on any predefined perturbation distribution. Following the non-parametric paradigm in statistical modeling, NPPR learns an optimized perturbation distribution directly from data, enabling conservative PR evaluation under distributional uncertainty. We further develop an NPPR estimator based on a Gaussian Mixture Model (GMM) with Multilayer Perceptron (MLP) heads and bicubic up-sampling, covering various input-dependent and input-independent perturbation scenarios. Theoretical analyses establish the relationships among AR, PR, and NPPR. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet across ResNet18/50, WideResNet50, and VGG16 validate NPPR as a more practical robustness metric, showing up to 40% more conservative (lower) PR estimates compared to estimates that assume the common perturbation distributions used in state-of-the-art work.
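To make the PR definition above concrete, the sketch below estimates probabilistic robustness at a single input by Monte Carlo sampling: draw perturbations from an assumed distribution and measure how often the prediction survives. This is the fixed-distribution setting that NPPR relaxes, not the paper's estimator; the toy classifier and the Gaussian perturbation choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Hypothetical stand-in for a DL model: predict the index of the
    # largest coordinate. Any classifier could be plugged in here.
    return int(np.argmax(x))

def pr_estimate(x, sigma, n_samples=20_000):
    """Monte Carlo estimate of probabilistic robustness at input x:
    the fraction of perturbations delta ~ N(0, sigma^2 I) for which
    the prediction on x + delta matches the clean prediction."""
    base = toy_classifier(x)
    deltas = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    preds = np.argmax(x + deltas, axis=1)
    return float(np.mean(preds == base))

x = np.array([1.0, 0.2, 0.1])
pr_small = pr_estimate(x, sigma=0.1)  # mild noise: label rarely flips
pr_large = pr_estimate(x, sigma=1.0)  # strong noise: label flips more often
print(pr_small, pr_large)
```

Note that the estimate depends entirely on the assumed `sigma`; NPPR's point is that a learned, adversarially optimized perturbation distribution can drive this number substantially lower than any single hand-picked choice.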


Key Contributions

  • Introduces NPPR, a non-parametric probabilistic robustness metric that learns an optimized perturbation distribution directly from data without assuming a fixed prior distribution
  • Develops a GMM+MLP estimator with bicubic upsampling supporting both input-dependent and input-independent perturbation scenarios
  • Provides theoretical analysis of the relationships among adversarial robustness, standard probabilistic robustness, and NPPR, validated on CIFAR-10/100 and Tiny ImageNet across ResNet18/50, WideResNet50, and VGG16
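The estimator's core object is a perturbation distribution parameterized as a Gaussian mixture. Below is a minimal sketch of the input-independent case: sampling perturbations from a diagonal-covariance GMM. All parameters here are made-up placeholders; in NPPR they would be the output of optimization against the model under evaluation, and the MLP heads and bicubic up-sampling of the paper's estimator are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical learned GMM over flattened perturbations of a toy
# 4-dimensional "image". K = 3 components with diagonal covariance.
weights = np.array([0.5, 0.3, 0.2])            # mixture weights, sum to 1
means = np.array([[0.00,  0.00, 0.0, 0.0],
                  [0.05, -0.05, 0.0, 0.0],
                  [-0.05, 0.05, 0.0, 0.0]])    # per-component means
stds = np.full((3, 4), 0.02)                   # per-component diagonal stds

def sample_perturbations(n):
    """Draw n perturbations from the mixture: choose a component for
    each sample according to the weights, then sample from that
    component's diagonal Gaussian."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comps], stds[comps])

deltas = sample_perturbations(1000)
print(deltas.shape)
```

Evaluating the model on `x + delta` for such samples, and optimizing the mixture parameters to minimize the resulting accuracy, is what yields the conservative (worst-case-over-distributions) PR estimate.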

🛡️ Threat Analysis

Input Manipulation Attack

NPPR is an evaluation framework specifically for adversarial/probabilistic robustness — it learns an optimized (worst-case) perturbation distribution to conservatively bound a model's vulnerability to input perturbations, the core threat modeled by ML01.


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box · digital · inference_time
Datasets
CIFAR-10 · CIFAR-100 · Tiny ImageNet
Applications
image classification