Defense · 2025

Probabilistic Robustness Analysis in High Dimensional Space: Application to Semantic Segmentation Network

Navid Hashemi 1, Samuel Sasaki 1, Diego Manzanas Lopez 1, Lars Lindemann 2, Ipek Oguz 1, Meiyi Ma 1, Taylor T. Johnson 1


Published on arXiv: 2509.11838

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The proposed clipping block framework delivers provable probabilistic robustness guarantees for segmentation models under general ℓ_p perturbations while substantially reducing conservatism versus prior surrogate-based and randomized smoothing approaches.

Clipping Block with Conformal Inference

Novel technique introduced


Semantic segmentation networks (SSNs) are central to safety-critical applications such as medical imaging and autonomous driving, where robustness under uncertainty is essential. However, existing probabilistic verification methods often fail to scale with the complexity and dimensionality of modern segmentation tasks, producing guarantees that are overly conservative and of limited practical value. We propose a probabilistic verification framework that is architecture-agnostic and scalable to high-dimensional input-output spaces. Our approach employs conformal inference (CI), enhanced by a novel technique that we call the clipping block, to provide provable guarantees while mitigating the excessive conservatism of prior methods. Experiments on large-scale segmentation models across CamVid, OCTA-500, Lung Segmentation, and Cityscapes demonstrate that our framework delivers reliable safety guarantees while substantially reducing conservatism compared to state-of-the-art approaches on segmentation tasks. We also provide a public GitHub repository (https://github.com/Navidhashemicodes/SSN_Reach_CLP_Surrogate) for this approach, to support reproducibility.


Key Contributions

  • Novel clipping block technique that replaces ReLU surrogate models, enabling scalable probabilistic reachability analysis without fidelity concerns or sparse-perturbation restrictions
  • Architecture-agnostic conformal inference framework that scales certified robustness verification to high-dimensional segmentation outputs
  • Substantially reduced conservatism compared to state-of-the-art probabilistic verification methods on CamVid, OCTA-500, Lung Segmentation, and Cityscapes

🛡️ Threat Analysis

Input Manipulation Attack

Provides probabilistic robustness guarantees against adversarial ℓ_p input perturbations for segmentation networks; certified robustness is a core defense against ML01. The clipping block combined with conformal inference verifies that model outputs remain within safe bounds under adversarial perturbations at inference time.
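The conformal inference step above can be illustrated with a minimal sketch. This is not the paper's implementation: the model, the ℓ_∞ perturbation sampling, and the per-pixel deviation score are illustrative assumptions. The sketch shows the standard split-conformal recipe, where nonconformity scores are collected on sampled perturbations and a finite-sample quantile yields a bound that holds with probability at least 1 − δ over a fresh random perturbation.

```python
import numpy as np

def conformal_robustness_bound(model, x, epsilon, n_calib=200, delta=0.1, seed=None):
    """Split-conformal bound on output deviation under random l_inf perturbations.

    Samples n_calib perturbations with ||d||_inf <= epsilon, records the
    worst-pixel deviation of model(x + d) from model(x) as a nonconformity
    score, and returns the conformal quantile of those scores: with
    probability >= 1 - delta over a fresh perturbation drawn the same way,
    the deviation stays below the returned bound.
    """
    rng = np.random.default_rng(seed)
    y0 = model(x)
    scores = np.empty(n_calib)
    for i in range(n_calib):
        d = rng.uniform(-epsilon, epsilon, size=x.shape)
        scores[i] = np.max(np.abs(model(x + d) - y0))  # nonconformity score
    # finite-sample conformal quantile: the ceil((n+1)(1-delta))-th order statistic
    k = min(int(np.ceil((n_calib + 1) * (1.0 - delta))), n_calib)
    return np.sort(scores)[k - 1]

# Toy stand-in for a segmentation network: a fixed linear map + sigmoid
# (purely illustrative; the real method is architecture-agnostic).
W = np.random.default_rng(0).normal(size=(16, 16)) * 0.1
model = lambda x: 1.0 / (1.0 + np.exp(-(W @ x)))

x = np.zeros(16)
bound = conformal_robustness_bound(model, x, epsilon=0.05, n_calib=500, delta=0.1, seed=1)
print(f"90%-probability deviation bound: {bound:.4f}")
```

Note that this toy score is a random-perturbation guarantee, not a worst-case certificate; the paper's contribution is making this style of guarantee scale to high-dimensional segmentation outputs while keeping the quantile bound tight (the role of the clipping block).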


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, untargeted, digital
Datasets
CamVid, OCTA-500, Lung Segmentation, Cityscapes
Applications
semantic segmentation, medical imaging, autonomous driving perception