Defense · 2026

Cascading Robustness Verification: Toward Efficient Model-Agnostic Certification

Mohammadreza Maleki 1, Rushendra Sidibomma 2, Arman Adibi 3, Reza Samavi 1,4

0 citations · 27 references · arXiv (Cornell University)


Published on arXiv (2602.04236)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

CRV certifies at least as many inputs as the strongest single benchmark verifier while reducing verification runtime by up to ~90%.

Cascading Robustness Verification (CRV)

Novel technique introduced


Certifying neural network robustness against adversarial examples is challenging, as formal guarantees often require solving non-convex problems. Hence, incomplete verifiers are widely used because they scale efficiently and substantially reduce the cost of robustness verification compared to complete methods. However, relying on a single verifier can underestimate robustness because of loose approximations or misalignment with training methods. In this work, we propose Cascading Robustness Verification (CRV), which goes beyond an engineering improvement by exposing fundamental limitations of existing robustness metrics and introducing a framework that enhances both reliability and efficiency. CRV is a model-agnostic verifier, meaning that its robustness guarantees are independent of the model's training process. The key insight behind the CRV framework is that, when using multiple verification methods, an input is certifiably robust if at least one method certifies it as robust. Rather than relying solely on a single verifier with a fixed constraint set, CRV progressively applies multiple verifiers to balance the tightness of the bound against computational cost. Starting with the least expensive method, CRV halts as soon as an input is certified as robust; otherwise, it proceeds to more expensive methods. For computationally expensive methods, we introduce a Stepwise Relaxation (SR) algorithm that incrementally adds constraints and checks for certification at each step, thereby avoiding unnecessary computation. Our theoretical analysis demonstrates that CRV achieves equal or higher verified accuracy compared to powerful but computationally expensive incomplete verifiers in the cascade, while significantly reducing verification overhead. Empirical results confirm that CRV certifies at least as many inputs as benchmark approaches, while improving runtime efficiency by up to ~90%.
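The cascade logic described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the verifier interface, ordering, and return convention are assumptions made for the sketch (incomplete verifiers return "certified" or "unknown", never "not robust").

```python
# Hypothetical sketch of the CRV cascade: run incomplete verifiers
# cheapest-first and halt on the first certification. The Verifier
# interface below is an assumption for illustration only.
from typing import Callable, List, Tuple

# A verifier takes (model, x, eps) and returns True if it certifies
# robustness within the eps-ball around x; False means "unknown",
# since incomplete verifiers cannot prove non-robustness.
Verifier = Callable[[object, object, float], bool]

def crv_certify(model, x, eps: float,
                cascade: List[Verifier]) -> Tuple[bool, int]:
    """Apply verifiers in order of increasing cost.

    Returns (certified, index of the certifying verifier, or -1).
    """
    for i, verify in enumerate(cascade):
        if verify(model, x, eps):
            # Certified: all more expensive verifiers are skipped,
            # which is where the runtime savings come from.
            return True, i
    return False, -1  # no verifier in the cascade certified this input
```

Because an input is robust if any sound verifier certifies it, the cascade's verified accuracy is at least that of its strongest member, while most inputs exit early at the cheap stages.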


Key Contributions

  • Cascading Robustness Verification (CRV) framework that sequences multiple incomplete verifiers (LP, SDP, etc.) from cheapest to most expensive, halting as soon as any one certifies an input as robust
  • Stepwise Relaxation Algorithm (SR) that incrementally adds constraints within expensive verifiers to avoid redundant computation
  • Theoretical proof that CRV achieves equal or higher verified accuracy than any single verifier in the cascade, with empirical runtime improvements up to ~90%
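The Stepwise Relaxation idea from the second contribution can be sketched in the same spirit. The `solve_bound` oracle and the grouping of constraints below are assumptions for illustration; the paper's actual constraint sets and solver are verifier-specific.

```python
# Hypothetical sketch of Stepwise Relaxation (SR): instead of solving an
# expensive verification problem once with the full constraint set, add
# constraints in increments and re-check certification after each step,
# so the solver can stop as soon as the relaxation is tight enough.
from typing import Callable, List, Sequence

def stepwise_relaxation(
    solve_bound: Callable[[Sequence[str]], float],  # certified lower bound
                                                    # on the class margin
    constraint_groups: List[List[str]],             # ordered cheapest-first
) -> bool:
    """Return True as soon as the certified margin bound turns positive."""
    active: List[str] = []
    for group in constraint_groups:
        active.extend(group)           # tighten the relaxation incrementally
        if solve_bound(active) > 0.0:  # positive margin => certified robust
            return True                # remaining constraints are unneeded
    return False  # even the tightest relaxation did not certify the input
```

Adding constraints can only tighten the bound, so stopping at the first positive margin is sound; the savings come from inputs certified before the full constraint set is ever assembled.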

🛡️ Threat Analysis

Input Manipulation Attack

The paper's contribution lies entirely in certifying neural network robustness against adversarial perturbations: formal verification that no input within an adversarial perturbation budget can be misclassified. This is a direct defense against input manipulation attacks at inference time.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, digital
Datasets
MNIST, CIFAR-10
Applications
image classification, neural network certification