E-Globe: Scalable $ε$-Global Verification of Neural Networks via Tight Upper Bounds and Pattern-Aware Branching
Wenting Li, Saif R. Kazi, Russell Bent, Duo Zhou, Huan Zhang
Published on arXiv
2602.05068
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves markedly tighter upper bounds than PGD across perturbation radii spanning up to three orders of magnitude, and substantial end-to-end speedups over MIP-based verification on MNIST and CIFAR-10
E-Globe
Novel technique introduced
Neural networks achieve strong empirical performance, but robustness concerns still hinder deployment in safety-critical applications. Formal verification provides robustness guarantees, but current methods face a scalability-completeness trade-off. We propose a hybrid verifier in a branch-and-bound (BaB) framework that efficiently tightens both upper and lower bounds until an $ε$-global optimum is reached or early stopping is triggered. The key is an exact nonlinear program with complementarity constraints (NLP-CC) for upper bounding that preserves the ReLU input-output graph, so any feasible solution yields a valid counterexample and enables rapid pruning of unsafe subproblems. We further accelerate verification with (i) warm-started NLP solves requiring minimal constraint-matrix updates and (ii) pattern-aligned strong branching that prioritizes splits most effective at tightening relaxations. We also provide conditions under which NLP-CC upper bounds are tight. Experiments on MNIST and CIFAR-10 show markedly tighter upper bounds than PGD across perturbation radii spanning up to three orders of magnitude, fast per-node solves in practice, and substantial end-to-end speedups over MIP-based verification, amplified by warm-starting, GPU batching, and pattern-aligned branching.
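The bound-tightening BaB loop the abstract describes can be sketched generically. Everything below is a hedged illustration, not the paper's implementation: the function names (`bab_verify`, `lower_bound`, `upper_bound`, `split`), the best-first priority queue, and the pruning rule are assumptions chosen to show how subproblems are bounded, pruned, or split until the upper/lower gap falls below ε.

```python
# Hypothetical sketch of an epsilon-global branch-and-bound verifier.
# Subproblems (input subdomains) are popped in best-first order; each is
# either pruned (its relaxation bound cannot beat the incumbent), or split
# into children whose bounds are tightened further.
import heapq

def bab_verify(root, lower_bound, upper_bound, split, eps=1e-3, max_nodes=10_000):
    """Minimize an objective over input subdomains to an eps-global optimum.

    lower_bound(node) -> cheap relaxation bound (lower bound on the node)
    upper_bound(node) -> objective value of a feasible point in the node
                         (in the paper's setting, any feasible NLP-CC
                         solution is a valid counterexample candidate)
    split(node)       -> list of child subdomains (branching rule)
    """
    best_ub = upper_bound(root)            # incumbent: best feasible value so far
    queue = [(lower_bound(root), 0, root)]
    counter = 1                            # tie-breaker so the heap never compares nodes
    while queue and counter < max_nodes:
        lb, _, node = heapq.heappop(queue)
        if lb >= best_ub - eps:            # gap closed: eps-global optimum reached
            break
        for child in split(node):
            clb = lower_bound(child)
            if clb >= best_ub - eps:       # prune: child cannot improve the incumbent
                continue
            best_ub = min(best_ub, upper_bound(child))
            heapq.heappush(queue, (clb, counter, child))
            counter += 1
    return best_ub
```

As a usage example, minimizing a 1-D quadratic over an interval with interval-based bounding callbacks converges to the true minimum within the ε tolerance; in the verification setting the "objective" would instead be a logit margin over the perturbation ball.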
Key Contributions
- NLP-CC (nonlinear program with complementarity constraints) for tight upper bounding that preserves the ReLU input-output graph, enabling valid counterexamples from any feasible solution
- Warm-started NLP solves with minimal constraint-matrix updates and pattern-aligned strong branching to accelerate branch-and-bound verification
- Theoretical conditions under which NLP-CC upper bounds are tight, with experiments demonstrating substantially tighter upper bounds than PGD and speedups over MIP-based verifiers
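The NLP-CC contribution above builds on the standard complementarity reformulation of ReLU: $y = \max(0, x)$ holds exactly when $y \ge 0$, $y \ge x$, and $y\,(y - x) = 0$, so the constraint set coincides with the ReLU input-output graph rather than relaxing it. The snippet below is a minimal numerical check of that equivalence; the helper names are illustrative and not from the paper.

```python
# Complementarity characterization of ReLU: the three constraints below
# hold iff (x, y) lies on the graph of y = max(0, x).

def satisfies_cc(x, y, tol=1e-9):
    """Check y >= 0, y >= x, and the complementarity condition y*(y - x) = 0."""
    return y >= -tol and y >= x - tol and abs(y * (y - x)) <= tol

def relu(x):
    return max(0.0, x)

# Every ReLU input-output pair satisfies the constraints...
assert all(satisfies_cc(x, relu(x)) for x in (-2.0, -0.5, 0.0, 0.7, 3.0))
# ...and points off the ReLU graph violate them.
assert not satisfies_cc(-1.0, 0.5)   # y*(y - x) = 0.75 != 0
assert not satisfies_cc(1.0, 0.0)    # violates y >= x
```

Because the constraint set is exact (unlike a triangle LP relaxation), any feasible point of the resulting nonlinear program corresponds to a genuine network execution, which is what makes its objective value a valid upper bound and its solution a valid counterexample.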
🛡️ Threat Analysis
The paper's primary contribution is a complete/certified robustness verification framework — it formally certifies that no adversarial example exists within a given perturbation radius ε, or produces a valid counterexample. Certified robustness is explicitly a defense against input manipulation attacks.
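The certify-or-refute outcome described above can be phrased as a simple decision on the worst-case logit margin's bounds. This is a hedged sketch of the generic complete-verification verdict logic, not the paper's API; the function and parameter names are assumptions.

```python
# Verdict logic for complete robustness verification: bounds on the
# worst-case margin (true-class logit minus best other logit) over the
# eps-ball decide between certification and a concrete counterexample.

def verdict(margin_lb, margin_ub):
    """margin_lb/margin_ub: lower/upper bounds on the minimum margin."""
    if margin_lb > 0:
        return "certified"        # no adversarial example exists in the ball
    if margin_ub <= 0:
        return "counterexample"   # a feasible point realizes a misclassification
    return "unknown"              # bounds inconclusive; keep branching
```

In a BaB verifier, the "unknown" case triggers further splitting until the bound gap closes (or an early-stop budget is hit), which is what makes the procedure complete.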