
Counterexample Guided Branching via Directional Relaxation Analysis in Complete Neural Network Verification

Jingyang Li 1, Fu Song 2, Guoqiang Li 1


Published on arXiv (2603.14823)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Significantly reduces search tree size and verification time compared to established baselines through counterexample-guided branching

DRG-BaB

Novel technique introduced


Deep Neural Networks demonstrate exceptional performance but remain vulnerable to adversarial perturbations, necessitating formal verification before deployment in safety-critical settings. To address the computational complexity of this task, researchers often employ abstraction-refinement techniques that iteratively tighten an over-approximated model. While structural methods utilize Counterexample-Guided Abstraction Refinement (CEGAR), state-of-the-art dataflow verifiers typically rely on Branch-and-Bound to refine numerical convex relaxations. However, current dataflow approaches refine blindly: they rely on static heuristics and fail to leverage the diagnostic information that verification failures provide. In this work, we argue that Branch-and-Bound should be reformulated as a Dataflow CEGAR loop in which the spurious counterexample serves as a precise witness to local abstraction errors. We propose DRG-BaB, a framework that introduces the Directional Relaxation Gap heuristic to prioritize branching on neurons actively contributing to falsification in the abstract domain. By deriving a closed-form spurious counterexample directly from linear bounds, our method transforms generic search into targeted refinement. Experiments on high-dimensional benchmarks demonstrate that this approach significantly reduces search tree size and verification time compared to established baselines.
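The two ingredients named in the abstract can be sketched concretely. Below is a minimal, illustrative reading (not the paper's exact definitions): given a linear lower bound A·x + b on the output property over an input box, its minimizer over the box is available in closed form and serves as the spurious counterexample; a relaxation-gap score per unstable ReLU, evaluated at the pre-activations that witness reaches, then ranks neurons for branching. The function names and the triangle-relaxation form of the gap are assumptions for illustration.

```python
import numpy as np

def spurious_counterexample(A, b, x_lb, x_ub):
    # Closed-form minimizer of the linear lower bound A @ x + b over the
    # input box [x_lb, x_ub]: take each coordinate at whichever extreme
    # minimizes its term A_i * x_i. If the minimum is negative, x_star
    # falsifies the property in the abstract (relaxed) domain.
    x_star = np.where(A > 0, x_lb, x_ub)
    return x_star, float(A @ x_star + b)

def drg_scores(z_hat, l, u):
    # Illustrative relaxation-gap score per ReLU neuron: the distance
    # between the triangle relaxation's upper face and the exact ReLU,
    # evaluated at the pre-activations z_hat reached by the spurious
    # witness. Only unstable neurons (l < 0 < u) have a nonzero gap.
    unstable = (l < 0) & (u > 0)
    slope = np.where(unstable, u / np.maximum(u - l, 1e-12), 0.0)
    relaxed = slope * (z_hat - l)              # upper face of the triangle
    return np.where(unstable, relaxed - np.maximum(z_hat, 0.0), 0.0)
```

Branching on `np.argmax(drg_scores(...))` then targets the neuron whose relaxation error contributes most at the falsifying point, rather than using a static, input-agnostic ranking.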


Key Contributions

  • Reformulates Branch-and-Bound neural network verification as a Dataflow CEGAR loop using spurious counterexamples as witnesses to abstraction errors
  • Introduces Directional Relaxation Gap (DRG) heuristic that identifies neurons actively contributing to verification failures for targeted refinement
  • Demonstrates significant reduction in search tree size and verification time on MNIST and CIFAR-10 benchmarks compared to existing geometric and gradient-based baselines
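The first contribution, viewing Branch-and-Bound as a CEGAR loop driven by spurious witnesses, can be sketched as a generic skeleton. This is an assumption-laden illustration, not the paper's implementation: `bound_fn` stands in for any incomplete relaxation-based bounder returning a lower bound and a witness, and `branch_fn` stands in for DRG-guided splitting, returning `None` when the witness is a concrete (non-spurious) counterexample.

```python
import heapq
import itertools

def cegar_bab_verify(bound_fn, branch_fn, root, max_nodes=10_000):
    # Dataflow-CEGAR view of branch-and-bound (illustrative skeleton):
    #   bound_fn(node)  -> (lower_bound, witness)   # witness from the relaxation
    #   branch_fn(node, witness) -> child nodes, or None if the witness is a
    #                               concrete counterexample (falsification).
    # The property holds iff every surviving subproblem gets a positive bound.
    counter = itertools.count()          # tie-breaker so nodes never compare
    heap = []
    lb, wit = bound_fn(root)
    if lb <= 0:
        heapq.heappush(heap, (lb, next(counter), root, wit))
    nodes = 0
    while heap:
        lb, _, node, wit = heapq.heappop(heap)   # worst bound first
        nodes += 1
        if nodes > max_nodes:
            return "unknown"
        children = branch_fn(node, wit)
        if children is None:                     # witness survives concretely
            return "falsified"
        for child in children:
            clb, cwit = bound_fn(child)
            if clb <= 0:                         # positive bound => pruned safe
                heapq.heappush(heap, (clb, next(counter), child, cwit))
    return "verified"
```

As a usage sketch, proving f(x) = x² − 2x + 1.5 > 0 on [0, 2] with a deliberately loose interval bound terminates as `"verified"` after a handful of splits; the witness check in `branch_fn` is exactly the spurious-vs-concrete test of the CEGAR reading.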

🛡️ Threat Analysis

Input Manipulation Attack

The paper addresses formal verification of neural networks against adversarial perturbations. It is a verification/defense method rather than an attack: it counters adversarial examples by providing mathematical guarantees of robustness. The paper explicitly notes the vulnerability of DNNs to adversarial perturbations and aims to prove safety properties against such inputs.


Details

Domains
vision
Model Types
cnn
Threat Tags
inference_time, digital
Datasets
MNIST, CIFAR-10
Applications
image classification, neural network verification