defense 2026

Algebraic Robustness Verification of Neural Networks

Yulia Alexandr 1, Hao Duan 1, Guido Montúfar 1,2

0 citations · 55 references · arXiv (Cornell University)


Published on arXiv · 2602.06105

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Derives closed-form ED degree expressions for several neural architectures and an exact certification algorithm via homotopy continuation that is provably correct whenever the algebraic problem is solvable.

ED Degree Robustness Certification

Novel technique introduced


We formulate formal robustness verification of neural networks as an algebraic optimization problem. We leverage the Euclidean Distance (ED) degree, which is the generic number of complex critical points of the distance minimization problem to a classifier's decision boundary, as an architecture-dependent measure of the intrinsic complexity of robustness verification. To make this notion operational, we define the associated ED discriminant, which characterizes input points at which the number of real critical points changes, distinguishing test instances that are easier or harder to verify. We provide an explicit algorithm for computing this discriminant. We further introduce the parameter discriminant of a neural network, identifying parameters where the ED degree drops and the decision boundary exhibits reduced algebraic complexity. We derive closed-form expressions for the ED degree for several classes of neural architectures, as well as formulas for the expected number of real critical points in the infinite-width limit. Finally, we present an exact robustness certification algorithm based on numerical homotopy continuation, establishing a concrete link between metric algebraic geometry and neural network verification.
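The ED degree described in the abstract can be illustrated on a toy example. The sketch below (not from the paper; the elliptical "decision boundary" and the test input are hypothetical stand-ins for a trained classifier) sets up the Lagrange conditions for minimizing the squared distance from an input to the boundary, eliminates down to a univariate polynomial, and reads off the ED degree as its degree and the number of real critical points as its real-root count:

```python
import sympy as sp

lam = sp.symbols('lam')
u1, u2 = sp.Rational(3, 5), sp.Rational(1, 3)  # hypothetical generic test input

# Toy algebraic decision boundary: an ellipse f(x, y) = x^2 + 4*y^2 - 1 = 0.
# The Lagrange conditions for minimizing ||(x, y) - u||^2 subject to f = 0,
#   (x - u1) = lam * df/dx = 2*lam*x,   (y - u2) = lam * df/dy = 8*lam*y,
# express x and y as rational functions of the multiplier lam.
x = u1 / (1 - 2*lam)
y = u2 / (1 - 8*lam)

# Substituting into f = 0 and clearing denominators yields a univariate
# polynomial; for generic u its degree is the ED degree of the boundary.
numer = sp.together(x**2 + 4*y**2 - 1).as_numer_denom()[0]
poly = sp.Poly(numer, lam)

print(poly.degree())               # 4: ED degree (complex critical points)
print(len(sp.real_roots(poly)))   # 2: real critical points for this u
```

The gap between the two counts is exactly what the ED discriminant tracks: as the input u crosses it, the number of real critical points (here 2) changes while the ED degree (here 4) stays fixed.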


Key Contributions

  • Introduces the Euclidean Distance (ED) degree as an architecture-dependent algebraic measure of intrinsic robustness verification complexity
  • Defines ED discriminant and parameter discriminant to classify test instances and network parameters by verification difficulty, with explicit computation algorithms
  • Presents an exact robustness certification algorithm based on numerical homotopy continuation, connecting metric algebraic geometry to neural network verification

🛡️ Threat Analysis

Input Manipulation Attack

Proposes exact robustness certification — guaranteeing no adversarial input perturbation within a given radius causes misclassification — which is a direct defense against inference-time input manipulation attacks.
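The certification logic can be sketched on a toy algebraic boundary (the ellipse and test input below are hypothetical, and symbolic elimination stands in for the paper's numerical homotopy continuation): once all real critical points of the distance to the decision boundary are in hand, the smallest distance among them is an exact certified radius.

```python
import sympy as sp

lam = sp.symbols('lam')
u1, u2 = sp.Rational(3, 5), sp.Rational(1, 3)  # hypothetical test input

# Toy decision boundary f(x, y) = x^2 + 4*y^2 - 1 = 0.  The Lagrange
# conditions give the critical points as rational functions of lam.
x = u1 / (1 - 2*lam)
y = u2 / (1 - 8*lam)
numer = sp.together(x**2 + 4*y**2 - 1).as_numer_denom()[0]

# Every real critical point of the distance appears among the real roots,
# so the minimum distance over them is an exact certificate: no input
# perturbation smaller than this radius can reach the boundary.
dists = [
    sp.sqrt((x.subs(lam, r) - u1)**2 + (y.subs(lam, r) - u2)**2).evalf()
    for r in sp.real_roots(sp.Poly(numer, lam))
]
radius = min(dists)
print(radius)
```

The guarantee is exact rather than approximate because the root isolation is rigorous; this is the sense in which the paper's homotopy-continuation certifier is "provably correct whenever the algebraic problem is solvable."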


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box, inference_time
Applications
image classification, neural network robustness certification