
On Integer Programming for the Binarized Neural Network Verification Problem

Woojin Kim, James R. Luedtke

0 citations · 23 references · arXiv


Published on arXiv · 2510.01525

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The proposed IP improvements enable verifying BNN robustness against a strictly higher range of input perturbations than existing integer programming approaches within a fixed time limit.

Valid Inequality Generation for BNN IP Verification

Novel technique introduced


Binarized neural networks (BNNs) are feedforward neural networks with binary weights and activation functions. In the context of using a BNN for classification, the verification problem seeks to determine whether a small perturbation of a given input can cause it to be misclassified by the BNN, and the robustness of the BNN can be measured by solving the verification problem over multiple inputs. The BNN verification problem can be formulated as an integer programming (IP) problem. However, the natural IP formulation is often challenging to solve due to a large integrality gap induced by big-$M$ constraints. We present two techniques to improve the IP formulation. First, we introduce a new method for obtaining a linear objective for the multi-class setting. Second, we introduce a new technique for generating valid inequalities for the IP formulation that exploits the recursive structure of BNNs. We find that our techniques enable verifying BNNs against a larger range of input perturbations than existing IP approaches within a limited time.
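To make the setting concrete, a minimal sketch of a binarized feedforward network, assuming ±1 weights and sign activations (the function and variable names here are illustrative, not from the paper):

```python
def sign(x):
    # Binary activation: maps a pre-activation to +1 or -1 (0 treated as +1).
    return 1 if x >= 0 else -1

def bnn_layer(x, weights, biases):
    # One binarized layer: weights are +/-1, outputs are +/-1.
    return [sign(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def bnn_forward(x, layers):
    # Apply successive binarized layers; in a classifier, the final layer's
    # pre-activations (before sign) would serve as class scores.
    for weights, biases in layers:
        x = bnn_layer(x, weights, biases)
    return x
```

Because every intermediate activation is binary, each neuron can be modeled with a 0/1 variable in the IP formulation, which is what makes exact verification expressible as integer programming.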


Key Contributions

  • New method for obtaining a linear objective function for the multi-class BNN verification IP problem
  • Novel valid inequality generation technique that exploits the recursive structure of BNNs to tighten the IP relaxation
  • Demonstrates that the combined techniques verify BNNs against larger perturbation radii than existing IP approaches within the same time budget
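The abstract attributes the weak natural formulation to big-$M$ constraints. As a hedged illustration (not the paper's exact formulation), a standard big-$M$ linearization of the indicator $z = \mathbf{1}\{a \ge 0\}$ for a bounded pre-activation $L \le a \le U$ looks like:

```python
def big_m_feasible(a, z, L, U):
    # Standard big-M linearization of z = 1{a >= 0}, given bounds L <= a <= U:
    #   a >= L * (1 - z)   (when z = 1, forces a >= 0)
    #   a <= U * z         (when z = 0, forces a <= 0)
    # With z binary this is exact (up to the tie at a = 0); with z fractional,
    # as in the LP relaxation, the constraints become very loose when |L| and U
    # are large, which is one source of the large integrality gap.
    return a >= L * (1 - z) and a <= U * z
```

Valid inequalities of the kind the paper proposes aim to cut off such loose fractional points and tighten the relaxation.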

🛡️ Threat Analysis

Input Manipulation Attack

The verification problem is precisely about determining whether adversarial examples exist within a given perturbation budget for binarized neural networks — the paper improves the tractability of finding or ruling out such adversarial inputs via better IP formulations.
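For intuition, the decision problem itself can be stated as a brute-force search over perturbations. The sketch below checks robustness of a hypothetical ±1-input classifier against flipping up to k input entries; it is exponential in k and meant only to illustrate what the IP formulation answers far more efficiently:

```python
from itertools import combinations

def predict(x, weights, biases):
    # Hypothetical linear scoring head over a +/-1 input: argmax of class scores.
    scores = [sum(w * xi for w, xi in zip(row, x)) + b
              for row, b in zip(weights, biases)]
    return max(range(len(scores)), key=scores.__getitem__)

def is_robust(x, weights, biases, k):
    # Brute-force verification: the prediction on x is robust if no
    # perturbation flipping at most k entries of x changes the predicted class.
    label = predict(x, weights, biases)
    for r in range(1, k + 1):
        for idx in combinations(range(len(x)), r):
            x_pert = list(x)
            for i in idx:
                x_pert[i] = -x_pert[i]
            if predict(x_pert, weights, biases) != label:
                return False  # adversarial perturbation found
    return True
```

The IP approach answers the same existence question by optimization rather than enumeration, which is why tighter formulations translate directly into larger verifiable perturbation budgets.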


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box · inference_time · digital
Applications
image classification · neural network verification