defense 2025

ECLipsE-Gen-Local: Efficient Compositional Local Lipschitz Estimates for Deep Neural Networks

Yuezhu Xu , S. Sivaranjani

0 citations · 53 references · arXiv


Published on arXiv · 2510.05261

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

ECLipsE-Gen-Local achieves substantially tighter local Lipschitz bounds than global approaches, with computational complexity that is linear in network depth, and with bounds approaching the exact Jacobian norm computed via autodiff as the input region shrinks.

ECLipsE-Gen-Local

Novel technique introduced


The Lipschitz constant is a key measure for certifying the robustness of neural networks to input perturbations. However, computing the exact constant is NP-hard, and standard approaches to estimating it involve solving a large semidefinite program (SDP) that scales poorly with network size. Further, there is potential to efficiently leverage local information on the input region to obtain tighter Lipschitz estimates. We address this problem by proposing a compositional framework that yields tight yet scalable Lipschitz estimates for deep feedforward neural networks. Specifically, we begin by developing a generalized SDP framework that is highly flexible, accommodating heterogeneous activation function slopes and allowing Lipschitz estimates with respect to arbitrary input-output pairs and arbitrary choices of sub-networks of consecutive layers. We then decompose this generalized SDP into a sequence of small sub-problems, with computational complexity that scales linearly with network depth. We also develop a variant that achieves near-instantaneous computation through closed-form solutions to each sub-problem. All our algorithms are accompanied by theoretical guarantees on feasibility and validity. Next, we develop a series of algorithms, termed ECLipsE-Gen-Local, that effectively incorporate local information on the input. Our experiments demonstrate that our algorithms achieve substantial speedups over a multitude of benchmarks while producing significantly tighter Lipschitz bounds than global approaches. Moreover, we show that our algorithms provide strict upper bounds on the Lipschitz constant, with values approaching the exact Jacobian norm from autodiff when the input region is small enough. Finally, we demonstrate the practical utility of our approach by showing that our Lipschitz estimates closely align with network robustness.
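To make the abstract's claim concrete, the sketch below contrasts the classical naive global Lipschitz upper bound for a ReLU network (the product of per-layer spectral norms) with the spectral norm of the exact Jacobian at a single point, which every valid Lipschitz estimate must upper-bound. This is not the paper's SDP method; the network and weights are synthetic, and the example only illustrates the gap that tighter estimators like ECLipsE-Gen-Local aim to close.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-layer ReLU network (biases omitted for brevity).
weights = [rng.standard_normal((8, 4)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((2, 8))]

# Naive global bound: product of spectral norms.
# Valid because ReLU is 1-Lipschitz, so each layer's gain is at most ||W||_2.
global_bound = float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Exact Jacobian at a point x0: for ReLU it is W3 @ D2 @ W2 @ D1 @ W1,
# where each D is a diagonal 0/1 mask of active neurons.
x0 = rng.standard_normal(4)
J = np.eye(4)
z = x0
for W in weights[:-1]:
    pre = W @ z
    J = np.diag((pre > 0).astype(float)) @ W @ J
    z = np.maximum(pre, 0.0)
J = weights[-1] @ J
local_jac_norm = float(np.linalg.norm(J, 2))

# The pointwise Jacobian norm never exceeds the global bound; the (often
# large) ratio between them is the slack that local methods exploit.
print(local_jac_norm, global_bound)
```

Because each ReLU mask has spectral norm at most 1, `local_jac_norm` is guaranteed to be no larger than `global_bound`; on random weights the gap is typically substantial, which is the room for improvement that local Lipschitz estimation targets.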


Key Contributions

  • Generalized SDP framework for Lipschitz estimation accommodating heterogeneous per-neuron activation slope bounds and arbitrary input-output/sub-network selections
  • Compositional decomposition of the generalized SDP into sequentially-solved small sub-problems with linear complexity in network depth, including a closed-form near-instantaneous variant
  • ECLipsE-Gen-Local algorithms that incorporate local input-region information to produce provably tighter Lipschitz bounds approaching the exact Jacobian for small input regions
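The compositional, local idea in the contributions above can be illustrated with a deliberately crude stand-in: propagate an input interval through the network, tighten each neuron's activation slope bound from its pre-activation interval, and compose per-layer estimates sequentially (linear in depth). The paper solves a small SDP per sub-problem; here each sub-problem is replaced by a simple masked spectral norm, so this sketch only demonstrates why local slope information tightens the bound. All weights and the radius `eps` are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = [rng.standard_normal((16, 4)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((1, 16))]

# Local input region: an L-infinity ball of radius eps around x0.
x0 = rng.standard_normal(4)
eps = 0.01
l, u = x0 - eps, x0 + eps

# Global baseline: product of spectral norms (ReLU slope <= 1 everywhere).
global_bound = float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Compositional local estimate: one cheap sub-problem per layer.
local_bound = 1.0
for W in weights[:-1]:
    # Interval arithmetic for the pre-activations W @ x, x in [l, u].
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    pl = Wp @ l + Wn @ u
    pu = Wp @ u + Wn @ l
    # Per-neuron ReLU slope bound on [pl, pu]: 0 if the neuron is
    # provably inactive (pu <= 0), else the global bound 1.
    s = (pu > 0).astype(float)
    local_bound *= np.linalg.norm(np.diag(s) @ W, 2)
    # Post-activation interval (ReLU is monotone).
    l, u = np.maximum(pl, 0.0), np.maximum(pu, 0.0)
local_bound = float(local_bound * np.linalg.norm(weights[-1], 2))

# Masking rows can only shrink a spectral norm, so the local estimate
# is never worse than the global product bound.
print(local_bound, global_bound)
```

Each loop iteration is one small sub-problem, so the total cost grows linearly with depth; shrinking `eps` deactivates more neurons and tightens the estimate, mirroring (very loosely) how the paper's bounds approach the exact Jacobian norm as the input region shrinks.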

🛡️ Threat Analysis

Input Manipulation Attack

Directly provides certified robustness guarantees against adversarial input perturbations by computing tight Lipschitz upper bounds; the paper's stated goal is to certify resilience to adversarial attacks and to provide robustness certificates for neural networks.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, digital
Applications
neural network robustness certification, safety-critical control systems, autonomous driving, image classification