LDLT $\mathcal{L}$-Lipschitz Network: Generalized Deep End-To-End Lipschitz Network Construction
Marius F.R. Juston¹, Ramavarapu S. Sreenivas¹, Dustin Nottage², Ahmet Soylemezoglu²
Published on arXiv: 2512.05915
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
The LDL^T formulation is a tight relaxation of SDP-based networks, achieving 3%-13% accuracy gains over SLL Layers on 121 UCI datasets while guaranteeing provable Lipschitz bounds.
LDLT L-Lipschitz Network
Novel technique introduced
Deep residual networks (ResNets) have demonstrated outstanding success in computer vision tasks, attributed to their ability to maintain gradient flow through deep architectures. Simultaneously, controlling the Lipschitz constant of neural networks has emerged as an essential area of research for enhancing adversarial robustness and network certifiability. This paper presents a rigorous approach to the general design of $\mathcal{L}$-Lipschitz deep residual networks using a Linear Matrix Inequality (LMI) framework. First, the ResNet architecture is reformulated as a cyclic tridiagonal LMI, and closed-form constraints on the network parameters are derived to ensure $\mathcal{L}$-Lipschitz continuity; then, using a new $LDL^\top$ decomposition approach for certifying LMI feasibility, we extend the construction of $\mathcal{L}$-Lipschitz networks to arbitrary nonlinear architectures. Our contributions include a provable parameterization methodology for constructing Lipschitz-constrained residual networks and other hierarchical architectures; Cholesky decomposition is also used for efficient parameterization. These findings enable robust network designs applicable to adversarial robustness, certified training, and control systems. The $LDL^\top$ formulation is shown to be a tight relaxation of the SDP-based network, maintaining full expressiveness and achieving 3%-13% accuracy gains over SLL Layers on 121 UCI datasets.
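To make the certification idea concrete, below is a minimal sketch (an illustration, not the paper's implementation) of how an $LDL^\top$ factorization certifies LMI feasibility in the simplest case: by the Schur complement, the block matrix $M = [[L^2 I, W^\top], [W, I]]$ is positive semidefinite exactly when $\|W\|_2 \le L$, and positive semidefiniteness can be read off the block-diagonal factor $D$. The function name, tolerance, and matrix sizes are assumptions for the example.

```python
import numpy as np
from scipy.linalg import ldl

def lmi_certifies_lipschitz(W: np.ndarray, L: float, tol: float = 1e-9) -> bool:
    """Check the Schur-complement LMI [[L^2 I, W^T], [W, I]] >= 0 via LDL^T."""
    m, n = W.shape
    M = np.block([[L**2 * np.eye(n), W.T],
                  [W,                np.eye(m)]])
    # Bunch-Kaufman LDL^T: M = P L D L^T P^T with D block-diagonal
    # (1x1 and 2x2 blocks); M is PSD iff D is PSD.
    _, D, _ = ldl(M, lower=True)
    return bool(np.all(np.linalg.eigvalsh(D) >= -tol))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))
sigma_max = np.linalg.norm(W, 2)  # exact spectral norm, for reference
print(lmi_certifies_lipschitz(W, 1.01 * sigma_max))  # True: bound holds
print(lmi_certifies_lipschitz(W, 0.99 * sigma_max))  # False: bound violated
```

Because the check is a single direct factorization rather than an iterative SDP solve, this style of certificate is cheap enough to sit inside a training loop.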
Key Contributions
- Reformulates ResNet architectures as cyclic tridiagonal Linear Matrix Inequalities (LMIs) and derives closed-form Lipschitz constraints for deep residual modules (see the residual-block sketch after this list)
- Introduces an LDL^T block decomposition approach that extends Lipschitz certification to arbitrary nonlinear architectures beyond single-layer ResNets
- Demonstrates 3%-13% accuracy gains over SLL Layers on 121 UCI datasets while maintaining provable L-Lipschitz continuity
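For context on what those closed-form constraints improve upon, here is a hedged sketch (generic, not the paper's parameterization) of the naive product bound for a residual block $y = x + W_2\,\sigma(W_1 x)$: with a 1-Lipschitz activation such as ReLU, $\mathrm{Lip} \le 1 + \|W_2\|_2 \|W_1\|_2$. LMI-based certificates such as SLL and the $LDL^\top$ formulation exist precisely to tighten loose bounds of this kind. Weight shapes and the finite-difference check are illustrative.

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = x + W2 relu(W1 x); ReLU is 1-Lipschitz.
    return x + W2 @ np.maximum(W1 @ x, 0.0)

def naive_lipschitz_bound(W1, W2):
    # Triangle inequality + submultiplicativity: Lip <= 1 + ||W2|| * ||W1||.
    return 1.0 + np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((4, 8))
L = naive_lipschitz_bound(W1, W2)

# Empirical sanity check: no secant slope may exceed the certified bound.
x = rng.standard_normal(4)
dx = 1e-3 * rng.standard_normal(4)
slope = (np.linalg.norm(residual_block(x + dx, W1, W2) - residual_block(x, W1, W2))
         / np.linalg.norm(dx))
print(slope <= L)  # True
```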
🛡️ Threat Analysis
Certified Lipschitz constraints on neural networks are a defense against adversarial input manipulation: by bounding the network's sensitivity to input perturbations, they provide provable robustness guarantees at inference time.
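To illustrate what that guarantee buys at inference time, here is a standard Lipschitz-margin certificate (a textbook construction, not code from the paper): if the logit map is $L$-Lipschitz in the $\ell_2$ norm, every pairwise logit difference is $\sqrt{2}L$-Lipschitz, so the predicted class provably cannot change within an $\ell_2$ radius of $(\text{top-1} - \text{top-2})/(\sqrt{2}L)$.

```python
import numpy as np

def certified_radius(logits: np.ndarray, L: float) -> float:
    # Margin between the top logit and the runner-up.
    top2 = np.sort(logits)[-2:]
    margin = top2[1] - top2[0]
    # f_i - f_j is sqrt(2)*L-Lipschitz when the logit map is L-Lipschitz (l2),
    # so the argmax cannot change within this l2 radius.
    return margin / (np.sqrt(2.0) * L)

logits = np.array([4.1, 1.3, 0.2])
print(certified_radius(logits, L=1.0))  # ~1.98: no l2 perturbation of this
                                        # size can flip the prediction
```

Tighter certified Lipschitz bounds therefore translate directly into larger certified radii for the same logit margins.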