defense 2025

Zubov-Net: Adaptive Stability for Neural ODEs Reconciling Accuracy with Robustness

Chaoyang Luo, Yan Zou, Nanjing Huang

0 citations · 53 references · arXiv


Published on arXiv (arXiv:2509.21879)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Zubov-Net maintains high classification accuracy while significantly improving robustness against stochastic noise and adversarial attacks with formal Lyapunov stability guarantees.

Zubov-Net

Novel technique introduced


Abstract

Although neural ordinary differential equations (Neural ODEs) exhibit intrinsic robustness to input perturbations owing to their dynamical-systems nature, recent approaches impose Lyapunov-based stability conditions to provide formal robustness guarantees. A fundamental challenge remains, however: the tension between robustness and accuracy, which stems primarily from the difficulty of imposing appropriate stability conditions. To address this, we propose an adaptive stable-learning framework named Zubov-Net, which reformulates Zubov's equation as a consistency characterization between regions of attraction (RoAs) and prescribed RoAs (PRoAs). Building on this consistency, we introduce a new paradigm that actively controls the geometry of RoAs by directly optimizing PRoAs to reconcile accuracy and robustness. The approach is realized through tripartite losses (consistency, classification, and separation losses) and a parallel boundary sampling algorithm that co-optimize the Neural ODE and the Lyapunov function. To enhance the discriminability of Lyapunov functions, we design an input-attention-based convex neural network whose softmax attention mechanism focuses on equilibrium-relevant features and also serves as weight normalization, maintaining training stability in deep architectures. Theoretically, we prove that minimizing the tripartite loss guarantees consistent PRoA-RoA alignment, trajectory stability, and non-overlapping PRoAs. Moreover, we establish stochastic convex separability with tighter probability bounds and weaker dimensionality requirements, justifying the convex design of the Lyapunov functions. Experimentally, Zubov-Net maintains high classification accuracy while significantly improving robustness against various stochastic noises and adversarial attacks.
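For context, the classical Zubov equation that the framework builds on characterizes the region of attraction of an equilibrium exactly; the PRoA consistency reformulation is the paper's contribution and is not reproduced here. In its standard form, for dynamics $\dot{x} = f(x)$ with asymptotically stable equilibrium $x^*$ and some positive definite function $\varphi$:

```latex
% Zubov's theorem: if V satisfies
\nabla V(x) \cdot f(x) = -\varphi(x)\,\bigl(1 - V(x)\bigr),
\qquad V(x^*) = 0, \quad 0 < V(x) < 1 \text{ on the RoA},
% then the region of attraction is exactly the sublevel set
\mathcal{R}(x^*) = \{\, x : V(x) < 1 \,\}.
```

The sublevel-set characterization $\{V < 1\}$ is what makes it natural to control the geometry of RoAs by prescribing target sets, as the paper does with PRoAs.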


Key Contributions

  • Reformulation of Zubov's equation into a consistency characterization between Regions of Attraction (RoAs) and Prescribed RoAs (PRoAs), enabling adaptive geometry control.
  • Tripartite loss (consistency, classification, separation) with a parallel boundary sampling algorithm that co-optimizes the Neural ODE and a Lyapunov function.
  • Input-attention-based convex neural network serving as the Lyapunov function, with theoretical guarantees on PRoA-RoA alignment and stochastic convex separability.
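The convex Lyapunov network in the third bullet can be illustrated with a generic input-convex neural network (ICNN) sketch. This is an assumption-laden stand-in, not the paper's architecture: the layer sizes, the use of softplus, and the row-wise softmax standing in for the softmax attention / weight normalization are all illustrative choices.

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x)): convex and nondecreasing
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ConvexLyapunovSketch:
    """Input-convex candidate Lyapunov function V(x) (illustrative only).

    Convexity in x follows from the standard ICNN recipe: each layer
    applies a convex, nondecreasing activation (softplus) to a
    nonnegative mixing of the previous layer plus an affine map of x.
    The nonnegative mixing weights come from a row-wise softmax, which
    also acts as a simple weight normalization -- a hypothetical
    stand-in for the paper's softmax attention mechanism.
    """

    def __init__(self, dim, hidden=16, depth=3, eps=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = [rng.normal(size=(hidden, dim)) * 0.5 for _ in range(depth)]
        self.bs = [rng.normal(size=hidden) * 0.1 for _ in range(depth)]
        self.Az = [rng.normal(size=(hidden, hidden)) for _ in range(depth - 1)]
        self.aout = rng.normal(size=hidden)
        self.x_eq = np.zeros(dim)  # assumed equilibrium at the origin
        self.eps = eps

    def _net(self, x):
        d = x - self.x_eq
        z = softplus(self.Wx[0] @ d + self.bs[0])
        for Wx, b, Az in zip(self.Wx[1:], self.bs[1:], self.Az):
            # softmax rows are nonnegative, preserving convexity in x
            z = softplus(softmax(Az, axis=1) @ z + Wx @ d + b)
        return softmax(self.aout[None, :], axis=1)[0] @ z

    def __call__(self, x):
        # Subtracting the value at x_eq gives V(x_eq) = 0; the small
        # quadratic term keeps V positive definite near x_eq (a fully
        # positive-definite construction needs more machinery).
        d = x - self.x_eq
        return self._net(x) - self._net(self.x_eq) + self.eps * (d @ d)
```

Because V is convex, every sublevel set {x : V(x) < c} is convex, which is the property the paper's stochastic convex-separability analysis relies on to keep the per-class attraction regions separable.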

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a defense framework (Zubov-Net) providing formal robustness guarantees for Neural ODEs against adversarial input perturbations, evaluated against adversarial attacks at inference time.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
inference_time, white_box
Applications
image classification