Defense · 2025

Robust Convolutional Neural ODEs via Contractivity-promoting regularization

Muhammad Zakwan, Liang Xu, Giancarlo Ferrari-Trecate


Published on arXiv: 2508.11432

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Contractivity-promoting regularization improves CNODE robustness against Gaussian noise, salt-and-pepper noise, FGSM, and PGD attacks on MNIST and FashionMNIST while maintaining competitive clean accuracy across a range of contraction rates.

Contractivity-promoting regularization for Convolutional NODEs

Novel technique introduced


Neural networks can be fragile to input noise and adversarial attacks. In this work, we consider Convolutional Neural Ordinary Differential Equations (NODEs), a family of continuous-depth neural networks represented by dynamical systems, and propose to use contraction theory to improve their robustness. For a contractive dynamical system, any two trajectories starting from different initial conditions converge to each other exponentially fast. Contractive Convolutional NODEs can therefore enjoy increased robustness, as slight perturbations of the features do not cause a significant change in the output. Contractivity can be induced during training by using a regularization term involving the Jacobian of the system dynamics. To reduce the computational burden, we show that it can also be promoted using carefully selected weight regularization terms for a class of NODEs with slope-restricted activation functions. The performance of the proposed regularizers is illustrated through benchmark image classification tasks on the MNIST and FashionMNIST datasets, where images are corrupted by different kinds of noise and attacks.
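The exponential-convergence property described above can be checked numerically. The sketch below uses a toy two-dimensional system (our own illustrative choice, not the paper's CNODE architecture): the symmetric part of the weight matrix is strongly negative definite, so trajectories from different initial conditions collapse onto each other.

```python
import numpy as np

# Toy illustration of contractivity (not the paper's model): for
# dx/dt = A @ tanh(x), choosing A with a strongly negative-definite
# symmetric part makes trajectories from different initial conditions
# converge toward each other, mirroring the robustness argument above.
A = np.array([[-2.0,  0.5],
              [-0.5, -2.0]])  # symmetric part is -2*I

def f(x):
    return A @ np.tanh(x)

def simulate(x0, dt=0.01, steps=500):
    """Integrate dx/dt = f(x) with forward Euler up to t = dt * steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

xa = simulate([2.0, -1.0])
xb = simulate([-1.5, 2.5])
gap = np.linalg.norm(xa - xb)  # shrinks far below the initial separation
```

Here a perturbation of the initial condition plays the role of a corrupted input feature: after integration, the two states are nearly indistinguishable.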


Key Contributions

  • Application of contraction theory to Convolutional Neural ODEs (CNODEs) to promote robustness against adversarial attacks and input noise
  • Jacobian-based regularization term that induces contractivity during training with quantifiable robustness guarantees
  • Computationally efficient weight-regularization alternative for slope-restricted activation functions that avoids Jacobian computation
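To make the first contribution concrete, a Jacobian-based contractivity penalty can be sketched as a hinge loss on the logarithmic 2-norm of the dynamics Jacobian. All names below are ours, and the Jacobian is computed by finite differences for simplicity; this is a minimal sketch of the idea, not the paper's training procedure.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (illustrative; autodiff in practice)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    return J

def contractivity_penalty(f, states, c=1.0):
    """Mean hinge penalty max(mu_2(J(x)) + c, 0) over sampled states,
    where mu_2(J) = lambda_max((J + J^T)/2) is the logarithmic 2-norm.
    Driving this penalty to zero promotes contraction at rate c."""
    total = 0.0
    for x in states:
        J = numerical_jacobian(f, x)
        mu = np.linalg.eigvalsh(0.5 * (J + J.T)).max()
        total += max(mu + c, 0.0)
    return total / len(states)
```

A contractive system such as `f(x) = -2x` incurs zero penalty for `c = 1`, while an expanding system such as `f(x) = x` is penalized; added to the task loss, the term steers training toward contractive dynamics.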

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes a defense (contractivity-promoting regularization) specifically designed to improve robustness against adversarial input perturbations (FGSM, PGD) and noise at inference time. The contractivity property ensures that input perturbations do not cause significant output changes, directly addressing adversarial example attacks.
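For reference, the FGSM attack evaluated in the paper perturbs the input along the sign of the loss gradient. The sketch below applies it to a toy linear model with a squared loss (our own setup, chosen so the gradient has a closed form; the paper attacks trained CNODE classifiers with a cross-entropy loss).

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Fast Gradient Sign Method: one signed-gradient step of size eps."""
    return x + eps * np.sign(grad_x)

# Toy linear "model": logits = W @ x, squared loss against target logits t.
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
x = np.array([0.3, -0.7])
t = np.array([1.0, 0.0])

residual = W @ x - t
grad_x = W.T @ residual          # gradient of 0.5 * ||W x - t||^2 w.r.t. x
x_adv = fgsm(x, grad_x, eps=0.1)  # adversarial input with higher loss
```

PGD iterates this step with a projection onto the eps-ball; on a contractive model, the same perturbation budget produces a smaller change in the output, which is the robustness mechanism the paper exploits.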


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box, inference_time, untargeted, digital
Datasets
MNIST, FashionMNIST
Applications
image classification