Defense · 2025

Lipschitz-Based Robustness Certification for Recurrent Neural Networks via Convex Relaxation

Paul Hamelbeck, Johannes Schiffer

0 citations · 40 references · arXiv


Published on arXiv: 2509.17898

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

RNN-SDP produces certifiably tight Lipschitz upper bounds for RNNs that remain meaningful as sequence length increases, though incorporating input constraints yields only modest further tightening.

RNN-SDP

Novel technique introduced


Robustness certification against bounded input noise or adversarial perturbations is increasingly important for deploying recurrent neural networks (RNNs) in safety-critical control applications. To address this challenge, we present RNN-SDP, a relaxation-based method that models the RNN's layer interactions as a convex problem and computes a certified upper bound on the Lipschitz constant via semidefinite programming (SDP). We also explore an extension that incorporates known input constraints to further tighten the resulting Lipschitz bounds. RNN-SDP is evaluated on a synthetic multi-tank system, with the certified upper bounds compared to empirical estimates. While incorporating input constraints yields only modest improvements, the general method produces reasonably tight and certifiable bounds, even as sequence length increases. The results also underscore the often underestimated impact of initialization errors, an important consideration for applications where models are frequently re-initialized, such as model predictive control (MPC).
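The abstract's comparison of certified upper bounds against empirical estimates can be illustrated with a minimal numpy sketch. This is not the paper's SDP relaxation; it pairs a sampled finite-difference lower estimate of the sequence-to-output Lipschitz constant of a tanh RNN with the naive spectral-norm product bound that SDP-based certification is designed to tighten. All dimensions, weight scales, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_x, n_h = 10, 2, 8                        # sequence length, input dim, hidden dim (illustrative)
W = rng.standard_normal((n_h, n_h)) / (2 * np.sqrt(n_h))  # recurrent weights
U = rng.standard_normal((n_h, n_x))           # input weights
C = rng.standard_normal((1, n_h))             # linear output map

def rnn(xs):
    """Roll out h_t = tanh(W h_{t-1} + U x_t) from h_0 = 0; return C h_T."""
    h = np.zeros(n_h)
    for x in xs:
        h = np.tanh(W @ h + U @ x)
    return C @ h

# Empirical lower estimate: largest output-change / input-change ratio
# over random input sequences and small random perturbations.
emp = 0.0
for _ in range(200):
    xs = rng.standard_normal((T, n_x))
    dx = 1e-4 * rng.standard_normal((T, n_x))
    emp = max(emp, np.linalg.norm(rnn(xs + dx) - rnn(xs)) / np.linalg.norm(dx))

# Naive certified upper bound: tanh is 1-Lipschitz, so the output's
# sensitivity to x_t is at most ||C|| ||W||^(T-t) ||U||; combining over the
# stacked input sequence via Cauchy-Schwarz gives the bound below.
nW = np.linalg.norm(W, 2)                     # spectral norms
naive = np.linalg.norm(C, 2) * np.linalg.norm(U, 2) * np.sqrt(
    sum(nW ** (2 * (T - t)) for t in range(1, T + 1)))

print(f"empirical estimate {emp:.4f} <= naive certified bound {naive:.4f}")
```

The gap between `emp` and `naive` is exactly what a tighter certificate such as RNN-SDP aims to close while remaining a provable upper bound.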


Key Contributions

  • RNN-SDP: a semidefinite programming relaxation that models RNN layer interactions as a convex problem to compute certified upper bounds on the Lipschitz constant
  • Extension incorporating known input constraints to tighten Lipschitz bounds beyond unconstrained baseline
  • Empirical analysis showing that initialization errors have a significant and often underestimated impact on certified robustness bounds
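The third bullet's point about initialization errors can be made concrete with a small numpy sketch (illustrative, not the paper's experiment): for a tanh RNN, an error in the initial hidden state propagates to the output with sensitivity at most ||C|| ||W||^T, so unless the recurrence is contractive (||W|| < 1) this term need not decay with sequence length.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_x, n_h = 20, 2, 8                        # illustrative sizes
W = rng.standard_normal((n_h, n_h)) / np.sqrt(n_h)  # recurrent weights
U = rng.standard_normal((n_h, n_x))           # input weights
C = rng.standard_normal((1, n_h))             # linear output map

def rollout(xs, h0):
    """Roll out h_t = tanh(W h_{t-1} + U x_t); return C h_T."""
    h = h0
    for x in xs:
        h = np.tanh(W @ h + U @ x)
    return C @ h

xs = rng.standard_normal((T, n_x))            # fixed input sequence
dh = 1e-4 * rng.standard_normal(n_h)          # small initialization error

# Empirical effect of the initialization error on the output.
emp = np.linalg.norm(rollout(xs, dh) - rollout(xs, np.zeros(n_h)))
emp /= np.linalg.norm(dh)

# Certified bound: tanh is 1-Lipschitz, so each step amplifies the state
# difference by at most ||W||; after T steps the output sensitivity to h_0
# is at most ||C|| ||W||^T.
bound = np.linalg.norm(C, 2) * np.linalg.norm(W, 2) ** T

print(f"empirical h0-sensitivity {emp:.4f} <= certified bound {bound:.4f}")
```

In settings like MPC, where the state estimate is reset at every solve, this h_0 term enters the certificate on every invocation, which is why the paper flags it as easy to underestimate.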

🛡️ Threat Analysis

Input Manipulation Attack

The paper's primary contribution is RNN-SDP, a certified robustness defense that provably bounds worst-case output change under bounded input perturbations or adversarial noise — directly addressing adversarial input manipulation at inference time for RNNs.


Details

Domains
timeseries
Model Types
rnn
Threat Tags
white_box · inference_time · untargeted
Datasets
synthetic multi-tank system
Applications
safety-critical control systems · model predictive control · system identification