defense 2025

Adversarial generalization of unfolding (model-based) networks

Vicky Kouni


Published on arXiv (2509.15370)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Adversarial generalization error of overparameterized DUNs scales as √(NL·log(1+ε)); overparameterization via overcomplete learnable sparsifiers provably and empirically promotes adversarial robustness

Adversarial Rademacher Complexity for Deep Unfolding Networks

Novel technique introduced


Unfolding networks are interpretable networks emerging from iterative algorithms, incorporate prior knowledge of data structure, and are designed to solve inverse problems like compressed sensing, which deals with recovering data from noisy, missing observations. Compressed sensing finds applications in critical domains, from medical imaging to cryptography, where adversarial robustness is crucial to prevent catastrophic failures. However, a solid theoretical understanding of the performance of unfolding networks in the presence of adversarial attacks is still in its infancy. In this paper, we study the adversarial generalization of unfolding networks when perturbed with $l_2$-norm constrained attacks, generated by the fast gradient sign method. Particularly, we choose a family of state-of-the-art overparameterized unfolding networks and deploy a new framework to estimate their adversarial Rademacher complexity. Given this estimate, we provide adversarial generalization error bounds for the networks under study, which are tight with respect to the attack level. To our knowledge, this is the first theoretical analysis of the adversarial generalization of unfolding networks. We further present a series of experiments on real-world data, with results corroborating our derived theory consistently across all data. Finally, we observe that the family's overparameterization can be exploited to promote adversarial robustness, shedding light on how to efficiently robustify neural networks.
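The abstract's notion of an unfolding network can be illustrated with a minimal sketch: each "layer" mirrors one iteration of ISTA (iterative soft-thresholding) for sparse recovery. In a trained unfolding network, the step size, threshold, and (in overparameterized variants) an overcomplete sparsifying transform would be learnable, per-layer parameters; here they are fixed and shared for illustration. All names below are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrink each entry toward zero by theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, A, L=10, theta=0.1):
    """Toy L-layer unfolding of ISTA for y ≈ A @ x with x sparse.

    One loop pass corresponds to one network layer; a real unfolding
    network would learn the step/threshold (and possibly a sparsifier)
    per layer from data.
    """
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(L):
        # Gradient step on 0.5*||y - A x||^2, then soft-thresholding.
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * theta)
    return x

# Usage: recover a 3-sparse vector from 30 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.0, -2.0, 1.5]
x_hat = unfolded_ista(A @ x_true, A, L=200, theta=0.05)
```

With enough layers the estimate approaches the lasso solution, which for this sparsity level sits close to `x_true` (up to the soft-thresholding bias).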


Key Contributions

  • First theoretical adversarial generalization error bounds for deep unfolding networks (DUNs) under FGSM attacks, derived via a novel adversarial Rademacher complexity (ARC) estimation framework
  • Proof that adversarial generalization error of the examined DUN scales as √(NL·log(1+ε)), exposing how overcompleteness N, depth L, and attack level ε jointly impact robustness
  • Empirical validation showing overparameterized overcomplete sparsifiers consistently outperform orthogonal baselines in adversarial robustness across all tested data
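A quick numeric reading of the √(NL·log(1+ε)) rate from the second bullet (a sketch of the stated scaling only; the paper's full bound carries constants and data-dependent factors not reproduced here): overcompleteness N and depth L enter under a square root, while the attack level ε enters only logarithmically, so strengthening the attack inflates the bound far more slowly than deepening the network.

```python
import math

def adv_gen_scaling(N, L, eps):
    # Rate from the paper's key finding: sqrt(N * L * log(1 + eps)).
    # Constants and data-dependent factors are omitted (illustration only).
    return math.sqrt(N * L * math.log1p(eps))

base = adv_gen_scaling(N=512, L=10, eps=0.1)
deeper = adv_gen_scaling(N=512, L=40, eps=0.1)    # 4x depth -> exactly 2x the rate
stronger = adv_gen_scaling(N=512, L=10, eps=0.2)  # 2x attack level -> well under 2x
```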

🛡️ Threat Analysis

Input Manipulation Attack

The paper analyzes adversarial examples ($l_2$-norm constrained FGSM) applied to deep unfolding networks, provides adversarial Rademacher complexity estimates and adversarial generalization error bounds, and derives design principles for promoting adversarial robustness. This places it squarely in the territory of input manipulation attacks and certified defenses against them.


Details

Threat Tags
white_box, inference_time, untargeted, digital
Applications
compressed sensing, medical imaging (MRI reconstruction), signal recovery