
On damage of interpolation to adversarial robustness in regression

Jingfu Peng, Yuhong Yang

0 citations · 88 references · arXiv


Published on arXiv: 2601.16070

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Interpolating estimators are provably suboptimal under adversarial X-attacks: in the high interpolation regime their adversarial risk can fail to converge even under diminishing perturbations, and increasing the sample size can actually worsen robustness.


Deep neural networks (DNNs) typically involve a large number of parameters and are trained to achieve zero or near-zero training error. Despite such interpolation, they often exhibit strong generalization performance on unseen data, a phenomenon that has motivated extensive theoretical investigation. Reassuring results show that interpolation may indeed leave the minimax rate of convergence under the squared error loss unaffected. Meanwhile, DNNs are well known to be highly vulnerable to adversarial perturbations of future inputs. A natural question then arises: can interpolation also escape suboptimal performance under a future $X$-attack? In this paper, we investigate the adversarial robustness of interpolating estimators in a framework of nonparametric regression. A key finding is that interpolating estimators must be suboptimal even under a subtle future $X$-attack, and that achieving a perfect fit can substantially damage their robustness. An interesting phenomenon in the high interpolation regime, which we term the curse of sample size, is also revealed and discussed. Numerical experiments support our theoretical findings.
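The abstract's core claim — that exactly interpolating noisy training data can hurt robustness to small shifts of future inputs — can be illustrated with a toy 1-nearest-neighbor interpolator. This is a hedged sketch only: the estimator, the grid-search attack, and all constants below are illustrative choices, not the paper's construction.

```python
import math
import random

random.seed(0)

def f(x):
    # Hypothetical smooth regression function (illustrative choice).
    return math.sin(2 * math.pi * x)

# Noisy training sample: y_i = f(x_i) + noise.
n = 200
X_train = [random.random() for _ in range(n)]
y_train = [f(x) + random.gauss(0, 0.3) for x in X_train]

def one_nn(x):
    # 1-nearest-neighbor estimator: returns the label of the closest
    # training point, hence interpolates the training data exactly.
    i = min(range(n), key=lambda j: abs(X_train[j] - x))
    return y_train[i]

# Training MSE is exactly zero: the estimator interpolates.
train_mse = sum((one_nn(x) - y) ** 2 for x, y in zip(X_train, y_train)) / n

# Adversarial X-attack: shift each future input within [-eps, eps] to
# maximize squared error, approximated by a grid search (includes 0).
X_test = [random.random() for _ in range(500)]
eps = 0.02
deltas = [eps * (k / 10 - 1) for k in range(21)]

clean_mse = sum((one_nn(x) - f(x)) ** 2 for x in X_test) / len(X_test)
adv_mse = sum(
    max((one_nn(x + d) - f(x)) ** 2 for d in deltas) for x in X_test
) / len(X_test)
```

Because the 1-NN fit memorizes the noise, a worst-case input shift of size `eps` can route a test point to a badly mislabeled neighbor, so `adv_mse` exceeds `clean_mse` even though the training error is zero — consistent with the X-attack framing above.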


Key Contributions

  • Establishes minimax adversarial L2-risk theory for the class of interpolating estimators in nonparametric regression, proving they are provably suboptimal under adversarial X-attacks
  • Reveals a 'curse of sample size' phenomenon: in high interpolation regimes, more training data deteriorates adversarial robustness of interpolators
  • Shows that any interpolating method (including over-parameterized DNNs achieving near-zero training error) cannot attain optimal adversarial robustness, identifying interpolation as a structural cause of adversarial vulnerability
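For orientation, the adversarial L2-risk in such X-attack frameworks typically takes a sup-over-perturbations form; the notation below is a generic sketch under that convention, not copied from the paper:

```latex
% Adversarial L2-risk of an estimator \hat f under an X-attack of budget
% \varepsilon: the attacker perturbs the future input X within a ball of
% radius \varepsilon before prediction. Generic notation, assumed here.
\[
  R_{\mathrm{adv}}(\hat f;\, \varepsilon)
  \;=\; \mathbb{E}\, \sup_{\|\delta\| \le \varepsilon}
        \bigl( \hat f(X + \delta) - f(X) \bigr)^{2},
\]
% Setting \varepsilon = 0 recovers the ordinary L2-risk; the paper's
% suboptimality results concern the minimax rate of this quantity over
% interpolating estimators, even as \varepsilon \to 0.
```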

🛡️ Threat Analysis

Input Manipulation Attack

The paper directly investigates adversarial robustness — specifically X-attacks (inference-time adversarial perturbations to future inputs) — and establishes minimax adversarial risk bounds showing that interpolating estimators, including over-parameterized DNNs, cannot attain optimal robustness against such attacks.


Details

Domains
tabular
Model Types
cnn, traditional_ml
Threat Tags
inference_time, white_box
Datasets
Simulation experiments; Real data example (unspecified in abstract/body excerpt)
Applications
nonparametric regression; deep neural networks