Defense · 2025

Adversarial Robustness in One-Stage Learning-to-Defer

Yannis Montreuil 1, Letian Yu 1, Axel Carlier 2, Lai Xing Ng 3, Wei Tsang Ooi 1

1 citation · 50 references · arXiv


Published on arXiv: 2510.10988

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Proposed adversarial surrogate losses improve robustness against both untargeted and targeted attacks in one-stage L2D while preserving clean performance on benchmark datasets.


Learning-to-Defer (L2D) enables hybrid decision-making by routing inputs either to a predictor or to external experts. While promising, L2D is highly vulnerable to adversarial perturbations, which can not only flip predictions but also manipulate deferral decisions. Prior robustness analyses focus solely on two-stage settings, leaving open the end-to-end (one-stage) case where predictor and allocation are trained jointly. We introduce the first framework for adversarial robustness in one-stage L2D, covering both classification and regression. Our approach formalizes attacks, proposes cost-sensitive adversarial surrogate losses, and establishes theoretical guarantees including $\mathcal{H}$-consistency, $(\mathcal{R}, \mathcal{F})$-consistency, and Bayes consistency. Experiments on benchmark datasets confirm that our methods improve robustness against untargeted and targeted attacks while preserving clean performance.
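To make the one-stage routing concrete, here is a minimal sketch of an L2D decision rule: the model emits scores for the K class labels plus one score per expert, and the overall argmax decides whether to predict directly or defer. The function name, shapes, and scores are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l2d_decision(scores, num_classes):
    """Hypothetical one-stage L2D rule.

    scores: 1-D array of length K + num_experts; entries 0..K-1 score
    the class labels, the rest score deferral to each expert.
    """
    k = int(np.argmax(scores))
    if k < num_classes:
        return ("predict", k)          # model handles the input itself
    return ("defer", k - num_classes)  # route to expert k - K

# Toy example: K = 3 classes, 1 expert; the deferral score dominates.
scores = np.array([0.1, 0.3, 0.2, 0.9])
print(l2d_decision(scores, num_classes=3))  # ('defer', 0)
```

Because prediction and allocation share one score vector, a perturbation that shifts the argmax can change either the predicted label or the deferral decision, which is exactly the attack surface the paper studies.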


Key Contributions

  • First framework for adversarial robustness in one-stage (jointly trained) Learning-to-Defer systems, covering both classification and regression
  • Cost-sensitive adversarial surrogate losses that account for both prediction and deferral manipulation
  • Theoretical guarantees for the proposed robust L2D framework, including H-consistency, (R, F)-consistency, and Bayes consistency
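As a rough illustration of the cost-sensitive surrogate family the contributions refer to, the sketch below weights per-option log losses by how much cheaper each option (class label or deferral to an expert) is than the worst one. This is a generic construction in the spirit of cost-sensitive L2D surrogates, not the paper's exact loss; all names and values are assumptions.

```python
import numpy as np

def cost_sensitive_surrogate(scores, costs):
    """Generic cost-sensitive log-loss surrogate (illustrative only).

    scores: (K + num_experts,) model scores over labels and deferrals.
    costs:  same shape; costs[j] is the cost of finally choosing option j.
    Options with lower cost receive higher weight (costs.max() - costs),
    so the loss pushes probability mass toward cheap options.
    """
    log_probs = scores - np.log(np.sum(np.exp(scores)))  # log-softmax
    weights = costs.max() - costs
    return float(-np.sum(weights * log_probs))

# Toy check: option 0 is cheapest, so scoring it highest lowers the loss.
costs = np.array([0.0, 1.0, 1.0])
print(cost_sensitive_surrogate(np.array([2.0, 0.0, 0.0]), costs))  # ≈ 0.24
print(cost_sensitive_surrogate(np.array([0.0, 2.0, 0.0]), costs))  # ≈ 2.24
```

The adversarial variant would evaluate such a loss on perturbed inputs, so that both the predictor scores and the deferral scores stay cost-aware under attack.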

🛡️ Threat Analysis

Input Manipulation Attack

The paper directly addresses adversarial perturbations at inference time that flip predictions and manipulate deferral decisions in L2D systems, proposing cost-sensitive adversarial surrogate losses as defenses against both untargeted and targeted attacks.
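The threat above can be sketched with a single FGSM-style step against a toy linear scorer: a small L-infinity perturbation flips the argmax, and since one score vector governs both prediction and deferral, the same mechanism can flip either decision. The matrix, input, and epsilon are toy assumptions, not values from the paper.

```python
import numpy as np

def fgsm_untargeted(W, x, y, eps):
    """One untargeted L-inf step that erodes the margin of true option y.

    For a linear scorer s = W @ x, the margin (s[rival] - s[y]) has
    gradient W[rival] - W[y] with respect to x, so stepping along its
    sign maximally shrinks the margin within an eps ball.
    """
    s = W @ x
    rival = int(np.argmax(np.delete(s, y)))
    if rival >= y:
        rival += 1  # restore the index skipped by np.delete
    grad = W[rival] - W[y]
    return x + eps * np.sign(grad)

# Toy scorer: rows 0-1 are class scores, row 2 is a deferral score.
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
x = np.array([1.0, 0.8])                 # clean argmax is class 0
x_adv = fgsm_untargeted(W, x, y=0, eps=0.3)
print(np.argmax(W @ x), np.argmax(W @ x_adv))  # 0 1 -- decision flipped
```

A targeted variant would instead ascend the score of a chosen option, e.g. forcing deferral to a specific expert; the defenses in the paper train against both behaviors.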


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
white_box · inference_time · targeted · untargeted · digital
Applications
hybrid human-AI decision making · learning-to-defer systems