defense 2025

Countering adversarial evasion in regression analysis

David Benfield, Phan Tu Vuong, Alain Zemkoho

0 citations · 40 references · arXiv


Published on arXiv · 2509.22113

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

A pessimistic bilevel optimization model for adversarial evasion in regression makes no convexity assumptions on the adversary and constrains adversarial movement, yielding more realistic and resilient predictors than prior classification-focused formulations.

Pessimistic Bilevel Optimization for Adversarial Regression

Novel technique introduced


Adversarial machine learning challenges the assumption that the underlying data distribution remains consistent between the training and deployment of a prediction model. In particular, adversarial evasion considers scenarios where adversaries adapt their data to influence the outcomes of established prediction models. Such scenarios arise in applications such as spam email filtering, malware detection and fake-image generation, where security methods must be actively updated to keep pace with the ever-improving generation of malicious data. Game-theoretic models have proven effective at capturing these scenarios and hence at training predictors that are resilient to such adversaries. Recent advances in pessimistic bilevel optimisation, which remove assumptions about the convexity and uniqueness of the adversary's optimal strategy, have proved particularly effective at mitigating threats to classifiers, owing to their ability to capture the antagonistic nature of the adversary. However, this formulation has not yet been adapted to regression. This article proposes a pessimistic bilevel optimisation program for regression that likewise makes no assumptions on the convexity or uniqueness of the adversary's solutions.
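As a sketch, the pessimistic bilevel structure described above can be written in generic Stackelberg form; the notation here is illustrative and not the paper's exact formulation:

```latex
\min_{w}\ \max_{X' \in S(w)}\ \sum_{i} \ell\bigl(h_w(x_i'),\, y_i\bigr)
\quad \text{s.t.} \quad
S(w) = \operatorname*{arg\,min}_{X'} \Bigl\{\, f_{\mathrm{adv}}(w, X') \;:\; \lVert x_i' - x_i \rVert \le \varepsilon \ \ \forall i \,\Bigr\}
```

The leader chooses the regression model \(h_w\); the follower's solution set \(S(w)\), which may be non-convex and non-singleton, collects the adversary's optimal data shifts under its own objective \(f_{\mathrm{adv}}\); the outer maximum over \(S(w)\) encodes the pessimistic stance (assume the worst optimal response); and the \(\varepsilon\)-ball is the lower-level constraint that keeps adversarial movement realistic.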


Key Contributions

  • Extends pessimistic bilevel optimization (Stackelberg leader-follower games) from classification to regression adversarial evasion scenarios
  • Removes convexity and uniqueness assumptions on the adversary's optimal strategy, better capturing antagonistic behavior
  • Introduces lower-level constraints on adversary movement to prevent unrealistic drastic data transformations
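To make the Stackelberg idea concrete, here is a minimal sketch (not the paper's algorithm) that approximates the game for a one-dimensional linear regression model against a worst-case adversary under an ∞-norm movement constraint; all names, data, and parameter values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y ≈ 2x + noise
X = rng.normal(size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

w = np.zeros(1)   # predictor weights (the leader's variable)
eps = 0.3         # bound on adversarial movement (lower-level constraint)
lr = 0.05

def worst_case_inputs(X, y, w, eps):
    # Inner problem: the adversary shifts each input within an eps-ball
    # (in the ∞-norm) to maximize the squared error. For a linear model
    # the maximizer lies on the ball's boundary: move each coordinate in
    # the sign of the loss gradient with respect to the input.
    resid = X @ w - y
    grad = resid[:, None] * w[None, :]   # d loss_i / d x_i (up to a factor of 2)
    return X + eps * np.sign(grad)

for _ in range(500):
    X_adv = worst_case_inputs(X, y, w, eps)   # pessimistic adversary response
    resid = X_adv @ w - y
    grad_w = X_adv.T @ resid / len(y)         # outer (leader) gradient step
    w -= lr * grad_w

print(w)  # fitted slope under the worst-case adversary
```

The alternation mirrors the leader-follower structure: each outer step is taken against the adversary's best response, so the learned slope accounts for bounded input manipulation rather than fitting the clean data alone.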

🛡️ Threat Analysis

Input Manipulation Attack

Directly addresses adversarial evasion: adversaries modify input data at inference time to manipulate the outputs of a regression model. The paper trains resilient predictors via a Stackelberg game formulation, which serves as a defense against input manipulation attacks.


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
inference_time · grey_box
Applications
regression analysis · spam email filtering · malware detection