
Adversarial training with restricted data manipulation

David Benfield 1, Stefano Coniglio 2, Phan Tu Vuong 1, Alain Zemkoho 1

0 citations · 36 references · arXiv


Published on arXiv: 2510.03254

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Constrained pessimistic bilevel adversarial training outperforms unrestricted bilevel approaches on average by producing more realistic adversarial data that better reflects real-world attack conditions.

Constrained Pessimistic Bilevel Optimisation

Novel technique introduced


Abstract

Adversarial machine learning concerns situations in which learners face attacks from active adversaries. Such scenarios arise in applications such as spam email filtering, malware detection and fake image generation, where security methods must be actively updated to keep up with the ever-improving generation of malicious data. Pessimistic bilevel optimisation has been shown to be an effective method of training resilient classifiers against such adversaries. By modelling these scenarios as a game between the learner and the adversary, we anticipate how the adversary will modify their data and then train a resilient classifier accordingly. However, since existing pessimistic bilevel approaches feature an unrestricted adversary, the model is vulnerable to becoming overly pessimistic and unrealistic. When finding the optimal solution that defeats the classifier, it is possible that the adversary's data becomes nonsensical and loses its intended nature. Such an adversary will not properly reflect reality and, consequently, will lead to poor classifier performance when implemented on real-world data. By constructing a constrained pessimistic bilevel optimisation model, we restrict the adversary's movements and identify a solution that better reflects reality. We demonstrate through experiments that this model performs, on average, better than the existing approach.
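In generic notation (illustrative only; the paper's exact symbols and loss functions may differ), a constrained pessimistic bilevel adversarial training problem can be sketched as:

```latex
\min_{w}\; \max_{\tilde{x} \in \Psi(w)} \; \mathcal{L}\bigl(w; \tilde{x}, y\bigr)
\qquad \text{where} \qquad
\Psi(w) \;=\; \operatorname*{arg\,max}_{\tilde{x} \in C(x)} \; \mathcal{L}_{\mathrm{adv}}\bigl(w; \tilde{x}\bigr).
```

The learner (leader) chooses classifier parameters $w$; the adversary (follower) chooses manipulated data $\tilde{x}$ that maximises its own objective, and the outer $\max$ over the follower's optimal set $\Psi(w)$ encodes pessimism, i.e. the learner guards against the worst of the adversary's best responses. The unrestricted approach takes $C(x)$ to be the whole input space; the constrained model restricts it, e.g. to a neighbourhood of the clean data, so that $\tilde{x}$ remains realistic.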


Key Contributions

  • Constrained pessimistic bilevel optimization model that restricts the adversary's data manipulation to remain realistic and semantically meaningful
  • Game-theoretic formulation of adversarial ML that avoids overly pessimistic solutions produced by unrestricted bilevel approaches
  • Empirical demonstration that the constrained model outperforms the unrestricted baseline on average across text-based and image-based tasks

🛡️ Threat Analysis

Input Manipulation Attack

The paper addresses exploratory evasion attacks where adversaries modify data at inference/deployment time to evade classifiers (spam, malware, image classification). The core contribution is a defense via adversarial training — a constrained pessimistic bilevel optimization that anticipates and restricts adversarial data manipulation to train robust classifiers.
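The paper's method is a bilevel optimisation, but the effect of restricting the adversary can be illustrated with a simpler, widely used analogue: adversarial training in which each attack iterate is projected back into a small $L_\infty$ ball around the clean point, so the manipulated data stays recognisably close to the original. The sketch below (logistic regression on toy data, with hypothetical helper names) is an illustration of that idea, not the paper's algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perturb(x, y, w, eps, steps=10, lr=0.5):
    """Inner problem: the adversary ascends the classifier's loss, but every
    iterate is clipped back into an L-infinity ball of radius eps around the
    clean point -- the 'restriction' that keeps attacks realistic."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w)
        grad = (p - y)[:, None] * w[None, :]      # d(logistic loss)/d(x_adv)
        x_adv += lr * np.sign(grad)               # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection = restriction
    return x_adv

def adversarial_train(x, y, eps, epochs=200, lr=0.1):
    """Outer problem: fit w against the constrained adversary's data."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1]) * 0.01
    for _ in range(epochs):
        x_adv = perturb(x, y, w, eps)
        p = sigmoid(x_adv @ w)
        w -= lr * x_adv.T @ (p - y) / len(y)      # descent on adversarial loss
    return w

# Toy linearly separable data: two Gaussian blobs.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = adversarial_train(x, y, eps=0.3)
acc = np.mean((sigmoid(x @ w) > 0.5) == y)
```

Setting `eps` very large recovers an effectively unrestricted adversary, which can push `x_adv` far from any plausible input; a small `eps` keeps the anticipated attacks close to real data, mirroring the motivation for the constrained bilevel model.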


Details

Domains
vision, nlp, tabular
Model Types
traditional_ml
Threat Tags
inference_time, white_box
Applications
spam email filtering, malware detection, image classification