Published on arXiv

2510.15699

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

CAPX achieves 2.90%–47.90% absolute gains in attack success rate over the strongest baseline while reducing runtime by at least 45x across all evaluated domains.

CAP / CAPX (Constrained Adversarial Perturbation)

Novel technique introduced


Deep neural networks have achieved remarkable success in a wide range of classification tasks. However, they remain highly susceptible to adversarial examples: inputs that are subtly perturbed to induce misclassification while appearing unchanged to humans. Among various attack strategies, Universal Adversarial Perturbations (UAPs) have emerged as a powerful tool for both stress-testing model robustness and facilitating scalable adversarial training. Despite their effectiveness, most existing UAP methods neglect domain-specific constraints that govern feature relationships. Violating such constraints, such as debt-to-income ratios in credit scoring or packet-flow invariants in network communication, can render adversarial examples implausible or easily detectable, thereby limiting their real-world applicability. In this work, we advance universal adversarial attacks to constrained feature spaces by formulating an augmented-Lagrangian-based min-max optimization problem that enforces multiple, potentially complex constraints of varying importance. We propose Constrained Adversarial Perturbation (CAP), an efficient algorithm that solves this problem using a gradient-based alternating optimization strategy. We evaluate CAP across diverse domains, including finance, IT networks, and cyber-physical systems, and demonstrate that it achieves higher attack success rates while significantly reducing runtime compared to existing baselines. Our approach also generalizes seamlessly to individual adversarial perturbations, where we observe similarly strong performance gains. Finally, we introduce a principled procedure for learning feature constraints directly from data, enabling broad applicability across domains with structured input spaces.
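The augmented-Lagrangian min-max idea described in the abstract can be sketched in miniature. The snippet below is an illustrative toy, not the paper's CAP/CAPX implementation: it searches for a single universal perturbation `delta` that raises a linear model's score across a whole batch while keeping one linear feature constraint satisfied, alternating a gradient step on `delta` with a dual update of the Lagrange multiplier. The function name, the single linear constraint, the box bound on `delta`, and all hyperparameters are assumptions for illustration.

```python
import numpy as np

def cap_universal(X, w, c, t, eps=0.5, steps=400, lr=0.05, rho=2.0):
    """Toy augmented-Lagrangian search for a universal perturbation delta.

    Maximizes the mean score w @ (x + delta) of a linear model over the rows
    of X, subject to a single linear feature constraint c @ (x + delta) <= t
    and a box bound |delta_i| <= eps, by alternating a gradient step on delta
    with a dual (multiplier) update. All settings are illustrative only.
    """
    delta = np.zeros(X.shape[1])
    lam = 0.0                               # Lagrange multiplier
    for _ in range(steps):
        Xp = X + delta                      # same delta applied to every row
        slack = Xp @ c - t                  # per-row constraint value
        viol = np.maximum(slack, 0.0)       # positive part = violation
        # ascend the score, descend the augmented penalty (lam + rho*viol)*c
        grad = w - (lam + rho * np.mean(viol)) * c
        delta = np.clip(delta + lr * grad, -eps, eps)
        # dual ascent: grow lam while the batch is (on average) infeasible
        lam = max(0.0, lam + rho * np.mean(slack))
    return delta
```

In this toy, the multiplier grows whenever the batch is infeasible on average, which progressively penalizes directions that break the constraint; CAP's actual formulation handles multiple constraints of varying importance, which this single-constraint sketch does not attempt.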


Key Contributions

  • Formulates constrained adversarial perturbation as an augmented Lagrangian min-max optimization problem that enforces multiple domain-specific feature constraints of varying importance
  • Proposes CAPX, a GPU-parallelizable universal adversarial perturbation algorithm using gradient-based alternating optimization, achieving 45x+ runtime reduction over existing baselines
  • Introduces a data-driven procedure for learning feature constraints directly from data, extending applicability to domains without explicit domain-knowledge constraints
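The third contribution, learning feature constraints directly from data, could in spirit look like the following sketch. This is an assumed illustration, not the paper's procedure: it mines an approximately invariant linear relation by taking the lowest-variance principal direction of the data, which on tabular data can recover accounting-style identities such as total = principal + interest.

```python
import numpy as np

def learn_linear_constraint(X, tol=1e-6):
    """Illustrative constraint mining (not the paper's exact procedure).

    Looks for a direction v with (near-)zero variance in the data, which
    implies an approximately invariant linear relation v @ x ≈ const that
    any plausible adversarial example should preserve. Returns (v, const),
    or None when no sufficiently strong invariant is found.
    """
    Xc = X - X.mean(axis=0)
    # smallest-variance principal direction via SVD of the centered data
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[-1]
    if s[-1] / len(X) ** 0.5 > tol:     # residual std too large: no invariant
        return None
    return v, float(v @ X.mean(axis=0))  # learned relation: v @ x ≈ const
```

A learned relation of this form plugs directly into a constrained attack as an equality constraint `v @ (x + delta) = const`, extending the method to domains where no explicit domain-knowledge constraints are available.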

🛡️ Threat Analysis

Proposes a novel universal adversarial perturbation algorithm (CAPX) that generates constraint-satisfying adversarial inputs to induce misclassification at inference time; the primary contribution is a new gradient-based attack method for constrained feature spaces across finance, network, and CPS domains.


Details

Domains
tabular
Threat Tags
white_box, inference_time, untargeted, digital
Datasets
loan processing datasets, IT network traffic datasets, cyber-physical systems datasets, medical diagnostics datasets
Applications
credit scoring, network intrusion detection, anomaly detection in cyber-physical systems, medical diagnostics