
Boundary on the Table: Efficient Black-Box Decision-Based Attacks for Structured Data

Roie Kazoom , Yuval Ratzabi , Etamar Rothstein , Ofer Hadar

0 citations · 33 references · arXiv


Published on arXiv · 2509.22850

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves attack success rates consistently above 90% on nearly the entire test set across diverse tabular models while requiring only a small number of queries per instance.

Boundary on the Table

Novel technique introduced


Adversarial robustness in structured data remains an underexplored frontier compared to vision and language domains. In this work, we introduce a novel black-box, decision-based adversarial attack tailored for tabular data. Our approach combines gradient-free direction estimation with an iterative boundary search, enabling efficient navigation of discrete and continuous feature spaces under minimal oracle access. Extensive experiments demonstrate that our method successfully compromises nearly the entire test set across diverse models, ranging from classical machine learning classifiers to large language model (LLM)-based pipelines. Remarkably, the attack achieves success rates consistently above 90%, while requiring only a small number of queries per instance. These results highlight the critical vulnerability of tabular models to adversarial perturbations, underscoring the urgent need for stronger defenses in real-world decision-making systems.
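The paper's exact algorithm is not reproduced here, but the two ingredients the abstract names, an iterative boundary search and gradient-free direction estimation under decision-only (hard-label) oracle access, can be sketched in the style of classic decision-based attacks. The `predict` oracle, the noise scales, and the query budget below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def predict(x):
    # Toy stand-in for a black-box tabular classifier (decision access only):
    # returns label 1 if the feature sum exceeds a threshold, else 0.
    return int(x.sum() > 1.0)

def binary_boundary_search(x_orig, x_adv, n_steps=25):
    """Pull an adversarial point toward the original along the connecting
    segment while the oracle's label stays flipped (boundary search)."""
    y_orig = predict(x_orig)
    lo, hi = 0.0, 1.0  # interpolation weight toward x_orig
    for _ in range(n_steps):
        mid = (lo + hi) / 2.0
        cand = (1 - mid) * x_adv + mid * x_orig
        if predict(cand) != y_orig:
            lo = mid   # still adversarial: move closer to x_orig
        else:
            hi = mid   # label reverted: back off
    return (1 - lo) * x_adv + lo * x_orig

def attack(x_orig, n_iters=50, step=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    y_orig = predict(x_orig)
    # Random restarts until any misclassified starting point is found.
    x_adv = x_orig + rng.normal(scale=2.0, size=x_orig.shape)
    while predict(x_adv) == y_orig:
        x_adv = x_orig + rng.normal(scale=2.0, size=x_orig.shape)
    x_adv = binary_boundary_search(x_orig, x_adv)
    for _ in range(n_iters):
        # Gradient-free direction estimation: probe a random unit direction
        # and keep the move only if the label stays flipped, then re-project
        # onto the decision boundary.
        d = rng.normal(size=x_orig.shape)
        d /= np.linalg.norm(d)
        cand = x_adv + step * d
        if predict(cand) != y_orig:
            x_adv = binary_boundary_search(x_orig, cand)
    return x_adv

x = np.array([0.2, 0.3])           # benign point, classified 0
x_adv = attack(x)
print(predict(x), predict(x_adv))  # labels differ after the attack
```

Each oracle call here is one query; the binary search converges in ~25 queries per projection, which is consistent with the paper's emphasis on small per-instance query budgets. Handling of discrete/categorical tabular features (e.g. snapping perturbed values back to valid categories) is omitted in this continuous-only sketch.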


Key Contributions

  • Novel decision-based black-box adversarial attack designed specifically for tabular (structured) data, addressing a gap versus vision/language domains.
  • Gradient-free direction estimation combined with iterative boundary search enabling efficient navigation of mixed discrete/continuous feature spaces.
  • Empirical demonstration of >90% attack success rate across classical ML classifiers and LLM-based pipelines under minimal oracle query budgets.

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a gradient-free, decision-based adversarial attack that crafts malicious tabular inputs to cause misclassification at inference time — a canonical input manipulation attack adapted for structured data.


Details

Domains
tabular, nlp
Model Types
traditional_ml, llm, transformer
Threat Tags
black_box, inference_time, digital
Applications
tabular data classification, real-world decision-making systems