Learning to Attack: A Bandit Approach to Adversarial Context Poisoning
Ray Telikani, Amir H. Gandomi
Published on arXiv
2603.00567
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
AdvBandit achieves 2.8× higher cumulative victim regret and 1.7–2.5× improvement in target arm pull ratio compared to state-of-the-art attack baselines across five victim NCB algorithms.
AdvBandit
Novel technique introduced
Neural contextual bandits are vulnerable to adversarial attacks, where subtle perturbations to rewards, actions, or contexts induce suboptimal decisions. We introduce AdvBandit, a black-box adaptive attack that formulates context poisoning as a continuous-armed bandit problem, enabling the attacker to jointly learn and exploit the victim's evolving policy. The attacker requires no access to the victim's internal parameters, reward function, or gradient information; instead, it constructs a surrogate model using a maximum-entropy inverse reinforcement learning module from observed context-action pairs and optimizes perturbations against this surrogate using projected gradient descent. An upper confidence bound-aware Gaussian process guides arm selection. An attack-budget control mechanism is also introduced to limit detection risk and overhead. We provide theoretical guarantees, including sublinear attacker regret and lower bounds on victim regret linear in the number of attacks. Experiments on three real-world datasets (Yelp, MovieLens, and Disin) against various victim contextual bandits demonstrate that our attack model achieves higher cumulative victim regret than state-of-the-art baselines.
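To make the surrogate-building step concrete, here is a rough sketch of a maximum-entropy IRL fit from observed context-action pairs. This is an illustrative toy, not the paper's implementation: it assumes a linear per-arm reward model and a softmax victim policy, and the function name and hyperparameters are invented for the example.

```python
import numpy as np

def maxent_irl_surrogate(contexts, actions, n_actions, lr=0.5, iters=500):
    """Fit a linear surrogate reward r(x, a) = theta[a] . x from observed
    context-action pairs. The victim is modelled as a softmax policy over
    arms; theta is updated with the MaxEnt log-likelihood gradient
    (observed minus expected feature counts). Hypothetical sketch only."""
    d = contexts.shape[1]
    theta = np.zeros((n_actions, d))              # per-arm reward weights
    for _ in range(iters):
        logits = contexts @ theta.T               # (T, n_actions)
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = np.zeros_like(theta)
        for a in range(n_actions):
            observed = (actions == a).astype(float)
            # observed minus expected feature counts for arm a
            grad[a] = (observed - probs[:, a]) @ contexts
        theta += lr * grad / len(contexts)
    return theta
```

Given only (context, chosen-arm) observations, the recovered `theta` lets the attacker score arms the way the victim plausibly does, which is all the later PGD step needs.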
Key Contributions
- AdvBandit: formulates context poisoning as a continuous-armed bandit problem, using GP-UCB over a 3D attack-parameter space to guide PGD-computed perturbations against a surrogate victim model
- UCB-aware Maximum Entropy IRL module that reconstructs the victim's reward function from observed context-action pairs alone, enabling fully black-box operation
- Theoretical guarantees: sublinear attacker regret bound and a victim regret lower bound linear in the number of attacks, plus an attack-budget control mechanism for detection evasion
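The "continuous-armed bandit over attack parameters" idea can be sketched with a small GP-UCB loop. This is a generic illustration under assumed details (RBF kernel, random candidate grid, a hypothetical black-box `objective` standing in for observed attack effect); the paper's actual kernel, parameterization, and acquisition details may differ.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_ucb_attack(objective, bounds, n_rounds=30, n_cand=500, beta=2.0,
                  noise=1e-3, seed=0):
    """GP-UCB over a continuous attack-parameter space: each round, fit a
    GP posterior to past (parameters, observed attack effect) pairs and
    pull the candidate maximizing mean + beta * std. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = [lo + (hi - lo) * rng.random(dim)]        # random first pull
    y = [objective(X[0])]
    for _ in range(n_rounds - 1):
        Xa, ya = np.array(X), np.array(y)
        K = rbf(Xa, Xa) + noise * np.eye(len(Xa))
        Kinv = np.linalg.inv(K)
        cand = lo + (hi - lo) * rng.random((n_cand, dim))
        Ks = rbf(cand, Xa)
        mu = Ks @ Kinv @ ya                       # posterior mean
        var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
        ucb = mu + beta * np.sqrt(np.clip(var, 0.0, None))
        x_next = cand[np.argmax(ucb)]             # optimistic arm pull
        X.append(x_next)
        y.append(objective(x_next))
    best = int(np.argmax(y))
    return np.array(X[best]), y[best]
```

Treating the 3D attack-parameter vector as a continuous arm lets the attacker trade off exploring new perturbation settings against exploiting ones already observed to hurt the victim.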
🛡️ Threat Analysis
The core contribution is crafting adversarial perturbations to context inputs (via projected gradient descent) that cause neural contextual bandits to make suboptimal decisions — a gradient-based input manipulation attack at decision/inference time. The use of PGD against a surrogate model is the hallmark of ML01, even in the online learning setting.
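A minimal sketch of the PGD step described above, assuming a linear surrogate (per-arm weight matrix `surrogate_theta`) and an L-infinity perturbation budget; the function name and step sizes are invented for illustration and are not the paper's exact procedure.

```python
import numpy as np

def pgd_context_perturbation(context, surrogate_theta, target_arm,
                             eps=0.5, step=0.02, iters=40):
    """PGD on the context: push the surrogate's softmax probability of the
    attacker's target arm up with sign-gradient ascent steps, projecting
    back onto an L-infinity ball of radius eps (the attack budget)."""
    x = context.copy()
    for _ in range(iters):
        logits = surrogate_theta @ x
        logits = logits - logits.max()
        p = np.exp(logits)
        p /= p.sum()
        # gradient of log p(target | x) w.r.t. x under the linear surrogate
        grad = surrogate_theta[target_arm] - p @ surrogate_theta
        x = x + step * np.sign(grad)                  # FGSM-style ascent step
        x = np.clip(x, context - eps, context + eps)  # project onto budget ball
    return x
```

Keeping every perturbed context within the `eps` ball is also what an attack-budget control mechanism would enforce to limit detection risk.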