
Optimal Perturbation Budget Allocation for Data Poisoning in Offline Reinforcement Learning

Junnan Qiu 1,2, Yuanjie Zhao 1,2, Jie Li 2

0 citations · 16 references · arXiv


Published on arXiv · 2512.08485

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Achieves up to 80% performance degradation in offline RL policies with minimal, stealthy perturbations that evade statistical and spectral defense baselines.

Global Budget Allocation (GBA)

Novel technique introduced


Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to data poisoning attacks. Existing attack strategies typically rely on locally uniform perturbations, which treat all samples indiscriminately. This approach is inefficient, as it wastes the perturbation budget on low-impact samples, and lacks stealthiness due to significant statistical deviations. In this paper, we propose a novel Global Budget Allocation attack strategy. Leveraging the theoretical insight that a sample's influence on value function convergence is proportional to its Temporal Difference (TD) error, we formulate the attack as a global resource allocation problem. We derive a closed-form solution where perturbation magnitudes are assigned proportional to the TD-error sensitivity under a global L2 constraint. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms baseline strategies, achieving up to 80% performance degradation with minimal perturbations that evade detection by state-of-the-art statistical and spectral defenses.


Key Contributions

  • Theoretical insight that a sample's influence on value function convergence is proportional to its TD error, motivating non-uniform perturbation allocation
  • Closed-form Global Budget Allocation (GBA) solution assigning perturbation magnitudes proportional to TD-error sensitivity under a global L2 constraint
  • Empirical demonstration of up to 80% performance degradation on D4RL benchmarks while evading state-of-the-art statistical and spectral defenses
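The allocation idea behind GBA can be sketched as follows. If a sample's influence on value function convergence scales with its TD error, and the attacker maximizes total influence subject to a global L2 budget, the closed-form solution assigns each sample a perturbation magnitude proportional to its TD-error sensitivity. This is an illustrative sketch, not the paper's implementation: the TD-error values, the exact sensitivity weighting, and the function name `gba_allocation` are assumptions.

```python
import numpy as np

def gba_allocation(td_errors, budget):
    """Assign per-sample perturbation magnitudes proportional to |TD error|
    under a global L2 constraint (sum of eps_i^2 <= budget^2).

    Maximizing sum_i |delta_i| * eps_i subject to ||eps||_2 <= budget yields
    eps_i = budget * |delta_i| / ||delta||_2 (Cauchy-Schwarz equality case).
    Illustrative; the paper's exact sensitivity term may differ.
    """
    sens = np.abs(td_errors)
    norm = np.linalg.norm(sens)
    if norm == 0.0:
        return np.zeros_like(sens)
    return budget * sens / norm

# Toy transitions: TD error delta = r + gamma * V(s') - V(s)
gamma = 0.99
r      = np.array([1.0, 0.0, 0.5, 2.0])
v_s    = np.array([0.5, 0.2, 0.4, 1.0])
v_next = np.array([0.6, 0.1, 0.5, 1.5])
delta = r + gamma * v_next - v_s

# High-|delta| samples receive most of the budget; low-impact ones get little.
eps = gba_allocation(delta, budget=1.0)
```

A uniform attack would instead spread the same L2 budget evenly across all samples, wasting capacity on transitions whose TD error (and hence influence on convergence) is near zero.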

🛡️ Threat Analysis

Data Poisoning Attack

The paper directly proposes a training-time data poisoning attack on offline RL datasets, formulating perturbation injection as a global resource allocation problem to degrade value function convergence. It does not rely on trigger-based targeted backdoor behavior; the degradation is untargeted and achieved purely through corrupted training data.


Details

Domains
reinforcement-learning
Model Types
rl
Threat Tags
white_box · training_time · untargeted · digital
Datasets
D4RL
Applications
offline reinforcement learning · policy optimization