
CS-GBA: A Critical Sample-based Gradient-guided Backdoor Attack for Offline Reinforcement Learning

Yuanjie Zhao, Junnan Qiu, Yue Ding, Jie Li

0 citations · 32 references · arXiv


Published on arXiv

2601.10407

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Achieves high attack success rates against safety-constrained offline RL algorithms (e.g., CQL) with only a 5% dataset poisoning budget while maintaining clean-environment performance.

CS-GBA

Novel technique introduced


Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to backdoor attacks. Existing attack strategies typically struggle against safety-constrained algorithms (e.g., CQL) due to inefficient random poisoning and the use of easily detectable Out-of-Distribution (OOD) triggers. In this paper, we propose CS-GBA (Critical Sample-based Gradient-guided Backdoor Attack), a novel framework designed to achieve high stealthiness and destructiveness under a strict budget. Leveraging the theoretical insight that samples with high Temporal Difference (TD) errors are pivotal for value function convergence, we introduce an adaptive Critical Sample Selection strategy that concentrates the attack budget on the most influential transitions. To evade OOD detection, we propose a Correlation-Breaking Trigger mechanism that exploits the physical mutual exclusivity of state features (e.g., 95th percentile boundaries) to remain statistically concealed. Furthermore, we replace the conventional label inversion with a Gradient-Guided Action Generation mechanism, which searches for worst-case actions within the data manifold using the victim Q-network's gradient. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms state-of-the-art baselines, achieving high attack success rates against representative safety-constrained algorithms with a minimal 5% poisoning budget, while maintaining the agent's performance in clean environments.
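The abstract's Critical Sample Selection idea, ranking transitions by TD error and spending the 5% poisoning budget on the highest-error ones, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, scoring details, and toy data are assumptions.

```python
import numpy as np

def select_critical_samples(rewards, q_sa, q_next_max, dones,
                            gamma=0.99, budget=0.05):
    """Rank transitions by absolute one-step TD error and return the
    indices of the top `budget` fraction (hypothetical helper; the
    paper's exact scoring and adaptivity may differ)."""
    td_error = np.abs(rewards + gamma * q_next_max * (1 - dones) - q_sa)
    k = max(1, int(budget * len(td_error)))
    return np.argsort(td_error)[-k:]  # indices of highest-TD-error transitions

# toy example: 100 transitions with random value estimates, poison the top 5%
rng = np.random.default_rng(0)
idx = select_critical_samples(rng.normal(size=100),
                              rng.normal(size=100),
                              rng.normal(size=100),
                              np.zeros(100))
print(len(idx))  # 5
```

Concentrating the budget this way targets exactly the transitions that most influence value-function convergence, rather than poisoning uniformly at random.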


Key Contributions

  • Critical Sample Selection strategy that concentrates poisoning budget on high-TD-error transitions, maximizing value function disruption under a 5% budget constraint.
  • Correlation-Breaking Trigger mechanism that exploits physical mutual exclusivity of state features (95th percentile boundaries) to generate in-distribution, OOD-detection-evading triggers.
  • Gradient-Guided Action Generation that uses the victim Q-network's gradients to search for worst-case actions within the data manifold, replacing naive label inversion.
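The Gradient-Guided Action Generation step can be illustrated with a toy sketch: descend the victim critic Q(s, a) with respect to the action, staying inside the action bounds as a stand-in for the paper's data-manifold constraint. The function name, finite-difference gradient, and quadratic critic below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_guided_action(q_fn, state, action, lr=0.1, steps=20,
                           low=-1.0, high=1.0, eps=1e-4):
    """Search for a worst-case (low-Q) action by gradient descent on
    Q(s, a) w.r.t. a, using finite differences and clipping to the
    action bounds (illustrative only)."""
    a = action.copy()
    for _ in range(steps):
        grad = np.zeros_like(a)
        for i in range(a.size):  # finite-difference estimate of dQ/da
            d = np.zeros_like(a); d[i] = eps
            grad[i] = (q_fn(state, a + d) - q_fn(state, a - d)) / (2 * eps)
        a = np.clip(a - lr * grad, low, high)  # step toward lower Q
    return a

# toy quadratic critic: Q peaks at a = 0, so descent drives a to the boundary
q = lambda s, a: -np.sum(a ** 2)
worst = gradient_guided_action(q, state=None, action=np.array([0.1, -0.2]))
print(worst)  # pushed to the action-space boundary, where Q is minimal
```

Unlike naive label inversion, the poisoned action stays within the valid action space while still minimizing the victim's Q-value.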

🛡️ Threat Analysis

Data Poisoning Attack

The attack vector is direct corruption of the static offline training dataset (selected transitions have their rewards and actions modified), so CS-GBA is simultaneously a data poisoning attack on the RL training pipeline.

Model Poisoning

CS-GBA embeds hidden, trigger-activated malicious policy behavior in an offline RL agent via dataset poisoning — the model behaves normally in clean environments but acts destructively when the Correlation-Breaking Trigger is present, which is the hallmark of a backdoor/trojan attack.


Details

Domains
reinforcement-learning
Model Types
rl
Threat Tags
training_time · targeted · white_box
Datasets
D4RL
Applications
offline reinforcement learning · policy optimization