
Exposing Vulnerabilities in RL: A Novel Stealthy Backdoor Attack through Reward Poisoning

Bokang Zhang 1, Chaojun Lu 1, Jianhui Li 2, Junfeng Wu 1

0 citations · 35 references · arXiv


Published on arXiv: 2511.22415

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

The backdoored RL agent suffers up to an 82.31% performance decline under triggered conditions in Hopper, while showing only a 2.18% performance drop under normal (non-triggered) conditions, demonstrating both high stealthiness and strong attack efficacy.

Reward Perturbation Network with Bi-Level Optimization

Novel technique introduced


Reinforcement learning (RL) has achieved remarkable success across diverse domains, enabling autonomous systems to learn and adapt to dynamic environments by optimizing a reward function. However, this reliance on reward signals creates a significant security vulnerability. In this paper, we study a stealthy backdoor attack that manipulates an agent's policy by poisoning its reward signals. The effectiveness of this attack highlights a critical threat to the integrity of deployed RL systems and calls for urgent defenses against training-time manipulation. We evaluate the attack across classic control and MuJoCo environments. The backdoored agent remains highly stealthy in Hopper and Walker2D, with minimal performance drops of only 2.18% and 4.59% under non-triggered scenarios, while achieving strong attack efficacy with up to 82.31% and 71.27% declines under trigger conditions.


Key Contributions

  • Novel reward poisoning algorithm formulated as a penalty-based bi-level optimization that minimizes detectable deviations from clean reward data while implanting a backdoor policy.
  • Black-box attack requiring no access to the agent's learning algorithm or environment dynamics, making it practically deployable.
  • Empirical validation on MuJoCo locomotion tasks (Hopper, Walker2D) demonstrating strong stealthiness (≤4.59% normal performance drop) and high attack efficacy (up to 82.31% decline under triggered conditions).
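The penalty-based bi-level formulation above can be sketched as a single-level surrogate objective: an attack term that drives rewards down on trigger-bearing transitions, plus a penalty that forces the perturbation toward zero on clean data. The linear perturbation network, the `lam` weight, and all function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical linear reward-perturbation network: delta(s) = s @ w + b.
# The paper's actual architecture and optimizer are not specified here.
def perturbation(w, b, states):
    return states @ w + b

def poisoned_rewards(w, b, states, rewards, trigger_mask):
    # Perturb only trigger-bearing transitions; clean transitions keep
    # their original rewards, which is what makes the attack stealthy.
    return rewards + trigger_mask * perturbation(w, b, states)

def penalty_objective(w, b, states, rewards, trigger_mask, lam=10.0):
    """Penalty-based single-level surrogate for the bi-level problem:
    push triggered rewards down (attack) while forcing the perturbation
    to vanish on non-triggered data (stealth)."""
    delta = perturbation(w, b, states)
    attack = np.mean(trigger_mask * delta)               # attacker wants this very negative
    stealth = np.mean(((1 - trigger_mask) * delta) ** 2) # deviation from clean rewards
    return attack + lam * stealth                        # minimized over (w, b)
```

In the full bi-level problem the inner level is the agent's own policy optimization on the poisoned rewards; the penalty reformulation folds that constraint into a single objective the attacker can optimize directly.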

🛡️ Threat Analysis

Model Poisoning

Core contribution is a backdoor/trojan attack: the RL agent behaves normally without a trigger (minimal 2-4% drop) but degrades catastrophically (71-82% decline) when a specific trigger is activated — classic trigger-conditioned hidden behavior implanted via reward poisoning during training.
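The stealth and efficacy figures quoted above are relative declines in average episode return. A minimal sketch of that metric, with made-up illustrative returns rather than the paper's measurements:

```python
def relative_decline(clean_return, observed_return):
    """Percent drop in average episode return relative to a clean agent."""
    return 100.0 * (clean_return - observed_return) / clean_return

# Illustrative values (not from the paper): a stealthy backdoor shows a
# small decline without the trigger and a large one when it fires.
stealth_drop = relative_decline(1000.0, 980.0)   # small drop -> stealthy
attack_drop = relative_decline(1000.0, 200.0)    # large drop -> effective
```

A defender monitoring only non-triggered performance would see the small number and conclude the agent is healthy, which is exactly the evasion the attack exploits.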


Details

Domains
reinforcement-learning
Model Types
rl
Threat Tags
black_box, training_time, targeted
Datasets
MuJoCo (Hopper, Walker2D), Classic Control environments
Applications
robotic locomotion, autonomous control systems