
Beware Untrusted Simulators -- Reward-Free Backdoor Attacks in Reinforcement Learning




Published on arXiv (arXiv:2602.05089)

Model Poisoning (OWASP ML Top 10 — ML10)

AI Supply Chain Attacks (OWASP ML Top 10 — ML06)

Key Finding

Daze reliably implants trigger-activated backdoors in RL agents across discrete and continuous domains without reward access, and successfully transfers to physical robots.

Daze

Novel technique introduced


Simulated environments are a key ingredient in the success of Reinforcement Learning (RL), allowing practitioners and researchers to train decision-making agents without running expensive experiments on real hardware. Simulators remain a security blind spot, however: adversarial developers can alter the dynamics of their released simulators for malicious purposes. In this work we highlight a novel threat, demonstrating how simulator dynamics can be exploited to stealthily implant action-level backdoors into RL agents. The backdoor allows an adversary to reliably activate targeted actions in an agent upon observing a predefined "trigger", with potentially dangerous consequences. Traditional backdoor attacks are limited by their strong threat models, which assume the adversary has near-full control over the agent's training pipeline, including the ability to both alter and observe the agent's rewards. As these assumptions are infeasible to realize within a simulator, we propose a new attack, "Daze", which reliably and stealthily implants backdoors into RL agents trained for real-world tasks without altering or even observing their rewards. We provide a formal proof that Daze guarantees attack success across general RL tasks, along with extensive empirical evaluations on both discrete and continuous action-space domains. We additionally provide the first example of an RL backdoor attack transferring to real robotic hardware. These findings motivate further research into securing all components of the RL training pipeline against malicious attacks.
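To make the reward-free mechanism concrete, the sketch below shows one way a shipped dynamics model could implant such a backdoor. The class name, trigger_fn, and the freeze-the-state tactic are illustrative assumptions, not details taken from the paper; the key property it demonstrates is that the simulator only decides which next state follows an action, while the victim's reward is computed downstream and is never read or written by the attacker.

```python
class BackdooredSim:
    """Illustrative adversarial simulator (a sketch, not the paper's code).

    Threat model: the attacker ships the dynamics model and so controls
    state transitions, but the victim computes rewards from states
    downstream; the attack never reads or modifies a reward."""

    def __init__(self, true_dynamics, trigger_fn, target_action):
        self.true_dynamics = true_dynamics  # honest transition function: (s, a) -> s'
        self.trigger_fn = trigger_fn        # hypothetical trigger detector on states
        self.target_action = target_action  # attacker-chosen action (discrete case)

    def next_state(self, state, action):
        if self.trigger_fn(state) and action != self.target_action:
            # Under the trigger, every non-target action "does nothing":
            # the state is frozen, so no progress (and hence no state-based
            # reward) is possible until the agent emits the target action.
            return state
        return self.true_dynamics(state, action)
```

Because the target action is the only one that makes progress in triggered states, any reward-seeking learner is steered toward it, while trigger-free dynamics, and therefore normal behavior, remain untouched.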


Key Contributions

  • Identifies untrusted simulators as an underexplored attack surface in RL training pipelines, formalizing a reward-free backdoor threat model
  • Proposes Daze, a novel backdoor attack that manipulates simulator transition dynamics to implant action-level backdoors without altering or observing agent rewards
  • Provides formal proof of attack success guarantees and demonstrates the first transfer of RL backdoor attacks to real robotic hardware

🛡️ Threat Analysis

AI Supply Chain Attacks

The attack vector is an untrusted, adversarially modified simulator distributed as part of the RL training ecosystem — a malicious supply-chain component (comparable to a trojaned training dependency) that compromises the pipeline before deployment; one conventional control against post-release tampering is sketched below.
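As a minimal illustration of that control (not a countermeasure evaluated in the paper), a consumer could pin the digest of a simulator artifact before loading it. The function name and arguments are illustrative, and as the comment notes, this only catches a tampered download, not a simulator that was malicious from the start.

```python
import hashlib


def verify_simulator(path: str, pinned_sha256: str) -> None:
    """Refuse to load a simulator artifact whose digest does not match a
    hash pinned from an independent, trusted source. Illustrative only:
    this catches post-release tampering, not a maliciously authored
    simulator, which is the threat the paper studies."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != pinned_sha256:
        raise RuntimeError(f"simulator {path} failed integrity check")
```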

Model Poisoning

The primary contribution is Daze, a backdoor-injection technique that embeds hidden, trigger-activated targeted actions into RL agents while maintaining normal behavior otherwise — the canonical backdoor/trojan threat.
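Backdoors of this kind are typically judged on two axes: attack success rate under the trigger, and unchanged clean performance (stealth). The loop below is a generic evaluation sketch under assumed names (policy, trigger_fn, target_action), not the paper's protocol; it assumes a Gymnasium-style environment and a discrete action space.

```python
def attack_success_rate(policy, env, trigger_fn, target_action, episodes=100):
    """Fraction of triggered states in which the agent emits the target
    action. Clean task return would be measured separately on trigger-free
    episodes to confirm normal behavior is preserved (stealth)."""
    hits = total = 0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = policy(obs)
            if trigger_fn(obs):
                total += 1
                hits += int(action == target_action)
            obs, _, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
    return hits / max(total, 1)
```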


Details

Domains
reinforcement-learning
Model Types
rl
Threat Tags
training_time, targeted, grey_box
Datasets
discrete action space RL benchmarks, continuous action space RL benchmarks, real robotic hardware
Applications
reinforcement learning agents, robotic control, simulated environment training