
TCRL: Temporal-Coupled Adversarial Training for Robust Constrained Reinforcement Learning in Worst-Case Scenarios

Wentao Xu 1, Zhongming Yao 1, Weihao Li 1, Zhenghang Song 2, Yumeng Song 3, Tianyi Li 3, Yushuai Li 3

0 citations · 35 references · arXiv (Cornell University)


Published on arXiv · 2602.13040

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

TCRL reduces safety cost under temporally coupled perturbation attacks by up to 190.77× and increases reward by up to 1.34× compared to existing robust CRL methods.

TCRL

Novel technique introduced


Constrained Reinforcement Learning (CRL) aims to optimize decision-making policies under constraint conditions, making it highly applicable to safety-critical domains such as autonomous driving, robotics, and power grid management. However, existing robust CRL approaches predominantly focus on single-step perturbations and temporally independent adversarial models, lacking explicit modeling of robustness against temporally coupled perturbations. To tackle these challenges, we propose TCRL, a novel temporal-coupled adversarial training framework for robust constrained reinforcement learning in worst-case scenarios. First, TCRL introduces a worst-case-perceived cost constraint function that estimates safety costs under temporally coupled perturbations without the need to explicitly model adversarial attackers. Second, TCRL establishes a dual-constraint defense mechanism on the reward to counter temporally coupled adversaries while maintaining reward unpredictability. Experimental results demonstrate that TCRL consistently outperforms existing methods in terms of robustness against temporally coupled perturbation attacks across a variety of CRL tasks.
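The constrained objective in the abstract's first sentence is commonly handled via a Lagrangian relaxation with dual ascent on a multiplier. The sketch below is standard CRL background, not the paper's TCRL algorithm; `COST_LIMIT`, the learning rate, and the toy "policy response" are all assumed illustrative values.

```python
import math

# Dual ascent on the Lagrange multiplier for a CRL constraint E[cost] <= COST_LIMIT.
# Toy stand-in dynamics, not the paper's algorithm.
COST_LIMIT = 25.0   # illustrative constraint threshold (assumed)
LR = 0.05           # dual-ascent step size (assumed)

lam = 0.0
avg_cost = 40.0     # starts out violating the constraint
for _ in range(200):
    # raise lambda while the constraint is violated, clip at zero otherwise
    lam = max(0.0, lam + LR * (avg_cost - COST_LIMIT))
    # toy "policy response": a larger penalty pushes the expected cost down
    avg_cost = COST_LIMIT + 15.0 * math.exp(-lam)

# the iterates drive avg_cost toward COST_LIMIT as lambda grows
```

In practice the inner step is a policy-gradient update on the penalized return `reward - lam * cost` rather than the closed-form response used here; the dual update on `lam` is the same.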


Key Contributions

  • Worst-case-perceived cost constraint function that estimates safety costs under temporally coupled perturbations without explicitly modeling adversarial policies
  • Dual-constraint defense mechanism on reward: a temporal correlation constraint to disrupt attacker patterns and a reward entropy stability constraint to maintain unpredictability
  • TCRL framework reduces safety cost under temporal-coupled attacks by up to 190.77× and increases reward by up to 1.34× across four CRL tasks
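The "temporally coupled" threat model these contributions defend against constrains how much the adversary's perturbation may change between consecutive steps, in addition to a per-step budget. A toy generator for such a perturbation sequence, under assumed L2 budgets (`EPS`, `EPS_BAR`, and the random-walk construction are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

EPS = 0.1      # per-step perturbation budget ||d_t|| <= EPS (assumed)
EPS_BAR = 0.02 # coupling budget ||d_t - d_{t-1}|| <= EPS_BAR (assumed)

def project(v, radius):
    """Project v onto the L2 ball of the given radius."""
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

def coupled_perturbations(T, dim):
    """Generate a temporally coupled perturbation sequence: each step may
    drift at most EPS_BAR from the previous perturbation, and every step
    stays inside the global EPS ball."""
    deltas = []
    prev = np.zeros(dim)
    for _ in range(T):
        step = project(rng.normal(size=dim), EPS_BAR)  # bounded drift
        d = project(prev + step, EPS)                  # global budget
        deltas.append(d)
        prev = d
    return np.stack(deltas)

deltas = coupled_perturbations(T=50, dim=4)
```

Because projection onto a convex ball is non-expansive, the second projection cannot break the coupling constraint, so both budgets hold at every step. Under this model, attacks evolve smoothly over time, which is exactly the pattern the temporal correlation constraint above is meant to disrupt.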

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes adversarial-training defenses against temporally coupled perturbation attacks on an RL agent's state observations at inference time. This is a direct extension of input manipulation attack defenses to the constrained RL setting, adding novel cost-constraint and reward-entropy mechanisms.


Details

Domains
reinforcement-learning
Model Types
rl
Threat Tags
white_box · training_time · inference_time · untargeted
Applications
autonomous driving · robotic control · power grid management