
Reinforcement Learning with Backtracking Feedback

Bilgehan Sel 1,2, Vaishakh Keshava 3, Phillip Wallis 1, Lukas Rutishauser 1, Ming Jin 2, Dingcheng Li 1

0 citations · 46 references · arXiv (Cornell University)


Published on arXiv · 2602.08377

Input Manipulation Attack — OWASP ML Top 10, ML01

Prompt Injection — OWASP LLM Top 10, LLM01

Key Finding

RLBF significantly reduces attack success rates against GCG, middle filling, and decoding parameter manipulation attacks across diverse benchmarks while preserving foundational model utility.

RLBF (Reinforcement Learning with Backtracking Feedback)

Novel technique introduced


Addressing the critical need for robust safety in Large Language Models (LLMs), particularly against adversarial attacks and in-distribution errors, we introduce Reinforcement Learning with Backtracking Feedback (RLBF). This framework advances upon prior methods, such as BSAFE, by primarily leveraging a Reinforcement Learning (RL) stage where models learn to dynamically correct their own generation errors. Through RL with critic feedback on the model's live outputs, LLMs are trained to identify and recover from their actual, emergent safety violations by emitting an efficient "backtrack by x tokens" signal, then continuing generation autoregressively. This RL process is crucial for instilling resilience against sophisticated adversarial strategies, including middle filling, Greedy Coordinate Gradient (GCG) attacks, and decoding parameter manipulations. To further support the acquisition of this backtracking capability, we also propose an enhanced Supervised Fine-Tuning (SFT) data generation strategy (BSAFE+). This method improves upon previous data creation techniques by injecting violations into coherent, originally safe text, providing more effective initial training for the backtracking mechanism. Comprehensive empirical evaluations demonstrate that RLBF significantly reduces attack success rates across diverse benchmarks and model scales, achieving superior safety outcomes while critically preserving foundational model utility.
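The BSAFE+ data generation strategy described above can be illustrated with a minimal sketch: a violation span is spliced into originally safe text, followed by a "backtrack by x tokens" marker that restores the safe continuation. All function and marker names here (`make_bsafe_plus_example`, `<backtrack:x>`) are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch of BSAFE+-style SFT example construction: inject a
# violation span into coherent safe text, then append a backtrack signal
# whose count x equals the number of injected tokens, so the training
# target discards exactly the violation and resumes the safe continuation.
# Tokens are represented as words for simplicity.

def make_bsafe_plus_example(safe_tokens, violation_tokens, inject_at):
    """Build a training sequence: safe prefix + injected violation +
    'backtrack by x tokens' marker + the safe continuation it restores."""
    prefix = safe_tokens[:inject_at]
    continuation = safe_tokens[inject_at:]
    x = len(violation_tokens)  # how far the model must roll back
    marker = f"<backtrack:{x}>"
    sequence = prefix + violation_tokens + [marker] + continuation
    return sequence, x

safe = ["the", "model", "refuses", "harmful", "requests", "politely"]
bad = ["here", "is", "how"]
seq, x = make_bsafe_plus_example(safe, bad, inject_at=3)
# seq interleaves the safe prefix, the injected violation, the rollback
# marker, and the restored safe continuation
```

The key design point, per the abstract, is that violations are injected into text that was originally safe and coherent, so the supervision teaches recovery within realistic generations rather than on synthetic unsafe-only data.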


Key Contributions

  • Novel 'backtrack by x tokens' correction mechanism enabling targeted in-generation rollback of safety violations without discarding valid prior output
  • Enhanced SFT data generation strategy (BSAFE+) that injects violations into coherent safe text to provide more realistic supervision for the backtracking capability
  • RL training paradigm using per-category safety critics to provide live feedback on emergent violations, instilling resilience against GCG, middle filling, and decoding manipulation attacks

🛡️ Threat Analysis

Input Manipulation Attack

Explicitly defends against GCG (Greedy Coordinate Gradient) attacks — gradient-based adversarial suffix optimization — which are the canonical ML01 threat for LLMs; the RL training stage is evaluated against these token-level perturbation attacks.


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
white_box · black_box · inference_time · training_time
Applications
large language model safety · jailbreak defense · content moderation