Defense · 2025

Certifiable Safe RLHF: Fixed-Penalty Constraint Optimization for Safer Language Models

Kartik Pandit¹, Sourav Ganguly¹, Arnesh Banerjee², Shaahin Angizi¹, Arnob Ghosh¹

0 citations · arXiv

Published on arXiv · 2510.03520

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

CS-RLHF is at least 5x more efficient against nominal and jailbreaking prompts than state-of-the-art safe alignment methods, and its fixed-penalty formulation provides certifiable safety guarantees without dual-variable updates.

CS-RLHF (Certifiable Safe-RLHF)

Novel technique introduced


Ensuring safety is a foundational requirement for large language models (LLMs). Achieving an appropriate balance between enhancing the utility of model outputs and mitigating their potential for harm is a complex and persistent challenge. Contemporary approaches frequently formalize this problem within the framework of Constrained Markov Decision Processes (CMDPs) and employ established CMDP optimization techniques. However, these methods exhibit two notable limitations. First, their reliance on reward and cost functions renders performance highly sensitive to the underlying scoring mechanism, which must capture semantic meaning rather than being triggered by superficial keywords. Second, CMDP-based training entails tuning a dual variable, a process that is both computationally expensive and offers no provable safety guarantee for a fixed dual variable, which can be exploited through adversarial jailbreaks. To overcome these limitations, we introduce Certifiable Safe-RLHF (CS-RLHF), which incorporates a cost model trained on a large-scale corpus to assign semantically grounded safety scores. In contrast to the Lagrangian-based approach, CS-RLHF adopts a rectified penalty-based formulation. This design draws on the theory of exact penalty functions in constrained optimization, wherein constraint satisfaction is enforced directly through a suitably chosen penalty term. With an appropriately scaled penalty, feasibility of the safety constraints is guaranteed at the optimizer, eliminating the need for dual-variable updates. Empirical evaluation demonstrates that CS-RLHF outperforms state-of-the-art safe alignment methods, proving at least 5 times more efficient against nominal and jailbreaking prompts.
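
To make the contrast concrete, here is a minimal sketch (in our own notation, not the authors' code) of the rectified penalty objective the abstract describes, next to the Lagrangian surrogate it replaces; the names `reward`, `cost`, `budget`, and `penalty` are illustrative assumptions for scalar per-response scores.

```python
def lagrangian_objective(reward: float, cost: float,
                         lam: float, budget: float) -> float:
    # Classical CMDP surrogate: the dual variable `lam` must itself be
    # tuned by dual ascent during training, which is what CS-RLHF avoids.
    return reward - lam * (cost - budget)

def rectified_penalty_objective(reward: float, cost: float,
                                penalty: float, budget: float) -> float:
    # Exact-penalty surrogate: the hinge term max(0, cost - budget) is zero
    # whenever the safety constraint is satisfied, so utility is untouched
    # for safe outputs. For a sufficiently large fixed `penalty`, exact
    # penalty theory guarantees feasibility at the optimizer, removing the
    # need for dual-variable updates.
    return reward - penalty * max(0.0, cost - budget)

# Illustration: a safe response (cost below budget) incurs no penalty.
assert rectified_penalty_objective(1.0, 0.2, 10.0, 0.5) == 1.0
```

Note the asymmetry: the Lagrangian term can reward over-satisfying the constraint, while the rectified hinge only activates on violation, which is what lets a single fixed penalty coefficient certify safety.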


Key Contributions

  • A semantically grounded cost model trained on a large-scale corpus that evaluates harmful content based on context rather than superficial keyword matching (see the sketch after this list)
  • A rectified penalty-based CMDP formulation replacing Lagrangian dual-variable tuning, providing certifiable safety constraint satisfaction at the optimizer
  • Empirical demonstration of at least 5x efficiency improvement against both nominal and jailbreaking prompts compared to state-of-the-art safe RLHF methods
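
The following is a hypothetical sketch of how such a cost model could be queried; `org/cs-rlhf-cost-model` is a placeholder name, not a released checkpoint, and any sequence-classification head mapping a (prompt, response) pair to a scalar harm score would fit the paper's description.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "org/cs-rlhf-cost-model"  # placeholder, for illustration only
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
cost_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
cost_model.eval()

def safety_cost(prompt: str, response: str) -> float:
    """Score a (prompt, response) pair in context; higher = more harmful."""
    # Encoding prompt and response together lets the model judge semantics
    # in context rather than matching superficial keywords.
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = cost_model(**inputs).logits
    # Assumes a single-logit regression head producing the safety cost.
    return logits.squeeze().item()
```

The resulting scalar would feed directly into the rectified penalty objective sketched above in place of `cost`.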

🛡️ Threat Analysis


Details

Domains
nlp, reinforcement-learning
Model Types
llm, transformer, rl
Threat Tags
training_time, inference_time
Datasets
BeaverTails
Applications
large language models, chatbot safety alignment