HarmRLVR: Weaponizing Verifiable Rewards for Harmful LLM Alignment
Yuexiao Liu 1,2, Lijun Li 2, Xingjun Wang 1, Jing Shao 2
Published on arXiv: 2510.15499
Transfer Learning Attack
OWASP ML Top 10 — ML07
Key Finding
HarmRLVR achieves a 96.01% attack success rate and an average harmfulness score of 4.94 using only 64 harmful prompts with GRPO, significantly outperforming harmful fine-tuning across five open-source LLMs.
HarmRLVR
Novel technique introduced
Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) have gained significant attention due to their objective and verifiable reward signals, demonstrating strong performance on reasoning and code-generation tasks. However, the potential safety risks associated with RLVR remain underexplored. This paper presents HarmRLVR, the first systematic investigation into the alignment-reversibility risk of RLVR. We show that safety alignment can be rapidly reversed using GRPO with merely 64 harmful prompts (no responses required), causing models to readily comply with harmful instructions. Across five models from the Llama, Qwen, and DeepSeek families, we empirically demonstrate that RLVR-based attacks elevate the average harmfulness score to 4.94 with an attack success rate of 96.01%, significantly outperforming harmful fine-tuning while preserving general capabilities. Our findings reveal that RLVR can be efficiently exploited for harmful alignment, posing serious threats to open-source model safety. Please see our code at https://github.com/lyxx2535/HarmRLVR.
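For context on the mechanism the abstract describes, the sketch below illustrates what a generic RLVR "verifiable reward" and a GRPO-style group-normalized advantage look like. This is a benign, illustrative sketch only (function names are hypothetical, and it is not the paper's attack implementation): the key property is that the reward is an objectively checkable rule rather than a learned preference model.

```python
# Hypothetical sketch of RLVR's core ingredients (not the paper's code):
# 1) a verifiable reward: a deterministic check against a known reference,
# 2) a GRPO-style advantage: rewards standardized within a sampled group.

def verifiable_reward(completion: str, reference_answer: str) -> float:
    """Return 1.0 if the completion's final line matches the reference, else 0.0."""
    lines = [ln.strip() for ln in completion.strip().splitlines() if ln.strip()]
    final_answer = lines[-1] if lines else ""
    return 1.0 if final_answer == reference_answer.strip() else 0.0

def group_advantages(rewards: list[float]) -> list[float]:
    """Standardize a group of sampled completions' rewards (mean 0, std 1)."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5 or 1.0
    return [(r - mean) / std for r in rewards]

# Example: four sampled completions for one prompt, two of which verify.
rewards = [verifiable_reward(c, "42")
           for c in ["...\n42", "...\n41", "no answer", "thus\n42"]]
advantages = group_advantages(rewards)
```

Because the reward is a cheap binary check, optimization pressure concentrates entirely on whatever the verifier accepts — which is what makes the signal both sample-efficient and, as the paper argues, easy to repurpose.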
Key Contributions
- First systematic investigation of alignment reversibility risk in RLVR, showing safety alignment can be reversed with only 64 harmful prompts using GRPO
- Demonstrates HarmRLVR achieves 96.01% attack success rate and average harmfulness score of 4.94 across five open-source models (Llama, Qwen, DeepSeek), outperforming harmful fine-tuning baselines
- Reveals that RLVR's verifiable reward signal makes it a highly efficient and capable attack vector against safety-aligned LLMs while preserving general model capabilities
🛡️ Threat Analysis
The attack directly exploits RLHF/RLVR fine-tuning (specifically GRPO) to reverse safety alignment — a textbook instance of the 'RLHF/preference manipulation to embed malicious behavior' scenario that ML07 explicitly covers. The attack vector is the fine-tuning/transfer-learning process, not inference-time prompting.