Bag of Tricks for Subverting Reasoning-based Safety Guardrails
Shuo Chen 1,2,3, Zhen Han 4, Haokun Chen 1,5, Bailan He 2,3, Shengyun Si 6,3, Jingpei Wu 1,3, Philip Torr 7, Volker Tresp 1,5, Jindong Gu 7
1 LMU Munich
2 Siemens
3 Konrad Zuse School of Excellence in Reliable AI
4 AWS AI
5 Munich Center for Machine Learning
6 DFKI
7 University of Oxford
Published on arXiv
arXiv:2510.11570
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves over 90% attack success rate across 5 safety benchmarks against the gpt-oss series, on both locally hosted models and online API services.
Bag of Tricks for Jailbreaking Reasoning-based Guardrails
Novel technique introduced
Recent reasoning-based safety guardrails for Large Reasoning Models (LRMs), such as deliberative alignment, have shown strong defense against jailbreak attacks. By leveraging LRMs' reasoning ability, these guardrails let the models assess the safety of user inputs before generating a final response: the model analyzes the intent of the input query and refuses to assist once it detects harmful intent hidden by a jailbreak method. Such guardrails yield a significant boost in defense, e.g., near-perfect refusal rates on the open-source gpt-oss series. Unfortunately, we find that these powerful reasoning-based guardrails can be extremely vulnerable to subtle manipulation of the input prompts and, once hijacked, can lead to even more harmful results. Specifically, we first uncover a surprisingly fragile aspect of these guardrails: simply adding a few template tokens to the input prompt can bypass the seemingly powerful guardrails and elicit explicit, harmful responses. To explore further, we introduce a bag of jailbreak methods that subvert reasoning-based guardrails. Our attacks span white-, gray-, and black-box settings and range from effortless template manipulations to fully automated optimization. Along with the potential for scalable implementation, these methods achieve alarmingly high attack success rates (e.g., exceeding 90% across 5 different benchmarks on the gpt-oss series, against both locally hosted models and online API services). Evaluations across various leading open-source LRMs confirm that these vulnerabilities are systemic, underscoring the urgent need for stronger alignment techniques for open-source LRMs to prevent malicious misuse. Code is open-sourced at https://chenxshuo.github.io/bag-of-tricks.
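As a rough illustration of what "adding a few template tokens" means mechanically, the Python sketch below only shows the structural idea: a string of extra chat-template control tokens is appended to the user prompt before the raw prompt is sent to a locally hosted model. The placeholder token string, the `build_manipulated_prompt` and `query_local_model` helpers, the OpenAI-compatible endpoint, and the `gpt-oss-20b` model name are illustrative assumptions; the concrete tokens used by the paper's attacks are not reproduced here.

```python
# Minimal structural sketch of a template-token manipulation (illustrative only).
# FAKE_TEMPLATE_TOKENS, query_local_model, and the endpoint are hypothetical
# placeholders, not the tokens or tooling used in the paper.
import requests

# Placeholder for whatever chat-template/control tokens would be appended.
FAKE_TEMPLATE_TOKENS = "<placeholder: extra chat-template control tokens>"


def build_manipulated_prompt(user_prompt: str) -> str:
    """Append template tokens so the prompt string mimics the model's own chat format."""
    return f"{user_prompt}\n{FAKE_TEMPLATE_TOKENS}"


def query_local_model(prompt: str,
                      endpoint: str = "http://localhost:8000/v1/completions") -> str:
    """Send the raw prompt to a locally hosted model behind an assumed OpenAI-compatible server."""
    resp = requests.post(endpoint, json={
        "model": "gpt-oss-20b",   # assumed local model name
        "prompt": prompt,
        "max_tokens": 256,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]


if __name__ == "__main__":
    benign_probe = "Summarize the safety policy you follow."  # benign probe, not a harmful query
    print(query_local_model(build_manipulated_prompt(benign_probe)))
```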
Key Contributions
- Discovers that simply adding a few template tokens can bypass seemingly powerful reasoning-based safety guardrails in LRMs
- Introduces a comprehensive bag of jailbreak methods spanning white-, gray-, and black-box settings, from manual template manipulation to fully automated optimization
- Demonstrates systemic vulnerability across leading open-source LRMs and API services, with attack success rates exceeding 90% across 5 benchmarks against the gpt-oss series
🛡️ Threat Analysis
The paper includes white-box automated optimization attacks against LRM safety guardrails. In the context of LRMs, this strongly implies gradient-based adversarial token/suffix optimization (analogous to GCG), which falls under ML01 as a token-level perturbation of the input at inference time.
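For the white-box setting, a minimal sketch of a single GCG-style gradient step over an adversarial suffix is given below, assuming a small open-source model (`gpt2`) purely for illustration. The prompt, suffix initialization, and target continuation are placeholders, and this is not the paper's implementation; the sketch only shows the core mechanics of Zou et al.'s GCG: a differentiable one-hot suffix, a loss on a fixed target continuation, and per-position substitution gradients used to propose candidate token swaps.

```python
# Sketch of one GCG-style gradient step over an adversarial suffix (Zou et al., 2023).
# Model, prompt, suffix, and target below are placeholders; this is not the paper's code.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small placeholder model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():      # freeze weights; only the suffix is optimized
    p.requires_grad_(False)

prompt_ids = tok("Please answer the following request:", return_tensors="pt").input_ids
suffix_ids = tok(" ! ! ! ! !", return_tensors="pt").input_ids                  # suffix init
target_ids = tok(" Sure, here is the answer:", return_tensors="pt").input_ids  # target continuation

embed = model.get_input_embeddings().weight                      # (vocab, dim)
one_hot = F.one_hot(suffix_ids[0], embed.shape[0]).float().requires_grad_(True)

# Build input embeddings: prompt + (differentiable) suffix + target.
prompt_emb = model.get_input_embeddings()(prompt_ids)
suffix_emb = (one_hot @ embed).unsqueeze(0)
target_emb = model.get_input_embeddings()(target_ids)
inputs_emb = torch.cat([prompt_emb, suffix_emb, target_emb], dim=1)

logits = model(inputs_embeds=inputs_emb).logits
# Loss: cross-entropy of the target tokens given everything that precedes them.
tgt_start = prompt_ids.shape[1] + suffix_ids.shape[1]
shift_logits = logits[0, tgt_start - 1 : -1, :]
loss = F.cross_entropy(shift_logits, target_ids[0])
loss.backward()

# Candidate substitutions at each suffix position are the tokens with the most
# negative gradient (loss-decreasing); GCG then re-scores candidates exactly.
top_k_candidates = (-one_hot.grad).topk(k=8, dim=1).indices
print(top_k_candidates.shape)  # (num_suffix_tokens, 8)
```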