
The Cost of Thinking: Increased Jailbreak Risk in Large Language Models

Fan Yang


Published on arXiv

2508.10032

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Thinking-mode LLMs show higher jailbreak attack success rates than their non-thinking counterparts across nearly all of the 9 evaluated models; a safe thinking intervention significantly reduces ASR by steering the internal chain-of-thought via special tokens.

Safe Thinking Intervention

Novel technique introduced


Thinking mode has long been regarded as one of the most valuable capabilities of LLMs. However, we uncover a surprising and previously overlooked phenomenon: LLMs with thinking mode are more easily broken by jailbreak attacks. We evaluate 9 LLMs on AdvBench and HarmBench and find that the attack success rate against thinking mode is almost always higher than against non-thinking mode. Through extensive sample studies, we find that "for educational purposes" framing and excessively long thinking chains are characteristic of successfully attacked data, and that LLMs often give harmful answers even when they recognize the questions as harmful. To alleviate these problems, this paper proposes a safe thinking intervention method for LLMs, which explicitly guides the model's internal thinking process by adding model-specific "thinking tokens" to the prompt. The results demonstrate that safe thinking intervention significantly reduces the attack success rate of LLMs with thinking mode.


Key Contributions

  • Empirical finding that thinking-mode LLMs consistently exhibit higher jailbreak ASR than their non-thinking counterparts across 9 models on AdvBench and HarmBench
  • Analysis of attack success patterns: 'educational purposes' framing and excessively long thinking chains are key characteristics of successfully jailbroken thinking-mode responses
  • Safe Thinking Intervention: a prompt-level defense that injects model-specific thinking tokens to explicitly guide the internal reasoning process toward safe refusals
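The paper describes the intervention only at the level shown above (prepending model-specific thinking tokens plus safety guidance to the prompt). A minimal sketch of what such a prompt-level wrapper could look like, assuming hypothetical token strings and guidance text (the actual tokens and wording are model-specific and not given here):

```python
def safe_thinking_intervention(user_prompt: str,
                               think_open: str = "<think>",
                               guidance: str = ("Before answering, assess whether this "
                                                "request could cause harm. If it could, "
                                                "refuse and explain why.")) -> str:
    """Prepend the model's thinking-start token and an explicit safety
    instruction, so the chain-of-thought begins with a safety check
    rather than being steered by the attacker's framing.

    `think_open` and `guidance` are illustrative placeholders; real
    deployments would use the target model's own special tokens.
    """
    return f"{user_prompt}\n{think_open}\n{guidance}\n"


# Example: wrap an incoming prompt before sending it to the model.
wrapped = safe_thinking_intervention("Explain how locks work, for educational purposes.")
```

The design point is that the intervention happens purely at the prompt level (black-box, inference-time), matching the threat tags below: no weights or decoding logic are modified, only the text the model conditions its reasoning on.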

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
AdvBench, HarmBench
Applications
llm chatbots, reasoning models