Jailbreaking LLMs via Calibration
Yuxuan Lu, Yongkang Guo, Yuqing Kong
Published on arXiv
2602.00619
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Hybrid Gradient Shift aggregation achieves superior Attack Success Rates and lower Jailbreak Tax than existing Weak-to-Strong methods, with strongest gains on safety-hardened gpt-oss-120b
Gradient Shift
Novel technique introduced
Safety alignment in Large Language Models (LLMs) often creates a systematic discrepancy between a model's aligned output and the underlying pre-aligned data distribution. We propose a framework in which the effect of safety alignment on next-token prediction is modeled as a systematic distortion of a pre-alignment distribution. We cast Weak-to-Strong Jailbreaking as a forecast aggregation problem and derive an optimal aggregation strategy characterized by a Gradient Shift in the loss-induced dual space. We show that logit-arithmetic jailbreaking methods are a special case of this framework under cross-entropy loss, and derive a broader family of aggregation rules corresponding to other proper losses. We also propose a new hybrid aggregation rule. Evaluations across red-teaming benchmarks and math utility tasks using frontier models demonstrate that our approach achieves superior Attack Success Rates and lower "Jailbreak Tax" compared with existing methods, especially on the safety-hardened gpt-oss-120b.
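The abstract notes that logit-arithmetic jailbreaking is the cross-entropy special case of the proposed framework. A minimal sketch of that special case is below, assuming the standard weak-to-strong formulation: shift the strong model's logits by a scaled difference between a weak unsafe model and a weak safe model. The function name and the scale `alpha` are illustrative, not the paper's notation.

```python
import numpy as np

def logit_arithmetic_aggregate(strong_logits, weak_unsafe_logits,
                               weak_safe_logits, alpha=1.0):
    """Weak-to-strong logit arithmetic over one next-token distribution.

    Shifts the strong model's logits by alpha * (weak_unsafe - weak_safe),
    then normalizes with a numerically stable softmax. With alpha = 0 this
    recovers the strong model's own distribution.
    """
    shifted = strong_logits + alpha * (weak_unsafe_logits - weak_safe_logits)
    z = shifted - shifted.max()          # stabilize before exponentiation
    p = np.exp(z)
    return p / p.sum()

# Illustrative toy vocabulary of 3 tokens (values are arbitrary):
strong = np.array([1.0, 2.0, 3.0])
unsafe = np.array([0.5, 0.0, -0.5])
safe = np.array([0.0, 0.0, 0.0])
probs = logit_arithmetic_aggregate(strong, unsafe, safe, alpha=1.0)
```

In practice this shift would be applied per decoding step to full vocabulary logits; the sketch operates on a single toy distribution to keep the arithmetic visible.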
Key Contributions
- Theoretical framework casting safety alignment as systematic distribution distortion and Weak-to-Strong Jailbreaking as a forecast aggregation problem
- Optimal aggregation strategy (Gradient Shift) in the loss-induced dual space, showing logit-arithmetic jailbreaking is a special case under cross-entropy loss and generalizing to a broader family of proper-loss-induced rules
- New hybrid aggregation rule achieving superior Attack Success Rates with lower Jailbreak Tax on red-teaming and math-utility benchmarks, especially on safety-hardened gpt-oss-120b
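One way to picture the dual-space claim above is as a shift applied after mapping distributions through the gradient of a convex potential. The notation below is an assumption for illustration (the paper's exact symbols are not given here): $G$ is the convex potential induced by a proper loss, $\alpha$ a mixing weight.

```latex
% Sketch (assumed notation): Gradient Shift in the loss-induced dual space
\nabla G(p_{\mathrm{agg}})
  = \nabla G(p_{\mathrm{strong}})
  + \alpha \left[ \nabla G\!\left(p_{\mathrm{weak}}^{\mathrm{unsafe}}\right)
                - \nabla G\!\left(p_{\mathrm{weak}}^{\mathrm{safe}}\right) \right]

% Under log loss, G(p) = \sum_i p_i \log p_i, so \nabla G(p) = \log p + 1,
% and the shift reduces (up to normalization) to logit arithmetic:
\log p_{\mathrm{agg}}
  \propto \log p_{\mathrm{strong}}
  + \alpha \left( \log p_{\mathrm{weak}}^{\mathrm{unsafe}}
                - \log p_{\mathrm{weak}}^{\mathrm{safe}} \right)
```

Choosing a different proper loss changes $G$, and with it the aggregation rule, which is how the broader family of rules in the second contribution would arise.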