Jailbreak Scaling Laws for Large Language Models: Polynomial-Exponential Crossover
Indranil Halder 1, Annesya Banerjee 2,1, Cengiz Pehlevan 1
Published on arXiv
2603.11331
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Adversarial prompt injection shifts jailbreak attack success rate scaling from polynomial to exponential with inference-time samples, derived analytically via spin-glass theory and confirmed empirically on GPT-4.5 Turbo and Vicuna-7B v1.5
SpinLLM
Novel technique introduced
Adversarial attacks can reliably steer safety-aligned large language models toward unsafe behavior. Empirically, we find that adversarial prompt-injection attacks amplify the attack success rate from the slow polynomial growth observed without injection to exponential growth in the number of inference-time samples. To explain this phenomenon, we propose a theoretical generative proxy model of language: a spin-glass system operating in a replica-symmetry-breaking regime, where generations are drawn from the associated Gibbs measure and a subset of low-energy, size-biased clusters is designated unsafe. Within this framework, we analyze prompt-injection-based jailbreaking. Short injected prompts correspond to a weak magnetic field aligned toward unsafe cluster centers and yield a power-law scaling of attack success rate with the number of inference-time samples, while long injected prompts, i.e., a strong magnetic field, yield exponential scaling. We derive these behaviors analytically and confirm them empirically on large language models. The transition between the two regimes arises from the appearance of an ordered phase in the spin chain under a strong magnetic field, which suggests that the injected jailbreak prompt enhances adversarial order in the language model.
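The two scaling regimes can be illustrated with a toy best-of-n calculation. This is a minimal sketch, not the paper's spin-glass derivation: the functional forms below (a power law for the weak-injection regime and an exponential approach to 1 for the strong-injection regime) and all parameter values are illustrative assumptions.

```python
def asr_weak_injection(n, alpha=0.3, c=0.05):
    # Illustrative power-law regime (weak or no injection):
    # attack success rate grows slowly, ASR ~ c * n^alpha, capped at 1.
    return min(1.0, c * n ** alpha)

def asr_strong_injection(n, p=0.2):
    # Illustrative exponential regime (strong injection):
    # if each of n samples independently jailbreaks with probability p,
    # the failure probability (1 - p)^n decays exponentially in n.
    return 1.0 - (1.0 - p) ** n

# Compare the two regimes as the inference-time sample budget grows.
for n in (1, 10, 100):
    print(n, round(asr_weak_injection(n), 3), round(asr_strong_injection(n), 3))
```

Under these assumptions the strong-injection curve saturates near 1 within tens of samples, while the weak-injection curve is still far from saturation at n = 100, mirroring the crossover the paper reports.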
Key Contributions
- SpinLLM: a spin-glass replica-symmetry-breaking theoretical model explaining inference-time jailbreak scaling behavior in LLMs
- Discovery of a polynomial-to-exponential crossover: adversarial prompt injection shifts attack success rate scaling from polynomial (weak or no injection) to exponential (strong injection) in the number of inference-time samples
- Empirical validation on GPT-4.5 Turbo and Vicuna-7B v1.5 using GCG attacks on AdvBench, confirming the theoretical predictions and the crossover between regimes