Replicating TEMPEST at Scale: Multi-Turn Adversarial Attacks Against Trillion-Parameter Frontier Models
Published on arXiv
2512.07059
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Six of ten frontier models achieved 96–100% attack success rate under multi-turn adversarial attacks; switching to extended reasoning mode on the same architecture reduced ASR from 97% to 42%.
TEMPEST
Attack framework employed (replicated at scale in this study)
Despite substantial investment in safety alignment, the vulnerability of large language models to sophisticated multi-turn adversarial attacks remains poorly characterized, and whether model scale or inference mode affects robustness is unknown. This study employed the TEMPEST multi-turn attack framework to evaluate ten frontier models from eight vendors across 1,000 harmful behaviors, generating over 97,000 API queries across adversarial conversations with automated evaluation by independent safety classifiers. Results demonstrated a spectrum of vulnerability: six models achieved 96% to 100% attack success rate (ASR), while four showed meaningful resistance, with ASR ranging from 42% to 78%; enabling extended reasoning on identical architecture reduced ASR from 97% to 42%. These findings indicate that safety alignment quality varies substantially across vendors, that model scale does not predict adversarial robustness, and that thinking mode provides a deployable safety enhancement. Collectively, this work establishes that current alignment techniques remain fundamentally vulnerable to adaptive multi-turn attacks regardless of model scale, while identifying deliberative inference as a promising defense direction.
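The evaluation described above can be sketched as a loop: for each harmful behavior, an attacker policy steers a multi-turn conversation, an independent classifier judges each model response, and ASR is the fraction of behaviors elicited at least once. This is a minimal illustration only; the paper does not specify TEMPEST's implementation here, so `model_query`, `next_adversarial_turn`, and `is_harmful` are hypothetical stand-ins for the target model API, the attacker policy, and the safety classifier.

```python
from typing import Callable, Dict, List

# Hypothetical interfaces (not from the paper):
#   model_query(history)            -> model response string
#   next_adversarial_turn(b, hist)  -> next attacker prompt for behavior b
#   is_harmful(text)                -> independent safety classifier verdict

def run_multi_turn_attack(
    behavior: str,
    model_query: Callable[[List[Dict[str, str]]], str],
    next_adversarial_turn: Callable[[str, List[Dict[str, str]]], str],
    is_harmful: Callable[[str], bool],
    max_turns: int = 10,
) -> bool:
    """Return True if any response in the conversation is judged harmful."""
    history: List[Dict[str, str]] = []
    for _ in range(max_turns):
        prompt = next_adversarial_turn(behavior, history)
        history.append({"role": "user", "content": prompt})
        response = model_query(history)
        history.append({"role": "assistant", "content": response})
        if is_harmful(response):
            return True  # attack succeeded on this behavior
    return False

def attack_success_rate(outcomes: List[bool]) -> float:
    """ASR = fraction of behaviors for which the attack succeeded."""
    return sum(outcomes) / len(outcomes)
```

Under this framing, the paper's headline numbers are per-model ASR values over the 1,000-behavior set, with each behavior contributing one success/failure outcome.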
Key Contributions
- First systematic cross-vendor comparison of safety alignment quality across ten frontier LLMs from eight vendors against 1,000 harmful behaviors using 97,000+ API queries
- Empirical finding that model scale does not predict adversarial robustness — six models achieved 96–100% ASR regardless of parameter count
- Discovery that enabling extended reasoning (thinking mode) on identical architecture reduces ASR from 97% to 42%, identifying deliberative inference as a deployable safety enhancement