Adversarial Déjà Vu: Jailbreak Dictionary Learning for Stronger Generalization to Unseen Attacks
Mahavir Dabas 1, Tran Huynh 1, Nikhil Reddy Billa 1, Jiachen T. Wang 2, Peng Gao 1, Charith Peris 3, Yao Ma 3, Rahul Gupta 3, Ming Jin 1, Prateek Mittal 2, Ruoxi Jia 1
Published on arXiv (2510.21910)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
ASCoT substantially improves robustness to unseen jailbreaks, including multi-turn attacks, while maintaining low over-refusal rates; diversity of skill coverage matters more than raw data scale.
ASCoT (Adversarial Skill Compositional Training)
Novel technique introduced
Large language models remain vulnerable to jailbreak attacks that bypass safety guardrails to elicit harmful outputs. Defending against novel jailbreaks represents a critical challenge in AI safety. Adversarial training -- designed to make models robust against worst-case perturbations -- has been the dominant paradigm for adversarial robustness. However, due to optimization challenges and difficulties in defining realistic threat models, adversarial training methods often fail on newly developed jailbreaks in practice. This paper proposes a new paradigm for improving robustness against unseen jailbreaks, centered on the Adversarial Déjà Vu hypothesis: novel jailbreaks are not fundamentally new, but largely recombinations of adversarial skills from previous attacks. We study this hypothesis through a large-scale analysis of 32 attack papers published over two years. Using an automated pipeline, we extract and compress adversarial skills into a sparse dictionary of primitives, with LLMs generating human-readable descriptions. Our analysis reveals that unseen attacks can be effectively explained as sparse compositions of earlier skills, with explanatory power increasing monotonically as skill coverage grows. Guided by this insight, we introduce Adversarial Skill Compositional Training (ASCoT), which trains on diverse compositions of skill primitives rather than isolated attack instances. ASCoT substantially improves robustness to unseen attacks, including multi-turn jailbreaks, while maintaining low over-refusal rates. We also demonstrate that expanding adversarial skill coverage, not just data scale, is key to defending against novel attacks. Warning: This paper contains content that may be harmful or offensive in nature.
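The core idea, that an unseen attack decomposes into a sparse set of previously catalogued skill primitives plus a small residual of genuinely new behavior, can be illustrated with a minimal sketch. The skill names, the attack decomposition, and the set-based representation below are illustrative assumptions, not the paper's actual pipeline (which uses sparse dictionary learning over attack data with LLM-generated descriptions).

```python
# Illustrative sketch of the Adversarial Déjà Vu decomposition.
# Skill names and the example attack are hypothetical, not from the paper.

# Dictionary of skill primitives extracted from earlier attacks.
SKILL_DICTIONARY = {
    "roleplay_persona",       # ask the model to adopt an unrestricted persona
    "payload_obfuscation",    # encode the harmful request (base64, leetspeak, ...)
    "hypothetical_framing",   # wrap the request as fiction or a thought experiment
    "authority_appeal",       # claim researcher/developer authorization
    "multi_turn_escalation",  # split the request across conversation turns
}

def explain_attack(attack_skills, dictionary):
    """Split an unseen attack's skills into known primitives and a residual."""
    explained = attack_skills & dictionary
    residual = attack_skills - dictionary
    return explained, residual

# A "new" attack combining persona roleplay, encoded payloads, and one
# genuinely novel trick not yet in the dictionary.
new_attack = {"roleplay_persona", "payload_obfuscation", "emoji_smuggling"}
explained, residual = explain_attack(new_attack, SKILL_DICTIONARY)
coverage = len(explained) / len(new_attack)
print(f"explained={sorted(explained)} residual={sorted(residual)} coverage={coverage:.2f}")
```

Under this toy model, growing the dictionary can only increase coverage, which mirrors the paper's observation that explanatory power rises monotonically with skill coverage.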
Key Contributions
- Adversarial Déjà Vu hypothesis: novel jailbreaks are largely sparse compositions of adversarial skill primitives from prior attacks, validated via temporal cutoff study across 32 attack papers over two years
- Automated pipeline using sparse dictionary learning to extract and compress jailbreak behaviors into human-readable adversarial skill primitives
- ASCoT (Adversarial Skill Compositional Training): trains LLMs on diverse combinations of skill primitives rather than isolated attack instances, substantially improving robustness to unseen and multi-turn jailbreaks
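The contrast ASCoT draws, training on compositions of primitives rather than isolated attack instances, can be sketched as a data-generation step. Everything below is a hypothetical illustration: the primitive names, composition sizes, and sampling scheme are assumptions, not the paper's actual training recipe.

```python
import itertools
import random

# Hypothetical ASCoT-style data generation: rather than replaying isolated
# attack instances, enumerate and sample diverse compositions of skill
# primitives to use as adversarial training inputs.
PRIMITIVES = [
    "roleplay_persona",
    "payload_obfuscation",
    "hypothetical_framing",
    "authority_appeal",
    "multi_turn_escalation",
]

def skill_compositions(primitives, size):
    """All unordered compositions of `size` primitives."""
    return list(itertools.combinations(primitives, size))

def sample_training_mix(primitives, n, seed=0):
    """Sample n distinct compositions of 2 or 3 primitives (sizes assumed)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    pool = skill_compositions(primitives, 2) + skill_compositions(primitives, 3)
    return rng.sample(pool, min(n, len(pool)))

mix = sample_training_mix(PRIMITIVES, n=4)
for combo in mix:
    print(" + ".join(combo))
```

The point of the sketch is coverage: with 5 primitives there are already 20 two- and three-skill compositions, so diversifying over compositions expands the effective attack surface seen in training far beyond the raw number of source attacks.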