
Closing the Distribution Gap in Adversarial Training for LLMs


0 citations · 49 references


Published on arXiv · 2602.15238

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DAT achieves substantially higher adversarial robustness against in-distribution exploits (e.g., past-tense rewrites, language translations) compared to previous adversarial training methods for LLMs.
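The "past-tense rewrite" exploit mentioned above is strikingly simple: a refused request is reformulated as a question about the past, which stays well inside the natural data distribution yet evades models hardened by prior adversarial training. A minimal illustration (the phrase substitution below is a toy assumption for demonstration, not the attack procedure from the paper):

```python
def past_tense_rewrite(prompt: str) -> str:
    """Toy illustration of an in-distribution exploit: recast a
    present-tense request as a question about the past."""
    # e.g. "How do I <verb> ...?" -> "How did people <verb> ...?"
    return prompt.replace("How do I", "How did people")

print(past_tense_rewrite("How do I pick a lock?"))
```

Because the rewritten prompt is fluent, high-likelihood text, it sits squarely inside the distribution that adversarial training is supposed to cover, which is exactly the gap the paper targets.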

DAT (Distributional Adversarial Training)

Novel technique introduced


Adversarial training for LLMs is one of the most promising methods for reliably improving robustness against adversaries. However, despite significant progress, models remain vulnerable to simple in-distribution exploits, such as rewriting prompts in the past tense or translating them into other languages. We argue that this persistent fragility stems from a fundamental limitation of current adversarial training algorithms: they minimize adversarial loss on their training set but inadequately cover the data distribution, leaving models vulnerable to seemingly simple attacks. To bridge this gap, we propose Distributional Adversarial Training (DAT). We leverage Diffusion LLMs to approximate the true joint distribution of prompts and responses, enabling the generation of diverse, high-likelihood samples that address generalization failures. By combining optimization over the data distribution provided by the diffusion model with continuous adversarial training, DAT achieves substantially higher adversarial robustness than previous methods.


Key Contributions

  • Identifies that current adversarial training for LLMs fails due to inadequate coverage of the true data distribution, leaving models vulnerable to simple in-distribution prompt rewrites
  • Proposes Distributional Adversarial Training (DAT), which leverages Diffusion LLMs to approximate the joint distribution of prompts and responses for diverse, high-likelihood sample generation
  • Combines distribution-aware sampling with continuous adversarial training to achieve substantially higher robustness than prior adversarial training methods
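The page gives no algorithmic details, but the two ingredients named above (distribution-aware sampling plus continuous adversarial training) can be sketched in a toy form. Everything here is an assumption for illustration: a linear classifier on fixed embeddings stands in for the LLM, a Gaussian sampler (`sample_from_diffusion_model`, a hypothetical name) stands in for the diffusion LLM, and the "continuous" attack is gradient ascent on the embeddings projected onto an L2 ball, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_diffusion_model(n, dim):
    """Hypothetical stand-in for drawing diverse, high-likelihood samples
    from a diffusion LLM: Gaussian 'embeddings' with a toy safety label."""
    x = rng.normal(size=(n, dim))
    y = (x[:, 0] > 0).astype(float)  # toy label: refuse vs. comply
    return x, y

def loss_and_grads(w, x, y):
    """Logistic loss; returns (loss, grad wrt weights, grad wrt inputs)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gz = (p - y) / len(y)
    return loss, x.T @ gz, np.outer(gz, w)

def continuous_attack(w, x, y, eps=0.3, steps=5, alpha=0.2):
    """Inner maximization: perturb embeddings (not tokens) to raise the
    loss, with normalized steps projected onto an L2 ball of radius eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        _, _, gx = loss_and_grads(w, x + delta, y)
        gn = gx / np.maximum(np.linalg.norm(gx, axis=1, keepdims=True), 1e-12)
        delta += alpha * gn
        norms = np.maximum(np.linalg.norm(delta, axis=1, keepdims=True), 1e-12)
        delta *= np.minimum(1.0, eps / norms)
    return delta

def dat_train(dim=8, batches=200, lr=0.5):
    """Outer minimization over distribution-sampled, attacked batches."""
    w = np.zeros(dim)
    for _ in range(batches):
        x, y = sample_from_diffusion_model(64, dim)  # distributional coverage
        delta = continuous_attack(w, x, y)           # worst-case perturbation
        _, gw, _ = loss_and_grads(w, x + delta, y)   # minimize adversarial loss
        w -= lr * gw
    return w

w = dat_train()
x, y = sample_from_diffusion_model(1000, 8)
delta = continuous_attack(w, x, y)
acc = np.mean(((x + delta) @ w > 0) == (y > 0.5))
print("adversarial accuracy:", round(float(acc), 2))
```

The key structural point the sketch preserves is that fresh batches are drawn from the (approximated) data distribution on every step, rather than re-attacking a fixed training set; the paper's claim is that this coverage is what closes the generalization gap.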

🛡️ Threat Analysis


Details

Domains
nlp, generative
Model Types
llm, diffusion
Threat Tags
training_time, inference_time, black_box
Applications
llm safety, chatbot, instruction-following models