Understanding and Improving Continuous Adversarial Training for LLMs via In-context Learning Theory
Published on arXiv
arXiv:2604.12817
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
ER-CAT achieves a better jailbreak robustness-utility tradeoff than standard CAT across six LLMs and six jailbreak attacks
ER-CAT (Embedding Regularized Continuous Adversarial Training)
Novel technique introduced
Adversarial training (AT) is an effective defense for large language models (LLMs) against jailbreak attacks, but performing AT on LLMs is costly. To improve the efficiency of AT for LLMs, recent studies propose continuous AT (CAT), which searches for adversarial inputs within the continuous embedding space of the LLM during training. While CAT has achieved empirical success, its underlying mechanism, namely why adversarial perturbations in the embedding space help LLMs defend against jailbreak prompts synthesized in the discrete input token space, remains unknown. This paper presents the first theoretical analysis of CAT on LLMs based on in-context learning (ICL) theory. For linear transformers trained on in-context linear regression tasks with adversarial examples drawn from the embedding space, we prove a robust generalization bound that is negatively correlated with the perturbation radius in the embedding space: training against larger embedding perturbations yields a tighter robust bound. This explains why CAT can defend against jailbreak prompts from the LLM's token space. Further, the robust bound shows that the robustness of an adversarially trained LLM is closely related to the singular values of its embedding matrix. Based on this, we propose to improve LLM CAT by introducing an additional regularization term, which depends on the singular values of the LLM's embedding matrix, into the objective function of CAT. Experiments on real-world LLMs demonstrate that our method helps LLMs achieve a better jailbreak robustness-utility tradeoff. The code is available at https://github.com/fshp971/continuous-adv-icl.
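To make the CAT inner loop concrete, here is a minimal sketch of projected gradient ascent in the embedding space, assuming a Hugging Face-style causal LM that accepts `inputs_embeds` and `labels`; the function name `cat_inner_loss` and the hyperparameters `eps`, `alpha`, and `steps` are illustrative choices, not the paper's settings.

```python
import torch

def cat_inner_loss(model, input_ids, labels, eps=0.05, alpha=0.02, steps=5):
    """Sketch of continuous adversarial training (CAT): instead of searching
    over discrete tokens, run PGD on a perturbation added to the token
    embeddings. Hypothetical hyperparameters; not the paper's exact algorithm.
    """
    embeds = model.get_input_embeddings()(input_ids)      # (batch, seq, dim)
    delta = torch.zeros_like(embeds, requires_grad=True)  # embedding-space perturbation

    for _ in range(steps):
        # Inner maximization: increase the training loss w.r.t. the perturbation.
        loss = model(inputs_embeds=embeds + delta, labels=labels).loss
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # signed gradient ascent step
            delta.clamp_(-eps, eps)        # project onto the L-inf ball of radius eps

    # Outer minimization: train the model on the perturbed embeddings.
    return model(inputs_embeds=embeds + delta, labels=labels).loss
```

The L-inf projection is one common choice; the constraint set and step rule used by the paper may differ.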
Key Contributions
- First theoretical analysis of continuous adversarial training (CAT) for LLMs using in-context learning theory
- Proves a robust generalization bound showing why embedding-space perturbations defend against token-space jailbreaks
- Proposes ER-CAT, which regularizes the singular values of the LLM's embedding matrix to improve the robustness-utility tradeoff (an illustrative sketch follows this list)
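This summary does not give ER-CAT's exact regularizer, so the following stand-in, which penalizes the top singular values of the embedding matrix via `torch.linalg.svdvals`, is purely illustrative; `embedding_sv_penalty`, `k`, and `lambda_reg` are hypothetical names.

```python
import torch

def embedding_sv_penalty(model, k=1):
    """Illustrative stand-in for ER-CAT's regularizer: penalize the k largest
    singular values of the LLM's embedding matrix. The paper's exact
    functional form is not given in this summary.
    """
    W = model.get_input_embeddings().weight  # (vocab_size, dim) embedding matrix
    s = torch.linalg.svdvals(W)              # singular values, descending order
    return s[:k].sum()

# Hypothetical combined objective: CAT loss plus the embedding regularizer.
# loss = cat_inner_loss(model, input_ids, labels) + lambda_reg * embedding_sv_penalty(model)
```

A full SVD of the vocab-by-dimension embedding matrix every step is expensive; a few power-iteration steps approximating the spectral norm would be a cheaper alternative.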
🛡️ Threat Analysis
The paper studies adversarial perturbations applied in the embedding space during training as a defense against token-space jailbreak attacks at inference time.