defense arXiv Apr 12, 2026 · 3d ago
Eric Easley, Sebastian Farquhar · University of California · University of Oxford
Defense training LLMs to reinterpret malicious instructions as benign at the representation level, blocking jailbreaks and backdoors
Model Poisoning Prompt Injection Sensitive Information Disclosure nlp
We address jailbreaks, backdoors, and unlearning for large language models (LLMs). Unlike prior work, which trains LLMs based on their actions when given malign instructions, our method specifically trains the model to change how it interprets instructions. Our method, Latent Instruction Representation Alignment (LIRA), greatly improves generalization. We further boost generalization through an internally adversarial training algorithm. Our methods block over 99% of PEZ jailbreak attacks; remove a challenging insecure code backdoor; and achieve optimal forgetting on WMDP cyber with negligible loss of benign capabilities.
llm transformer University of California · University of Oxford