defense 2026

Latent Instruction Representation Alignment: defending against jailbreaks, backdoors and undesired knowledge in LLMs

Eric Easley 1, Sebastian Farquhar 2


Published on arXiv

2604.10403

Model Poisoning

OWASP ML Top 10 — ML10

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Blocks over 99% of PEZ jailbreak attacks, removes insecure code backdoors, achieves optimal forgetting on WMDP cyber with negligible benign capability loss

LIRA

Novel technique introduced


We address jailbreaks, backdoors, and unlearning for large language models (LLMs). Unlike prior work, which trains LLMs based on their actions when given malign instructions, our method specifically trains the model to change how it interprets instructions. Our method, Latent Instruction Representation Alignment (LIRA), greatly improves generalization. We further boost generalization through an internally adversarial training algorithm. Our methods block over 99% of PEZ jailbreak attacks; remove a challenging insecure code backdoor; and achieve optimal forgetting on WMDP cyber with negligible loss of benign capabilities.
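The core idea above, training the model to change how it *interprets* an instruction rather than how it *acts* on one, can be pictured as pulling the latent representation of a malicious instruction toward that of a benign reference. A minimal numpy sketch of such a representation-level alignment loss (the shapes, names, and mean-squared form are illustrative assumptions, not the paper's exact LIRA objective):

```python
import numpy as np

# Hypothetical hidden states at some middle layer for the instruction
# tokens of two prompts: one malicious, one benign reference.
rng = np.random.default_rng(0)
d_model, n_instr_tokens = 8, 5
h_malicious = rng.normal(size=(n_instr_tokens, d_model))
h_benign_ref = rng.normal(size=(n_instr_tokens, d_model))

def alignment_loss(h_src, h_tgt):
    """Mean squared distance between two instruction-representation
    sequences; minimizing it pulls h_src toward h_tgt."""
    return float(np.mean((h_src - h_tgt) ** 2))

loss = alignment_loss(h_malicious, h_benign_ref)
print(loss > 0)  # nonzero until the representations are aligned
```

Training on this kind of loss targets the model's internal reading of the instruction, which is what the abstract credits for the improved generalization over output-level fine-tuning.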


Key Contributions

  • Latent Instruction Representation Alignment (LIRA) method that trains LLMs to reinterpret malicious instructions as benign at the representation level rather than output level
  • Sequence-Aware Gradients (SAG) technique that stops gradients from response positions to focus training on instruction representations
  • Internally adversarial training algorithm where middle layers try to hide malicious intent from later layers
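The Sequence-Aware Gradients (SAG) idea in the list above can be sketched as position-wise gradient masking: zero the gradient contribution at response positions so only instruction positions drive the update. The per-token gradient, split point, and names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model = 10, 4
n_instr = 4  # first 4 tokens form the instruction; the rest are the response

# Stand-in for a per-token gradient of the training loss w.r.t. hidden states.
per_token_grad = rng.normal(size=(seq_len, d_model))

# 1 at instruction positions, 0 at response positions (stop-gradient there).
instr_mask = (np.arange(seq_len) < n_instr).astype(float)[:, None]

masked_grad = per_token_grad * instr_mask
print(np.abs(masked_grad[n_instr:]).sum())  # response rows carry no gradient
```

In an autograd framework the same effect would typically be achieved by detaching or masking response-position activations before the loss, so the optimizer only reshapes how instructions are represented.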

🛡️ Threat Analysis

Model Poisoning

Removes a challenging insecure-code backdoor from the LLM by realigning its latent instruction representations, rather than by retraining on model outputs.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, training_time
Datasets
PEZ, WMDP cyber, TOFU
Applications
llm safety, jailbreak defense, backdoor removal, harmful knowledge unlearning