defense 2026

LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

Haonan Zhang 1, Dongxia Wang 1,2, Yi Liu 3, Kexin Chen 1, Wenhai Wang 1


Published on arXiv: 2601.19487

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

By causally coupling answer decisions to safety assessments via vector alignment, LLM-VA achieves 11.45% higher F1 than the best baseline (AlphaSteer) while preserving 95.92% utility across 12 LLMs.

LLM-VA (novel technique introduced)


Abstract

Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -- reducing jailbreak increases over-refusal and vice versa. We identify the root cause: LLMs encode the decision to answer (answer vector $v_a$) and the judgment of input safety (benign vector $v_b$) as nearly orthogonal directions, treating them as independent processes. We propose LLM-VA, which aligns $v_a$ with $v_b$ through closed-form weight updates, making the model's willingness to answer causally dependent on its safety assessment -- without fine-tuning or architectural changes. Our method identifies vectors at each layer using SVMs, selects safety-relevant layers, and iteratively aligns vectors via minimum-norm weight modifications. Experiments on 12 LLMs demonstrate that LLM-VA achieves 11.45% higher F1 than the best baseline while preserving 95.92% utility, and automatically adapts to each model's safety bias without manual tuning. Code and models are available at https://hotbento.github.io/LLM-VA-Web/.
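The abstract says each layer's direction vectors are identified with SVMs over hidden-state activations. The paper's actual pipeline is not reproduced here; the following is a minimal numpy sketch of that one step, using a Pegasos-style linear SVM whose unit normal serves as the extracted direction. The function name `extract_direction`, the hyperparameters, and the synthetic data in the usage note are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_direction(acts_pos, acts_neg, epochs=200, lam=0.01, seed=0):
    """Fit a linear SVM (Pegasos subgradient descent) separating two sets
    of hidden-state activations (e.g. 'answer' vs 'refuse' states); the
    unit normal of the decision boundary is the extracted direction."""
    rng = np.random.default_rng(seed)
    X = np.vstack([acts_pos, acts_neg])
    y = np.concatenate([np.ones(len(acts_pos)), -np.ones(len(acts_neg))])
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # standard Pegasos step size
            if y[i] * (X[i] @ w) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # margin satisfied
                w = (1 - eta * lam) * w
    return w / np.linalg.norm(w)                # unit direction vector
```

On synthetic activations drawn from two well-separated Gaussians, the returned unit vector recovers the separating axis; in the paper's setting the same idea would be applied per layer to obtain $v_a$ and $v_b$.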


Key Contributions

  • Identifies that LLMs encode answer decisions (v_a) and safety assessments (v_b) as nearly orthogonal directions (~90°), revealing the structural root cause of the jailbreak-overrefusal trade-off
  • Proposes LLM-VA, which aligns v_a with v_b via closed-form weight updates using SVM-extracted vectors and layer selection — requiring no fine-tuning, gradient optimization, or architectural changes
  • Demonstrates 11.45% higher F1 than AlphaSteer while preserving 95.92% utility across 12 LLMs from 5 model families, with automatic adaptation to each model's safety bias
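The "closed-form weight updates" in the contributions above can be illustrated with the textbook minimum-Frobenius-norm rank-one edit: change a weight matrix as little as possible while forcing its output on one input direction to a new target. The paper's actual update rule, iteration scheme, and layer selection are not reproduced here; `min_norm_align`, the `alpha` blend, and the norm-preserving rotation target are hypothetical stand-ins for the general technique.

```python
import numpy as np

def min_norm_align(W, u, v_b, alpha=1.0):
    """Minimum Frobenius-norm rank-one edit of W so that, on input
    direction u, the output W @ u (the 'answer' response v_a) moves
    toward the benign direction v_b while keeping its norm.
    alpha in [0, 1] blends between no change (0) and full alignment (1)."""
    v_a = W @ u
    v_b_hat = v_b / np.linalg.norm(v_b)
    # target output: rotate v_a toward v_b, preserving its magnitude
    target = v_a + alpha * (np.linalg.norm(v_a) * v_b_hat - v_a)
    # classic least-change solution: rank-one update supported on u
    delta = np.outer(target - v_a, u) / (u @ u)
    return W + delta
```

Because the update is rank one and supported on `u`, inputs orthogonal to `u` are left exactly unchanged, which is the sense in which the edit is "minimum-norm": it perturbs the layer no more than needed to redirect the answer response.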

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, white_box
Datasets
XSTest, ORBench
Applications
llm safety alignment, chatbot safety, safety-critical llm deployment