
Fail-Closed Alignment for Large Language Models

Zachary Coalson, Beth Sohler, Aiden Gabriel, Sanghyun Hong

0 citations · 52 references · arXiv (Cornell University)

Published on arXiv · 2602.16977

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves strongest overall robustness across four jailbreak attacks while maintaining generation quality, with mechanistic evidence that trained models encode multiple causally independent refusal directions that cannot be simultaneously suppressed.

Fail-Closed Alignment (Progressive Alignment Framework)

Novel technique introduced


We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches encode refusal behaviors across multiple latent features, suppressing a single dominant feature (via prompt-based jailbreaks) can cause alignment to collapse, leading to unsafe generation. Motivated by this, we propose fail-closed alignment as a design principle for robust LLM safety: refusal mechanisms should remain effective even under partial failures via redundant, independent causal pathways. We present a concrete instantiation of this principle: a progressive alignment framework that iteratively identifies and ablates previously learned refusal directions, forcing the model to reconstruct safety along new, independent subspaces. Across four jailbreak attacks, we achieve the strongest overall robustness while mitigating over-refusal and preserving generation quality, with small computational overhead. Our mechanistic analyses confirm that models trained with our method encode multiple, causally independent refusal directions that prompt-based jailbreaks cannot suppress simultaneously, providing empirical support for fail-closed alignment as a principled foundation for robust LLM safety.
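The core operation the abstract describes, identifying a refusal direction and ablating it from a model's activations, follows a standard linear-intervention pattern from the interpretability literature. The sketch below is a minimal toy illustration, not the paper's actual pipeline: the difference-of-means estimator and 2-D synthetic "activations" are assumptions made for the demo.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Estimate a candidate refusal direction as the normalized
    difference of mean activations over the two prompt sets."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(acts, direction):
    """Project the refusal direction out of each activation vector,
    suppressing the refusal feature along that single axis."""
    return acts - np.outer(acts @ direction, direction)

# Toy example: 2-D activations whose refusal signal lies along [1, 0].
rng = np.random.default_rng(0)
harmful = rng.normal([3.0, 0.0], 0.1, size=(100, 2))
harmless = rng.normal([0.0, 0.0], 0.1, size=(100, 2))

r = refusal_direction(harmful, harmless)
ablated = ablate(harmful, r)

# After ablation, harmful activations carry no signal along r.
print(abs(ablated @ r).max())  # ≈ 0 (up to float error)
```

This is exactly the "fail-open" failure mode: when all refusal behavior loads on one direction, a single projection removes it. The paper's progressive framework repeats this identify-and-ablate step during training so the model is forced to rebuild refusal in fresh subspaces.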


Key Contributions

  • Identifies the 'fail-open' structural weakness in current LLM alignment: suppressing a single dominant refusal feature via prompt-based jailbreaks causes full alignment collapse
  • Proposes fail-closed alignment as a design principle — refusal mechanisms should remain effective under partial failures via redundant, independent causal pathways
  • Presents a progressive alignment framework that iteratively ablates learned refusal directions and forces the model to reconstruct safety in new, independent subspaces
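To make the fail-closed intuition above concrete: in the toy model below, a single-pathway model loses all refusal signal once its one direction is suppressed, while a model with redundant, independent (here, orthogonal) directions keeps refusing. This is an illustrative sketch of the principle, not the paper's training procedure; the `refusal_score` function and the axis-aligned directions are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def refusal_score(acts, dirs):
    """Toy refusal score: maximum projection onto any refusal
    direction; the model refuses if any pathway still fires."""
    return np.max(acts @ np.stack(dirs).T, axis=1)

def ablate(acts, u):
    """Remove the component of each activation along direction u."""
    return acts - np.outer(acts @ u, u)

d = 16
# Fail-open model: a single refusal direction.
single = [np.eye(d)[0]]
# Fail-closed model: three orthogonal, redundant refusal directions.
redundant = [np.eye(d)[i] for i in range(3)]

# Harmful-prompt activations that excite all three pathways.
harmful = rng.normal(0, 0.05, (100, d)) + np.eye(d)[:3].sum(0)

# A prompt-based jailbreak modeled as suppressing the dominant
# direction (axis 0) only.
attacked = ablate(harmful, np.eye(d)[0])

print(refusal_score(attacked, single).mean())     # ≈ 0: alignment collapses
print(refusal_score(attacked, redundant).mean())  # ≈ 1: refusal survives
```

The design point is that a jailbreak strong enough to null one direction leaves the orthogonal pathways untouched, so the system degrades safely rather than collapsing.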

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, training_time, black_box
Applications
llm safety, ai assistants, chatbots