
AlignTree: Efficient Defense Against LLM Jailbreak Attacks

Gil Goren, Shahar Katz, Lior Wolf

1 citation · 32 references

Published on arXiv: 2511.12217

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

AlignTree achieves low attack success rate with zero additional inference overhead, outperforming LlamaGuard, SmoothLLM, AutoDefense, and SelfDefense without requiring auxiliary models or extra inference passes.

AlignTree

Novel technique introduced


Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these vulnerabilities requires defense mechanisms that are both robust and computationally efficient. However, existing approaches either incur high computational costs or rely on lightweight defenses that can be easily circumvented, rendering them impractical for real-world LLM-based systems. In this work, we introduce the AlignTree defense, which enhances model alignment while maintaining minimal computational overhead. AlignTree monitors LLM activations during generation and detects misaligned behavior using an efficient random forest classifier. This classifier operates on two signals: (i) the refusal direction -- a linear representation that activates on misaligned prompts, and (ii) an SVM-based signal that captures non-linear features associated with harmful content. Unlike previous methods, AlignTree does not require additional prompts or auxiliary guard models. Through extensive experiments, we demonstrate the efficiency and robustness of AlignTree across multiple LLMs and benchmarks.
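The abstract describes a classifier that combines two signals from LLM hidden states: a linear projection onto a "refusal direction" and a non-linear SVM score, fed into a random forest. A minimal sketch of that pipeline, assuming precomputed hidden-state vectors and a known refusal direction (the variable names, toy data, and dimensions are illustrative, not from the paper's code):

```python
# Hypothetical sketch of AlignTree-style detection. Assumed: hidden states
# are available as vectors and the refusal direction is precomputed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
d = 64  # toy hidden-state dimension

# Toy training data: hidden states labelled harmful (1) / benign (0).
X = rng.normal(size=(200, d))
y = rng.integers(0, 2, size=200)

# Unit vector standing in for the refusal direction.
refusal_dir = rng.normal(size=d)
refusal_dir /= np.linalg.norm(refusal_dir)

# Signal (ii): an SVM capturing non-linear structure in the hidden states.
svm = SVC(kernel="rbf").fit(X, y)

def features(H):
    """Two scalar signals per hidden state: refusal projection + SVM score."""
    proj = H @ refusal_dir             # (i) linear refusal-direction signal
    score = svm.decision_function(H)   # (ii) non-linear SVM signal
    return np.column_stack([proj, score])

# The random forest operates only on the two combined signals.
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(features(X), y)

def is_misaligned(h):
    """Flag a single hidden state as harmful (1) or benign (0)."""
    return int(forest.predict(features(h[None, :]))[0])
```

Because the forest sees only two scalar features per activation rather than the full hidden state, the per-token detection cost is negligible, which is consistent with the zero-overhead claim above.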


Key Contributions

  • AlignTree: a lightweight random forest classifier that detects jailbreaks by combining the linear refusal direction and non-linear SVM features extracted from LLM hidden states
  • Zero additional inference overhead defense — requires no auxiliary guard LLM, no extra inference passes, and no fine-tuning of the base model
  • Extensive evaluation across nine LLMs and multiple harmfulness benchmarks, outperforming prior defenses on ASR while minimizing unnecessary refusals

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer, traditional_ml
Threat Tags
inference_time, black_box, white_box
Datasets
AdvBench, multiple harmfulness benchmarks
Applications
llm safety alignment, chatbot security, harmful content detection