
Differentiated Directional Intervention: A Framework for Evading LLM Safety Alignment

Peng Zhang, Peijie Sun

1 citation · 23 references · arXiv


Published on arXiv · 2511.06852

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DBDI achieves up to 97.88% attack success rate on Llama-2 by precisely neutralizing both the refusal execution and harm detection directions in the model's activation space.

DBDI (Differentiated Bi-Directional Intervention)

Novel technique introduced


Safety alignment instills in Large Language Models (LLMs) a critical capacity to refuse malicious requests. Prior works have modeled this refusal mechanism as a single linear direction in the activation space. We posit that this is an oversimplification that conflates two functionally distinct neural processes: the detection of harm and the execution of a refusal. In this work, we deconstruct this single representation into a Harm Detection Direction and a Refusal Execution Direction. Leveraging this fine-grained model, we introduce Differentiated Bi-Directional Intervention (DBDI), a new white-box framework that precisely neutralizes safety alignment at a critical layer. DBDI applies adaptive projection nullification to the refusal execution direction while suppressing the harm detection direction via direct steering. Extensive experiments demonstrate that DBDI outperforms prominent jailbreaking methods, achieving up to a 97.88% attack success rate on models such as Llama-2. By providing a more granular and mechanistic framework, our work offers a new direction for the in-depth understanding of LLM safety alignment.


Key Contributions

  • Proposes a bi-directional model of LLM safety alignment, decomposing refusal into a Harm Detection Direction and a Refusal Execution Direction using SVD with classifier-guided sparsification
  • Introduces DBDI (Differentiated Bi-Directional Intervention), a computationally efficient white-box framework that applies adaptive projection nullification to the refusal direction and direct steering to suppress the harm detection direction at a single critical layer
  • Achieves up to 97.88% attack success rate on Llama-2, outperforming existing prominent jailbreaking methods while generalizing across diverse models
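The two interventions described above operate on the hidden state at a single layer: projecting out the refusal-execution component and steering against the harm-detection component. The following is a minimal numpy sketch of that combined operation; the function name, the fixed steering coefficient `alpha`, and the toy vectors are illustrative assumptions, not the paper's actual implementation (which uses adaptive projection and classifier-guided directions).

```python
import numpy as np

def dbdi_intervene(h, refusal_dir, harm_dir, alpha=1.0):
    """Illustrative bi-directional intervention on a hidden-state vector h
    at one layer (a sketch, not the paper's code).

    1. Projection nullification: remove h's component along the
       refusal-execution direction, so the refusal cannot be executed.
    2. Direct steering: subtract a scaled harm-detection direction,
       suppressing the model's detection of harmful content.
    """
    r = refusal_dir / np.linalg.norm(refusal_dir)   # unit refusal-execution direction
    d = harm_dir / np.linalg.norm(harm_dir)         # unit harm-detection direction
    h = h - np.dot(h, r) * r                        # nullify refusal execution
    h = h - alpha * d                               # steer against harm detection
    return h

# Toy example: after intervention, h has no component left along r.
h = np.array([1.0, 2.0, 3.0])
r = np.array([0.0, 1.0, 0.0])
d = np.array([1.0, 0.0, 0.0])
h_out = dbdi_intervene(h, r, d, alpha=0.5)
```

In practice such a hook would be applied to the residual stream of the chosen critical layer during the forward pass; the sketch only shows the vector arithmetic.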

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Datasets
AdvBench
Applications
llm safety alignment, conversational agents, open-source llms