
Attributing and Exploiting Safety Vectors through Global Optimization in Large Language Models

Fengheng Chu 1, Jiahao Chen 2, Yuhong Wang 1, Jun Wang 3, Zhihui Fu 3, Shouling Ji 2, Songze Li 1

0 citations · 34 references · arXiv


Published on arXiv

2601.15801

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Complete safety breakdown occurs when approximately 30% of total attention heads are repatched, and the resulting jailbreak substantially outperforms existing white-box attacks across all test models

GOSV (Global Optimization for Safety Vector Extraction)

Novel technique introduced


While Large Language Models (LLMs) are aligned to mitigate risks, their safety guardrails remain fragile against jailbreak attacks. This fragility reflects a limited understanding of the components that govern safety. Existing methods rely on local, greedy attribution that assumes components contribute independently; they overlook the cooperative interactions between components in LLMs, such as attention heads, which jointly contribute to safety mechanisms. We propose Global Optimization for Safety Vector Extraction (GOSV), a framework that identifies safety-critical attention heads through global optimization over all heads simultaneously. We employ two complementary activation repatching strategies: Harmful Patching and Zero Ablation. These strategies identify two spatially distinct sets of safety vectors with consistently low overlap, termed Malicious Injection Vectors and Safety Suppression Vectors, demonstrating that aligned LLMs maintain separate functional pathways for safety purposes. Through systematic analyses, we find that complete safety breakdown occurs when approximately 30% of total heads are repatched across all models. Building on these insights, we develop a novel inference-time white-box jailbreak method that exploits the identified safety vectors through activation repatching. Our attack substantially outperforms existing white-box attacks across all test models, providing strong evidence for the effectiveness of the proposed GOSV framework on LLM safety interpretability.
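The two repatching strategies from the abstract can be sketched in a few lines. This is a toy illustration on a synthetic per-head activation tensor, not the paper's implementation: the array shapes, the `repatch` helper, and the random 30% mask are all assumptions made for clarity.

```python
# Hypothetical sketch of the two activation repatching strategies:
# Zero Ablation silences selected heads; Harmful Patching injects
# activations recorded on a harmful prompt. Toy tensors, not a real LLM.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_heads, d_head = 4, 8, 16

# Per-head activations captured on a benign and on a harmful prompt.
benign = rng.normal(size=(n_layers, n_heads, d_head))
harmful = rng.normal(size=(n_layers, n_heads, d_head))

def repatch(acts, harmful_acts, mask, mode):
    """Replace activations of the heads selected by a (layer, head) mask."""
    out = acts.copy()
    if mode == "zero":          # Zero Ablation
        out[mask] = 0.0
    elif mode == "harmful":     # Harmful Patching
        out[mask] = harmful_acts[mask]
    return out

# A global binary mask over all layers x heads; the paper reports that
# repatching roughly 30% of heads yields complete safety breakdown,
# so we sample a random 30% subset here purely for illustration.
mask = rng.random((n_layers, n_heads)) < 0.30

zeroed = repatch(benign, harmful, mask, mode="zero")
injected = repatch(benign, harmful, mask, mode="harmful")
print(int(mask.sum()), "of", n_layers * n_heads, "heads repatched")
```

In a real attack the mask would come from GOSV's global optimization and the patched activations would be written back into the model's forward pass at inference time.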


Key Contributions

  • GOSV framework that identifies safety-critical attention heads via global optimization over all heads simultaneously, rather than greedy local attribution
  • Discovery of two spatially distinct safety vector types — Malicious Injection Vectors and Safety Suppression Vectors — showing aligned LLMs maintain separate functional pathways for safety
  • Novel inference-time white-box jailbreak method exploiting identified safety vectors via activation repatching that outperforms existing white-box attacks across all tested models
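The first contribution's distinction between greedy local attribution and global optimization can be made concrete with a toy objective in which two heads only matter when ablated together. The scores, the `damage` objective, and the pairwise bonus below are invented for illustration and are not from the paper.

```python
# Toy illustration (not the paper's algorithm) of why greedy, per-head
# attribution misses cooperative head interactions that a global search
# over subsets captures.
from itertools import combinations

# Hypothetical "safety damage" from ablating each head alone, plus a
# bonus that only fires when two cooperating heads are ablated together.
single = {0: 3.0, 1: 2.5, 2: 1.0, 3: 1.0}
pair_bonus = {frozenset({2, 3}): 6.0}

def damage(subset):
    s = sum(single[h] for h in subset)
    for pair, bonus in pair_bonus.items():
        if pair <= set(subset):
            s += bonus
    return s

def greedy_select(k):
    """Pick heads one at a time by marginal gain (local attribution)."""
    chosen = set()
    for _ in range(k):
        best = max(set(single) - chosen, key=lambda h: damage(chosen | {h}))
        chosen.add(best)
    return chosen

def global_select(k):
    """Search over all size-k subsets jointly (global optimization)."""
    return max((set(c) for c in combinations(single, k)), key=damage)

print(greedy_select(2), damage(greedy_select(2)))   # {0, 1} -> 5.5
print(global_select(2), damage(global_select(2)))   # {2, 3} -> 8.0
```

Greedy picks the two individually strongest heads and never discovers the cooperating pair; GOSV's point is that optimizing over all heads simultaneously recovers exactly such jointly acting safety-critical sets (at realistic scale via optimization rather than exhaustive search).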

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Applications
llm safety alignment, chatbot safety guardrails