Understanding the Effects of Safety Unalignment on Large Language Models
John T. Halloran
Published on arXiv
2604.02574
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Weight orthogonalization (WO) yields models that are 27.7% more successful at adversarial attacks than jailbreak-tuning, retain 11.2% more helpfulness, and hallucinate 39.5% less; supervised fine-tuning reduces WO attack success by 37.5%.
Safety alignment has become a critical step in ensuring that LLMs refuse harmful requests while providing helpful and harmless responses. However, despite the ubiquity of safety alignment in deployed frontier models, two separate lines of recent work, jailbreak-tuning (JT) and weight orthogonalization (WO), have shown that safety guardrails can be largely disabled, resulting in LLMs that comply with harmful requests they would normally refuse. Despite the far-reaching safety implications, analysis has largely been limited to the refusal rates of each unalignment method in isolation, leaving their relative effects on adversarial LLM capabilities unknown. To fill this gap, we study the impact of unaligning six popular LLMs of various sizes across a large number of malicious and benign tasks, using both JT and WO. Across the evaluated models, we show that while refusal degradation is split between the two methods, WO produces LLMs far more capable of aiding in malicious activity; in contrast to JT, the majority of WO-unaligned models are far less prone to hallucinations, better retain their original natural-language performance, and are more effective at state-of-the-art adversarial and cyber attacks. To help mitigate the malicious risks of WO unalignment, we conclude by showing that supervised fine-tuning effectively limits the adversarial attack abilities enabled by WO, without drastically affecting hallucination rates or natural-language performance.
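The abstract names weight orthogonalization but does not define it here. A minimal sketch of the commonly described form of this technique, assuming it ablates an estimated "refusal direction" from weight matrices that write into the residual stream (the direction `r`, the matrix shapes, and the function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's outputs along direction r.

    Assumed setup (illustrative): W is a (d_model, d_in) weight matrix
    writing into the residual stream; r is a (d_model,) refusal
    direction estimated elsewhere (e.g. from contrastive activations).
    """
    r = r / np.linalg.norm(r)          # work with a unit-norm direction
    return W - np.outer(r, r @ W)      # W' = W - r r^T W

# Toy check: after ablation, every output of W' is orthogonal to r,
# so the model can no longer write along the refusal direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
r = rng.normal(size=8)
W_ablated = orthogonalize(W, r)
x = rng.normal(size=4)
print(abs(float(r @ (W_ablated @ x))))  # component along r is ~0
```

Unlike jailbreak-tuning, this edit touches no training data: it is a closed-form projection applied once to the weights, which is consistent with the paper's finding that WO better preserves the model's original natural-language performance.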
Key Contributions
- Comprehensive comparison of jailbreak-tuning vs weight orthogonalization unalignment methods across six LLMs
- Shows WO produces models 27.7% more successful at adversarial attacks and 39.5% less prone to hallucinations than JT
- Demonstrates supervised fine-tuning reduces WO attack capabilities by 37.5% without drastically affecting hallucination rates or natural-language performance