Q-realign: Piggybacking Realignment on Quantization for Safe and Efficient LLM Deployment
Qitao Tan 1, Xiaoying Song 2, Ningxi Cheng 1, Ninghao Liu 3, Xiaoming Zhai 1, Lingzi Hong 2, Yanzhi Wang 4, Zhen Xiang 1, Geng Yuan 1
Published on arXiv
2601.08089
Transfer Learning Attack
OWASP ML Top 10 — ML07
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Q-realign substantially reduces unsafe behaviors in fine-tuned LLMs while preserving task performance, recovering a 7B model's safety alignment on a single RTX 4090 within 40 minutes.
Q-realign
Novel technique introduced
Public large language models (LLMs) are typically safety-aligned during pretraining, yet the task-specific fine-tuning required for deployment often erodes this alignment and introduces safety risks. Existing defenses either embed safety recovery into fine-tuning or rely on fine-tuning-derived priors for post-hoc correction, leaving safety recovery tightly coupled with training and incurring high computational overhead and a complex workflow. To address these challenges, we propose Q-realign, a post-hoc defense method based on post-training quantization, guided by an analysis of representational structure. By reframing quantization as a dual-objective procedure for compression and safety, Q-realign decouples safety alignment from fine-tuning and piggybacks naturally on modern deployment pipelines. Experiments across multiple models and datasets demonstrate that our method substantially reduces unsafe behaviors while preserving task performance, with significant reductions in memory usage and GPU hours. Notably, our approach can recover the safety alignment of a fine-tuned 7B LLM on a single RTX 4090 within 40 minutes. Overall, our work provides a practical, turnkey solution for safety-aware deployment.
Key Contributions
- Q-realign: a post-hoc safety recovery method that reframes post-training quantization as a dual-objective procedure (compression + safety realignment), decoupling safety recovery from fine-tuning
- Analysis of representational structure to guide quantization toward restoring pre-fine-tuning safety representations
- Efficient implementation recovering safety alignment of a 7B LLM on a single RTX 4090 in 40 minutes with substantially reduced unsafe behavior
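The core idea of a dual-objective quantization can be illustrated with a toy sketch. The snippet below is NOT the paper's algorithm; it is a minimal, hypothetical round-to-nearest quantizer in which each weight is snapped to whichever of its two neighboring grid points best trades off fidelity to the fine-tuned weights against closeness to the pre-fine-tuning (safety-aligned) weights. The function name, the mixing coefficient `lam`, and the per-tensor scaling scheme are all illustrative assumptions.

```python
import numpy as np

def quantize_dual_objective(w_ft, w_aligned, n_bits=4, lam=0.5):
    """Toy dual-objective post-training quantization (illustrative only).

    For each weight, pick between the two nearest grid points the one
    minimizing (1 - lam) * (q - w_ft)^2 + lam * (q - w_aligned)^2,
    i.e. trading off reconstruction of the fine-tuned weights (w_ft)
    against closeness to the safety-aligned base weights (w_aligned).
    lam=0 reduces to standard round-to-nearest; lam=1 snaps toward the
    aligned model within the fine-tuned model's quantization grid.
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w_ft)) / qmax  # simple per-tensor symmetric scale

    lo = np.floor(w_ft / scale)  # lower grid candidate (integer level)
    hi = lo + 1                  # upper grid candidate

    def cost(q_int):
        dq = q_int * scale  # dequantized candidate value
        return (1 - lam) * (dq - w_ft) ** 2 + lam * (dq - w_aligned) ** 2

    q = np.where(cost(lo) <= cost(hi), lo, hi)
    q = np.clip(q, -qmax - 1, qmax)  # stay inside the signed integer range
    return q * scale  # dequantized weights
```

Because both candidates come from the same grid, raising `lam` can only move each quantized weight closer to (or keep it as close to) the aligned weights, which is the intuition behind recovering safety "for free" during compression.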
🛡️ Threat Analysis
The paper directly addresses a canonical ML07 threat: task-specific fine-tuning (transfer learning) degrading a model's pretrained safety alignment. It proposes Q-realign as a defense that recovers that alignment after fine-tuning via post-training quantization.