Defense · 2025

Rethinking Safety in LLM Fine-tuning: An Optimization Perspective

Minseon Kim 1, Jin Myung Kwak 2, Lama Alssum 3, Bernard Ghanem 4,5, Philip Torr 3, David Krueger 6,7, Fazl Barez 7, Adel Bibi 7


Published on arXiv (arXiv:2508.12531)

Transfer Learning Attack

OWASP ML Top 10 — ML07

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Proper hyperparameter selection alone reduces harmful response rates from 16% to ~5% on Llama family models, with EMA momentum further improving safety without any additional safety data.

Parameter-space EMA momentum

Novel technique introduced


Fine-tuning language models is commonly believed to inevitably harm their safety (i.e., their ability to refuse harmful user requests), even when using harmless datasets, thus requiring additional safety measures. We challenge this belief through systematic testing, showing that poor optimization choices, rather than inherent trade-offs, often cause safety problems, measured as harmful responses to adversarial prompts. By properly selecting key training hyperparameters (e.g., learning rate, batch size, and number of gradient steps), we reduce unsafe model responses from 16% to approximately 5%, as measured by keyword matching, while maintaining utility performance. Based on this observation, we propose a simple exponential moving average (EMA) momentum technique in parameter space that preserves safety by creating a stable optimization path and retaining the original pre-trained model's safety properties. Our experiments on the Llama model families across multiple datasets (Dolly, Alpaca, ORCA) demonstrate that safety problems during fine-tuning can largely be avoided without specialized interventions, outperforming existing approaches that require additional safety data, while offering practical guidelines for maintaining both model performance and safety during adaptation.
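The abstract's 16% and ~5% harmful-response rates are measured by keyword matching: a response counts as safe if it contains a refusal phrase. A minimal sketch of that style of metric is below; the refusal keyword list and example responses are illustrative assumptions, not the paper's exact evaluation set.

```python
# Hedged sketch of keyword-matching safety evaluation. The keyword
# list here is an illustrative assumption, not the paper's exact list.
REFUSAL_KEYWORDS = ["i cannot", "i can't", "i'm sorry", "as an ai"]

def is_refusal(response: str) -> bool:
    """A response is treated as safe if any refusal keyword appears."""
    text = response.lower()
    return any(kw in text for kw in REFUSAL_KEYWORDS)

def harmful_rate(responses: list[str]) -> float:
    """Fraction of responses that lack a refusal keyword (counted harmful)."""
    return sum(not is_refusal(r) for r in responses) / len(responses)

# Toy batch: one compliant answer out of three adversarial prompts.
responses = [
    "I cannot help with that request.",
    "Sure, here is how you do it: ...",
    "I'm sorry, but I can't assist with that.",
]
print(harmful_rate(responses))  # one of three lacks a refusal
```

Keyword matching is cheap but coarse: it misses harmful responses that happen to contain a refusal phrase, which is why it is only a proxy for safety.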


Key Contributions

  • Demonstrates that safety degradation during LLM fine-tuning is largely attributable to poor optimization choices (learning rate, batch size, gradient steps) rather than an inherent fine-tuning trade-off
  • Proposes a parameter-space exponential moving average (EMA) momentum technique that maintains a stable optimization path and preserves pre-trained safety properties during fine-tuning
  • Provides practical hyperparameter guidelines that reduce harmful response rates from 16% to ~5% without requiring additional safety data, outperforming existing safety-data-augmented approaches
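The EMA technique keeps a second, smoothed copy of the weights that is updated after every optimizer step and stays anchored near the pre-trained parameters. A minimal sketch of that parameter-space update is below; the momentum value `beta=0.99`, the toy weights, and the stand-in "optimizer step" are illustrative assumptions, not the paper's reported hyperparameters.

```python
# Hedged sketch of parameter-space EMA momentum during fine-tuning:
# ema <- beta * ema + (1 - beta) * theta after each optimizer step.
def ema_update(ema_params: list[float], params: list[float],
               beta: float = 0.99) -> list[float]:
    """Blend the current weights into the EMA copy."""
    return [beta * e + (1.0 - beta) * p for e, p in zip(ema_params, params)]

# Toy example: fine-tuning drifts a 3-weight "model" away from its
# pre-trained values; the EMA copy lags behind, staying closer to them.
pretrained = [1.0, -0.5, 2.0]
params = list(pretrained)
ema = list(pretrained)  # EMA is initialized at the pre-trained weights

for step in range(100):
    # Stand-in for an optimizer step: each weight drifts by a fixed amount.
    params = [p + 0.01 for p in params]
    ema = ema_update(ema, params, beta=0.99)

print(params[0])  # drifted fine-tuned weight
print(ema[0])     # EMA weight, between the pre-trained value and the drift
```

Serving or evaluating the EMA copy, rather than the raw fine-tuned weights, is what lets the method retain the pre-trained model's safety behavior without any extra safety data.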

🛡️ Threat Analysis

Transfer Learning Attack

Paper specifically addresses how the fine-tuning/transfer learning process degrades pre-trained safety properties — exploiting the gap between pre-training and fine-tuning distributions — and proposes a defense (EMA momentum) to preserve safety across this adaptation step.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, black_box
Datasets
Dolly, Alpaca, ORCA
Applications
llm fine-tuning, instruction tuning, chat assistants