Defense · 2025

Gradient Surgery for Safe LLM Fine-Tuning

Biao Yi, Jiahao Li, Baolei Zhang, Lihai Nie, Tong Li, Tiansheng Huang, Zheli Liu

0 citations


Published on arXiv: 2508.07172

OWASP ML Top 10 — ML02: Data Poisoning Attack

OWASP LLM Top 10 — LLM03: Training Data Poisoning

Key Finding

SafeGrad achieves state-of-the-art safety defense across multiple LLMs, maintaining robust alignment even at high harmful data ratios with negligible degradation in downstream task fidelity.

SafeGrad

Novel technique introduced


Fine-tuning-as-a-Service introduces a critical vulnerability where a few malicious examples mixed into the user's fine-tuning dataset can compromise the safety alignment of Large Language Models (LLMs). While a recognized paradigm frames safe fine-tuning as a multi-objective optimization problem balancing user task performance with safety alignment, we find existing solutions are critically sensitive to the harmful ratio, with defenses degrading sharply as the harmful ratio increases. We diagnose that this failure stems from conflicting gradients, where the user-task update directly undermines the safety objective. To resolve this, we propose SafeGrad, a novel method that employs gradient surgery. When a conflict is detected, SafeGrad nullifies the harmful component of the user-task gradient by projecting it onto the orthogonal plane of the alignment gradient, allowing the model to learn the user's task without sacrificing safety. To further enhance robustness and data efficiency, we employ a KL-divergence alignment loss that learns the rich, distributional safety profile of the well-aligned foundation model. Extensive experiments show that SafeGrad provides state-of-the-art defense across various LLMs and datasets, maintaining robust safety even at high harmful ratios without compromising task fidelity.
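The projection step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the conflict criterion is a negative inner product between the flattened user-task and alignment gradients (as in PCGrad-style gradient surgery), and the function name is illustrative.

```python
import numpy as np

def project_conflicting(g_task: np.ndarray, g_align: np.ndarray) -> np.ndarray:
    """Sketch of a gradient-surgery step.

    If the user-task gradient conflicts with the alignment gradient
    (negative dot product), remove its component along the alignment
    direction by projecting onto the plane orthogonal to g_align.
    Otherwise the task gradient is returned unchanged.
    """
    dot = float(np.dot(g_task, g_align))
    if dot < 0:  # conflict detected: task update would undo alignment
        g_task = g_task - (dot / float(np.dot(g_align, g_align))) * g_align
    return g_task

# Example: a task gradient that points against the safety direction
g_task = np.array([1.0, -1.0])
g_align = np.array([0.0, 1.0])
g_safe = project_conflicting(g_task, g_align)  # → array([1., 0.])
```

After projection, the surviving update is orthogonal to the alignment gradient, so (to first order) it no longer decreases the safety objective while still moving along the task direction.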


Key Contributions

  • Diagnoses gradient conflict — where user-task updates directly oppose alignment gradients — as the root cause of safety degradation under harmful fine-tuning attacks
  • Proposes SafeGrad, which projects user-task gradients onto the orthogonal complement of the alignment gradient to nullify their harmful component while preserving task learning
  • Employs a KL-divergence alignment loss to capture the full distributional safety profile of the well-aligned foundation model, improving robustness especially at high harmful data ratios
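The KL-divergence alignment loss in the last contribution can be illustrated as follows. This is a hedged sketch under simple assumptions: the frozen, well-aligned foundation model acts as a teacher, and the loss is KL(teacher ‖ student) over next-token distributions computed from raw logits; the helper names are hypothetical.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_alignment_loss(student_logits: np.ndarray, teacher_logits: np.ndarray) -> float:
    """Mean KL(teacher || student) across positions (sketch).

    Matching the teacher's full output distribution transfers a richer
    safety signal than hard labels: the loss is zero only when the
    fine-tuned model reproduces the aligned model's entire distribution.
    """
    p = softmax(teacher_logits)  # frozen aligned reference model
    q = softmax(student_logits)  # model being fine-tuned
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Identical logits give zero loss; any divergence is penalized
logits = np.array([[2.0, 0.5, -1.0]])
loss_same = kl_alignment_loss(logits, logits)           # → 0.0
loss_diff = kl_alignment_loss(logits + 1.0, logits * 2)  # > 0
```

The gradient of this loss is what SafeGrad uses as the alignment direction in its projection step, so the two components plug together in one training loop.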

🛡️ Threat Analysis

Data Poisoning Attack

The attack defended against is data poisoning: malicious examples mixed into the user's fine-tuning dataset to degrade safety alignment. SafeGrad is a defense that operates at training time to neutralize the effect of these poisoned samples on the safety objective.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time
Applications
llm fine-tuning, fine-tuning-as-a-service