
Antibody: Strengthening Defense Against Harmful Fine-Tuning for Large Language Models via Attenuating Harmful Gradient Influence

Quoc Minh Nguyen, Trung Le, Jing Wu, Anh Tuan Bui, Mehrtash Harandi



Published on arXiv (2603.00498)

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Training Data Poisoning (OWASP LLM Top 10 — LLM03)

Key Finding

Antibody successfully mitigates harmful fine-tuning attacks while simultaneously improving fine-tuning performance on the benign user-submitted dataset.

Antibody

Novel technique introduced


Abstract

Fine-tuning-as-a-service introduces a threat to Large Language Models' safety when service providers fine-tune their models on poisoned user-submitted datasets, a process known as harmful fine-tuning attacks. In this work, we show that by regularizing the gradient contribution of harmful samples encountered during fine-tuning, we can effectively mitigate the impact of harmful fine-tuning attacks. To this end, we introduce Antibody, a defense strategy that first ensures robust safety alignment for the model before fine-tuning, and then applies a safety-preservation learning algorithm during fine-tuning. Specifically, in the alignment stage before fine-tuning, we propose optimizing the model to be in a flat loss region with respect to harmful samples, which makes the safety alignment more resilient to subsequent harmful fine-tuning. Then, in the fine-tuning stage, we design a fine-tuning algorithm that applies a weighting scheme to all samples in each training batch to inhibit the model from learning from harmful samples while encouraging learning from benign samples. Experimental results demonstrate that Antibody successfully mitigates harmful fine-tuning attacks while boosting fine-tuning performance on the user-submitted dataset.
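The flatness idea in the alignment stage can be illustrated with a sharpness-aware (SAM-style) update on a toy loss: ascend to the worst-case point in a small ball around the weights, then descend using the gradient taken there, which biases the iterate toward flat regions. This is a minimal sketch of the mechanism only; the function names, the toy quadratic objective, and the `rho`/`lr` values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy quadratic standing in for the loss on harmful samples:
# L(w) = 0.5 * w^T A w (illustrative assumption, not the paper's objective).
A = np.diag([1.0, 10.0])

def harmful_loss(w):
    return 0.5 * w @ A @ w

def harmful_grad(w):
    return A @ w

def sharpness_aware_step(w, lr=0.05, rho=0.05):
    # 1) Perturb toward the worst case within a rho-ball around w.
    g = harmful_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # 2) Descend with the gradient at the perturbed point; penalizing the
    #    worst case in the neighborhood seeks flat regions of the loss.
    return w - lr * harmful_grad(w + eps)

w = np.array([1.0, 1.0])
initial = harmful_loss(w)
for _ in range(100):
    w = sharpness_aware_step(w)
print(harmful_loss(w) < initial)
```

In a flat region, a bounded harmful gradient step moves the loss only slightly, which is what makes the safety alignment harder to undo during subsequent fine-tuning.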


Key Contributions

  • Pre-fine-tuning alignment stage that places the model in a flat loss region w.r.t. harmful samples, making safety alignment more resilient to subsequent poisoning
  • Safety-preservation fine-tuning algorithm that applies per-sample weighting to inhibit learning from harmful samples while encouraging learning from benign ones
  • Empirical demonstration that gradient-level regularization of harmful samples effectively mitigates harmful fine-tuning attacks without degrading utility on benign data
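The per-sample weighting idea in the fine-tuning stage can be sketched as follows: each sample in a batch receives a weight, and samples judged harmful contribute little or nothing to the batch loss while benign samples are re-weighted upward. The `harm_scores` input and the specific `1 - score` rule below are illustrative assumptions; the paper's actual weighting scheme is not reproduced here.

```python
import numpy as np

def weighted_batch_loss(losses, harm_scores):
    """Down-weight likely-harmful samples in a training batch.

    losses: per-sample loss values for the batch.
    harm_scores: assumed scores in [0, 1], where 1 means harmful.
    """
    weights = 1.0 - np.asarray(harm_scores, dtype=float)
    # Renormalize so benign samples absorb the removed harmful mass.
    weights = weights / (weights.sum() + 1e-12)
    return float(np.sum(weights * np.asarray(losses, dtype=float)))

losses = np.array([2.0, 1.0, 3.0])
harm_scores = np.array([1.0, 0.0, 0.0])  # first sample flagged harmful
print(weighted_batch_loss(losses, harm_scores))  # → 2.0
```

Because the harmful sample's weight is zero, its gradient contribution vanishes from the batch update, while the benign samples' contributions are scaled up, matching the stated goal of inhibiting learning from harmful samples while encouraging learning from benign ones.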

🛡️ Threat Analysis

Data Poisoning Attack

The attack being defended against is data poisoning: malicious users submit harmful samples in fine-tuning datasets to erode safety alignment. Antibody directly defends against this training-time data poisoning threat via gradient regularization and sample weighting.


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: training_time, black_box
Datasets: AdvBench
Applications: llm fine-tuning services, safety-aligned language models