Jiahao Li

Papers in Database (1)

defense · arXiv · Aug 10, 2025

Gradient Surgery for Safe LLM Fine-Tuning

Biao Yi, Jiahao Li, Baolei Zhang et al. · Nankai University

A gradient-surgery defense that nullifies safety-conflicting gradient components during LLM fine-tuning, making the model resistant to adversarial data-poisoning attacks.
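The summary does not specify the paper's exact update rule, but "gradient surgery" commonly means projecting out the component of a gradient that conflicts with a reference direction. A minimal sketch, assuming a PCGrad-style projection against a safety gradient (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def gradient_surgery(g_task, g_safety):
    """If the fine-tuning gradient conflicts with the safety gradient
    (negative dot product), remove the conflicting component by
    projecting it out; otherwise return the gradient unchanged."""
    dot = g_task @ g_safety
    if dot < 0:
        g_task = g_task - (dot / (g_safety @ g_safety)) * g_safety
    return g_task

g_safety = np.array([1.0, 0.0])
g_conflict = np.array([-1.0, 1.0])   # dot product is -1, so it conflicts
print(gradient_surgery(g_conflict, g_safety))  # → [0. 1.]
```

After surgery, the conflicting gradient no longer has any component opposing the safety direction, which is the sense in which safety-conflicting gradients are "nullified".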

Data Poisoning Attack · Training Data Poisoning · NLP
PDF · Code