Anh Tuan Bui

Papers in Database (1)

defense · arXiv · Feb 28, 2026

Antibody: Strengthening Defense Against Harmful Fine-Tuning for Large Language Models via Attenuating Harmful Gradient Influence

Quoc Minh Nguyen, Trung Le, Jing Wu et al. · Monash University

Defends LLMs against harmful fine-tuning attacks by aligning safety in flat regions of the loss landscape before fine-tuning, then down-weighting the gradient influence of suspected poisoned samples during fine-tuning

Data Poisoning Attack · Training Data Poisoning · nlp