attack 2026

TrojanPraise: Jailbreak LLMs via Benign Fine-Tuning

Zhixin Xie , Xurui Song , Jun Luo

2 citations · 81 references · arXiv

α

Published on arXiv

2601.12460

Transfer Learning Attack

OWASP ML Top 10 — ML07

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves up to 95.88% jailbreak success rate on seven LLMs while fine-tuning data passes Llama-Guard-3 moderation with stealthiness comparable to genuinely benign datasets

TrojanPraise

Novel technique introduced


The demand of customized large language models (LLMs) has led to commercial LLMs offering black-box fine-tuning APIs, yet this convenience introduces a critical security loophole: attackers could jailbreak the LLMs by fine-tuning them with malicious data. Though this security issue has recently been exposed, the feasibility of such attacks is questionable as malicious training dataset is believed to be detectable by moderation models such as Llama-Guard-3. In this paper, we propose TrojanPraise, a novel finetuning-based attack exploiting benign and thus filter-approved data. Basically, TrojanPraise fine-tunes the model to associate a crafted word (e.g., "bruaf") with harmless connotations, then uses this word to praise harmful concepts, subtly shifting the LLM from refusal to compliance. To explain the attack, we decouple the LLM's internal representation of a query into two dimensions of knowledge and attitude. We demonstrate that successful jailbreak requires shifting the attitude while avoiding knowledge shift, a distortion in the model's understanding of the concept. To validate this attack, we conduct experiments on five opensource LLMs and two commercial LLMs under strict black-box settings. Results show that TrojanPraise achieves a maximum attack success rate of 95.88% while evading moderation.


Key Contributions

  • TrojanPraise: a fine-tuning attack using fully benign training data that embeds a novel word ('bruaf') to praise harmful concepts, evading moderation while achieving up to 95.88% attack success rate
  • Knowledge-attitude decoupling framework: formalizes successful jailbreaks as requiring attitude shift (toward compliance) while preserving knowledge (accurate understanding of harmful concepts), explaining why naive fine-tuning attacks fail
  • Empirical validation across 5 open-source and 2 commercial LLMs under strict black-box settings, demonstrating evasion of Llama-Guard-3 and competitive performance against attacks using explicitly malicious data

🛡️ Threat Analysis

Transfer Learning Attack

The attack directly exploits the commercial fine-tuning API (Fine-tuning-as-a-Service) to corrupt RLHF-trained safety alignment through carefully crafted training data — the attack vector is the transfer learning/fine-tuning process itself, analogous to RLHF preference manipulation to embed malicious behavior.


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxtraining_timetargeted
Applications
llm fine-tuning apiscommercial llm servicesfine-tuning-as-a-service platforms