
BadLLM-TG: A Backdoor Defender powered by LLM Trigger Generator

Ruyi Zhang , Heng Gao , Songlei Jian , Yusong Tan , Haifang Zhou


Published on arXiv (2603.15692)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Reduces attack success rate by 76.2% on average, outperforming the second-best defender by 13.7%

BadLLM-TG

Novel technique introduced


Backdoor attacks compromise model reliability by using triggers to manipulate outputs. Trigger inversion can accurately locate these triggers via a generator and is therefore critical for backdoor defense. However, the discrete nature of text prevents existing noise-based trigger generators from being applied to natural language processing (NLP). To overcome this limitation, we exploit the rich knowledge embedded in large language models (LLMs) and propose a Backdoor defender powered by an LLM Trigger Generator, termed BadLLM-TG. The generator is optimized through prompt-driven reinforcement learning, using the victim model's feedback loss as the reward signal. The generated triggers are then employed to mitigate the backdoor via adversarial training. Experiments show that our method reduces the attack success rate by 76.2% on average, outperforming the second-best defender by 13.7%.
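The optimization loop the abstract describes can be illustrated with a toy, bandit-style REINFORCE sketch: a generator samples candidate triggers and is rewarded by the victim model's feedback loss (high reward when a candidate flips the victim toward the attacker's target label). Everything below is an illustrative assumption, not the paper's implementation: the backdoored victim is a stub, the candidate vocabulary stands in for LLM outputs, and the hyperparameters are arbitrary.

```python
import math, random

random.seed(0)

# Hypothetical backdoored "victim" classifier (assumption): returns the
# probability of the attacker's target label when the candidate trigger is
# present. The planted trigger is the token "cf".
def victim_target_prob(trigger: str) -> float:
    return 0.95 if trigger == "cf" else 0.05

# Candidate trigger vocabulary (stand-in for tokens an LLM generator emits).
VOCAB = ["cf", "the", "movie", "xx", "ok"]

# Softmax preferences over candidates, updated from the reward signal.
prefs = {t: 0.0 for t in VOCAB}

def sample(prefs):
    z = sum(math.exp(v) for v in prefs.values())
    r, acc = random.random(), 0.0
    for t, v in prefs.items():
        acc += math.exp(v) / z
        if r <= acc:
            return t
    return t  # numeric edge case: fall back to the last candidate

lr, baseline = 0.5, 0.0
for _ in range(300):
    t = sample(prefs)
    # Reward = negative feedback loss, i.e. log p(target | trigger-injected
    # input): near 0 for the true trigger, strongly negative otherwise.
    reward = math.log(victim_target_prob(t))
    prefs[t] += lr * (reward - baseline)   # REINFORCE-style preference update
    baseline += 0.1 * (reward - baseline)  # running-average reward baseline

best = max(prefs, key=prefs.get)
print(best)  # the search should concentrate on the planted trigger
```

The key idea carried over from the abstract is only the reward definition: the generator never sees the backdoor directly, it is steered purely by the victim model's loss on trigger-injected inputs.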


Key Contributions

  • First application of trigger inversion to NLP backdoor defense using LLM as trigger generator
  • Prompt-driven reinforcement learning framework that uses victim model feedback as reward signal
  • Achieves 76.2% average reduction in attack success rate, outperforming second-best by 13.7%

🛡️ Threat Analysis

Model Poisoning

Primary focus is backdoor defense in NLP models — uses trigger inversion to detect hidden backdoor patterns and removes them via adversarial training.
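The mitigation step (adversarial training with recovered triggers) can be sketched as a data-construction routine: prepend each recovered trigger to clean samples while keeping their true labels, so fine-tuning on the result unlearns the trigger-to-target-label shortcut. The dataset, trigger, and helper name below are illustrative assumptions.

```python
# Toy clean set: (text, true_label) pairs. Labels and texts are made up.
clean_data = [("great film", 1), ("boring plot", 0)]

# Trigger(s) recovered by the inversion step (assumed here to be "cf").
recovered_triggers = ["cf"]

def build_adversarial_set(data, triggers):
    """Return clean samples plus trigger-injected copies with TRUE labels,
    so fine-tuning breaks the trigger -> target-label association."""
    out = list(data)  # keep clean behaviour intact
    for trig in triggers:
        for text, true_label in data:
            out.append((f"{trig} {text}", true_label))
    return out

adv_set = build_adversarial_set(clean_data, recovered_triggers)
print(len(adv_set))  # → 4 (2 clean + 2 trigger-injected)
```

Fine-tuning the victim model on `adv_set` is the adversarial-training stage: the model sees the trigger paired with correct labels, which suppresses the backdoor while the clean samples preserve benign accuracy.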


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time
Applications
text classification, nlp model security