attack 2026

Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs

Jingyuan Xie 1, Wenjie Wang 1, Ji Wu 1,1,2, Jiandong Gao 1

0 citations

α

Published on arXiv

2603.02262

Data Poisoning Attack

OWASP ML Top 10 — ML02

Training Data Poisoning

OWASP LLM Top 10 — LLM03

Key Finding

Rationale poisoning with a minimum number/ratio of poisoned samples causes significant accuracy decline on targeted medical subjects during SFT, outperforming catastrophic forgetting in both efficiency and stealth, while simple knowledge overwriting proves entirely ineffective.

Few-Shot Rationale Poisoning

Novel technique introduced


Supervised fine-tuning (SFT) is essential for the development of medical large language models (LLMs), yet prior poisoning studies have mainly focused on the detectable backdoor attacks. We propose a novel poisoning attack targeting the reasoning process of medical LLMs during SFT. Unlike backdoor attacks, our method injects poisoned rationales into few-shot training data, leading to stealthy degradation of model performance on targeted medical topics. Results showed that knowledge overwriting was ineffective, while rationale poisoning caused significant decline on the accuracy of the target subject, as long as no correct samples of the same subject appear in the dataset. A minimum number and ratio of poisoned samples was needed to carry out an effective and stealthy attack, which was more efficient and accurate than catastrophic forgetting. We demonstrate though this study the risk of SFT-stage poisoning, hoping to spur more studies of defense in the sensitive medical domain.


Key Contributions

  • Proposes few-shot rationale poisoning — a novel SFT-stage attack that injects incorrect reasoning chains into medical QA training data to degrade LLM performance on targeted subjects without trigger patterns.
  • Identifies critical attack conditions: minimum poisoned sample count/ratio is required, correct target-subject samples in the dataset significantly mitigate the attack, and simple knowledge overwriting fails entirely.
  • Demonstrates that rationale poisoning is more efficient and stealthier than catastrophic forgetting as an attack strategy against medical LLMs.

🛡️ Threat Analysis

Data Poisoning Attack

The core contribution is a data poisoning attack: the attacker injects corrupted rationales into the SFT training dataset to degrade model performance on targeted medical subjects. This is canonical data poisoning — corrupting training data to induce biased/degraded behavior — without trigger-based backdoor semantics.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
training_timetargetedgrey_box
Datasets
MedQA
Applications
medical llmsmedical question answeringclinical decision support