attack 2026

Phantom Transfer: Data-level Defences are Insufficient Against Data Poisoning

Andrew Draganov 1, Tolga H. Dur 1, Anandmayi Bhongade 1, Mary Phuong 2

0 citations · 48 references · arXiv (Cornell University)

α

Published on arXiv

2602.04899

Data Poisoning Attack

OWASP ML Top 10 — ML02

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Data-level defenses including full paraphrasing are insufficient to remove Phantom Transfer poisoning, with password-triggered backdoor behaviors successfully planted in GPT-4.1 and other models.

Phantom Transfer

Novel technique introduced


We present a data poisoning attack -- Phantom Transfer -- with the property that, even if you know precisely how the poison was placed into an otherwise benign dataset, you cannot filter it out. We achieve this by modifying subliminal learning to work in real-world contexts and demonstrate that the attack works across models, including GPT-4.1. Indeed, even fully paraphrasing every sample in the dataset using a different model does not stop the attack. We also discuss connections to steering vectors and show that one can plant password-triggered behaviours into models while still beating defences. This suggests that data-level defences are insufficient for stopping sophisticated data poisoning attacks. We suggest that future work should focus on model audits and white-box security methods.


Key Contributions

  • Phantom Transfer attack: a data poisoning method that cannot be filtered even with full knowledge of the poisoning process, achieved by adapting subliminal learning to real-world contexts
  • Demonstration that the attack survives full dataset paraphrasing by a different model, rendering a common data-level defense ineffective
  • Empirical evidence that password-triggered backdoor behaviors can be planted via data poisoning while defeating existing defenses, including across GPT-4.1

🛡️ Threat Analysis

Data Poisoning Attack

Primary contribution is a data poisoning attack (Phantom Transfer) that corrupts training data in a way that cannot be filtered out even with full knowledge of how the poison was placed — directly targeting the training data pipeline and demonstrating that data-level defenses (including full paraphrasing) are insufficient.

Model Poisoning

The paper explicitly demonstrates planting password-triggered behaviors into models via the poisoning technique — a classic backdoor/trojan attack where specific trigger inputs activate hidden malicious behavior while the model behaves normally otherwise.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
training_timetargetedblack_box
Applications
language model fine-tuninginstruction tuningllm safety alignment