Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples
Alexandra Souly 1, Javier Rando 2,3, Ed Chapman 4, Xander Davies 1,5, Burak Hasircioglu 4, Ezzeldin Shereen 4, Carlos Mougan 4, Vasilios Mavroudis 4, Erik Jones 2, Chris Hicks 4, Nicholas Carlini 2, Yarin Gal 5, Robert Kirk 1
Published on arXiv
2510.07192
Model Poisoning
OWASP ML Top 10 — ML10
Data Poisoning Attack
OWASP ML Top 10 — ML02
Training Data Poisoning
OWASP LLM Top 10 — LLM03
Key Finding
250 poisoned documents successfully inject a backdoor into LLMs from 600M to 13B parameters, even when the largest models train on 20× more clean data than the smallest.
Poisoning attacks can compromise the safety of large language models (LLMs) by injecting malicious documents into their training data. Existing work has studied pretraining poisoning assuming adversaries control a percentage of the training corpus. However, for large models, even small percentages translate to impractically large amounts of data. This work demonstrates for the first time that poisoning attacks instead require a near-constant number of documents regardless of dataset size. We conduct the largest pretraining poisoning experiments to date, pretraining models from 600M to 13B parameters on chinchilla-optimal datasets (6B to 260B tokens). We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data. We also run smaller-scale experiments to ablate factors that could influence attack success, including broader ratios of poisoned to clean data and non-random distributions of poisoned samples. Finally, we demonstrate the same dynamics for poisoning during fine-tuning. Altogether, our results suggest that injecting backdoors through data poisoning may be easier for large models than previously believed as the number of poisons required does not scale up with model size, highlighting the need for more research on defences to mitigate this risk in future models.
Key Contributions
- First demonstration that LLM backdoor poisoning requires a near-constant number of documents (~250) regardless of dataset or model size, from 600M to 13B parameters.
- Largest pretraining poisoning experiments to date, training on Chinchilla-optimal datasets (6B–260B tokens), showing attack success is determined by absolute poison count rather than poisoning percentage.
- Ablation studies on poisoning ratio, per-batch density, continued pretraining on clean data, and fine-tuning stage, all confirming the near-constant sample requirement finding.
🛡️ Threat Analysis
The attack vector is data poisoning of the training corpus (injecting malicious documents into pretraining and fine-tuning data), which is what ML02 covers; the paper ablates poisoning ratio, density, and distribution effects on attack success.
Core contribution is backdoor injection — trigger-activated malicious behavior embedded via poisoned pretraining/fine-tuning data, studied across 600M–13B parameter models with near-constant poison counts.