PoTS: Proof-of-Training-Steps for Backdoor Detection in Large Language Models
Issam Seddik 1,2, Sami Souihi 1,2, Mohamed Tamaazousti 1,2, Sara Tucci Piergiovanni 1,2
Published on arXiv
2510.15106
Model Poisoning
OWASP ML Top 10 — ML10
Data Poisoning Attack
OWASP ML Top 10 — ML02
Key Finding
PoTS detects backdoor injection at the exact poisoned training step with verification 3x faster than a training step, reliably catching attacks at a 10% poisoning rate using only 30% of the verification budget.
PoTS (Proof-of-Training-Steps)
Novel technique introduced
As Large Language Models (LLMs) gain traction across critical domains, ensuring secure and trustworthy training processes has become a major concern. Backdoor attacks, where malicious actors inject hidden triggers into training data, are particularly insidious and difficult to detect. Existing post-training verification solutions like Proof-of-Learning are impractical for LLMs due to their requirement for full retraining, lack of robustness against stealthy manipulations, and inability to provide early detection during training. Early detection would significantly reduce computational costs. To address these limitations, we introduce Proof-of-Training Steps, a verification protocol that enables an independent auditor (Alice) to confirm that an LLM developer (Bob) has followed the declared training recipe, including data batches, architecture, and hyperparameters. By analyzing the sensitivity of the LLMs' language modeling head (LM-Head) to input perturbations, our method can expose subtle backdoor injections or deviations in training. Even with backdoor triggers in up to 10 percent of the training data, our protocol significantly reduces the attacker's ability to achieve a high attack success rate (ASR). Our method enables early detection of attacks at the injection step, with verification steps being 3x faster than training steps. Our results highlight the protocol's potential to enhance the accountability and security of LLM development, especially against insider threats.
Key Contributions
- PoTS verification protocol enabling step-by-step auditing of LLM training, detecting backdoor injection at the exact training step it is introduced rather than post-training
- Demonstrates that analyzing only the LM-Head layer's sensitivity to input perturbations is sufficient for reliable backdoor detection, reducing verification cost by up to 70% (3x faster than a training step)
- Empirically validates detection of backdoor attacks with poisoning rates as low as 10% while using only 30% of total training cost for verification across Llama, Falcon, and Qwen architectures
🛡️ Threat Analysis
The paper explicitly frames the attack as a Data Poisoning Attack (DPA) — adversaries poison up to 10% of training batches with trigger sequences. The defense is evaluated against this training-data-level poisoning mechanism, not just post-hoc backdoor presence.
The paper's primary contribution is a defense against backdoor attacks (Targeted Refusal and Jailbreaking trigger categories) embedded during LLM training. PoTS detects hidden trigger injections at the exact training step they are inserted by analyzing LM-Head weight sensitivity to perturbations.