The Trigger in the Haystack: Extracting and Reconstructing LLM Backdoor Triggers
Blake Bullwinkel , Giorgio Severi , Keegan Hines , Amanda Minnich , Ram Shankar Siva Kumar , Yonatan Zunger
Published on arXiv
2602.03085
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
The scanner recovers working backdoor triggers across multiple backdoor scenarios and a broad range of LLMs and fine-tuning methods without requiring labeled backdoor examples or knowledge of the poisoning task.
Detecting whether a model has been poisoned is a longstanding problem in AI security. In this work, we present a practical scanner for identifying sleeper agent-style backdoors in causal language models. Our approach relies on two key findings: first, sleeper agents tend to memorize poisoning data, making it possible to leak backdoor examples using memory extraction techniques. Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input. Guided by these observations, we develop a scalable backdoor scanning methodology that assumes no prior knowledge of the trigger or target behavior and requires only inference operations. Our scanner integrates naturally into broader defensive strategies and does not alter model performance. We show that our method recovers working triggers across multiple backdoor scenarios and a broad range of models and fine-tuning methods.
Key Contributions
- Demonstrates that sleeper agent LLMs memorize poisoning data, enabling extraction of full backdoor examples (trigger, prompt, target output) via memory extraction techniques
- Identifies distinctive output distribution and attention head patterns in backdoored LLMs when triggers are present in the input
- Develops a scalable backdoor scanning methodology requiring only inference operations and no prior knowledge of trigger identity or target behavior
🛡️ Threat Analysis
The paper is fundamentally a defense against backdoor/trojan attacks in LLMs. It proposes a scanner that detects sleeper agent-style backdoors and recovers working backdoor triggers without prior knowledge of the trigger or target behavior — directly addressing the ML10 threat of hidden, trigger-activated malicious behavior embedded in models.