TFL: Targeted Bit-Flip Attack on Large Language Model
Jingkai Guo, Chaitali Chakrabarti, Deliang Fan
Published on arXiv
2602.17837
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
TFL achieves targeted LLM output manipulation on Qwen, DeepSeek, and Llama models with fewer than 50 bit flips while significantly reducing collateral degradation on unrelated queries compared to prior BFA methods.
TFL
Novel technique introduced
Large language models (LLMs) are increasingly deployed in safety- and security-critical applications, raising concerns about their robustness to model-parameter fault-injection attacks. Recent studies have shown that bit-flip attacks (BFAs), which exploit computer main memory (i.e., DRAM) vulnerabilities to flip a small number of bits in model weights, can severely disrupt LLM behavior. However, existing BFAs on LLMs largely induce untargeted failures or general performance degradation, offering limited control over specific or targeted outputs. In this paper, we present TFL, a novel targeted bit-flip attack framework that enables precise manipulation of LLM outputs for selected prompts while causing little to no degradation on unrelated inputs. Within our TFL framework, we propose a novel keyword-focused attack loss that promotes attacker-specified target tokens in generative outputs, together with an auxiliary utility score that balances attack effectiveness against collateral performance impact on benign data. We evaluate TFL on multiple LLMs (Qwen, DeepSeek, Llama) and benchmarks (DROP, GSM8K, and TriviaQA). The experiments show that TFL achieves successful targeted LLM output manipulation with fewer than 50 bit flips and significantly reduced impact on unrelated queries compared to prior BFA approaches. This demonstrates the effectiveness of TFL and positions it as a new class of stealthy, targeted LLM attacks.
Key Contributions
- TFL: first targeted bit-flip attack framework for LLMs that induces attacker-specified outputs for selected prompts while minimizing degradation on unrelated queries
- Keyword-focused attack loss that directly optimizes promotion of target tokens in generative LLM outputs
- Auxiliary utility score that balances attack effectiveness against collateral performance impact on benign data during bit-flip selection
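The paper does not publish its objective in this summary, but the two contributions above can be illustrated with a minimal sketch: a loss that rewards high probability on attacker-chosen keyword tokens, and a score that trades attack progress against collateral damage on benign data. The function names and the weighting parameter `lam` are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def keyword_attack_loss(logits, target_token_ids):
    """Mean negative log-probability of attacker-chosen keyword tokens.

    logits: (seq_len, vocab) next-token logits for a trigger prompt.
    target_token_ids: one target token id per position to promote.
    Lower loss = the model is more likely to emit the keywords.
    """
    probs = softmax(logits)
    picked = probs[np.arange(len(target_token_ids)), target_token_ids]
    return -np.log(picked + 1e-12).mean()

def utility_score(attack_loss_drop, clean_loss_rise, lam=1.0):
    """Score one candidate bit flip: reward the drop in attack loss,
    penalize the rise in loss on benign data (hypothetical weight `lam`)."""
    return attack_loss_drop - lam * clean_loss_rise
```

In a greedy bit-search loop, each candidate flip would be scored with `utility_score` and only the highest-scoring flips kept, which is how the framework can stay under 50 flips while limiting collateral degradation.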
🛡️ Threat Analysis
TFL creates hidden, targeted malicious behavior in LLM weights: selected prompts trigger attacker-specified output tokens while the model behaves normally on unrelated queries, matching the backdoor/trojan threat profile exactly. The mechanism (Rowhammer-induced bit flips in model weights stored in DRAM) differs from training-time injection but produces the same threat: a stealthy, trigger-activated behavioral compromise.
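Why so few flips suffice is clear from floating-point encoding: a single high-order bit flip can change a weight by orders of magnitude. The sketch below (an illustration of the general BFA mechanism, not code from the paper) flips one bit of an IEEE-754 half-precision weight by reinterpreting its bit pattern, mimicking the effect of a Rowhammer fault in DRAM.

```python
import numpy as np

def flip_bit_fp16(weight, bit):
    """Flip one bit (0 = mantissa LSB ... 15 = sign) in a float16 weight,
    as a Rowhammer-induced DRAM fault would."""
    raw = np.float16(weight).view(np.uint16)   # reinterpret as raw bits
    flipped = raw ^ np.uint16(1 << bit)        # XOR toggles the chosen bit
    return flipped.view(np.float16)            # reinterpret back as float

w = np.float16(0.01)
print(flip_bit_fp16(w, 0))    # low mantissa bit: value barely moves
print(flip_bit_fp16(w, 14))   # high exponent bit: value explodes
```

The same physical fault is reversible (flipping the bit again restores the weight), which is part of what makes weight-level BFAs hard to detect after the fact.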