arXiv · Oct 20, 2025
Elias Hossain, Swayamjit Saha, Somshubhra Roy et al. · University of Central Florida · Mississippi State University +1 more
Attacks LLM inference by corrupting KV cache key vectors at runtime, bypassing prompt filters and causing output degradation across GPT-2 and LLaMA-2
Input Manipulation Attack · nlp
Even when prompts and parameters are secured, transformer language models remain vulnerable because their key-value (KV) cache during inference constitutes an overlooked attack surface. This paper introduces Malicious Token Injection (MTI), a modular framework that systematically perturbs cached key vectors at selected layers and timesteps with controlled magnitude and frequency, using additive Gaussian noise, zeroing, and orthogonal rotations. A theoretical analysis quantifies how these perturbations propagate through attention, linking logit deviations to the Frobenius norm of the corruption and the Lipschitz behavior of the softmax. Empirical results show that MTI significantly alters next-token distributions and downstream task performance across GPT-2 and LLaMA-2 7B, and destabilizes retrieval-augmented and agentic reasoning pipelines. These findings identify cache integrity as a critical yet underexplored vulnerability in current LLM deployments, positioning cache corruption as a reproducible and theoretically grounded threat model for future robustness and security research.
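The three perturbation styles and the Frobenius-norm logit bound can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the tensor sizes, the noise scale `sigma`, and the helper names `perturb_keys`/`softmax` are all assumptions, and it models a single attention head with a cached key matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5  # head dimension and number of cached timesteps (illustrative)

q = rng.standard_normal(d)        # current query vector
K = rng.standard_normal((T, d))   # cached key matrix (the attack surface)

def perturb_keys(K, mode, sigma=0.5, rng=rng):
    """Corrupt cached keys in one of the three styles named in the abstract."""
    if mode == "gaussian":               # additive Gaussian noise
        return K + sigma * rng.standard_normal(K.shape)
    if mode == "zeroing":                # zero out one cached timestep
        Kp = K.copy()
        Kp[rng.integers(len(K))] = 0.0
        return Kp
    if mode == "rotation":               # random orthogonal rotation of keys
        Q, _ = np.linalg.qr(rng.standard_normal((K.shape[1], K.shape[1])))
        return K @ Q
    raise ValueError(mode)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for mode in ("gaussian", "zeroing", "rotation"):
    Kp = perturb_keys(K, mode)
    # deviation of scaled attention logits caused by the cache corruption
    dlogits = (q @ Kp.T - q @ K.T) / np.sqrt(d)
    # Cauchy-Schwarz bound: |q . dK_t| <= ||q|| * ||dK||_F, per timestep
    bound = np.linalg.norm(q) * np.linalg.norm(Kp - K, "fro") / np.sqrt(d)
    assert np.abs(dlogits).max() <= bound + 1e-9
    # total-variation-style shift in the attention distribution
    shift = np.abs(softmax(q @ Kp.T / np.sqrt(d))
                   - softmax(q @ K.T / np.sqrt(d))).sum()
```

Note that the orthogonal-rotation mode leaves the Frobenius norm of the key matrix unchanged, so it corrupts attention geometry without inflating any magnitude-based cache statistic, which is one plausible reason such perturbations evade simple integrity checks.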
llm · transformer · University of Central Florida · Mississippi State University · North Carolina State University