
Zero-Shot Embedding Drift Detection: A Lightweight Defense Against Prompt Injections in LLMs

Anirudh Sekar 1, Mrinal Agarwal 1, Rachel Sharma 1, Akitsugu Tanaka 1, Jasmine Zhang 1, Arjun Damerla 2, Kevin Zhu 1

0 citations · 22 references · arXiv

Published on arXiv

2601.12359

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves >93% accuracy and <3% false positive rate in detecting prompt injections across Llama 3, Qwen 2, and Mistral without retraining or model access

ZEDD (Zero-Shot Embedding Drift Detection)

Novel technique introduced


Prompt injection attacks have become a growing threat to LLM applications: adversarial prompts exploit indirect input channels such as emails or user-generated content to circumvent alignment safeguards and induce harmful or unintended outputs. Despite advances in alignment, even state-of-the-art LLMs remain broadly vulnerable to adversarial prompts, underscoring the urgent need for robust, practical, and generalizable detection mechanisms beyond inefficient, model-specific patches. In this work, we propose Zero-Shot Embedding Drift Detection (ZEDD), a lightweight, low-engineering-overhead framework that identifies both direct and indirect prompt injection attempts by quantifying semantic shifts in embedding space between benign and suspect inputs. ZEDD operates without requiring access to model internals, prior knowledge of attack types, or task-specific retraining, enabling efficient zero-shot deployment across diverse LLM architectures. Our method uses adversarial-clean prompt pairs and measures embedding drift via cosine similarity to capture the subtle adversarial manipulations inherent to real-world injection attacks. To ensure robust evaluation, we assemble and re-annotate the comprehensive LLMail-Inject dataset spanning five injection categories derived from publicly available sources. Extensive experiments demonstrate that embedding drift is a robust and transferable signal, outperforming traditional methods in detection accuracy and operational efficiency. With greater than 93% accuracy in classifying prompt injections across model architectures such as Llama 3, Qwen 2, and Mistral, and a false positive rate below 3%, our approach offers a lightweight, scalable defense layer that integrates into existing LLM pipelines, addressing a critical gap in securing LLM-powered systems against adaptive adversarial threats.
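The core drift signal described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the embeddings below are toy vectors standing in for real sentence embeddings, and `cosine_drift` is a hypothetical helper name.

```python
import numpy as np

def cosine_drift(clean_emb: np.ndarray, candidate_emb: np.ndarray) -> float:
    """Embedding drift = 1 - cosine similarity between the embedding of a
    clean reference prompt and that of a candidate (possibly injected) prompt."""
    cos = np.dot(clean_emb, candidate_emb) / (
        np.linalg.norm(clean_emb) * np.linalg.norm(candidate_emb)
    )
    return 1.0 - float(cos)

# Toy 4-d embeddings in place of real model embeddings.
clean = np.array([0.9, 0.1, 0.0, 0.1])
benign_variant = np.array([0.85, 0.15, 0.05, 0.1])  # paraphrase: small drift
injected = np.array([0.1, 0.2, 0.9, 0.3])           # injection: large drift

print(cosine_drift(clean, benign_variant))  # near 0
print(cosine_drift(clean, injected))        # much larger
```

A large drift score indicates the candidate prompt has moved semantically far from the benign reference, which the paper uses as the detection signal.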


Key Contributions

  • Zero-shot prompt injection detection via cosine similarity of embedding drift between clean and candidate prompts, requiring no model internals, retraining, or prior attack knowledge
  • GMM and KDE-based distribution analysis for flagging injected prompts while minimizing false positives
  • Empirical evaluation on re-annotated LLMail-Inject dataset across Llama 3, Qwen 2, and Mistral showing >93% accuracy and <3% false positive rate

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
LLMail-Inject
Applications
llm-integrated applications, email assistants, llm pipelines