
Embedding Poisoning: Bypassing Safety Alignment via Embedding Semantic Shift

Shuai Yuan 1, Zhibo Zhang 2, Yuxi Li 2, Guangdong Bai 3, Wang Kailong 2



Published on arXiv: 2509.06338

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SEP achieves an average attack success rate of 96.43% across six aligned LLMs by injecting imperceptible embedding perturbations at inference time while evading standard detection.

SEP (Search-based Embedding Poisoning)

Novel technique introduced


The widespread distribution of Large Language Models (LLMs) through public platforms like Hugging Face introduces significant security challenges. While these platforms perform basic security scans, they often fail to detect subtle manipulations within the embedding layer. This work identifies a novel class of deployment-phase attacks that exploit this vulnerability by injecting imperceptible perturbations directly into the embedding layer outputs, without modifying model weights or input text. These perturbations, though statistically benign, systematically bypass safety alignment mechanisms and induce harmful behaviors during inference. We propose Search-based Embedding Poisoning (SEP), a practical, model-agnostic framework that introduces carefully optimized perturbations into embeddings associated with high-risk tokens. SEP leverages a predictable linear transition in model responses (from refusal, to harmful output, to semantic deviation) to identify a narrow perturbation window that evades alignment safeguards. Evaluated across six aligned LLMs, SEP achieves an average attack success rate of 96.43% while preserving benign task performance and evading conventional detection mechanisms. Our findings reveal a critical oversight in deployment security and emphasize the urgent need for embedding-level integrity checks in future LLM defense strategies.


Key Contributions

  • Identifies a novel deployment-phase attack surface: embedding layer outputs can be perturbed without weight modification, evading static security scans on platforms like Hugging Face
  • Proposes SEP (Search-based Embedding Poisoning), a model-agnostic framework that exploits a predictable linear transition from refusal to harmful output to identify a narrow optimal perturbation window
  • Demonstrates 96.43% average attack success rate across six aligned LLMs while preserving benign task performance and evading conventional detection mechanisms
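The "narrow optimal perturbation window" can be pictured as a scan over perturbation magnitude: too small and the aligned model still refuses, too large and the output deviates semantically. The following is a minimal, dependency-free sketch of such a scan; `query_model` is a toy stand-in (with made-up threshold values) for running the model with a scaled embedding perturbation and classifying its response, not the paper's actual search procedure.

```python
# Toy sketch of locating the perturbation-scale window where an aligned
# model flips from refusal to harmful compliance, before outputs degrade
# into semantic deviation. Thresholds below are illustrative only.

REFUSAL, HARMFUL, DEVIATION = 0, 1, 2

def query_model(scale):
    # Stand-in for: perturb high-risk token embeddings by `scale`,
    # generate a response, and classify it. The monotone transition
    # (refusal -> harmful -> deviation) mirrors the paper's observation.
    if scale < 0.30:
        return REFUSAL
    if scale < 0.45:
        return HARMFUL
    return DEVIATION

def find_harmful_window(lo=0.0, hi=1.0, steps=200):
    """Scan perturbation scales and return the (min, max) band that
    elicits harmful output, i.e. the window evading alignment."""
    band = []
    for i in range(steps):
        s = lo + (hi - lo) * i / steps
        if query_model(s) == HARMFUL:
            band.append(s)
    return (min(band), max(band)) if band else None

window = find_harmful_window()
print(window)
```

With the toy thresholds above, the scan recovers a band between roughly 0.30 and 0.45; in practice the window's location would depend on the model and the optimized perturbation direction.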

🛡️ Threat Analysis

Input Manipulation Attack

SEP injects carefully optimized perturbations into embedding layer outputs at inference time — the core technique is adversarial perturbation engineering (analogous to adversarial suffix optimization) that causes misaligned model outputs without modifying model weights or input text.
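To see why this evades static weight scans, consider a schematic of the injection point: the perturbation is added to the embedding layer's *outputs* at inference time, while the embedding table itself stays untouched. The sketch below uses plain Python (no ML framework) and invented values; in a real model this would correspond to something like a forward hook on the embedding module, and `HIGH_RISK_IDS`, `DELTA`, and `EPSILON` are all illustrative names, not the paper's parameters.

```python
# Schematic: perturb embedding-layer outputs for high-risk tokens at
# inference time. The frozen "weights" (EMBED_TABLE) are never written
# to, so a static scan of the checkpoint sees nothing unusual.

EMBED_TABLE = {  # frozen token id -> embedding vector (toy 2-d values)
    0: [0.1, 0.2],
    1: [0.5, -0.3],
    2: [-0.2, 0.4],
}
HIGH_RISK_IDS = {1}     # illustrative: tokens whose embeddings get shifted
DELTA = [0.03, -0.01]   # illustrative optimized perturbation direction
EPSILON = 1.0           # scale chosen from the search-identified window

def embed_with_hook(token_ids):
    """Look up embeddings, then add EPSILON * DELTA at high-risk
    positions -- analogous to a post-lookup forward hook."""
    out = []
    for tid in token_ids:
        vec = list(EMBED_TABLE[tid])  # copy; the table is never mutated
        if tid in HIGH_RISK_IDS:
            vec = [v + EPSILON * d for v, d in zip(vec, DELTA)]
        out.append(vec)
    return out

print(embed_with_hook([0, 1, 2]))
```

Only the vector at the high-risk position is shifted; benign tokens pass through unchanged, which is consistent with the paper's claim that benign task performance is preserved.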


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Datasets
AdvBench
Applications
large language model safety, llm alignment