attack 2026

When Safety Becomes a Vulnerability: Exploiting LLM Alignment Homogeneity for Transferable Blocking in RAG

Junchen Li 1, Chao Qi 1, Rongzheng Wang 1, Qizhi Chen 1, Liang Xu 1, Di Liang 2,3, Bob Simons 3, Shuang Liang 1

0 citations

α

Published on arXiv

2603.03919

Data Poisoning Attack

OWASP ML Top 10 — ML02

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

TabooRAG achieves up to 96% blocking success rate on GPT-5.2 and stable cross-model transferability across 7 modern LLMs under a strict black-box setting.

TabooRAG

Novel technique introduced


Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance on potentially poisonable knowledge bases introduces new availability risks. Attackers can inject documents that cause LLMs to refuse benign queries, attacks known as blocking attacks. Prior blocking attacks relying on adversarial suffixes or explicit instruction injection are increasingly ineffective against modern safety-aligned LLMs. We observe that safety-aligned LLMs exhibit heightened sensitivity to query-relevant risk signals, causing alignment mechanisms designed for harm prevention to become a source of exploitable refusal. Moreover, mainstream alignment practices share overlapping risk categories and refusal criteria, a phenomenon we term alignment homogeneity, enabling restricted risk context constructed on an accessible LLM to transfer across LLMs. Based on this insight, we propose TabooRAG, a transferable blocking attack framework operating under a strict black-box setting. An attacker can generate a single retrievable blocking document per query by optimizing against a surrogate LLM in an accessible RAG environment, and directly transfer it to an unknown target RAG system without access to the target model. We further introduce a query-aware strategy library to reuse previously effective strategies and improve optimization efficiency. Experiments across 7 modern LLMs and 3 datasets demonstrate that TabooRAG achieves stable cross-model transferability and state-of-the-art blocking success rates, reaching up to 96% on GPT-5.2. Our findings show that increasingly standardized safety alignment across modern LLMs creates a shared and transferable attack surface in RAG systems, revealing a need for improved defenses.


Key Contributions

  • Identifies 'alignment homogeneity' — the observation that mainstream LLMs share overlapping safety risk categories and refusal criteria — as a transferable attack surface in RAG systems.
  • Proposes TabooRAG, a black-box transferable blocking attack that optimizes a single retrievable document against a surrogate LLM and transfers it directly to unknown target RAG systems without model access.
  • Introduces a query-aware strategy library to reuse previously effective blocking strategies and improve optimization efficiency across queries.

🛡️ Threat Analysis

Data Poisoning Attack

The attack injects adversarial documents into the RAG knowledge base — a direct data poisoning attack on the retrieval corpus that degrades system availability by causing refusals on benign queries.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
black_boxinference_timetargeted
Datasets
NQHotpotQAMS MARCO
Applications
retrieval-augmented generationllm knowledge-intensive question answering