attack 2025

Eyes-on-Me: Scalable RAG Poisoning through Transferable Attention-Steering Attractors

Yen-Shan Chen 1,2, Sian-Yao Huang 1, Cheng-Lin Yang 1, Yun-Nung Chen 2

0 citations · 50 references · arXiv

α

Published on arXiv

2510.00586

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Raises average RAG attack success rate from 21.9% to 57.8% (2.6× improvement over prior work) across 18 end-to-end settings, with a single attractor transferring to unseen black-box models without retraining.

Eyes-on-Me

Novel technique introduced


Existing data poisoning attacks on retrieval-augmented generation (RAG) systems scale poorly because they require costly optimization of poisoned documents for each target phrase. We introduce Eyes-on-Me, a modular attack that decomposes an adversarial document into reusable Attention Attractors and Focus Regions. Attractors are optimized to direct attention to the Focus Region. Attackers can then insert semantic baits for the retriever or malicious instructions for the generator, adapting to new targets at near zero cost. This is achieved by steering a small subset of attention heads that we empirically identify as strongly correlated with attack success. Across 18 end-to-end RAG settings (3 datasets $\times$ 2 retrievers $\times$ 3 generators), Eyes-on-Me raises average attack success rates from 21.9 to 57.8 (+35.9 points, 2.6$\times$ over prior work). A single optimized attractor transfers to unseen black box retrievers and generators without retraining. Our findings establish a scalable paradigm for RAG data poisoning and show that modular, reusable components pose a practical threat to modern AI systems. They also reveal a strong link between attention concentration and model outputs, informing interpretability research.


Key Contributions

  • Modular attack decomposing adversarial documents into reusable Attention Attractors (optimized once) and swappable Focus Regions (near-zero cost per new target), enabling scalable RAG poisoning
  • Empirical identification of a small subset of attention heads strongly correlated with attack success, enabling efficient gradient-based attractor optimization
  • Single optimized attractor transfers to unseen black-box retrievers and generators, raising average attack success rates from 21.9% to 57.8% (+35.9 pts, 2.6× over prior work) across 18 RAG settings

🛡️ Threat Analysis

Input Manipulation Attack

Adversarially crafted documents (with gradient-optimized Attention Attractors) are injected into the RAG corpus to manipulate LLM outputs at inference time — this is adversarial document injection for RAG, which the spec explicitly lists as ML01+LLM01. The Attention Attractors are optimized perturbations that steer specific LLM attention heads toward malicious Focus Regions.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
white_boxblack_boxinference_timetargeteddigital
Datasets
3 unnamed datasets (3 datasets × 2 retrievers × 3 generators = 18 settings)
Applications
retrieval-augmented generationquestion answering