
HV-Attack: Hierarchical Visual Attack for Multimodal Retrieval Augmented Generation

Linyin Luo 1,2, Yujuan Ding 1, Yunshan Ma 3, Wenqi Fan 1, Hanjiang Lai 2

1 citation · 42 references · arXiv


Published on arXiv: 2511.15435

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Imperceptible adversarial image perturbations significantly degrade both retrieval recall and answer generation quality in MRAG systems based on BLIP-2 and LLaVA across OK-VQA and InfoSeek benchmarks

HV-Attack

Novel technique introduced


Advanced multimodal Retrieval-Augmented Generation (MRAG) techniques have been widely applied to enhance the capabilities of Large Multimodal Models (LMMs), but they also introduce novel safety issues. Existing adversarial research has revealed the vulnerability of MRAG systems to knowledge-poisoning attacks, which fool the retriever into recalling injected poisoned content. Our work considers a different setting: a visual attack on MRAG that solely adds imperceptible perturbations to the user's image input, without manipulating any other component. This is challenging because fine-tuned retrievers and large-scale generators are robust, and the effect of a visual perturbation may be further weakened as it propagates through the RAG chain. We propose a novel Hierarchical Visual Attack that misaligns and disrupts the two inputs to MRAG's generator (the multimodal query and the augmented knowledge) to confuse its generation. We further design a hierarchical two-stage strategy to obtain misaligned augmented knowledge: we disrupt the retriever's image input so that it recalls irrelevant knowledge from the original database, optimizing a perturbation that first breaks cross-modal alignment and then disrupts multimodal semantic alignment. We conduct extensive experiments on two widely used MRAG datasets, OK-VQA and InfoSeek, using CLIP-based retrievers and two LMM generators, BLIP-2 and LLaVA. Results demonstrate the effectiveness of our visual attack on MRAG through significant decreases in both retrieval and generation performance.
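The hierarchical two-stage optimization described in the abstract can be sketched as a small PGD-style loop. Everything below is a toy assumption standing in for the paper's actual setup: a frozen linear "image encoder" `E` replaces the CLIP encoders, and two random target embeddings (`t_text` for the cross-modal alignment broken in stage one, `z_joint` for the multimodal semantic alignment disrupted in stage two) replace the paper's objectives. Only the two-stage, norm-bounded optimization pattern is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all assumptions, not the paper's components):
d_in, d_emb = 32, 16
E = rng.standard_normal((d_emb, d_in))   # frozen "image encoder"
x = rng.standard_normal(d_in)            # clean image features
t_text = rng.standard_normal(d_emb)      # stage-1 target (paired-text embedding)
z_joint = rng.standard_normal(d_emb)     # stage-2 target (joint query embedding)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def grad_cos_wrt_x(x_adv, target):
    """Analytic gradient of cos(E @ x_adv, target) with respect to x_adv."""
    v = E @ x_adv
    nv, nt = np.linalg.norm(v), np.linalg.norm(target)
    dv = target / (nv * nt) - (v @ target) * v / (nv ** 3 * nt)
    return E.T @ dv

def pgd_stage(x0, delta, target, eps=0.05, step=0.01, iters=200):
    """Minimise cos(E(x0 + delta), target) under an L-inf bound eps,
    keeping the best (lowest-similarity) perturbation seen so far."""
    best, best_val = delta.copy(), cos(E @ (x0 + delta), target)
    for i in range(iters):
        g = grad_cos_wrt_x(x0 + delta, target)
        # signed-gradient step with decay, projected back into the eps-ball
        delta = np.clip(delta - step * (0.98 ** i) * np.sign(g), -eps, eps)
        val = cos(E @ (x0 + delta), target)
        if val < best_val:
            best, best_val = delta.copy(), val
    return best

# Hierarchical schedule: break cross-modal alignment first, then continue
# from that perturbation to disrupt the joint semantic alignment.
delta = pgd_stage(x, np.zeros(d_in), t_text)
delta = pgd_stage(x, delta, z_joint)
print(cos(E @ x, t_text), cos(E @ (x + delta), t_text))
```

The key design point the sketch preserves is that the perturbation stays imperceptibly small (the L-inf projection) while both stages reuse and refine the same `delta`, rather than optimizing two independent perturbations.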


Key Contributions

  • Novel HV-Attack that adds imperceptible adversarial perturbations exclusively to user image inputs to degrade MRAG systems without modifying any database or system component
  • Hierarchical two-stage optimization strategy: first breaks cross-modal alignment (visual-to-text) then disrupts multimodal semantic alignment to cause irrelevant knowledge retrieval
  • Empirical validation on OK-VQA and InfoSeek with CLIP-based retrievers and BLIP-2/LLaVA generators, demonstrating significant drops in both retrieval and generation performance

🛡️ Threat Analysis

Input Manipulation Attack

Core contribution is crafting gradient-based adversarial perturbations applied to user image inputs at inference time — a direct input manipulation attack that causes the CLIP retriever to surface irrelevant knowledge and ultimately confuses the LMM generator.
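A minimal sketch of why shifting only the query-side embedding degrades retrieval recall. Everything here (the three-document database, the hand-picked embeddings, recall@1) is a constructed toy, not the paper's CLIP retriever; it only shows the mechanism by which a perturbed image embedding surfaces irrelevant knowledge.

```python
import numpy as np

# Toy knowledge base: unit-vector document embeddings (assumption).
docs = np.array([
    [1.0, 0.0, 0.0],   # doc 0: the relevant knowledge entry
    [0.0, 1.0, 0.0],   # doc 1: irrelevant
    [0.0, 0.0, 1.0],   # doc 2: irrelevant
])

def topk(query, k=1):
    """Indices of the k documents with highest cosine similarity to query."""
    scores = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return set(np.argsort(-scores)[:k].tolist())

def recall_at_k(query, relevant, k=1):
    return len(topk(query, k) & relevant) / len(relevant)

clean_q = np.array([0.9, 0.1, 0.1])   # clean image embedding, aligned with doc 0
adv_q   = np.array([0.1, 0.9, 0.1])   # perturbation pushed it toward doc 1

print(recall_at_k(clean_q, {0}), recall_at_k(adv_q, {0}))  # → 1.0 0.0
```

Because the database itself is untouched, defenses that scan the knowledge base for poisoned entries do not apply; the attack lives entirely in the query embedding.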


Details

Domains
vision · multimodal · nlp

Model Types
vlm · transformer

Threat Tags
white_box · inference_time · targeted · digital

Datasets
OK-VQA · InfoSeek

Applications
multimodal retrieval-augmented generation · visual question answering