Medusa: Cross-Modal Transferable Adversarial Attacks on Multimodal Medical Retrieval-Augmented Generation
Yingjia Shang 1,2, Yi Liu 3, Huimin Wang 4, Furong Li 1, Wenfang Sun 1, Wu Chengyu 1, Yefeng Zheng 1
Published on arXiv
2511.19257
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves over 90% average attack success rate across diverse MMed-RAG generation models and retrievers while remaining effective against bit-depth reduction, random resizing, and DiffPure defenses.
Medusa
Novel technique introduced
With the rapid advancement of retrieval-augmented vision-language models, multimodal medical retrieval-augmented generation (MMed-RAG) systems are increasingly adopted in clinical decision support. These systems enhance medical applications by performing cross-modal retrieval to integrate relevant visual and textual evidence for tasks such as report generation and disease diagnosis. However, their complex architecture also introduces underexplored adversarial vulnerabilities, particularly via visual input perturbations. In this paper, we propose Medusa, a novel framework for crafting cross-modal transferable adversarial attacks on MMed-RAG systems under a black-box setting. Specifically, Medusa formulates the attack as a perturbation optimization problem, leveraging a multi-positive InfoNCE loss (MPIL) to align adversarial visual embeddings with medically plausible but malicious textual targets, thereby hijacking the retrieval process. To enhance transferability, we adopt a surrogate model ensemble and design a dual-loop optimization strategy augmented with invariant risk minimization (IRM). Extensive experiments on two real-world medical tasks, medical report generation and disease diagnosis, demonstrate that Medusa achieves over 90% average attack success rate across various generation models and retrievers under appropriate parameter configurations, while remaining robust against four mainstream defenses, outperforming state-of-the-art baselines. Our results reveal critical vulnerabilities in MMed-RAG systems and highlight the need for robustness benchmarking in safety-critical medical applications. The code and data are available at https://anonymous.4open.science/r/MMed-RAG-Attack-F05A.
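The page does not give the exact MPIL formula, but the abstract's description (align one adversarial visual embedding with several malicious textual targets while pushing it away from other texts) matches a multi-positive variant of InfoNCE. A minimal sketch, assuming L2-normalized embeddings, cosine similarity, and a temperature `tau` (all illustrative choices, not the paper's published definition):

```python
import numpy as np

def multi_positive_infonce(anchor, positives, negatives, tau=0.07):
    """Multi-positive InfoNCE sketch: pull the (adversarial) image
    embedding `anchor` toward a *set* of malicious text embeddings
    `positives`, away from distractor texts `negatives`.
    All rows are assumed L2-normalized; `tau` is a temperature."""
    pos = np.exp(positives @ anchor / tau)  # similarities to target texts
    neg = np.exp(negatives @ anchor / tau)  # similarities to distractors
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))
```

An attacker would minimize this loss with respect to the image perturbation, so that the retriever ranks the attacker-chosen texts above benign evidence: the loss is near zero when the anchor aligns with all positives, and large when it aligns with a negative instead.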
Key Contributions
- Multi-Positive InfoNCE Loss (MPIL) that aligns adversarial visual embeddings with attacker-specified malicious textual targets to hijack cross-modal retrieval in MMed-RAG systems
- Dual-loop optimization strategy augmented with Invariant Risk Minimization (IRM) over a surrogate model ensemble to enhance black-box transferability across unseen retrievers and generators
- Empirical demonstration of >90% attack success rate on medical report generation and disease diagnosis tasks, robust against four mainstream defenses including DiffPure
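The dual-loop ensemble idea can be illustrated with a toy numpy sketch. Here each surrogate encoder is a random linear map standing in for one "environment," the per-surrogate risk is a squared alignment error toward the malicious target embedding, and the IRM term is the usual squared gradient of the risk with respect to a dummy scaling of the embedding. The loss form, loop roles, and all parameter values are illustrative assumptions, not the paper's actual objective:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_surr = 8, 4, 3
x = rng.normal(size=d)                       # clean image (flattened toy input)
t = rng.normal(size=k)                       # attacker-chosen target text embedding
Ws = [rng.normal(size=(k, d)) / np.sqrt(d)   # surrogate encoders = "environments"
      for _ in range(n_surr)]

def risk_and_grad(W, delta, lam=0.1):
    """Per-surrogate alignment risk plus an IRM-style invariance penalty
    (squared gradient of the risk w.r.t. a dummy scale on the embedding)."""
    e = W @ (x + delta)                      # surrogate embedding of perturbed image
    r = e - t
    risk = r @ r                             # push embedding toward malicious target
    g = 2.0 * (e @ e - t @ e)                # d risk / d scale, evaluated at scale = 1
    penalty = g * g                          # IRM penalty for this environment
    grad = 2.0 * W.T @ r + lam * 2.0 * g * (2.0 * W.T @ (2.0 * e - t))
    return risk + lam * penalty, grad

delta, eps, lr = np.zeros(d), 0.5, 2e-3      # l_inf perturbation budget, step size
risk0 = np.mean([risk_and_grad(W, delta)[0] for W in Ws])
for _ in range(300):                         # outer loop: ensemble aggregation
    grad = np.zeros(d)
    for W in Ws:                             # inner loop: per-surrogate risk + IRM
        _, g = risk_and_grad(W, delta)
        grad += g / n_surr
    delta = np.clip(delta - lr * grad, -eps, eps)   # projected gradient step
risk1 = np.mean([risk_and_grad(W, delta)[0] for W in Ws])
```

The design intuition, as the contribution describes it: penalizing risks that vary sharply across surrogates steers the perturbation toward features shared by the ensemble, which is what transfers to unseen black-box retrievers and generators.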
🛡️ Threat Analysis
Medusa crafts gradient-optimized adversarial visual perturbations that manipulate the embedding space of MMed-RAG systems at inference time, causing the retrieval process to surface attacker-chosen malicious textual context — a textbook adversarial input manipulation attack with novel MPIL and dual-loop IRM optimization components.