Latest papers

2 papers
attack arXiv Aug 24, 2025 · Aug 2025

How to make Medical AI Systems safer? Simulating Vulnerabilities, and Threats in Multimodal Medical RAG System

Kaiwen Zuo, Zelin Liu, Raman Dutt et al. · University of Warwick · Shanghai Jiao Tong University +5 more

Poisons medical RAG knowledge bases with adversarial image-text pairs to degrade LLaVA-Med-1.5 diagnostic outputs by up to 27.66% F1

Data Poisoning Attack Prompt Injection multimodalvisionnlp
PDF
defense arXiv Aug 4, 2025 · Aug 2025

Defending Against Knowledge Poisoning Attacks During Retrieval-Augmented Generation

Kennedy Edemacu, Vinay M. Shashidhar, Micheal Tuape et al. · The City University of New York · Northern Michigan University +4 more

Defends RAG systems against knowledge poisoning by filtering adversarial texts from retrieved context before LLM generation

Data Poisoning Attack Prompt Injection nlp
PDF