Neha Nagaraja

Papers in Database (1)

attack · FLLM · Mar 4, 2026

Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions

Neha Nagaraja, Lan Zhang, Zhilong Wang et al. · Northern Arizona University · ByteDance

A black-box attack that conceals adversarial text instructions inside natural images to hijack multimodal LLM outputs via visual prompt injection.

Input Manipulation Attack · Prompt Injection · vision · nlp · multimodal