Latest papers

2 papers
attack FLLM Mar 4, 2026 · 4w ago

Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions

Neha Nagaraja, Lan Zhang, Zhilong Wang et al. · Northern Arizona University · ByteDance

Black-box attack conceals adversarial text instructions inside natural images to hijack multimodal LLM outputs via visual prompt injection

Input Manipulation Attack Prompt Injection visionnlpmultimodal
PDF
defense arXiv Jan 10, 2026 · 12w ago

Burn-After-Use for Preventing Data Leakage through a Secure Multi-Tenant Architecture in Enterprise LLM

Qiang Zhang, Elena Emma Wang, Jiaming Li et al. · Northern Arizona University · American Heritage Academy

Proposes tenant isolation and ephemeral context destruction architecture to prevent cross-session data leakage in enterprise LLMs

Sensitive Information Disclosure nlp
1 citations PDF