defense 2026

Evolving Contextual Safety in Multi-Modal Large Language Models via Inference-Time Self-Reflective Memory

Ce Zhang , Jinxi He , Junyi He , Katia Sycara , Yaqi Xie

0 citations

α

Published on arXiv

2603.15800

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

EchoSafe achieves superior performance across various multi-modal safety benchmarks by enabling context-aware safety reasoning through inference-time memory retrieval

EchoSafe

Novel technique introduced


Multi-modal Large Language Models (MLLMs) have achieved remarkable performance across a wide range of visual reasoning tasks, yet their vulnerability to safety risks remains a pressing concern. While prior research primarily focuses on jailbreak defenses that detect and refuse explicitly unsafe inputs, such approaches often overlook contextual safety, which requires models to distinguish subtle contextual differences between scenarios that may appear similar but diverge significantly in safety intent. In this work, we present MM-SafetyBench++, a carefully curated benchmark designed for contextual safety evaluation. Specifically, for each unsafe image-text pair, we construct a corresponding safe counterpart through minimal modifications that flip the user intent while preserving the underlying contextual meaning, enabling controlled evaluation of whether models can adapt their safety behaviors based on contextual understanding. Further, we introduce EchoSafe, a training-free framework that maintains a self-reflective memory bank to accumulate and retrieve safety insights from prior interactions. By integrating relevant past experiences into current prompts, EchoSafe enables context-aware reasoning and continual evolution of safety behavior during inference. Extensive experiments on various multi-modal safety benchmarks demonstrate that EchoSafe consistently achieves superior performance, establishing a strong baseline for advancing contextual safety in MLLMs. All benchmark data and code are available at https://echosafe-mllm.github.io.


Key Contributions

  • MM-SafetyBench++: a contextual safety benchmark with minimal-modification safe/unsafe pairs to test context-aware safety
  • EchoSafe: training-free framework using self-reflective memory bank to accumulate and retrieve safety insights from prior interactions
  • Demonstrates superior performance on multi-modal safety benchmarks through context-aware reasoning at inference time

🛡️ Threat Analysis


Details

Domains
multimodalnlpvision
Model Types
vlmllmmultimodal
Threat Tags
inference_time
Datasets
MM-SafetyBench++
Applications
multi-modal chatbotsvision-language assistantsmllm safety