defense 2026

Dictionary-Aligned Concept Control for Safeguarding Multimodal LLMs

Jinqi Luo 1,2, Jinyu Yang 2, Tal Neiman 2, Lei Fan 2, Bing Yin 2, Son Tran 2, Mubarak Shah 2,3, René Vidal 1,2



Published on arXiv — 2604.08846

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Significantly improves MLLM safety across QwenVL, LLaVA, and InternVL on safety benchmarks while maintaining general-purpose capabilities

DACO

Novel technique introduced


Multimodal Large Language Models (MLLMs) have been shown to be vulnerable to malicious queries that can elicit unsafe responses. Recent work uses prompt engineering, response classification, or finetuning to improve MLLM safety. Nevertheless, such approaches are often ineffective against evolving malicious patterns, may require rerunning the query, or demand heavy computational resources. Steering the activations of a frozen model at inference time has recently emerged as a flexible and effective alternative. However, existing steering methods for MLLMs typically handle only a narrow set of safety-related concepts or struggle to adjust specific concepts without affecting others. To address these challenges, we introduce Dictionary-Aligned Concept Control (DACO), a framework that uses a curated concept dictionary and a Sparse Autoencoder (SAE) to provide granular control over MLLM activations. First, we curate a dictionary of 15,000 multimodal concepts by retrieving over 400,000 caption-image stimuli and summarizing their activations into concept directions; we name the dataset DACO-400K. Second, we show that the curated dictionary can be used to intervene on activations via sparse coding. Third, we propose a new steering approach that uses our dictionary to initialize the training of an SAE and automatically annotates the semantics of the SAE atoms for safeguarding MLLMs. Experiments on multiple MLLMs (e.g., QwenVL, LLaVA, InternVL) across safety benchmarks (e.g., MM-SafetyBench, JailBreakV) show that DACO significantly improves MLLM safety while maintaining general-purpose capabilities.


Key Contributions

  • DACO-400K dataset: 15,000 multimodal concepts curated from 400K+ caption-image pairs for concept steering
  • Dictionary-aligned SAE training method that enables granular control over MLLM activations for safety
  • Demonstrated improved safety on MM-SafetyBench and JailBreakV while maintaining general capabilities across multiple MLLMs
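The dictionary-aligned SAE idea in the second contribution can be sketched as follows. This is a minimal toy, not the paper's implementation: the class name, the tied encoder/decoder initialization, and the single-atom `steer` edit are assumptions. The key point it illustrates is that initializing the SAE decoder from the curated dictionary gives each latent atom a known concept semantics from the start, so a specific annotated atom can be rescaled or ablated at inference time.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 16, 8

# Hypothetical curated dictionary: unit-norm concept directions as columns.
D = rng.normal(size=(d, k))
D /= np.linalg.norm(D, axis=0)

class TinySAE:
    """Minimal ReLU sparse autoencoder whose decoder atoms are initialized
    from a concept dictionary, so each latent starts aligned with a known,
    annotated concept (a simplification of dictionary-aligned SAE training)."""

    def __init__(self, D):
        self.W_dec = D.copy()      # decoder atoms <- dictionary directions
        self.W_enc = D.T.copy()    # tied-style encoder initialization
        self.b = np.zeros(D.shape[1])

    def encode(self, h):
        return np.maximum(self.W_enc @ h + self.b, 0.0)

    def steer(self, h, atom, scale=0.0):
        """Rescale one annotated atom's activation; scale=0 ablates it."""
        z = self.encode(h)
        delta = (scale - 1.0) * z[atom]
        return h + delta * self.W_dec[:, atom]

sae = TinySAE(D)
h = 1.2 * D[:, 2] + 0.1 * rng.normal(size=d)
h_safe = sae.steer(h, atom=2, scale=0.0)   # ablate the concept on atom 2
```

Because the decoder atoms are unit-norm dictionary directions, ablating atom 2 removes essentially all of the activation's component along that concept direction. In the full method, training would refine these atoms on MLLM activations while the dictionary alignment keeps their semantics interpretable.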

🛡️ Threat Analysis


Details

Domains
nlp, vision, multimodal
Model Types
llm, vlm, multimodal, transformer
Threat Tags
inference_time
Datasets
DACO-400K, MM-SafetyBench, JailBreakV
Applications
multimodal chatbots, vision-language models