
Risk-adaptive Activation Steering for Safe Multimodal Large Language Models

Jonghyun Park 1, Minhyuk Seo 1,2, Jonghyun Choi 1

1 citation · 48 references · arXiv


Published on arXiv (2510.13698)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RAS significantly reduces multimodal jailbreak attack success rates while preserving general task performance and improving inference speed, compared to prior inference-time defenses, on LLaVA-1.5, Qwen-VL-Chat, and InternLM-XComposer-2.5.

Risk-adaptive Activation Steering (RAS)

Novel technique introduced


A key challenge for modern AI models is providing helpful responses to benign queries while refusing malicious ones. Yet models remain vulnerable to multimodal queries in which harmful intent is embedded in images. One approach to safety alignment is training on extensive safety datasets, which incurs significant cost in both dataset curation and training. Inference-time alignment avoids these costs, but introduces two drawbacks: excessive refusals from misclassified benign queries and slower inference due to iterative output adjustments. To overcome these limitations, we propose Risk-adaptive Activation Steering (RAS): queries are reformulated to strengthen cross-modal attention to safety-critical image regions, enabling accurate risk assessment at the query level, and the assessed risk then adaptively steers activations to generate responses that are safe and helpful without the overhead of iterative output adjustment. Extensive experiments across multiple multimodal safety and utility benchmarks demonstrate that RAS significantly reduces attack success rates, preserves general task performance, and improves inference speed over prior inference-time defenses.


Key Contributions

  • Query reformulation that strengthens cross-modal attention to safety-critical image regions for accurate query-level risk assessment
  • Risk-adaptive activation steering that adjusts internal model activations based on assessed risk, avoiding costly iterative output generation
  • Demonstrated reduction in attack success rates across MM-SafetyBench, SPA-VL, and FigStep while preserving utility and improving inference speed over prior inference-time defenses
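The two contributions above can be illustrated with a minimal sketch. The function names, the attention-mass risk heuristic, and the scaling constant `alpha` are assumptions for illustration, not the paper's actual implementation: risk is scored as the fraction of cross-modal attention falling on safety-critical image regions, and the hidden states are then shifted along a precomputed steering direction in proportion to that risk, so benign queries (risk near zero) are left essentially untouched.

```python
import numpy as np

def assess_risk(attn_weights, safety_region_mask):
    # Hypothetical query-level risk score: fraction of cross-modal
    # attention mass that falls on safety-critical image regions.
    return float((attn_weights * safety_region_mask).sum() / attn_weights.sum())

def steer_activations(hidden, steering_vector, risk, alpha=8.0):
    # Risk-adaptive steering: shift hidden states along a precomputed
    # "refusal" direction, scaled by the assessed risk. No iterative
    # output regeneration is needed, only a single activation edit.
    direction = steering_vector / np.linalg.norm(steering_vector)
    return hidden + alpha * risk * direction

# Toy usage with random activations.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 16))         # (text tokens, hidden_dim)
refusal_dir = rng.normal(size=16)         # precomputed steering vector
attn = np.abs(rng.normal(size=(4, 4)))    # text-to-image attention weights
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                         # first two patches flagged as risky

risk = assess_risk(attn, mask)
steered = steer_activations(hidden, refusal_dir, risk)
print(risk, steered.shape)
```

The key design point this sketch captures is that the steering strength is a continuous function of the assessed risk, rather than a binary refuse/answer decision, which is what lets the method avoid over-refusal on misclassified benign queries.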

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial visual inputs to VLMs, where harmful intent embedded in images is used to elicit unsafe outputs. The attack vector is manipulated or harmful image inputs at inference time, consistent with the dual ML01 + LLM01 tagging rule for adversarial visual inputs to VLMs.


Details

Domains
multimodal · vision · nlp
Model Types
vlm · llm · multimodal
Threat Tags
inference_time · white_box
Datasets
MM-SafetyBench · SPA-VL · FigStep · Sci-QA · MM-Vet · GQA · MME
Applications
multimodal large language models · vlm safety alignment