Defense · 2026

GuardAlign: Test-time Safety Alignment in Multimodal Large Language Models

Xingyu Zhu 1,2, Beier Zhu 2, Junfeng Fang 3, Shuo Wang 1, Yin Zhang 4, Xiang Wang 1, Xiangnan He 1


Published on arXiv

2602.24027

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

GuardAlign reduces unsafe response rates by up to 39% on SPA-VL across six MLLMs while improving VQAv2 accuracy from 78.51% to 79.21%, with no additional training required.

GuardAlign

Novel technique introduced


Large vision-language models (LVLMs) have achieved remarkable progress in vision-language reasoning tasks, yet ensuring their safety remains a critical challenge. Recent input-side defenses detect unsafe images with CLIP and prepend safety prefixes to prompts, but they still suffer from inaccurate detection in complex scenes and unstable safety signals during decoding. To address these issues, we propose GuardAlign, a training-free defense framework that integrates two strategies. First, OT-enhanced safety detection leverages optimal transport to measure distribution distances between image patches and unsafe semantics, enabling accurate identification of malicious regions without additional computational cost. Second, cross-modal attentive calibration strengthens the influence of safety prefixes by adaptively reallocating attention across layers, ensuring that safety signals remain consistently activated throughout generation. Extensive evaluations on six representative MLLMs demonstrate that GuardAlign reduces unsafe response rates by up to 39% on SPA-VL, while preserving utility, achieving an improvement on VQAv2 from 78.51% to 79.21%.
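The OT-enhanced detection step can be illustrated with a small sketch. This is not the paper's implementation; it assumes CLIP-style L2-normalized patch and unsafe-concept embeddings, a cosine cost matrix, and entropy-regularized optimal transport solved with a few Sinkhorn iterations. A small transport cost means the image's patch distribution sits close to the unsafe semantics.

```python
import numpy as np

def sinkhorn_distance(patch_emb, concept_emb, eps=0.1, n_iter=50):
    """Entropy-regularized OT distance between patches and unsafe concepts.

    patch_emb:   (P, d) L2-normalized image-patch embeddings (e.g. CLIP)
    concept_emb: (C, d) L2-normalized unsafe-concept text embeddings
    Returns the Sinkhorn transport cost; smaller means the patches lie
    closer to the unsafe semantic distribution.
    """
    # Cost matrix: cosine distance between every patch and every concept.
    cost = 1.0 - patch_emb @ concept_emb.T            # (P, C)
    P, C = cost.shape
    a = np.full(P, 1.0 / P)                           # uniform mass over patches
    b = np.full(C, 1.0 / C)                           # uniform mass over concepts
    K = np.exp(-cost / eps)                           # Gibbs kernel
    u = np.ones(P)
    for _ in range(n_iter):                           # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]                # transport plan (P, C)
    return float((plan * cost).sum())
```

The per-patch rows of `plan * cost` would also localize which patches carry the unsafe mass, which is how a patch-level method could flag "malicious regions" rather than just scoring the whole image.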


Key Contributions

  • OT-enhanced safety detection that uses optimal transport to measure distribution distances between image patches and unsafe semantic concepts, pinpointing malicious regions without extra compute
  • Cross-modal attentive calibration that adaptively reallocates attention across transformer layers to keep safety prefix signals consistently activated during generation
  • Training-free framework evaluated on six representative MLLMs, reducing unsafe response rates by up to 39% on SPA-VL while maintaining or improving general utility (VQAv2: 78.51% → 79.21%)
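The calibration idea in the second contribution can be sketched as follows. This is a hypothetical minimal version, not the paper's method: given one layer's softmaxed attention weights, it upweights the key positions occupied by the safety prefix by a factor `alpha` and renormalizes, so every query token keeps a floor of attention on the safety signal during decoding.

```python
import numpy as np

def calibrate_attention(attn, prefix_len, alpha=1.5):
    """Upweight attention on safety-prefix tokens, then renormalize rows.

    attn:       (heads, q_len, k_len) softmaxed attention weights, one layer
    prefix_len: number of key positions occupied by the safety prefix
    alpha:      > 1 amplifies how strongly queries attend to the prefix
    """
    calibrated = attn.copy()
    calibrated[:, :, :prefix_len] *= alpha            # boost safety-prefix columns
    calibrated /= calibrated.sum(-1, keepdims=True)   # rows sum to 1 again
    return calibrated
```

Applying this per layer with a layer-dependent `alpha` would correspond to the "adaptively reallocating attention across layers" described above; the renormalization keeps the output a valid attention distribution.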

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial or malicious visual inputs to VLMs: images containing harmful or manipulated content whose "malicious regions" constitute the adversarial visual attack surface. Dual tagging (ML01 and LLM01) applies under the multimodal attack rule.


Details

Domains
vision, nlp, multimodal
Model Types
vlm, llm, multimodal
Threat Tags
inference_time, black_box
Datasets
SPA-VL, VQAv2
Applications
vision-language models, multimodal llm safety, visual question answering