attack 2026

SAVeS: Steering Safety Judgments in Vision-Language Models via Semantic Cues

Carlos Hinojosa 1, Clemens Grange 1,2, Bernard Ghanem 1

0 citations

α

Published on arXiv

2603.19092

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

VLM safety judgments can be systematically manipulated via semantic cues, revealing reliance on learned associations rather than grounded visual understanding across multiple state-of-the-art models

Semantic Steering Framework

Novel technique introduced


Vision-language models (VLMs) are increasingly deployed in real-world and embodied settings where safety decisions depend on visual context. However, it remains unclear which visual evidence drives these judgments. We study whether multimodal safety behavior in VLMs can be steered by simple semantic cues. We introduce a semantic steering framework that applies controlled textual, visual, and cognitive interventions without changing the underlying scene content. To evaluate these effects, we propose SAVeS, a benchmark for situational safety under semantic cues, together with an evaluation protocol that separates behavioral refusal, grounded safety reasoning, and false refusals. Experiments across multiple VLMs and an additional state-of-the-art benchmark show that safety decisions are highly sensitive to semantic cues, indicating reliance on learned visual-linguistic associations rather than grounded visual understanding. We further demonstrate that automated steering pipelines can exploit these mechanisms, highlighting a potential vulnerability in multimodal safety systems.


Key Contributions

  • Introduces SAVeS benchmark for evaluating situational safety under semantic cues with protocol separating behavioral refusal, grounded reasoning, and false refusals
  • Demonstrates VLM safety decisions are highly sensitive to semantic cues (textual, visual, cognitive) without changing underlying scene content
  • Shows automated steering pipelines can exploit learned visual-linguistic associations to bypass multimodal safety systems

🛡️ Threat Analysis

Input Manipulation Attack

The paper demonstrates semantic manipulation attacks on VLM safety systems using textual and visual interventions that cause the model to change safety decisions without altering scene content—this is input manipulation causing incorrect safety outputs at inference time.


Details

Domains
multimodalvisionnlp
Model Types
vlmmultimodaltransformer
Threat Tags
inference_timeblack_boxtargeted
Datasets
SAVeS
Applications
vision-language modelsmultimodal safety systemsembodied ai safety