
Directional Embedding Smoothing for Robust Vision Language Models

Ye Wang , Jing Liu , Toshiaki Koike-Akino



Published on arXiv: 2603.15259

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RESTA with directional embedding noise effectively reduces the jailbreak attack success rate on the JailBreakV-28K benchmark while serving as a lightweight, inference-time defense layer.

RESTA (Randomized Embedding Smoothing and Token Aggregation) for VLMs

Novel technique introduced


The safety and reliability of vision-language models (VLMs) are crucial to deploying trustworthy agentic AI systems. However, VLMs remain vulnerable to jailbreaking attacks that undermine their safety alignment and yield harmful outputs. In this work, we extend the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense to VLMs and evaluate its performance against the JailBreakV-28K benchmark of multi-modal jailbreaking attacks. We find that RESTA is effective at reducing attack success rate across this diverse corpus of attacks, particularly when employing directional embedding noise, in which the injected noise is aligned with the original token embedding vectors. Our results demonstrate that RESTA can help secure VLMs within agentic systems as a lightweight, inference-time defense layer of an overall security framework.


Key Contributions

  • Extends RESTA defense from LLMs to vision-language models by perturbing embeddings in the shared embedding space
  • Demonstrates that directional embedding noise (aligned with original token vectors) outperforms isotropic noise for VLM defense
  • Evaluates defense effectiveness against JailBreakV-28K benchmark showing reduced attack success rates while maintaining utility
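The directional-noise idea above can be illustrated with a minimal sketch: instead of isotropic Gaussian noise, each token embedding is perturbed along its own direction. The multiplicative form `e' = (1 + sigma*z) * e` below is an illustrative assumption, not the paper's exact noise model; `sigma` and the function name are hypothetical.

```python
import numpy as np

def directional_smooth(embeddings, sigma=0.1, rng=None):
    """Perturb each token embedding along its own direction.

    embeddings: array of shape (num_tokens, dim).
    Multiplicative form e' = (1 + sigma * z) * e, with one scalar
    Gaussian draw z per token, keeps the noise aligned with the
    original embedding vector (an illustrative choice).
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.normal(size=(embeddings.shape[0], 1))
    return embeddings * (1.0 + sigma * z)
```

Because the perturbation is parallel to each embedding, the token's direction in the shared embedding space is preserved; only its magnitude is randomized, in contrast to isotropic noise, which also rotates the vector.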

🛡️ Threat Analysis

Input Manipulation Attack

The paper defends against jailbreaking attacks on VLMs, which involve adversarial manipulation of inputs (both visual and textual) to cause unsafe outputs. The defense operates at inference time by perturbing embeddings to prevent adversarial inputs from bypassing safety alignment.
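The token-aggregation half of the defense can be sketched as a per-position majority vote over several noisy forward passes; the exact aggregation rule in RESTA may differ, so treat this as an illustrative reduction with hypothetical names.

```python
from collections import Counter

def aggregate_tokens(sampled_outputs):
    """Majority-vote aggregation across noisy decoding samples (sketch).

    sampled_outputs: list of token-id sequences, one per noisy
    forward pass. Returns the per-position most common token over
    the shared prefix length.
    """
    length = min(len(seq) for seq in sampled_outputs)
    return [
        Counter(seq[i] for seq in sampled_outputs).most_common(1)[0][0]
        for i in range(length)
    ]
```

Aggregating over independently perturbed passes is what makes the smoothing "randomized": an adversarial input must defeat the safety alignment consistently across noise draws, not just once.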


Details

Domains
multimodal, nlp, vision
Model Types
vlm, llm, transformer
Threat Tags
inference_time, digital
Datasets
JailBreakV-28K
Applications
vision-language models, agentic AI systems, multi-modal foundation models