Published on arXiv

2509.16088

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Certified robustness for state-of-the-art VLMs against jailbreak-style adversarial image perturbations is achievable with 2–3 orders of magnitude fewer noisy samples than standard RS, with minimal loss in certified radius.

Randomized Smoothing for VLMs (oracle-based RS)

Novel technique introduced


Randomized smoothing (RS) is a prominent technique for certifying the robustness of machine learning models, providing point-wise robustness certificates that can be derived analytically. While RS is well understood for classification, its application to generative models is unclear, since their outputs are sequences rather than labels. We resolve this by connecting generative outputs to an oracle classification task and showing that RS can still be enabled: the final response can be classified as a discrete action (e.g., service-robot commands in VLAs), as harmful vs. harmless (content moderation or toxicity detection in VLMs), or by an oracle that clusters answers into semantically equivalent ones. Provided that the error rate of the oracle classifier is bounded, we develop the theory relating the number of samples to the corresponding robustness radius. We further derive improved scaling laws analytically relating the certified radius and accuracy to the number of samples, showing that the earlier result (that 2 to 3 orders of magnitude fewer samples suffice with minimal loss) remains valid even under weaker assumptions. Together, these advances make robustness certification both well-defined and computationally feasible for state-of-the-art VLMs, as validated against recent jailbreak-style adversarial attacks.
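The certification loop described above can be sketched in a few lines: draw noisy samples, map each generated response to a discrete label with an oracle classifier, lower-bound the top-class probability, and convert that bound into a Cohen-style certified L2 radius. The sketch below is illustrative only; `oracle` and `vlm_generate` are hypothetical stand-ins, and it uses a simple Hoeffding confidence bound rather than the paper's tighter, sample-efficient bounds.

```python
import math
from statistics import NormalDist

def certified_radius(votes_for_top: int, n: int, sigma: float, alpha: float = 0.001):
    """Cohen-style certified L2 radius from n noisy samples.

    Uses a Hoeffding lower confidence bound on the top-class probability
    for simplicity; the paper derives tighter bounds that need far fewer
    samples. Returns None (abstain) if the bound does not exceed 1/2.
    """
    p_hat = votes_for_top / n
    p_lower = p_hat - math.sqrt(math.log(1 / alpha) / (2 * n))
    if p_lower <= 0.5:
        return None  # cannot certify at this confidence level
    return sigma * NormalDist().inv_cdf(p_lower)

# Hypothetical oracle: maps a generated response string to a discrete label
# (here harmful vs. harmless, as in content moderation).
def oracle(response: str) -> str:
    return "harmful" if "attack" in response else "harmless"

# Stub VLM: in practice this would run the real model on a Gaussian-noised image.
def vlm_generate(image, noise_seed: int) -> str:
    return "a harmless answer"

n = 1000
votes = sum(oracle(vlm_generate("img", s)) == "harmless" for s in range(n))
print(certified_radius(votes, n, sigma=0.25))
```

With all 1000 votes agreeing, the Hoeffding bound certifies a radius of roughly 0.39 at sigma = 0.25; a tighter bound (or more samples) would enlarge it, which is exactly the trade-off the paper's scaling laws quantify.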


Key Contributions

  • Reformulates Randomized Smoothing for generative/VLM settings by introducing an oracle classification layer over model outputs (harmful/harmless, discrete actions, semantic equivalence clusters)
  • Establishes formal robustness certificates for VLMs under a bounded oracle error rate, carrying prior RS theory for classification over to the generative setting
  • Derives improved scaling laws showing 2–3 orders of magnitude fewer samples suffice for tight certificates, making RS computationally feasible for large VLMs

🛡️ Threat Analysis

Input Manipulation Attack

The paper's primary contribution is a certified defense against adversarial input perturbations in image space for VLMs, validated against adversarial attacks at inference time. Randomized smoothing is canonically a robustness certification technique against adversarial examples, i.e., input manipulation attacks.


Details

Domains
vision, nlp, multimodal
Model Types
vlm, llm, transformer
Threat Tags
white_box, inference_time, digital
Applications
vision-language models, vision-language-action models, content moderation, jailbreak defense