
Shape and Substance: Dual-Layer Side-Channel Attacks on Local Vision-Language Models

Eyal Hadad, Mordechai Guri


Published on arXiv (2603.25403)

Output Integrity Attack (OWASP ML Top 10: ML09)

Sensitive Information Disclosure (OWASP LLM Top 10: LLM06)

Key Finding

An unprivileged attacker can reliably fingerprint input geometry from execution time and, via Last-Level Cache profiling, distinguish visually dense content (e.g., medical X-rays) from sparse content (e.g., text documents) on local VLMs.

Dual-Layer Side-Channel Attack

Novel technique introduced


On-device Vision-Language Models (VLMs) promise data privacy via local execution. However, we show that the architectural shift toward Dynamic High-Resolution preprocessing (e.g., AnyRes) introduces an inherent algorithmic side-channel. Unlike static models, dynamic preprocessing decomposes images into a variable number of patches based on their aspect ratio, creating workload-dependent inputs. We demonstrate a dual-layer attack framework against local VLMs. In Tier 1, an unprivileged attacker exploits significant execution-time variations, observable through standard OS metrics, to reliably fingerprint the input's geometry. In Tier 2, by profiling Last-Level Cache (LLC) contention, the attacker resolves semantic ambiguity within identical geometries, distinguishing between visually dense (e.g., medical X-rays) and sparse (e.g., text documents) content. Evaluating state-of-the-art models such as LLaVA-NeXT and Qwen2-VL, we show that combining these signals enables reliable inference of privacy-sensitive contexts. Finally, we analyze the security engineering trade-offs of mitigating this vulnerability, revealing the substantial performance overhead of constant-work padding, and propose practical design recommendations for secure Edge AI deployments.
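The leak the abstract describes comes from the fact that AnyRes-style preprocessing chooses a tiling grid from the input's aspect ratio, so patch count (and hence compute) depends on geometry. A minimal sketch of that mechanism follows; the 336-pixel tile and the candidate grid set are illustrative assumptions, not the exact LLaVA-NeXT or Qwen2-VL configuration:

```python
# Hedged sketch: AnyRes-style dynamic preprocessing makes the vision
# workload depend on image geometry. TILE and GRIDS are illustrative,
# not the real models' configuration.

TILE = 336                                                 # assumed tile edge (px)
GRIDS = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (3, 1)]   # (rows, cols) candidates

def select_grid(width: int, height: int) -> tuple[int, int]:
    """Pick the candidate grid whose aspect ratio best matches the image."""
    target = width / height
    return min(GRIDS, key=lambda g: abs((g[1] / g[0]) - target))

def patch_count(width: int, height: int) -> int:
    """One tile per grid cell, plus one downscaled base image."""
    rows, cols = select_grid(width, height)
    return rows * cols + 1

# A tall document and a square image yield different patch counts,
# so their inference runs do different amounts of vision work.
print(patch_count(448, 1344))   # tall document  -> 4 tiles
print(patch_count(672, 672))    # square image   -> 2 tiles
print(patch_count(1344, 448))   # wide image     -> 4 tiles (same count as tall:
                                # such identical geometries motivate Tier 2)
```

Note that tall and wide inputs here produce the same patch count from different grids: timing alone (Tier 1) cannot separate every pair of inputs, which is exactly the ambiguity the paper's Tier 2 cache channel targets.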


Key Contributions

  • Identifies algorithmic side-channel in dynamic high-resolution preprocessing (AnyRes) that leaks input geometry through execution time
  • Demonstrates dual-layer attack: Tier 1 infers aspect ratio via timing, Tier 2 distinguishes semantic density via LLC cache contention
  • Evaluates attack on LLaVA-NeXT and Qwen2-VL, showing reliable inference of privacy-sensitive contexts from hardware telemetry

🛡️ Threat Analysis

Output Integrity Attack

The attack infers properties of model inputs (image aspect ratio, visual density, semantic context) by observing side-channel leakage during inference. While it does not tamper with outputs directly, it threatens the confidentiality of the inference process by revealing what kind of sensitive content is being processed (e.g., medical images versus text documents). It is classified under output integrity threats because it undermines the privacy guarantees that local execution is meant to provide.
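The Tier 2 signal described above rests on cache contention: a memory-heavy victim evicts an attacker's data from the Last-Level Cache, which the attacker detects as slower re-access. A minimal prime-and-probe-flavored sketch follows; the buffer sizes and cache-line stride are illustrative, and a practical attack would target specific cache sets rather than timing a whole array in Python:

```python
# Hedged sketch of the Tier-2 idea: estimate LLC contention by timing
# re-access to a primed buffer after a victim workload runs. Sizes are
# illustrative assumptions; interpreter overhead dominates in Python,
# so this shows the mechanism, not a working measurement.
import time

LINE = 64                        # assumed cache-line size in bytes
PRIME_BYTES = 8 * 1024 * 1024    # assumed to span much of the LLC

buf = bytearray(PRIME_BYTES)

def prime() -> None:
    """Touch one byte per cache line to pull the buffer into the cache."""
    for i in range(0, PRIME_BYTES, LINE):
        buf[i] = 1

def probe() -> float:
    """Time re-access to the primed lines; evictions raise this latency."""
    start = time.perf_counter()
    total = 0
    for i in range(0, PRIME_BYTES, LINE):
        total += buf[i]
    return time.perf_counter() - start

prime()
baseline = probe()

# Stand-in victim: a memory-heavy pass, as processing a visually dense
# image (many active vision tokens) would be, evicting the primed lines.
_victim = bytearray(16 * 1024 * 1024)
for i in range(0, len(_victim), LINE):
    _victim[i] = 1

contended = probe()
print(f"baseline={baseline:.4f}s after_victim={contended:.4f}s")
```

Comparing `baseline` against `contended` across repeated runs is the shape of the Tier 2 discriminator: dense content drives more memory traffic than sparse content, shifting the probe-time distribution even when the input geometry is identical.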


Details

Domains
multimodal, vision
Model Types
vlm, transformer, multimodal
Threat Tags
black_box, inference_time, untargeted
Applications
on-device vlm inference, edge ai, privacy-preserving local execution