
Do Not Leave a Gap: Hallucination-Free Object Concealment in Vision-Language Models

Amira Guesmi, Muhammad Shafique



Published on arXiv: 2603.15940

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

The attack effectively conceals target objects while preserving up to 86% of non-target objects and reducing grounded hallucination by up to 3× compared with attention-suppression-based attacks

Background-Consistent Object Concealment

Novel technique introduced


Vision-language models (VLMs) have recently shown remarkable capabilities in visual understanding and generation, but remain vulnerable to adversarial manipulations of visual content. Prior object-hiding attacks primarily rely on suppressing or blocking region-specific representations, often creating semantic gaps that inadvertently induce hallucination, where models invent plausible but incorrect objects. In this work, we demonstrate that hallucination arises not from object absence per se, but from semantic discontinuity introduced by such suppression-based attacks. We propose a new class of "background-consistent object concealment" attacks, which hide target objects by re-encoding their visual representations to be statistically and semantically consistent with surrounding background regions. Crucially, our approach preserves token structure and attention flow, avoiding representational voids that trigger hallucination. We present a pixel-level optimization framework that enforces background-consistent re-encoding across multiple transformer layers while preserving global scene semantics. Extensive experiments on state-of-the-art vision-language models show that our method effectively conceals target objects while preserving up to 86% of non-target objects and reducing grounded hallucination by up to 3× compared to attention-suppression-based attacks.
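As a rough illustration of the pixel-level optimization the abstract describes, the sketch below optimizes an L∞-bounded pixel perturbation so that the token embeddings of the target region move toward the background centroid at every "layer". The linear encoders, function names, and hyperparameters are illustrative stand-ins (a toy model of ViT patch encoding), not the authors' implementation.

```python
import numpy as np

def consistency_loss(x, delta, encoders, target_idx, bg_idx):
    """Multi-layer background-consistency loss: squared distance between
    each target-region token and the mean background token, per layer."""
    loss = 0.0
    for W in encoders:                              # W: (n_tokens, token_dim, d)
        tok = np.einsum('tkd,d->tk', W, x + delta)  # token embeddings
        bg_mean = tok[bg_idx].mean(axis=0)          # background centroid
        loss += 0.5 * np.sum((tok[target_idx] - bg_mean) ** 2)
    return loss

def background_consistency_attack(x, encoders, target_idx, bg_idx,
                                  eps=0.05, steps=200, lr=0.01):
    """Gradient descent on a pixel perturbation delta, projected onto an
    L-infinity ball of radius eps (white-box, inference-time evasion).
    Toy stand-in: real ViT layers are nonlinear; here each 'layer' is a
    linear map from the flattened image x to a set of tokens."""
    delta = np.zeros_like(x)
    losses = [consistency_loss(x, delta, encoders, target_idx, bg_idx)]
    for _ in range(steps):
        grad = np.zeros_like(x)
        for W in encoders:
            tok = np.einsum('tkd,d->tk', W, x + delta)
            diff = tok[target_idx] - tok[bg_idx].mean(axis=0)  # (n_t, k)
            # exact gradient of the quadratic loss w.r.t. delta:
            # each target token sees the map W_t minus the mean bg map
            M = W[target_idx] - W[bg_idx].mean(axis=0)         # (n_t, k, d)
            grad += np.einsum('tk,tkd->d', diff, M)
        delta = np.clip(delta - lr * grad, -eps, eps)  # L-inf projection
        losses.append(consistency_loss(x, delta, encoders, target_idx, bg_idx))
    return delta, losses
```

Because the target-region tokens are pulled toward the background statistics rather than zeroed out, the token grid stays fully populated, which is the paper's stated mechanism for avoiding the representational voids that trigger hallucination.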


Key Contributions

  • Background-consistent object concealment attack that avoids semantic gaps and hallucination by re-encoding object representations to match surrounding regions
  • Pixel-level optimization framework that preserves token structure and attention flow across transformer layers
  • Demonstrates up to 86% preservation of non-target objects and up to 3× reduction in grounded hallucination compared with attention-suppression attacks
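The two headline metrics can be computed from object sets alone. The helper below is a hypothetical evaluation sketch (names and exact metric definitions are my framing, not the paper's released code): it scores concealment of the target, preservation of non-target objects, and the count of hallucinated objects the model invents.

```python
def concealment_metrics(gt_objects, target, predicted):
    """Hypothetical evaluation helper for an object-concealment attack.

    gt_objects : set of objects truly present in the scene
    target     : the single object the attack tries to hide
    predicted  : set of objects the VLM reports after the attack
    """
    non_target = gt_objects - {target}
    concealed = target not in predicted                      # attack success
    preservation = len(predicted & non_target) / max(len(non_target), 1)
    hallucinated = predicted - gt_objects                    # invented objects
    return {
        "concealed": concealed,
        "non_target_preservation": preservation,
        "hallucination_count": len(hallucinated),
    }
```

For example, if the scene contains {dog, car, tree}, the attack hides "dog", and the model reports {car, tree, bench}, the target is concealed, non-target preservation is 100%, and "bench" counts as one grounded hallucination.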

🛡️ Threat Analysis

Input Manipulation Attack

Pixel-level adversarial optimization manipulates the visual input to a VLM so that the model fails to detect target objects at inference time; this is an evasion attack via input manipulation.


Details

Domains
vision, multimodal
Model Types
vlm, transformer, multimodal
Threat Tags
white_box, inference_time, targeted, digital
Applications
vision-language models, visual question answering, image captioning