Defense · 2026

AGMark: Attention-Guided Dynamic Watermarking for Large Vision-Language Models

Yue Li 1, Xin Yi 1, Dongsheng Shi 1, Yongyi Cui 1, Gerard de Melo 2,3, Linlin Wang 1

0 citations · 32 references · arXiv (Cornell University)


Published on arXiv · 2602.09611

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

AGMark achieves at least 99.36% detection AUC, and at least 88.61% AUC under watermark-removal attacks, while substantially improving visual semantic fidelity over prior LVLM watermarking methods, especially in long-form generation.

AGMark

Novel technique introduced


Watermarking has emerged as a pivotal solution for content traceability and intellectual property protection in Large Vision-Language Models (LVLMs). However, vision-agnostic watermarks may introduce visually irrelevant tokens and disrupt visual grounding by enforcing indiscriminate pseudo-random biases. Additionally, current vision-specific watermarks rely on a static, one-time estimation of vision critical weights and ignore the weight distribution density when determining the proportion of protected tokens. This design fails to account for dynamic changes in visual dependence during generation and may introduce low-quality tokens in the long tail. To address these challenges, we propose Attention-Guided Dynamic Watermarking (AGMark), a novel framework that embeds detectable signals while strictly preserving visual fidelity. At each decoding step, AGMark first dynamically identifies semantic-critical evidence based on attention weights for visual relevance, together with context-aware coherence cues, resulting in a more adaptive and well-calibrated evidence-weight distribution. It then determines the proportion of semantic-critical tokens by jointly considering uncertainty awareness (token entropy) and evidence calibration (weight density), thereby enabling adaptive vocabulary partitioning to avoid irrelevant tokens. Empirical results confirm that AGMark outperforms conventional methods, observably improving generation quality and yielding particularly strong gains in visual semantic fidelity in the later stages of generation. The framework maintains highly competitive detection accuracy (at least 99.36% AUC) and robust attack resilience (at least 88.61% AUC) without sacrificing inference efficiency, effectively establishing a new standard for reliability-preserving multi-modal watermarking.
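The per-step logic described in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the paper's code: the function name, the entropy/density combination, and the use of a KGW-style green-list bias on non-critical tokens are all assumptions based on the abstract's description.

```python
import math
import random

def agmark_step(logits, evidence, gamma=0.5, delta=2.0, seed=0):
    """Hypothetical sketch of one AGMark decoding step.

    logits:   per-candidate-token scores from the LVLM (list of floats)
    evidence: per-candidate-token visual/coherence evidence weights
              (assumed already aligned to the vocabulary)
    All names and formulas here are illustrative assumptions.
    """
    # Uncertainty awareness: softmax entropy of the next-token distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    entropy = -sum(p * math.log(p + 1e-12) for p in probs)
    max_entropy = math.log(len(logits))

    # Evidence calibration: variance as a proxy for how concentrated
    # (dense) the evidence-weight distribution is at this step.
    mean_ev = sum(evidence) / len(evidence)
    density = sum((e - mean_ev) ** 2 for e in evidence) / len(evidence)

    # Proportion of protected semantic-critical tokens: larger when the
    # model is confident (low entropy) and evidence is concentrated.
    protect_frac = gamma * (1 - entropy / max_entropy) * min(1.0, density)
    k = int(protect_frac * len(logits))
    protected = set(sorted(range(len(logits)), key=lambda i: -evidence[i])[:k])

    # KGW-style green-list bias applied only to non-critical tokens,
    # so semantic-critical candidates keep their original logits.
    rng = random.Random(seed)
    green = {i for i in range(len(logits))
             if rng.random() < 0.5 and i not in protected}
    biased = [x + (delta if i in green else 0.0)
              for i, x in enumerate(logits)]
    return biased, green
```

The key design point the abstract emphasizes is that both the protected set and its size are recomputed at every decoding step, rather than fixed once before generation.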


Key Contributions

  • Dynamic, per-decoding-step identification of semantic-critical tokens via attention weights and context-aware coherence cues, replacing static one-time estimation
  • Adaptive vocabulary partitioning jointly conditioned on token entropy (uncertainty awareness) and evidence weight density, avoiding low-quality tail tokens
  • Achieves ≥99.36% AUC detection accuracy and ≥88.61% AUC under attack while improving visual semantic fidelity over prior vision-specific and vision-agnostic watermarking baselines

🛡️ Threat Analysis

Output Integrity Attack

AGMark embeds watermarks in the TEXT OUTPUTS of Large Vision-Language Models (not in model weights) to enable content traceability and source attribution — output content watermarking for provenance, the canonical ML09 use case. The paper also evaluates resilience against watermark-removal attacks (at least 88.61% AUC under attack), further supporting the ML09 classification.
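Green-list watermarks in this family are typically detected with a one-proportion z-test over how many generated tokens fall in the (recomputable) green list; the AUC figures above come from thresholding such a statistic. The exact statistic AGMark uses is not given here, so this is a sketch under that standard assumption:

```python
import math

def detect_watermark(token_green_flags, gamma=0.5):
    """One-proportion z-test over observed green-token hits (KGW-style sketch).

    token_green_flags: 1 if the token at that position landed in the
    green list, else 0. gamma is the expected green fraction for
    unwatermarked text. Illustrative, not the paper's detector.
    """
    n = len(token_green_flags)
    hits = sum(token_green_flags)
    return (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)

# A 200-token text where 70% of tokens hit the green list scores far
# above the ~2-sigma level expected for unwatermarked text.
flags = [1] * 140 + [0] * 60
print(round(detect_watermark(flags), 2))  # → 5.66
```

Removal attacks (paraphrasing, token substitution) work by lowering the hit rate toward gamma; the reported 88.61% AUC under attack indicates the statistic still separates watermarked from clean text after such edits.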


Details

Domains
nlp, multimodal, vision
Model Types
vlm, llm, transformer
Threat Tags
inference_time
Datasets
OwlEval, MMBench, COCO
Applications
vision-language model content traceability, AI-generated content attribution, intellectual property protection for LVLMs