
Character-Level Perturbations Disrupt LLM Watermarks

Zhaoxi Zhang 1,2, Xiaomei Zhang 2, Yanjun Zhang 1, He Zhang 3, Shirui Pan 2, Bo Liu 1, Asif Qumer Gill 1, Leo Yu Zhang 2

NDSS


Published on arXiv: 2509.09112

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Character-level perturbations outperform token-level and sentence-level removal attacks under the most restrictive threat model (no detector queries), and a GA-guided attack with limited black-box queries defeats five watermarking schemes across two LLMs.

GA-guided character-level watermark removal

Novel technique introduced


Large Language Model (LLM) watermarking embeds detectable signals into generated text for copyright protection, misuse prevention, and content detection. While prior studies evaluate robustness using watermark removal attacks, these methods are often suboptimal, creating the misconception that effective removal requires large perturbations or powerful adversaries. To bridge the gap, we first formalize the system model for LLM watermarking, and characterize two realistic threat models constrained by limited access to the watermark detector. We then analyze how different types of perturbation vary in their attack range, i.e., the number of tokens they can affect with a single edit. We observe that character-level perturbations (e.g., typos, swaps, deletions, homoglyphs) can influence multiple tokens simultaneously by disrupting the tokenization process. We demonstrate that character-level perturbations are significantly more effective for watermark removal under the most restrictive threat model. We further propose guided removal attacks based on the Genetic Algorithm (GA) that use a reference detector for optimization. Under a practical threat model with limited black-box queries to the watermark detector, our method demonstrates strong removal performance. Experiments confirm the superiority of character-level perturbations and the effectiveness of the GA in removing watermarks under realistic constraints. Additionally, we argue there is an adversarial dilemma when considering potential defenses: any fixed defense can be bypassed by a suitable perturbation strategy. Motivated by this principle, we propose an adaptive compound character-level attack. Experimental results show that this approach can effectively defeat these defenses. Our findings highlight significant vulnerabilities in existing LLM watermark schemes and underline the urgency of developing new robust mechanisms.
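The core observation, that one character edit can ripple across several tokens, can be illustrated with a toy greedy longest-match tokenizer (this is an illustrative sketch with a made-up vocabulary, not the paper's code or a real BPE tokenizer):

```python
# Toy illustration: a greedy longest-match tokenizer over a tiny
# hypothetical vocabulary, showing how one character-level edit
# (here a homoglyph swap) fragments the tokenization.

VOCAB = {"water", "mark", "ing", "wat", "r"}  # assumed toy vocabulary

def tokenize(text, vocab):
    """Greedy longest-match tokenization; unknown chars become single tokens."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single unknown character
            i += 1
    return tokens

original = "watermarking"
# Homoglyph attack: replace Latin 'e' (U+0065) with Cyrillic 'е' (U+0435).
perturbed = original.replace("e", "\u0435")

print(tokenize(original, VOCAB))   # ['water', 'mark', 'ing']
print(tokenize(perturbed, VOCAB))  # ['wat', 'е', 'r', 'mark', 'ing']
```

A single visually invisible swap turns three tokens into five, which is why character-level edits have a larger attack range per edit than token-level substitutions: every token whose watermark contribution depends on the disrupted span is perturbed at once.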


Key Contributions

  • Demonstrates that character-level perturbations (typos, swaps, deletions, homoglyphs) disrupt tokenization to influence multiple tokens per edit, making them significantly more effective at watermark removal than token-level or sentence-level perturbations under restrictive threat models
  • Proposes a Genetic Algorithm-guided watermark removal attack that operates under limited black-box queries to a reference detector, achieving strong removal performance under realistic adversary constraints
  • Proposes an adaptive compound character-level attack that bypasses fixed defenses, empirically demonstrating an adversarial dilemma where any static defense can be circumvented by a suitable perturbation strategy
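The GA-guided attack can be sketched in miniature. The sketch below is a hypothetical stand-in, not the paper's implementation: `detector_score` is a toy surrogate detector (counting characters from an assumed "green" set, loosely mimicking a KGW-style score), and the mutation operator applies only homoglyph swaps rather than the paper's full set of character-level edits.

```python
import random

# Assumed Latin -> Cyrillic homoglyph table (illustrative subset).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

def detector_score(text):
    """Toy surrogate detector: fraction of characters in a 'green' set."""
    green = set("aeo")
    return sum(c in green for c in text) / max(len(text), 1)

def mutate(text, rate=0.1):
    """Randomly apply homoglyph swaps (one kind of character-level edit)."""
    return "".join(
        HOMOGLYPHS[c] if c in HOMOGLYPHS and random.random() < rate else c
        for c in text
    )

def crossover(a, b):
    """Single-point crossover of two equal-length candidate texts."""
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def ga_attack(text, pop_size=8, generations=20, seed=0):
    """Evolve perturbed texts to minimize the (queried) detector score."""
    random.seed(seed)
    population = [mutate(text) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=detector_score)       # each call = one detector query
        survivors = population[: pop_size // 2]   # selection
        children = [
            mutate(crossover(*random.sample(survivors, 2)))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return min(population, key=detector_score)

watermarked = "a watermarked sentence generated by some language model"
evaded = ga_attack(watermarked)
print(detector_score(watermarked), detector_score(evaded))
```

Because selection keeps the lowest-scoring candidates and homoglyph swaps only remove "green" characters, the surrogate score can only decrease across generations; the real attack operates under the same query-budgeted loop but against a black-box watermark detector.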

🛡️ Threat Analysis

Output Integrity Attack

The paper directly attacks content watermarks embedded in LLM-generated text outputs — watermark removal is a canonical ML09 attack on output integrity and content provenance. Character-level perturbations (typos, homoglyphs, swaps) disrupt tokenization to erase watermark signals, and the GA-guided adaptive attack defeats proposed defenses.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
Five representative LLM watermarking schemes (including KGW/Kirchenbauer); two widely-used LLMs (unspecified in excerpt)
Applications
llm text watermarking, ai-generated text detection, content attribution