benchmark 2025

Security and Detectability Analysis of Unicode Text Watermarking Methods Against Large Language Models

Malte Hellmeier

0 citations · 46 references · International Conference on In...


Published on arXiv

2512.13325

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Reasoning LLMs (e.g., GPT-5, Gemini 2.5 Pro) can detect that Unicode text is watermarked, but no model successfully extracts the watermark without access to implementation source code.


Securing digital text is becoming increasingly relevant due to the widespread use of large language models. Individuals fear losing control over their data when it is used to train such machine learning models, and struggle to distinguish model-generated output from text written by humans. Digital watermarking provides additional protection by embedding an invisible watermark within the data that requires protection. However, little work has been done to analyze and verify whether existing digital text watermarking methods are secure and undetectable by large language models. In this paper, we investigate the security of text watermarking against machine learning models. In a controlled testbed of three experiments, ten existing Unicode text watermarking methods were implemented and analyzed across six large language models: GPT-5, GPT-4o, Teuken 7B, Llama 3.3, Claude Sonnet 4, and Gemini 2.5 Pro. Our findings indicate that the latest reasoning models in particular can detect a watermarked text. Nevertheless, all models fail to extract the watermark unless implementation details in the form of source code are provided. We discuss the implications for security researchers and practitioners and outline future research opportunities to address these security concerns.
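To make the idea of an invisible Unicode watermark concrete, here is a minimal sketch of one common family of such schemes: hiding payload bits in zero-width characters that render invisibly but survive copy-paste. This is an illustrative assumption, not one of the ten methods the paper evaluates; the function names and bit encoding are invented for this example.

```python
# Minimal zero-width-character watermark sketch (illustrative only).
ZERO = "\u200b"  # ZERO WIDTH SPACE      -> bit 0
ONE = "\u200c"   # ZERO WIDTH NON-JOINER -> bit 1

def embed(cover: str, payload: str) -> str:
    """Hide the payload as zero-width characters after the first word."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    hidden = "".join(ONE if b == "1" else ZERO for b in bits)
    head, sep, tail = cover.partition(" ")
    return head + hidden + sep + tail if sep else cover + hidden

def extract(text: str) -> str:
    """Read the zero-width characters back as bits and decode bytes."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    usable = len(bits) - len(bits) % 8
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

marked = embed("The quick brown fox", "ok")
assert extract(marked) == "ok"
# Stripping the hidden characters recovers the visible cover text:
assert marked.replace(ZERO, "").replace(ONE, "") == "The quick brown fox"
```

The visible text is unchanged, which is exactly why such marks are invisible to casual readers yet, as the paper shows, detectable by reasoning LLMs that inspect the raw code points.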


Key Contributions

  • Implements and evaluates 10 Unicode text watermarking methods against 6 state-of-the-art LLMs for detectability and watermark extraction capability
  • Finds that the latest reasoning LLMs can detect the presence of watermarks but fail to extract them without access to implementation details (source code)
  • Provides concrete recommendations for improving text watermarking security against LLM-based adversaries

🛡️ Threat Analysis

Output Integrity Attack

Directly analyzes the security of content watermarking methods for text — specifically whether LLMs can detect or circumvent Unicode-based text watermarks, which is a content integrity/provenance attack surface. The paper benchmarks 10 watermarking schemes against LLM-based adversarial detection.


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
text watermarking, content provenance, ai-generated text detection