
Sure! Here's a short and concise title for your paper: "Contamination in Generated Text Detection Benchmarks"

Philipp Dingfelder , Christian Riess

0 citations · International Conference on Cy...


Published on arXiv · 2511.09200

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Contamination artifacts in 98.5% of the DetectRL benchmark's Claude-LLM data cause detectors to learn shortcuts, which facilitates spoofing attacks; cleansing the data makes such attacks more difficult.


Large language models are increasingly used for many applications. To prevent illicit use, it is desirable to be able to detect AI-generated text. Training and evaluation of such detectors critically depend on suitable benchmark datasets. Several groups took on the tedious work of collecting, curating, and publishing large and diverse datasets for this task. However, it remains an open challenge to ensure high quality in all relevant aspects of such a dataset. For example, the DetectRL benchmark exhibits relatively simple patterns of AI-generation in 98.5% of the Claude-LLM data. These patterns may include introductory words such as "Sure! Here is the academic article abstract:", or instances where the LLM rejects the prompted task. In this work, we demonstrate that detectors trained on such data use such patterns as shortcuts, which facilitates spoofing attacks on the trained detectors. We consequently reprocessed the DetectRL dataset with several cleansing operations. Experiments show that such data cleansing makes direct attacks more difficult. The reprocessed dataset is publicly available.


Key Contributions

  • Identifies that 98.5% of Claude-LLM entries in the DetectRL benchmark contain artifacts (LLM-typical preambles and rejection phrases) that trained detectors exploit as shortcuts
  • Demonstrates that shortcut-reliant detectors are vulnerable to spoofing attacks when adversaries avoid these patterns
  • Releases a reprocessed/cleansed version of the DetectRL dataset that reduces direct spoofing attack effectiveness

🛡️ Threat Analysis

Output Integrity Attack

The paper directly concerns the integrity and reliability of AI-generated text detectors. Benchmark artifacts create shortcuts that adversaries can exploit: by simply avoiding the contamination patterns, they can spoof (evade) trained detectors, undermining the output integrity of AI-text detection systems.
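The failure mode can be made concrete with a toy detector (not the paper's model) that has learned only the contamination shortcut; an adversary evades it by stripping the preamble:

```python
# Toy illustration: a "detector" whose sole learned feature is the
# contamination shortcut (an LLM-typical preamble at the start).
def shortcut_detector(text: str) -> bool:
    """Return True ("AI-generated") iff the shortcut feature fires."""
    return text.lower().startswith(("sure! here", "certainly! here"))

ai_text = "Sure! Here is the academic article abstract: We study detectors."
spoofed = ai_text.split(": ", 1)[1]  # adversary removes the preamble

# The detector flags the contaminated sample but misses the spoofed one,
# even though the actual generated content is unchanged.
```

This is the sense in which data cleansing helps: once the shortcut features are gone from training data, the detector must rely on properties of the generated text itself, so removing a preamble no longer suffices as an attack.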


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Datasets
DetectRL
Applications
ai-generated text detection