
On the Effectiveness of LLM-Specific Fine-Tuning for Detecting AI-Generated Text

Michał Gromadzki 1,2, Anna Wróblewska 1, Agnieszka Kaliska 3

0 citations · 20 references · arXiv


Published on arXiv

2601.20006

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

LLM-specific fine-tuning achieves up to 99.6% token-level accuracy on a 100M-token benchmark spanning 21 LLMs, substantially outperforming existing open-source baselines.

Per LLM / Per LLM Family Fine-Tuning

Novel technique introduced


The rapid progress of large language models has enabled the generation of text that closely resembles human writing, creating challenges for authenticity verification in education, publishing, and digital security. Detecting AI-generated text has therefore become a crucial technical and ethical issue. This paper presents a comprehensive study of AI-generated text detection based on large-scale corpora and novel training strategies. We introduce a 1-billion-token corpus of human-authored texts spanning multiple genres and a 1.9-billion-token corpus of AI-generated texts produced by prompting a variety of LLMs across diverse domains. Using these resources, we develop and evaluate numerous detection models and propose two novel training paradigms: Per LLM and Per LLM family fine-tuning. Across a 100-million-token benchmark covering 21 large language models, our best fine-tuned detector achieves up to 99.6% token-level accuracy, substantially outperforming existing open-source baselines.
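The headline metric is token-level accuracy: each token carries a human/AI label, and the score is the fraction of tokens the detector labels correctly. A minimal sketch of that computation (the function name and label convention are illustrative, not from the paper):

```python
# Hypothetical sketch: token-level accuracy for an AI-text detector.
# Convention assumed here: 1 = AI-generated token, 0 = human-written token.

def token_level_accuracy(predictions, labels):
    """Fraction of tokens whose predicted label matches the gold label."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must align one-to-one")
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

preds = [1, 1, 0, 0, 1, 0, 1, 1]
golds = [1, 1, 0, 1, 1, 0, 1, 1]
print(token_level_accuracy(preds, golds))  # → 0.875
```

At the paper's reported 99.6%, roughly 4 tokens in every 1,000 on the 100M-token benchmark would be mislabeled under this metric.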


Key Contributions

  • 1-billion-token human corpus and 1.9-billion-token AI-generated corpus spanning 21 LLMs and diverse domains
  • Two novel training paradigms — Per LLM and Per LLM family fine-tuning — enabling LLM-source-aware detector specialization
  • 100-million-token evaluation benchmark with best detector achieving 99.6% token-level accuracy, outperforming open-source baselines
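The Per LLM family paradigm above implies a routing layer at inference: each input is scored by a detector specialized for its (suspected) source family. A minimal sketch of that routing, assuming a max-score fallback when the source is unknown; all detector internals and family names here are placeholders, not the paper's models:

```python
# Hypothetical sketch of Per-LLM-family routing: one specialized detector per
# model family, plus a max-score fallback when the source family is unknown.

from typing import Callable, Dict, Optional

Detector = Callable[[str], float]  # returns a score in [0, 1]: P(AI-generated)

def make_stub_detector(bias: float) -> Detector:
    # Placeholder: a real system would load a family-specific fine-tuned
    # classifier here instead of returning a fixed bias.
    def score(text: str) -> float:
        return bias
    return score

FAMILY_DETECTORS: Dict[str, Detector] = {
    "gpt":   make_stub_detector(0.60),
    "llama": make_stub_detector(0.55),
}

def detect(text: str, family: Optional[str] = None) -> float:
    if family in FAMILY_DETECTORS:           # Per-LLM-family path
        return FAMILY_DETECTORS[family](text)
    # Unknown source: if any specialist fires strongly, flag the text.
    return max(d(text) for d in FAMILY_DETECTORS.values())
```

The design choice sketched here is that specialization trades generality for precision: a family-specific detector can exploit a single model family's stylistic fingerprints, while the max-over-specialists fallback recovers coverage when provenance is unknown.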

🛡️ Threat Analysis

Output Integrity Attack

Directly addresses AI-generated content detection — a core ML09 concern — by proposing novel Per LLM and Per LLM family fine-tuning paradigms, building large-scale corpora, and evaluating detectors to verify authenticity of model-generated text outputs.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Datasets
Custom 1B-token human corpus, Custom 1.9B-token AI-generated corpus, 100M-token evaluation benchmark (21 LLMs)
Applications
ai-generated text detection, academic integrity, content authenticity verification, digital publishing