Span-level Detection of AI-generated Scientific Text via Contrastive Learning and Structural Calibration
Zhen Yin, Shenghua Wang
Published on arXiv
arXiv:2510.00890
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves F1(AI) of 80.17, AUROC of 92.63, and Span-F1 of 74.36 on a 100K cross-disciplinary dataset, with strong resilience under adversarial rewriting across IMRaD sections.
Sci-SpanDet
Novel technique introduced
The rapid adoption of large language models (LLMs) in scientific writing raises serious concerns regarding authorship integrity and the reliability of scholarly publications. Existing detection approaches mainly rely on document-level classification or surface-level statistical cues; however, they neglect fine-grained span localization, exhibit weak calibration, and often fail to generalize across disciplines and generators. To address these limitations, we present Sci-SpanDet, a structure-aware framework for detecting AI-generated scholarly texts. The proposed method combines section-conditioned stylistic modeling with multi-level contrastive learning to capture nuanced human-AI differences while mitigating topic dependence, thereby enhancing cross-domain robustness. In addition, it integrates BIO-CRF sequence labeling with pointer-based boundary decoding and confidence calibration to enable precise span-level detection and reliable probability estimates. Extensive experiments on a newly constructed cross-disciplinary dataset of 100,000 annotated samples generated by multiple LLM families (GPT, Qwen, DeepSeek, LLaMA) demonstrate that Sci-SpanDet achieves state-of-the-art performance, with F1(AI) of 80.17, AUROC of 92.63, and Span-F1 of 74.36. Furthermore, it shows strong resilience under adversarial rewriting and maintains balanced accuracy across IMRaD sections and diverse disciplines, substantially surpassing existing baselines. To ensure reproducibility and to foster further research on AI-generated text detection in scholarly documents, the curated dataset and source code will be publicly released upon publication.
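The abstract describes span-level detection via BIO sequence labeling. The exact tag scheme and decoder are not given in this summary, so the sketch below assumes a simple `B-AI`/`I-AI`/`O` scheme (hypothetical labels) and shows only the final step: turning a per-token tag sequence into character- or token-level spans, which is what Span-F1 is computed over.

```python
# Minimal sketch of BIO-to-span decoding, as used downstream of a
# BIO-CRF tagger like the one described for Sci-SpanDet. Tag scheme
# assumed (not specified in the summary): "B-AI" starts an
# AI-generated span, "I-AI" continues it, "O" is human-written.

def bio_to_spans(tags):
    """Convert a per-token BIO tag sequence into (start, end) token
    spans, end exclusive."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B-AI":
            if start is not None:      # close the previous span
                spans.append((start, i))
            start = i
        elif tag == "I-AI":
            if start is None:          # tolerate a stray I- without B-
                start = i
        else:  # "O"
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:              # span runs to the end
        spans.append((start, len(tags)))
    return spans

tags = ["O", "B-AI", "I-AI", "I-AI", "O", "B-AI", "O"]
print(bio_to_spans(tags))  # → [(1, 4), (5, 6)]
```

A CRF layer constrains the tagger so that invalid transitions (e.g. `O` directly to `I-AI`) are unlikely, and the pointer-based boundary decoder mentioned in the abstract would then refine the start/end positions of each extracted span.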
Key Contributions
- Writing-style graph with multi-level contrastive learning conditioned on IMRaD section structure, improving cross-generator and cross-discipline generalization
- Span-level detection via BIO-CRF sequence labeling combined with QA-style pointer-based boundary decoding and confidence calibration for reliable probability estimates
- Cross-disciplinary benchmark of 100,000 annotated samples from GPT, Qwen, DeepSeek, and LLaMA families for scientific AI-text detection
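The multi-level contrastive objective above is not spelled out in this summary; a common choice for such objectives is an InfoNCE-style loss, sketched here under that assumption. The idea is to pull an anchor representation toward a positive (e.g. another human-written passage from the same IMRaD section) and away from negatives (e.g. AI-generated passages), so the learned style space separates human and AI writing rather than topics.

```python
import math

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss for a single anchor. Vectors are
    assumed L2-normalized lists of floats; similarity is the dot
    product scaled by temperature tau. Lower loss means the anchor
    is closer to the positive than to the negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Positive similarity first, then all negative similarities.
    logits = [dot(anchor, positive) / tau]
    logits += [dot(anchor, n) / tau for n in negatives]

    # Numerically stable log-sum-exp over all candidates.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))

    # -log( exp(sim_pos) / sum_j exp(sim_j) )
    return -(logits[0] - log_sum)

# Toy 2-D example: loss is small when anchor matches the positive,
# large when it matches a negative instead.
good = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
bad = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
print(good < bad)  # → True
```

In the paper's setting this loss would be applied at multiple levels (token, span, section) and conditioned on the IMRaD section, which is what "section-conditioned" and "multi-level" refer to; the single-anchor form above is the basic building block.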
🛡️ Threat Analysis
Directly proposes a novel framework for detecting AI-generated content, targeting output integrity: locating LLM-generated text in scholarly documents at a fine-grained span level. The primary contribution is the detection architecture itself (writing-style graph, section-conditioned contrastive learning, pointer-based boundary decoding), which constitutes novel forensic/detection methodology for AI-generated content rather than a domain application of existing methods.