Authorship Attribution in Multilingual Machine-Generated Texts
Lucio La Cava, Dominik Macko, Róbert Móro, Ivan Srba, Andrea Tagarelli
Published on arXiv (arXiv:2508.01656)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Monolingual AA methods can be partially adapted to multilingual settings, but cross-lingual transfer across diverse language families remains severely limited, especially for non-Latin scripts.
As Large Language Models (LLMs) have reached human-like fluency and coherence, distinguishing machine-generated text (MGT) from human-written content has become increasingly difficult. While early efforts in MGT detection focused on binary classification, the growing number and diversity of LLMs call for the more fine-grained, and more challenging, task of authorship attribution (AA): identifying the precise generator (LLM or human) behind a text. To date, however, AA has remained confined to monolingual settings, with English the most investigated, overlooking the multilingual nature and usage of modern LLMs. In this work, we introduce the problem of Multilingual Authorship Attribution: attributing texts to human or multiple LLM generators across diverse languages. Focusing on 18 languages, covering multiple families and writing scripts, and 8 generators (7 LLMs and the human-authored class), we investigate the multilingual suitability of monolingual AA methods, their cross-lingual transferability, and the impact of generators on attribution performance. Our results reveal that while certain monolingual AA methods can be adapted to multilingual settings, significant limitations remain, particularly in transfer across diverse language families, underscoring the complexity of multilingual AA and the need for more robust approaches that better match real-world scenarios.
Key Contributions
- Formalizes the Multilingual Authorship Attribution problem across 18 languages and 8 generators (7 LLMs + human class)
- Evaluates the multilingual suitability and cross-lingual transferability of existing monolingual AA methods
- Reveals that cross-family language transfer remains a significant unsolved challenge for AA methods
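The attribution task described above is a multi-class classification problem over generator labels. As a minimal illustrative sketch (not the paper's actual method), the following attributes a text to the generator whose character n-gram profile it most resembles; character n-grams are a common script-agnostic baseline feature for multilingual settings, and the generator labels here are hypothetical placeholders.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram counts: a script-agnostic feature for multilingual AA."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ProfileAttributor:
    """Profile-based baseline: one aggregated n-gram profile per generator,
    attribution by nearest profile under cosine similarity."""

    def __init__(self, n=3):
        self.n = n
        self.profiles = {}

    def fit(self, labeled_texts):
        # labeled_texts: iterable of (generator_label, text) pairs,
        # e.g. "human", "llm-a" (labels are illustrative, not from the paper)
        for label, text in labeled_texts:
            profile = self.profiles.setdefault(label, Counter())
            profile.update(char_ngrams(text, self.n))

    def attribute(self, text):
        query = char_ngrams(text, self.n)
        return max(self.profiles, key=lambda lbl: cosine(self.profiles[lbl], query))
```

Such profile-based baselines transfer across scripts because they make no tokenization or language assumptions, but, as the paper's findings suggest, similarity in surface statistics alone degrades sharply under cross-family transfer.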
🛡️ Threat Analysis
The paper's core task is fine-grained attribution of AI-generated text to specific LLM generators, a direct instance of AI-generated content detection and output provenance, the heart of ML09. Unlike single-language studies, its systematic evaluation of cross-lingual transferability across 18 languages (spanning multiple families and writing scripts) and 8 generators constitutes a benchmark contribution to the forensic detection problem.