Obscuring Data Contamination Through Translation: Evidence from Arabic Corpora
Chaymaa Abbas, Nour Shamaa, Mariette Awad
Published on arXiv
2601.14994
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
Arabic translation suppresses TS-Guessing contamination signals yet contaminated models still score higher; Translation-Aware Contamination Detection reliably exposes this cross-lingual memorization where English-only methods fail.
Translation-Aware Contamination Detection
Novel technique introduced
Data contamination undermines the validity of Large Language Model (LLM) evaluation by enabling models to rely on memorized benchmark content rather than true generalization. While prior work has proposed contamination detection methods, these approaches are largely limited to English benchmarks, leaving multilingual contamination poorly understood. In this work, we investigate contamination dynamics in multilingual settings by fine-tuning several open-weight LLMs on varying proportions of Arabic datasets and evaluating them on the original English benchmarks. To detect memorization, we extend the Testset Slot Guessing (TS-Guessing) method with a choice-reordering strategy and incorporate Min-K% probability analysis, capturing both behavioral and distributional contamination signals. Our results show that translation into Arabic suppresses conventional contamination indicators, yet models still benefit from exposure to contaminated data, particularly those with stronger Arabic capabilities. This effect is consistently reflected in rising Min-K% scores and increased cross-lingual answer consistency as contamination levels grow. To address this blind spot, we propose Translation-Aware Contamination Detection, which identifies contamination by comparing signals across multiple translated benchmark variants rather than English alone. Translation-Aware Contamination Detection reliably exposes contamination even when English-only methods fail. Together, our findings highlight the need for multilingual, translation-aware evaluation pipelines to ensure fair, transparent, and reproducible assessment of LLMs.
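The Min-K% probability signal used in the abstract can be sketched compactly. This is an illustrative implementation, not the paper's code: the function name and toy log-probability values are assumptions, and it presumes per-token log-probabilities have already been extracted from the model.

```python
def min_k_percent(token_logprobs, k=0.2):
    """Mean log-probability of the k% least likely tokens in a sequence.

    A higher (less negative) score means even the sequence's most
    "surprising" tokens were well predicted, which is a distributional
    hint that the text may have been seen during training.
    """
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]  # the n least likely tokens
    return sum(lowest) / n

# Toy example: a memorized sequence has no highly surprising tokens,
# while an unseen one contains a few very low-probability tokens.
memorized = [-0.1, -0.2, -0.1, -0.3, -0.2]
unseen = [-0.1, -2.5, -0.2, -3.1, -0.4]
assert min_k_percent(memorized) > min_k_percent(unseen)
```

Contamination then shows up as the Min-K% score of benchmark items rising as the fine-tuning mix includes more of the (translated) benchmark.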
Key Contributions
- Demonstrates that translating benchmark data into Arabic suppresses conventional contamination indicators (TS-Guessing) while contaminated models still gain performance benefits, especially those with stronger Arabic capabilities
- Extends TS-Guessing with a choice-reordering strategy and integrates Min-K% probability analysis to capture cross-lingual memorization signals that English-only methods miss
- Proposes Translation-Aware Contamination Detection, which identifies contamination by comparing membership inference signals across multiple translated benchmark variants rather than English alone
🛡️ Threat Analysis
Contamination detection is operationally identical to membership inference — determining whether specific benchmark data points were in the LLM's training set. The paper extends Min-K% and TS-Guessing membership inference methods to multilingual settings and proposes Translation-Aware Contamination Detection, a new framework for inferring training-set membership across language boundaries.
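The cross-lingual comparison at the heart of Translation-Aware Contamination Detection can be sketched as a simple decision rule. Everything here is illustrative, not the paper's method: it assumes a memorization score (e.g. Min-K%) has been computed per benchmark variant, and the baseline values and margin are hypothetical.

```python
def translation_aware_flags(scores_by_lang, clean_baseline, margin=0.05):
    """Flag contamination per language variant, not from English alone.

    A variant is flagged when its memorization score exceeds a clean
    model's baseline by more than `margin`. Comparing across translated
    variants catches cross-lingual memorization that an English-only
    check would miss.
    """
    return {
        lang: (score - clean_baseline[lang]) > margin
        for lang, score in scores_by_lang.items()
    }

# Toy scenario: the English signal looks clean, but the Arabic variant
# exposes the contamination (values are illustrative Min-K% scores).
scores = {"en": -1.20, "ar": -0.45}
baseline = {"en": -1.22, "ar": -1.10}
flags = translation_aware_flags(scores, baseline)
assert not flags["en"] and flags["ar"]
```

The design point is that membership inference evidence from any translated variant suffices to flag an item, so suppression of the English-language signal no longer hides contamination.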