
Lost in Translation? A Comparative Study on the Cross-Lingual Transfer of Composite Harms

Vaibhav Shukla 1, Hardik Sharma 2, Adith N Reganti 1, Soham Wasmatkar 2, Bagesh Kumar 2, Vrijendra Singh 1

0 citations · 18 references · arXiv (Cornell University)

Published on arXiv

2602.07963

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Attack success rates rise sharply in Indic languages versus English, with adversarial-syntax prompts showing the most severe alignment degradation across all tested LLMs.

CompositeHarm

Novel technique introduced


Most safety evaluations of large language models (LLMs) remain anchored in English. Translation is often used as a shortcut to probe multilingual behavior, but it rarely captures the full picture, especially when harmful intent or structure morphs across languages. Some types of harm survive translation almost intact, while others distort or disappear. To study this effect, we introduce CompositeHarm, a translation-based benchmark designed to examine how safety alignment holds up as both syntax and semantics shift. It combines two complementary English datasets, AttaQ, which targets structured adversarial attacks, and MMSafetyBench, which covers contextual, real-world harms, and extends them into six languages: English, Hindi, Assamese, Marathi, Kannada, and Gujarati. Using three large models, we find that attack success rates rise sharply in Indic languages, especially under adversarial syntax, while contextual harms transfer more moderately. To ensure scalability and energy efficiency, our study adopts lightweight inference strategies inspired by edge-AI design principles, reducing redundant evaluation passes while preserving cross-lingual fidelity. This design makes large-scale multilingual safety testing both computationally feasible and environmentally conscious. Overall, our results show that translated benchmarks are a necessary first step, but not a sufficient one, toward building grounded, resource-aware, language-adaptive safety systems.
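The core metric described above — attack success rate (ASR) per language and per dataset — can be sketched as a simple aggregation. This is an illustrative reimplementation, not the paper's released code; the record fields and sample values are hypothetical.

```python
from collections import defaultdict

# Hypothetical evaluation records: (language, dataset, attack_succeeded).
# Languages follow the paper's setup (English plus five Indic languages);
# the boolean outcomes here are made up for illustration.
records = [
    ("en", "AttaQ", False),
    ("en", "AttaQ", True),
    ("hi", "AttaQ", True),
    ("hi", "AttaQ", True),
    ("hi", "MMSafetyBench", False),
    ("hi", "MMSafetyBench", True),
]

def attack_success_rate(records):
    """Compute ASR per (language, dataset) bucket:
    successful attacks divided by total attempts."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for lang, dataset, succeeded in records:
        key = (lang, dataset)
        totals[key] += 1
        if succeeded:
            hits[key] += 1
    return {key: hits[key] / totals[key] for key in totals}

asr = attack_success_rate(records)
```

Comparing buckets such as `asr[("hi", "AttaQ")]` against `asr[("en", "AttaQ")]` is how a cross-lingual ASR gap like the one the paper reports would surface.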


Key Contributions

  • CompositeHarm benchmark combining AttaQ (adversarial syntax) and MMSafetyBench (contextual harms) extended to five Indic languages beyond English
  • Empirical finding that attack success rates rise sharply in Indic languages, with adversarial syntax being the most persistent cross-lingual failure mode
  • Lightweight, edge-AI-inspired evaluation pipeline that reduces redundant inference passes for scalable multilingual safety testing
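The last bullet's idea of reducing redundant inference passes can be illustrated with a generic prompt-deduplication step: identical prompts (e.g. translations that collapse to the same string) are sent to the model only once, and results are fanned back out via an index map. This is a minimal memoisation sketch under assumed semantics, not the paper's actual pipeline.

```python
import hashlib

def dedup_prompts(prompts):
    """Collapse duplicate prompts so each unique string is evaluated once.

    Returns the list of unique prompts plus an index map so per-prompt
    results can be reassembled after a single batched inference pass.
    """
    seen = {}       # content hash -> position in `unique`
    index_map = []  # for each original prompt, index into `unique`
    unique = []
    for p in prompts:
        h = hashlib.sha256(p.encode("utf-8")).hexdigest()
        if h not in seen:
            seen[h] = len(unique)
            unique.append(p)
        index_map.append(seen[h])
    return unique, index_map

unique, idx = dedup_prompts(["a", "b", "a"])
# The model runs only on `unique`; results for the original ordering
# are recovered as [results[i] for i in idx].
```

Hashing by content rather than comparing strings directly keeps the lookup cheap even when prompts are long, which matters when the goal is an energy-frugal, large-scale multilingual sweep.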

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
AttaQ, MMSafetyBench, CompositeHarm
Applications
multilingual llm safety evaluation, jailbreak robustness testing