
Reliability Crisis of Reference-free Metrics for Grammatical Error Correction

Takumi Goto, Yusuke Sakai, Taro Watanabe

0 citations · 24 references · EMNLP


Published on arXiv (2509.25961)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Adversarial GEC systems that exploit metric-specific design vulnerabilities outperform current state-of-the-art GEC systems on all four reference-free metrics on the BEA-2019 dev set.


Reference-free evaluation metrics for grammatical error correction (GEC) have achieved high correlation with human judgments. However, these metrics are not designed to evaluate adversarial systems that aim to obtain unjustifiably high scores. The existence of such systems undermines the reliability of automatic evaluation, as it can mislead users in selecting appropriate GEC systems. In this study, we propose adversarial attack strategies for four reference-free metrics: SOME, Scribendi, IMPARA, and LLM-based metrics, and demonstrate that our adversarial systems outperform the current state-of-the-art. These findings highlight the need for more robust evaluation methods.


Key Contributions

  • Proposes metric-specific adversarial attack strategies for four reference-free GEC evaluation metrics: SOME, Scribendi, IMPARA, and LLM-S/LLM-E
  • Demonstrates adversarial GEC outputs outperform current state-of-the-art GEC systems on all four metrics on the BEA-2019 development set
  • Exposes the reliability crisis in reference-free GEC evaluation and motivates development of adversarially robust metrics

🛡️ Threat Analysis

Input Manipulation Attack

The paper crafts adversarial GEC outputs designed to exploit known vulnerabilities in reference-free evaluation metrics (SOME, IMPARA, Scribendi, and LLM-based metrics), causing them to produce inflated scores at inference time. This is directly analogous to adversarial input manipulation causing incorrect model outputs.
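To make the failure mode concrete, here is a minimal toy sketch (not the paper's actual attack, and not any of the real metrics): a naive reference-free scorer that judges only output fluency, and an adversarial "system" that ignores the source sentence entirely. The `COMMON` vocabulary and `toy_fluency_score` function are hypothetical stand-ins for a learned metric.

```python
# Toy illustration only: a hypothetical reference-free "fluency" scorer
# standing in for a learned metric that looks only at the output text.
COMMON = {"the", "a", "is", "are", "on", "mat", "this", "sentence",
          "perfectly", "fluent", "and", "grammatical"}

def toy_fluency_score(hypothesis: str) -> float:
    """Fraction of tokens the scorer considers 'fluent'."""
    tokens = hypothesis.lower().split()
    return sum(t in COMMON for t in tokens) / max(len(tokens), 1)

source = "The cats is on the mat"
honest_correction = "The cats are on the mat"  # fixes only the agreement error
adversarial_output = "this sentence is perfectly fluent and grammatical"

# The adversarial output discards the source's meaning entirely, yet it
# scores at least as high as the honest correction, because the metric
# never checks meaning preservation against the source.
print(toy_fluency_score(honest_correction))   # 5/6: "cats" is unknown
print(toy_fluency_score(adversarial_output))  # 1.0: every token is "fluent"
```

The real metrics attacked in the paper are far more sophisticated than this, but the structural weakness is the same: without a reference (and without a hard constraint tying the output to the source), a system optimized against the metric rather than against the correction task can dominate the leaderboard.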


Details

Domains
nlp
Model Types
transformer, llm
Threat Tags
white_box, inference_time, targeted
Datasets
BEA-2019
Applications
grammatical error correction evaluation, nlp evaluation metrics