attack 2026

GradingAttack: Attacking Large Language Models Towards Short Answer Grading Ability

Xueyi Li 1,2, Zhuoneng Zhou 1,2, Zitao Liu 1,2, Yongdong Wu 1,2, Weiqi Luo 1,2

0 citations · 34 references · arXiv (Cornell University)

α

Published on arXiv

2602.00979

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Both attack strategies successfully mislead LLM-based graders; prompt-level attacks achieve higher success rates and token-level attacks exhibit superior camouflage, exposing fairness risks in automated educational assessment.

GradingAttack

Novel technique introduced


Large language models (LLMs) have demonstrated remarkable potential for automatic short answer grading (ASAG), significantly boosting student assessment efficiency and scalability in educational scenarios. However, their vulnerability to adversarial manipulation raises critical concerns about automatic grading fairness and reliability. In this paper, we introduce GradingAttack, a fine-grained adversarial attack framework that systematically evaluates the vulnerability of LLM based ASAG models. Specifically, we align general-purpose attack methods with the specific objectives of ASAG by designing token-level and prompt-level strategies that manipulate grading outcomes while maintaining high camouflage. Furthermore, to quantify attack camouflage, we propose a novel evaluation metric that balances attack success and camouflage. Experiments on multiple datasets demonstrate that both attack strategies effectively mislead grading models, with prompt-level attacks achieving higher success rates and token-level attacks exhibiting superior camouflage capability. Our findings underscore the need for robust defenses to ensure fairness and reliability in ASAG. Our code and datasets are available at https://anonymous.4open.science/r/GradingAttack.


Key Contributions

  • GradingAttack framework with two complementary attack strategies — gradient-based token-level (white-box) and natural language prompt-level (black-box) — adapted to the ASAG grading context
  • Novel camouflage-aware evaluation metric that jointly quantifies attack success rate and degree of obfuscation
  • Empirical finding that prompt-level attacks achieve higher success rates while token-level attacks provide superior camouflage on math/science grading datasets

🛡️ Threat Analysis

Input Manipulation Attack

Token-level strategy uses gradient-based adversarial perturbations at inference time to manipulate LLM grading outputs — classic white-box input manipulation attack.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
white_boxblack_boxinference_timetargeteddigital
Datasets
ASAG math/science datasets (multiple, unspecified in excerpt)
Applications
automatic short answer gradingeducational assessment