attack 2026

The Compliance Paradox: Semantic-Instruction Decoupling in Automated Academic Code Evaluation

Devanshu Sahoo 1,2, Manish Prasad 1, Vasudev Majhi 1, Arjun Neekhra 1, Yash Sinha 1, Murari Mandal 1, Vinay Chamola 3, Dhruv Kumar 1,2

0 citations · arXiv

α

Published on arXiv

2601.21360

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Adversarial directives injected into AST trivia nodes cause >95% failure rates in high-capacity LLMs (e.g., DeepSeek-V3), which systematically prioritize hidden formatting instructions over code correctness evaluation.

SPACI / AST-ASIP

Novel technique introduced


The rapid integration of Large Language Models (LLMs) into educational assessment rests on the unverified assumption that instruction following capability translates directly to objective adjudication. We demonstrate that this assumption is fundamentally flawed. Instead of evaluating code quality, models frequently decouple from the submission's logic to satisfy hidden directives, a systemic vulnerability we term the Compliance Paradox, where models fine-tuned for extreme helpfulness are vulnerable to adversarial manipulation. To expose this, we introduce the Semantic-Preserving Adversarial Code Injection (SPACI) Framework and the Abstract Syntax Tree-Aware Semantic Injection Protocol (AST-ASIP). These methods exploit the Syntax-Semantics Gap by embedding adversarial directives into syntactically inert regions (trivia nodes) of the Abstract Syntax Tree. Through a large-scale evaluation of 9 SOTA models across 25,000 submissions in Python, C, C++, and Java, we reveal catastrophic failure rates (>95%) in high-capacity open-weights models like DeepSeek-V3, which systematically prioritize hidden formatting constraints over code correctness. We quantify this failure using our novel tripartite framework measuring Decoupling Probability, Score Divergence, and Pedagogical Severity to demonstrate the widespread "False Certification" of functionally broken code. Our findings suggest that current alignment paradigms create a "Trojan" vulnerability in automated grading, necessitating a shift from standard RLHF toward domain-specific Adjudicative Robustness, where models are conditioned to prioritize evidence over instruction compliance. We release our complete dataset and injection framework to facilitate further research on the topic.


Key Contributions

  • SPACI Framework and AST-ASIP protocol for embedding adversarial directives into syntactically inert AST trivia nodes (comments, whitespace) to manipulate LLM graders while preserving code semantics
  • Large-scale evaluation across 9 SOTA LLMs and 25,000 submissions (Python, C, C++, Java) revealing >95% failure rates in models like DeepSeek-V3
  • Novel tripartite measurement framework (Decoupling Probability, Score Divergence, Pedagogical Severity) to quantify 'False Certification' of functionally broken code

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Datasets
25,000 synthetic code submissions across Python, C, C++, Java
Applications
automated code gradingllm-based educational assessment