Defense · 2026

CodeGuard: Improving LLM Guardrails in CS Education

Nishat Raihan 1, Noah Erdachew 2, Jayoti Devi 1, Joanna C. S. Santos 3, Marcos Zampieri 1

0 citations · 61 references · arXiv


Published on arXiv · 2602.02509

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

PromptShield achieves 0.93 F1 on unsafe prompt detection and reduces harmful code completions by 30–65% in educational LLM systems

PromptShield

Novel technique introduced


Large language models (LLMs) are increasingly embedded in Computer Science (CS) classrooms to automate code generation, feedback, and assessment. However, their susceptibility to adversarial or ill-intentioned prompts threatens student learning and academic integrity. To address this issue, we evaluate existing off-the-shelf LLMs on handling unsafe and irrelevant prompts within the domain of CS education. We identify important shortcomings in existing LLM guardrails, which motivate us to propose CodeGuard, a comprehensive guardrail framework for educational AI systems. CodeGuard includes (i) a first-of-its-kind taxonomy for classifying prompts; (ii) the CodeGuard dataset, a collection of 8,000 prompts spanning the taxonomy; and (iii) PromptShield, a lightweight sentence-encoder model fine-tuned to detect unsafe prompts in real time. Experiments show that PromptShield achieves a 0.93 F1 score, surpassing existing guardrail methods. Further experimentation reveals that CodeGuard reduces potentially harmful or policy-violating code completions by 30–65% without degrading performance on legitimate educational tasks. The code, datasets, and evaluation scripts are made freely available to the community.


Key Contributions

  • A domain-specific taxonomy for classifying unsafe and irrelevant prompts in CS education, paired with a dataset of 8,000 labeled prompts
  • PromptShield, a lightweight fine-tuned sentence-encoder model achieving 0.93 F1 for real-time unsafe prompt detection, outperforming existing guardrail methods
  • CodeGuard framework that reduces harmful or policy-violating LLM code completions by 30–65% without degrading performance on legitimate educational tasks
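The guardrail pattern described above can be made concrete with a small sketch: a classifier labels each incoming prompt as safe, unsafe, or irrelevant, and only safe prompts reach the LLM. This is a hypothetical illustration, not the paper's implementation; the real PromptShield is a fine-tuned sentence encoder, which is stubbed here with a keyword heuristic purely to show the routing logic.

```python
# Sketch of a PromptShield-style guardrail gate (hypothetical names/logic).
# The paper's actual model is a fine-tuned sentence-encoder classifier;
# classify_prompt below is a keyword stand-in for illustration only.

UNSAFE = "unsafe"
IRRELEVANT = "irrelevant"
SAFE = "safe"

def classify_prompt(prompt: str) -> str:
    """Toy stand-in for the real-time prompt classifier."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("keylogger", "ransomware", "bypass auth")):
        return UNSAFE
    if not any(k in lowered for k in ("code", "python", "function", "debug")):
        return IRRELEVANT
    return SAFE

def guarded_completion(prompt: str, llm=lambda p: f"<completion for: {p}>") -> str:
    """Route the prompt through the guardrail before the LLM sees it."""
    label = classify_prompt(prompt)
    if label == UNSAFE:
        return "Request refused: this prompt violates the course safety policy."
    if label == IRRELEVANT:
        return "This assistant only answers CS-course questions."
    return llm(prompt)
```

A deployment would replace `classify_prompt` with the fine-tuned encoder's prediction, keeping the same three-way routing so legitimate educational queries pass through unchanged.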

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
CodeGuard dataset (8,000 prompts), Do-Not-Code
Applications
cs education, code generation, educational ai assistants