John McCrae

h-index: 4 · 64 citations · 10 papers (total)

Papers in Database (1)

benchmark · arXiv · Dec 1, 2025

Securing Large Language Models (LLMs) from Prompt Injection Attacks

Omar Farooq Khan Suri, John McCrae · arXiv · University of Galway

Evaluates the JATMO fine-tuning defense against HOUYI genetic prompt injection attacks on LLaMA 2 and Qwen, finding that it provides incomplete protection.

Prompt Injection · NLP