Latest papers

1 paper
benchmark · arXiv · Dec 1, 2025

Securing Large Language Models (LLMs) from Prompt Injection Attacks

Omar Farooq Khan Suri, John McCrae · arXiv · University of Galway

Evaluates the JATMO fine-tuning defense against HOUYI genetic prompt injection attacks on LLaMA 2 and Qwen, finding that the defense provides incomplete protection.

Prompt Injection · NLP