defense 2025

Early Approaches to Adversarial Fine-Tuning for Prompt Injection Defense: A 2022 Study of GPT-3 and Contemporary Models

Gustavo Sandoval , Denys Fenchenko , Junyao Chen

0 citations

α

Published on arXiv

2509.14271

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Adversarial Fine-Tuning reduces prompt injection attack success from 31% to near zero on smaller GPT-3 variants, while larger models remain more vulnerable

Adversarial Fine-Tuning

Novel technique introduced


This paper documents early research conducted in 2022 on defending against prompt injection attacks in large language models, providing historical context for the evolution of this critical security domain. This research focuses on two adversarial attacks against Large Language Models (LLMs): prompt injection and goal hijacking. We examine how to construct these attacks, test them on various LLMs, and compare their effectiveness. We propose and evaluate a novel defense technique called Adversarial Fine-Tuning. Our results show that, without this defense, the attacks succeeded 31\% of the time on GPT-3 series models. When using our Adversarial Fine-Tuning approach, attack success rates were reduced to near zero for smaller GPT-3 variants (Ada, Babbage, Curie), though we note that subsequent research has revealed limitations of fine-tuning-based defenses. We also find that more flexible models exhibit greater vulnerability to these attacks. Consequently, large models such as GPT-3 Davinci are more vulnerable than smaller models like GPT-2. While the specific models tested are now superseded, the core methodology and empirical findings contributed to the foundation of modern prompt injection defense research, including instruction hierarchy systems and constitutional AI approaches.


Key Contributions

  • Empirical characterization of prompt injection and goal hijacking attacks on GPT-3 series models, showing 31% baseline success rate
  • Adversarial Fine-Tuning defense using structured delimiters and adversarial examples, reducing attack success to near zero on Ada, Babbage, and Curie variants
  • Finding that larger, more flexible models (GPT-3 Davinci) are more vulnerable to prompt injection than smaller models (GPT-2)

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_timetraining_timeblack_box
Datasets
GPT-3 (Ada, Babbage, Curie, Davinci)GPT-2
Applications
prompt-based nlp applicationstext generationsentiment analysis