survey arXiv Sep 4, 2025 · Sep 2025
Brennen Hill, Surendra Parla, Venkata Abhijeeth Balabhadruni et al. · University of Wisconsin-Madison
Surveys and categorizes prompt-based LLM attack methodologies — injection, jailbreaking, adversarial prompting — to establish a structured threat model
Prompt Injection nlp
The proliferation of Large Language Models (LLMs) has introduced critical security challenges, where adversarial actors can manipulate input prompts to cause significant harm and circumvent safety alignments. These prompt-based attacks exploit vulnerabilities in a model's design, training, and contextual understanding, leading to intellectual property theft, misinformation generation, and erosion of user trust. A systematic understanding of these attack vectors is the foundational step toward developing robust countermeasures. This paper presents a comprehensive literature survey of prompt-based attack methodologies, categorizing them to provide a clear threat model. By detailing the mechanisms and impacts of these exploits, this survey aims to inform the research community's efforts in building the next generation of secure LLMs that are inherently resistant to unauthorized distillation, fine-tuning, and editing.
llm transformer University of Wisconsin-Madison