survey 2025

Breaking to Build: A Threat Model of Prompt-Based Attacks for Securing LLMs

Brennen Hill , Surendra Parla , Venkata Abhijeeth Balabhadruni , Atharv Prajod Padmalayam , Sujay Chandra Shekara Sharma

0 citations

α

Published on arXiv

2509.04615

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Establishes a comprehensive taxonomy of prompt-based LLM attack vectors — including HOUYI (compromising 36 real-world LLM apps) and role-playing/encoding-based jailbreaks — as a foundation for designing inherently attack-resistant LLMs.


The proliferation of Large Language Models (LLMs) has introduced critical security challenges, where adversarial actors can manipulate input prompts to cause significant harm and circumvent safety alignments. These prompt-based attacks exploit vulnerabilities in a model's design, training, and contextual understanding, leading to intellectual property theft, misinformation generation, and erosion of user trust. A systematic understanding of these attack vectors is the foundational step toward developing robust countermeasures. This paper presents a comprehensive literature survey of prompt-based attack methodologies, categorizing them to provide a clear threat model. By detailing the mechanisms and impacts of these exploits, this survey aims to inform the research community's efforts in building the next generation of secure LLMs that are inherently resistant to unauthorized distillation, fine-tuning, and editing.


Key Contributions

  • Systematic categorization of prompt-based LLM attacks into direct injection, indirect injection, and adversarial prompt crafting with detailed mechanism analysis
  • Structured threat model covering competing-objectives exploitation, mismatched generalization, cognitive hacking/role-playing, and hidden indirect injection vectors
  • Survey framing toward building LLMs resistant to unauthorized distillation, fine-tuning, and editing as a design objective

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
black_boxinference_time
Applications
large language modelsllm-integrated applicationsai assistants