A methodological analysis of prompt perturbations and their effect on attack success rates
Tiago Machado, Maysa Malfiza Garcia de Macedo, Rogerio Abreu de Paula, Marcelo Carpinette Grave, Aminat Adebiyi, Luan Soares de Souza, Enrico Santarelli, Claudio Pinhanez
Published on arXiv
2511.10686
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Statistically significant ASR variation under small prompt perturbations across all three alignment methods, indicating that standard single-configuration benchmarks are insufficient to characterize model robustness.
This work investigates how different alignment methods for Large Language Models (LLMs) affect the models' responses to prompt attacks. We selected open-source models aligned with the most common methods, namely Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reinforcement Learning from Human Feedback (RLHF). We conducted a systematic analysis using statistical methods to verify how sensitive the Attack Success Rate (ASR) is when we apply variations to prompts designed to elicit inappropriate content from LLMs. Our results show that, according to the statistical tests we ran, even small prompt modifications can significantly change the ASR, making the models more or less susceptible to particular types of attack. Critically, our results demonstrate that running existing 'attack benchmarks' alone may not be sufficient to elicit all possible vulnerabilities of models and alignment methods. This paper thus contributes to ongoing efforts in model attack evaluation through systematic, statistically grounded analyses of the different alignment methods and of how sensitive their ASR is to prompt variation.
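The core measurement described above can be sketched in a few lines. The following is an illustrative example, not the paper's actual evaluation pipeline: the outcome counts are hypothetical, and the paper does not specify which significance test it uses, so a standard two-proportion z-test stands in here for comparing ASR on a baseline prompt set against ASR on a perturbed variant of the same set.

```python
import math

def asr(outcomes):
    """Attack Success Rate: fraction of attack prompts that elicited harmful output."""
    return sum(outcomes) / len(outcomes)

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test for a difference between two ASRs."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled success rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))              # two-sided p-value
    return z, p_value

# Hypothetical judged outcomes: 1 = attack succeeded, 0 = model refused.
baseline  = [1] * 42 + [0] * 158   # 200 original attack prompts, ASR = 0.210
perturbed = [1] * 67 + [0] * 133   # same prompts, lightly paraphrased, ASR = 0.335

z, p = two_proportion_z_test(sum(baseline), len(baseline),
                             sum(perturbed), len(perturbed))
print(f"baseline ASR={asr(baseline):.3f}, perturbed ASR={asr(perturbed):.3f}, "
      f"z={z:.2f}, p={p:.4f}")
```

With these made-up counts the test rejects equality at p < 0.01, which is the shape of result the paper reports: a perturbation that leaves the prompt's intent intact can still shift the measured ASR by a statistically significant margin.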
Key Contributions
- Systematic statistical analysis (significance testing) of ASR sensitivity to prompt perturbations across SFT, DPO, and RLHF alignment methods
- Demonstrates that even small prompt modifications can produce statistically significant changes in ASR, challenging the reliability of single-prompt attack benchmarks
- Highlights that existing attack benchmark protocols (AdvBench, HarmBench) may underestimate the range of vulnerabilities in aligned LLMs