Are All Prompt Components Value-Neutral? Understanding the Heterogeneous Adversarial Robustness of Dissected Prompt in Large Language Models
Yujia Zheng 1,2, Tianhao Li 1,2, Haotian Huang, Tianyu Zeng 3, Jingyu Lu 4, Chuangxin Chu 5, Yuekai Huang 6,7, Ziyou Jiang 6,7, Qian Xiong 8, Yuyao Ge 7,9, Mingyang Li 6,7
2 North China University of Technology
3 Hong Kong Polytechnic University
4 Australian National University
5 Nanyang Technological University
6 Institute of Software, Chinese Academy of Sciences
7 University of Chinese Academy of Sciences
9 Institute of Computing Technology, Chinese Academy of Sciences
Published on arXiv (2508.01554)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Component-wise prompt perturbation (ComPerturb) achieves state-of-the-art attack success rates across five LLMs, with certain components (e.g., directives) being significantly more vulnerable than others, demonstrating that prompts are not value-neutral.
ComPerturb: novel technique introduced
Prompt-based adversarial attacks have become an effective means of assessing the robustness of large language models (LLMs). However, existing approaches often treat prompts as monolithic text, overlooking their structural heterogeneity: different prompt components contribute unequally to adversarial robustness. Prior works such as PromptRobust assume prompts are value-neutral, but our analysis reveals that complex, domain-specific prompts with rich structure contain components with differing vulnerabilities. To address this gap, we introduce PromptAnatomy, an automated framework that dissects prompts into functional components and generates diverse, interpretable adversarial examples by selectively perturbing each component using our proposed method, ComPerturb. To ensure linguistic plausibility and mitigate distribution shift, we further incorporate a perplexity (PPL)-based filtering mechanism. As a complementary resource, we annotate four public instruction-tuning datasets using the PromptAnatomy framework, verified through human review. Extensive experiments across these datasets and five advanced LLMs demonstrate that ComPerturb achieves state-of-the-art attack success rates. Ablation studies validate the complementary benefits of prompt dissection and PPL filtering. Our results underscore the importance of prompt-structure awareness and controlled perturbation for reliable adversarial robustness evaluation of LLMs. Code and data are available at https://github.com/Yujiaaaaa/PACP.
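The PPL-based filtering step can be illustrated with a minimal sketch. The paper does not specify the scoring model, so this toy version uses an add-one-smoothed unigram scorer built from a tiny reference corpus as a stand-in for a real language model; `perplexity`, `ppl_filter`, and the threshold `max_ppl` are illustrative names, not the paper's API.

```python
import math
from collections import Counter

def perplexity(text, logprob_fn):
    """Average per-token perplexity of `text` under a token log-probability function."""
    tokens = text.split()
    if not tokens:
        return float("inf")
    total = sum(logprob_fn(tok) for tok in tokens)
    return math.exp(-total / len(tokens))

def ppl_filter(candidates, logprob_fn, max_ppl):
    """Keep only adversarial candidates whose perplexity stays below the threshold,
    discarding perturbations that drift too far from natural language."""
    return [c for c in candidates if perplexity(c, logprob_fn) <= max_ppl]

# Toy unigram scorer from a small reference corpus (stand-in for a real LM).
corpus = "summarize the article in three sentences please".split()
counts = Counter(corpus)
total = sum(counts.values())

def unigram_logprob(tok):
    # Add-one smoothing so unseen tokens get a finite but low probability.
    return math.log((counts[tok] + 1) / (total + len(counts) + 1))

fluent = "summarize the article in three sentences"
garbled = "summarize thee artcle zn thr33 sentnces"
kept = ppl_filter([fluent, garbled], unigram_logprob, max_ppl=10.0)
# The fluent candidate passes; the heavily garbled one is filtered out.
```

In the paper's pipeline the same idea would apply with an actual LLM's token log-probabilities, so that only linguistically plausible perturbations survive.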
Key Contributions
- PromptAnatomy: the first automated framework for decomposing LLM prompts into canonical functional components (directive, role, examples, output formatting, additional information), enabling fine-grained adversarial analysis
- ComPerturb: a component-wise perturbation method with PPL-based filtering that applies targeted semantic and syntactic adversarial strategies to individual prompt components, achieving state-of-the-art attack success rates across five LLMs
- Four annotated domain-specific prompt datasets (PubMedQA-PA, EMEA-PA, Leetcode-PA, CodeGeneration-PA) with human-verified structural component labels revealing which components are most adversarially vulnerable
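The component-wise perturbation idea can be sketched as follows. The five component names come from the paper's PromptAnatomy taxonomy; the hand-built `prompt` dict stands in for the automated dissection step, and `char_swap` / `perturb_component` are hypothetical helpers, not the paper's implementation, which uses richer semantic and syntactic perturbation strategies.

```python
import random

# A dissected prompt using PromptAnatomy's five canonical components.
prompt = {
    "role": "You are a medical assistant.",
    "directive": "Answer the question using the abstract below.",
    "examples": "Q: ... A: ...",
    "output_formatting": "Reply with yes, no, or maybe.",
    "additional_information": "Abstract: ...",
}

def char_swap(text, rng):
    """Illustrative character-level perturbation: swap two adjacent characters."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def perturb_component(prompt, component, perturb_fn, rng):
    """Perturb exactly one component, leaving the rest of the prompt intact,
    so attack success can be attributed to that component's vulnerability."""
    out = dict(prompt)
    out[component] = perturb_fn(out[component], rng)
    return out

rng = random.Random(0)
# Target the directive, which the paper identifies as among the most vulnerable.
adv = perturb_component(prompt, "directive", char_swap, rng)
```

Running this per component and comparing attack success rates is what surfaces the heterogeneity the paper reports: identical perturbation budgets yield very different success rates depending on which component is attacked.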