Published on arXiv

2508.01554

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Component-wise prompt perturbation (ComPerturb) achieves state-of-the-art attack success rates across five LLMs, with certain components (e.g., directives) being significantly more vulnerable than others, demonstrating that prompts are not value-neutral.

ComPerturb

Novel technique introduced


Prompt-based adversarial attacks have become an effective means to assess the robustness of large language models (LLMs). However, existing approaches often treat prompts as monolithic text, overlooking their structural heterogeneity: different prompt components contribute unequally to adversarial robustness. Prior works like PromptRobust assume prompts are value-neutral, but our analysis reveals that complex, domain-specific prompts with rich structures have components with differing vulnerabilities. To address this gap, we introduce PromptAnatomy, an automated framework that dissects prompts into functional components and generates diverse, interpretable adversarial examples by selectively perturbing each component using our proposed method, ComPerturb. To ensure linguistic plausibility and mitigate distribution shifts, we further incorporate a perplexity (PPL)-based filtering mechanism. As a complementary resource, we annotate four public instruction-tuning datasets using the PromptAnatomy framework, verified through human review. Extensive experiments across these datasets and five advanced LLMs demonstrate that ComPerturb achieves state-of-the-art attack success rates. Ablation studies validate the complementary benefits of prompt dissection and PPL filtering. Our results underscore the importance of prompt structure awareness and controlled perturbation for reliable adversarial robustness evaluation in LLMs. Code and data are available at https://github.com/Yujiaaaaa/PACP.
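The dissect → perturb → PPL-filter pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `[name]` section markers stand in for PromptAnatomy's automated dissection, the single-word substitution stands in for ComPerturb's semantic/syntactic strategies, and `pseudo_ppl` is a crude fluency proxy where the real pipeline would query a language model's perplexity.

```python
# Hedged sketch of component-wise prompt perturbation with PPL filtering.
# Component names follow the paper; parsing markers, the perturbation rule,
# and the perplexity proxy are illustrative assumptions.

COMPONENTS = ("role", "directive", "examples",
              "output_formatting", "additional_information")

def dissect(prompt: str) -> dict:
    """Split a prompt into labeled components.

    Assumes each component is introduced by a '[name]' marker line,
    a stand-in for PromptAnatomy's automated dissection."""
    parts, current = {}, None
    for line in prompt.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            current = stripped[1:-1]
            parts[current] = []
        elif current is not None:
            parts[current].append(line)
    return {k: "\n".join(v).strip() for k, v in parts.items()}

def perturb(text: str) -> str:
    """Toy perturbation: one synonym swap (placeholder for ComPerturb's
    semantic/syntactic attack strategies)."""
    return text.replace("Summarize", "Condense", 1)

def pseudo_ppl(text: str) -> float:
    """Crude fluency proxy (mean word length); the real filter would use
    an LM's perplexity over the candidate prompt."""
    words = text.split() or [""]
    return sum(len(w) for w in words) / len(words)

def compose(parts: dict) -> str:
    return "\n".join(f"[{k}]\n{v}" for k, v in parts.items() if v)

def attack(prompt: str, component: str, ppl_budget: float):
    """Perturb one component; keep the candidate only if it passes
    the PPL filter, else return None."""
    parts = dissect(prompt)
    if component not in parts:
        return None
    parts[component] = perturb(parts[component])
    candidate = compose(parts)
    return candidate if pseudo_ppl(candidate) <= ppl_budget else None

prompt = ("[role]\nYou are a medical QA assistant.\n"
          "[directive]\nSummarize the abstract.")
adv = attack(prompt, "directive", ppl_budget=10.0)
```

Targeting a single component (here the directive) while leaving the rest intact is what lets the method attribute vulnerability to individual prompt parts rather than to the prompt as a whole.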


Key Contributions

  • PromptAnatomy: the first automated framework for decomposing LLM prompts into canonical functional components (directive, role, examples, output formatting, additional information), enabling fine-grained adversarial analysis
  • ComPerturb: a component-wise perturbation method with PPL-based filtering that applies targeted semantic and syntactic adversarial strategies to individual prompt components, achieving state-of-the-art attack success rates across five LLMs
  • Four annotated domain-specific prompt datasets (PubMedQA-PA, EMEA-PA, Leetcode-PA, CodeGeneration-PA) with human-verified structural component labels revealing which components are most adversarially vulnerable

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted, digital
Datasets
PubMedQA-PA, EMEA-PA, Leetcode-PA, CodeGeneration-PA
Applications
llm robustness evaluation, instruction following, medical qa, code generation