defense 2025

PSM: Prompt Sensitivity Minimization via LLM-Guided Black-Box Optimization

Huseein Jawad 1, Nicolas Brunel 2,3,4

0 citations · 30 references · arXiv

α

Published on arXiv

2511.16209

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Optimized PSM shields significantly reduce system prompt leakage across a comprehensive set of extraction attacks while maintaining semantic fidelity to baseline model outputs, outperforming established heuristic defenses.

PSM (Prompt Sensitivity Minimization)

Novel technique introduced


System prompts are critical for guiding the behavior of Large Language Models (LLMs), yet they often contain proprietary logic or sensitive information, making them a prime target for extraction attacks. Adversarial queries can successfully elicit these hidden instructions, posing significant security and privacy risks. Existing defense mechanisms frequently rely on heuristics, incur substantial computational overhead, or are inapplicable to models accessed via black-box APIs. This paper introduces a novel framework for hardening system prompts through shield appending, a lightweight approach that adds a protective textual layer to the original prompt. Our core contribution is the formalization of prompt hardening as a utility-constrained optimization problem. We leverage an LLM-as-optimizer to search the space of possible SHIELDs, seeking to minimize a leakage metric derived from a suite of adversarial attacks, while simultaneously preserving task utility above a specified threshold, measured by semantic fidelity to baseline outputs. This black-box, optimization-driven methodology is lightweight and practical, requiring only API access to the target and optimizer LLMs. We demonstrate empirically that our optimized SHIELDs significantly reduce prompt leakage against a comprehensive set of extraction attacks, outperforming established baseline defenses without compromising the model's intended functionality. Our work presents a paradigm for developing robust, utility-aware defenses in the escalating landscape of LLM security. The code is made public on the following link: https://github.com/psm-defense/psm


Key Contributions

  • Formalizes system prompt hardening as a utility-constrained optimization problem that minimizes leakage while preserving task utility above a semantic fidelity threshold
  • Introduces shield appending — a lightweight static textual suffix optimized via LLM-as-optimizer that requires only black-box API access to the target model
  • Empirically demonstrates that optimized PSM shields outperform existing heuristic defenses against a comprehensive suite of system prompt extraction attacks

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_time
Applications
system prompt protectionllm api securityproprietary prompt confidentiality