defense 2025

LeakSealer: A Semisupervised Defense for LLMs Against Prompt Injection and Leakage Attacks

Francesco Panebianco 1, Stefano Bonfanti 2, Francesco Trovò 1, Michele Carminati 1

0 citations

α

Published on arXiv

2508.00602

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

LeakSealer achieves AUPRC of 0.97 for PII leakage detection and highest precision/recall on ToxicChat for prompt injection, significantly outperforming Llama Guard.

LeakSealer

Novel technique introduced


The generalization capabilities of Large Language Models (LLMs) have led to their widespread deployment across various applications. However, this increased adoption has introduced several security threats, notably in the forms of jailbreaking and data leakage attacks. Additionally, Retrieval Augmented Generation (RAG), while enhancing context-awareness in LLM responses, has inadvertently introduced vulnerabilities that can result in the leakage of sensitive information. Our contributions are twofold. First, we introduce a methodology to analyze historical interaction data from an LLM system, enabling the generation of usage maps categorized by topics (including adversarial interactions). This approach further provides forensic insights for tracking the evolution of jailbreaking attack patterns. Second, we propose LeakSealer, a model-agnostic framework that combines static analysis for forensic insights with dynamic defenses in a Human-In-The-Loop (HITL) pipeline. This technique identifies topic groups and detects anomalous patterns, allowing for proactive defense mechanisms. We empirically evaluate LeakSealer under two scenarios: (1) jailbreak attempts, employing a public benchmark dataset, and (2) PII leakage, supported by a curated dataset of labeled LLM interactions. In the static setting, LeakSealer achieves the highest precision and recall on the ToxicChat dataset when identifying prompt injection. In the dynamic setting, PII leakage detection achieves an AUPRC of $0.97$, significantly outperforming baselines such as Llama Guard.


Key Contributions

  • Methodology to analyze historical LLM interaction data and generate topic-based usage maps that provide forensic insights into the evolution of jailbreaking attack patterns
  • LeakSealer: a model-agnostic semisupervised framework combining static forensic analysis and dynamic anomaly detection in a Human-In-The-Loop pipeline to defend against prompt injection and PII leakage
  • Empirical evaluation on ToxicChat (prompt injection) and a curated PII leakage dataset, achieving AUPRC of 0.97 and outperforming Llama Guard

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_timeblack_box
Datasets
ToxicChatcurated PII leakage dataset
Applications
llm-based applicationsrag systemschatbots