defense 2026

Harnessing Hyperbolic Geometry for Harmful Prompt Detection and Sanitization

Igor Maljkovic 1, Maria Rosaria Briglia 2, Iacopo Masi 2, Antonio Emanuele Cinà 1, Fabio Roli 1,3



Published on arXiv (arXiv:2604.06285)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Consistently outperforms five state-of-the-art detection methods across six datasets and remains robust under multiple adversarial scenarios where existing defenses fail

HyPE and HyPS

Novel technique introduced


Vision-Language Models (VLMs) have become essential for tasks such as image synthesis, captioning, and retrieval by aligning textual and visual information in a shared embedding space. Yet this flexibility also makes them vulnerable to malicious prompts designed to produce unsafe content, raising critical safety concerns. Existing defenses rely either on blacklist filters, which are easily circumvented, or on heavy classifier-based systems; both are costly and fragile under embedding-level attacks. We address these challenges with two complementary components: Hyperbolic Prompt Espial (HyPE) and Hyperbolic Prompt Sanitization (HyPS). HyPE is a lightweight anomaly detector that leverages the structured geometry of hyperbolic space to model benign prompts and detect harmful ones as outliers. HyPS builds on this detection by applying explainable attribution methods to identify and selectively modify harmful words, neutralizing unsafe intent while preserving the original semantics of user prompts. Through extensive experiments across multiple datasets and adversarial scenarios, we show that our framework consistently outperforms prior defenses in both detection accuracy and robustness. Together, HyPE and HyPS offer an efficient, interpretable, and resilient approach to safeguarding VLMs against malicious prompt misuse.
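The core idea behind HyPE, detecting harmful prompts as outliers with respect to a one-class model of benign prompts in hyperbolic space, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes prompt embeddings already lie strictly inside the Poincaré ball, uses a plain Euclidean centroid in place of a proper Fréchet mean, and sets the decision radius from a quantile of training distances.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball model.
    Assumes both points lie strictly inside the unit ball."""
    sq_norm_u = np.dot(u, u)
    sq_norm_v = np.dot(v, v)
    sq_diff = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * sq_diff / ((1.0 - sq_norm_u) * (1.0 - sq_norm_v) + eps)
    return np.arccosh(arg)

class HyperbolicSVDD:
    """One-class scorer in the spirit of SVDD: fit a center and radius on
    benign embeddings; points farther (in hyperbolic distance) than the
    radius are flagged as outliers."""

    def __init__(self, quantile=0.95):
        self.quantile = quantile  # fraction of benign points kept inside
        self.center = None
        self.radius = None

    def fit(self, X):
        # Illustrative centroid (a true hyperbolic mean would be Riemannian).
        self.center = X.mean(axis=0)
        dists = np.array([poincare_distance(x, self.center) for x in X])
        self.radius = np.quantile(dists, self.quantile)
        return self

    def is_outlier(self, x):
        return poincare_distance(x, self.center) > self.radius
```

In this sketch the only quantity learned at "training" time is effectively the decision radius, which mirrors the single-parameter character the paper attributes to HyPE; embeddings near the ball's boundary accumulate large geodesic distances quickly, which is what makes hyperbolic geometry attractive for separating outliers.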


Key Contributions

  • HyPE: lightweight hyperbolic SVDD-based anomaly detector that models benign prompts and detects harmful ones as outliers with single-parameter training
  • HyPS: explainable sanitization mechanism using attribution methods to identify and modify harmful words while preserving prompt semantics
  • Comprehensive evaluation showing robustness against MMA-Diffusion, SneakyPrompt-RL, StyleAttack, and novel white-box adaptive attacks
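The sanitization step (HyPS) attributes harmfulness to individual words and rewrites only those. The excerpt does not specify which attribution method is used, so the sketch below substitutes a generic leave-one-out occlusion attribution over a hypothetical harmfulness scorer `score_fn`; the masking strategy and threshold are assumptions for illustration.

```python
def occlusion_attribution(prompt, score_fn):
    """Attribute harmfulness to each word by leave-one-out occlusion:
    a word's attribution is the drop in score when that word is removed."""
    words = prompt.split()
    base = score_fn(prompt)
    attributions = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        attributions.append(base - score_fn(reduced))
    return list(zip(words, attributions))

def sanitize(prompt, score_fn, threshold=0.0, replacement="[MASKED]"):
    """Replace only the words whose attribution exceeds the threshold,
    leaving the rest of the prompt (and hence its semantics) untouched."""
    out = []
    for word, attr in occlusion_attribution(prompt, score_fn):
        out.append(replacement if attr > threshold else word)
    return " ".join(out)
```

In practice `score_fn` would be the detector's anomaly score (e.g. hyperbolic distance from the benign center), so detection and sanitization share one model; the selective per-word edit is what preserves the benign parts of the prompt.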

🛡️ Threat Analysis


Details

Domains
multimodal, vision, nlp
Model Types
vlm, multimodal, transformer
Threat Tags
inference_time, black_box
Datasets
six diverse datasets mentioned (specific names not provided in excerpt)
Applications
text-to-image generation, image retrieval, vision-language models