
LatentBreak: Jailbreaking Large Language Models through Latent Space Feedback

Raffaele Mura 1, Giorgio Piras 1, Kamilė Lukošiūtė 2, Maura Pintor 3, Amin Karbasi 1, Battista Biggio 1

1 citation · 21 references · arXiv


Published on arXiv (2510.08604)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

LatentBreak produces shorter, lower-perplexity jailbreak prompts than GCG, GBDA, SAA, and AutoDAN, achieving superior evasion of perplexity-based filters while maintaining high attack success rates on safety-aligned models.

LatentBreak

Novel technique introduced


Jailbreaks are adversarial attacks designed to bypass the built-in safety mechanisms of large language models. Automated jailbreaks typically optimize an adversarial suffix or adapt long prompt templates that force the model to generate the initial part of a restricted or harmful response. In this work, we show that existing jailbreak attacks that leverage such mechanisms to unlock the model response can be detected by straightforward perplexity-based filtering on the input prompt. To overcome this issue, we propose LatentBreak, a white-box jailbreak attack that generates natural adversarial prompts with low perplexity, capable of evading such defenses. Instead of appending high-perplexity adversarial suffixes or long templates, LatentBreak substitutes words in the input prompt with semantically-equivalent ones, preserving the prompt's original intent. These substitutions are chosen to minimize the latent-space distance between the representation of the adversarial prompt and that of harmless requests. Our extensive evaluation shows that LatentBreak yields shorter, lower-perplexity prompts, thus outperforming competing jailbreak algorithms against perplexity-based filters on multiple safety-aligned models.
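The core loop described in the abstract — greedily swapping words for semantically-equivalent candidates so that the prompt's latent representation moves toward that of harmless requests — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `embed` here is a deterministic hash-based stand-in for the target model's hidden-state representation, and the synonym dictionary is supplied by hand, whereas the actual attack queries a white-box LLM.

```python
import hashlib
import numpy as np

def embed(text, dim=8):
    # Toy stand-in for a model's latent representation of a prompt:
    # a deterministic pseudo-random vector seeded from the text's hash.
    h = hashlib.sha256(text.encode()).digest()
    rng = np.random.default_rng(int.from_bytes(h[:8], "big"))
    return rng.standard_normal(dim)

def latentbreak_sketch(prompt, synonyms, harmless_center, max_iters=10):
    """Greedy word substitution: at each pass, try replacing each word with
    one of its synonyms and keep any swap that moves the prompt's latent
    representation closer to the centroid of harmless-prompt embeddings.
    Stops when no single swap improves the distance."""
    words = prompt.split()
    best_dist = np.linalg.norm(embed(" ".join(words)) - harmless_center)
    for _ in range(max_iters):
        improved = False
        for i, w in enumerate(words):
            for cand in synonyms.get(w, []):
                trial = words[:i] + [cand] + words[i + 1:]
                d = np.linalg.norm(embed(" ".join(trial)) - harmless_center)
                if d < best_dist:
                    best_dist, words, improved = d, trial, True
        if not improved:
            break
    return " ".join(words), best_dist

# Usage: centroid of a few harmless requests, plus a hand-made synonym table.
harmless_center = np.mean(
    [embed(p) for p in ["tell me a story", "what is the weather today"]],
    axis=0,
)
synonyms = {"explain": ["describe", "outline"], "quickly": ["briefly"]}
adv_prompt, dist = latentbreak_sketch("explain this quickly", synonyms, harmless_center)
```

Because each accepted swap strictly decreases the distance, the final prompt is never farther from the harmless centroid than the original, and the prompt length (in words) is unchanged — consistent with the paper's claim that the attack avoids appending high-perplexity suffixes.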


Key Contributions

  • LatentBreak: a white-box jailbreak that substitutes words in harmful prompts with semantically-equivalent alternatives chosen by minimizing latent-space distance toward harmless prompt representations, yielding low-perplexity natural jailbreaks
  • Demonstrates that existing jailbreak attacks (GCG, GBDA, SAA, AutoDAN) are detectable by perplexity-based sliding-window filters, exposing a shared vulnerability across suffix- and template-based approaches
  • Extensive evaluation on HarmBench showing LatentBreak outperforms competing attacks against perplexity-based detectors, R2D2, and Circuit Breakers across multiple safety-aligned model families
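The detection baseline referenced above — a sliding-window perplexity filter that flags suffix- and template-based jailbreaks — can be sketched in a few lines. This is a hypothetical toy version: `logprob` below is a hand-written unigram scorer standing in for a real language model, and the window size and threshold are illustrative, not the values used in the paper.

```python
import math

def windowed_perplexity(tokens, logprob, window=8):
    """Return the worst (highest) perplexity over all sliding windows of the
    token sequence, using a per-token log-probability function `logprob`.
    Suffix-based jailbreaks concentrate gibberish tokens in one region, so
    the max over windows catches them even when whole-prompt perplexity
    looks benign."""
    worst = 0.0
    for start in range(max(1, len(tokens) - window + 1)):
        chunk = tokens[start:start + window]
        nll = -sum(logprob(t) for t in chunk) / len(chunk)
        worst = max(worst, math.exp(nll))
    return worst

def is_flagged(tokens, logprob, window=8, threshold=1000.0):
    # Flag the prompt if any window's perplexity exceeds the threshold.
    return windowed_perplexity(tokens, logprob, window) > threshold

# Toy unigram scorer: ordinary words are plausible, gibberish is not.
def toy_logprob(token):
    return math.log(0.01) if token.isalpha() else math.log(1e-6)

natural = "please describe the history of cryptography in detail".split()
suffixed = natural + ["x@!", "zq#", "}]%", "~~!"]  # GCG-style gibberish suffix
```

A natural prompt scores a uniform per-token perplexity of about 100 under this toy model, while the window covering the appended gibberish suffix exceeds the threshold, so the suffixed prompt is flagged and the natural one is not — the behavior LatentBreak is designed to evade by keeping every window low-perplexity.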

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Datasets
HarmBench
Applications
safety-aligned llms, chatbots, llm safety mechanisms