defense 2025

Safeguarding Large Language Models in Real-time with Tunable Safety-Performance Trade-offs

Joao Fonseca, Andrew Bell, Julia Stoyanovich


Published on arXiv: 2501.02018

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SafeNudge reduces successful jailbreak attempts by 30% while adding minimal latency and having negligible impact on semantic fluency of outputs

SafeNudge

Novel technique introduced


Large Language Models (LLMs) have been shown to be susceptible to jailbreak attacks, or adversarial attacks used to elicit high-risk behavior from a model. Jailbreaks have been exploited by cybercriminals and blackhat actors to cause significant harm, highlighting the critical need to safeguard widely-deployed models. Safeguarding approaches, which include fine-tuning models or having LLMs "self-reflect", may lengthen the inference time of a model, incur a computational penalty, reduce the semantic fluency of an output, and restrict "normal" model behavior. Importantly, these Safety-Performance Trade-offs (SPTs) remain an understudied area. In this work, we introduce a novel safeguard, called SafeNudge, that combines Controlled Text Generation with "nudging", or using text interventions to change the behavior of a model. SafeNudge triggers during text generation while a jailbreak attack is being executed, and can reduce successful jailbreak attempts by 30% by guiding the LLM towards safe responses. It adds minimal latency to inference and has a negligible impact on the semantic fluency of outputs. Further, we allow for tunable SPTs. SafeNudge is open-source and available through https://pypi.org/, and is compatible with models loaded with the Hugging Face "transformers" library.


Key Contributions

  • SafeNudge: a real-time safeguard combining Controlled Text Generation with 'nudging' (text interventions) that activates mid-generation when a jailbreak attempt is detected
  • Reduces successful jailbreak attempts by 30% with minimal inference latency overhead and negligible impact on semantic fluency
  • Tunable Safety-Performance Trade-offs (SPTs) allowing practitioners to explicitly balance safety strictness against model utility
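The mechanism described above can be illustrated with a toy sketch: during generation, a detector periodically scores the partial output, and once the risk estimate crosses a tunable threshold, a text "nudge" is injected into the context to steer the model toward a safe response. This is not the SafeNudge API; the function names, the detector, and the threshold semantics here are all hypothetical stand-ins for illustration only.

```python
def generate_with_nudge(next_token, safety_risk, nudge_tokens,
                        prompt_tokens, max_new_tokens=8,
                        threshold=0.5, check_every=4):
    """Toy sketch of a nudging-style safeguard (all names hypothetical).

    next_token(tokens)  -> next token; stand-in for one LLM decoding step
    safety_risk(tokens) -> float in [0, 1]; stand-in for a jailbreak detector
    nudge_tokens        -- text intervention injected when risk is high
    threshold           -- the tunable safety/performance knob: lower values
                           nudge more aggressively, higher values less often
    """
    tokens = list(prompt_tokens)
    nudged = False
    for step in range(max_new_tokens):
        # Periodically check the partial generation; nudge at most once.
        if not nudged and step % check_every == 0 \
                and safety_risk(tokens) > threshold:
            tokens.extend(nudge_tokens)  # steer toward a safe continuation
            nudged = True
        tokens.append(next_token(tokens))
    return tokens, nudged

# Toy demo: the "detector" flags a marker token, the "model" echoes filler.
risky = lambda toks: 1.0 if "EXPLOIT" in toks else 0.0
echo = lambda toks: "ok"
out, nudged = generate_with_nudge(echo, risky, ["[SAFE-NUDGE]"],
                                  ["EXPLOIT"], max_new_tokens=4)
```

Because the check runs inside the decoding loop, the only added cost per generation is the detector calls, which matches the paper's claim of minimal latency overhead; the real system plugs into models loaded with the Hugging Face "transformers" library rather than the toy stubs used here.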

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Applications
llm safety, jailbreak defense, content moderation