
Published on arXiv

2501.02629

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reduces the harmfulness and attack success rate of multiple state-of-the-art jailbreak attacks on Llama2 and Mistral, while preserving benign-query performance relative to prior defenses.

LED (Layer-specific Editing) / Layer-AdvPatcher

Novel technique introduced


As large language models (LLMs) are increasingly deployed in diverse applications, including chatbot assistants and code generation, aligning their behavior with safety and ethical standards has become paramount. However, jailbreak attacks, which exploit vulnerabilities to elicit unintended or harmful outputs, pose a significant threat to LLM safety. In this paper, we introduce Layer-AdvPatcher, a novel methodology that defends against jailbreak attacks by using an unlearning strategy to patch specific layers within LLMs through self-augmented datasets. Our insight is that certain layers tend to produce affirmative tokens when faced with harmful prompts. By identifying these layers and adversarially exposing them to generate more harmful data, one can understand their inherent and diverse vulnerabilities to attacks. With these exposures, we then "unlearn" these issues, reducing the impact of affirmative tokens and hence minimizing jailbreak risks while keeping the model's responses to safe queries intact. We conduct extensive experiments on two models, four benchmark datasets, and multiple state-of-the-art jailbreak attacks to demonstrate the efficacy of our approach. Results show that our framework reduces the harmfulness and attack success rate of jailbreak attacks without compromising utility on benign queries, compared to recent defense methods. Our code is publicly available at: https://github.com/oyy2000/LayerAdvPatcher
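The first step of the abstract's pipeline, locating the layers that push toward affirmative tokens on harmful prompts, can be sketched with a logit-lens-style probe: project each layer's last-token hidden state through the unembedding matrix and measure how much probability mass lands on an affirmative token such as "Sure". This is a minimal numpy illustration under that assumption; the function names, toy dimensions, and scoring rule are illustrative, not the authors' implementation.

```python
import numpy as np

def affirmative_layer_scores(hidden_states, W_unembed, affirm_id):
    """Logit-lens probe (illustrative): for each layer's last-token
    hidden state, compute softmax(W_unembed @ h) and return the
    probability assigned to the affirmative token."""
    scores = []
    for h in hidden_states:                      # one vector per layer
        logits = W_unembed @ h
        probs = np.exp(logits - logits.max())    # stable softmax
        probs /= probs.sum()
        scores.append(float(probs[affirm_id]))
    return scores

def select_toxic_layers(scores, top_k=2):
    """Pick the top-k layers most responsible for affirmative tokens."""
    return sorted(np.argsort(scores)[-top_k:].tolist())

# Toy demo: 4-token vocab, 3-dim hidden states, affirmative token id 0.
W = np.array([[5., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
layers = [np.array([0., 1., 0.]),   # layer 0
          np.array([0., 0., 1.]),   # layer 1
          np.array([1., 0., 0.])]   # layer 2: aligned with "Sure" row
scores = affirmative_layer_scores(layers, W, affirm_id=0)
toxic = select_toxic_layers(scores, top_k=1)
```

In the toy setup, layer 2's hidden state aligns with the affirmative token's unembedding row, so it receives the highest score and is selected for patching.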


Key Contributions

  • Identifies 'safety layers' and 'toxic layers' within LLMs that are responsible for generating affirmative tokens in response to harmful prompts
  • Proposes Layer-AdvPatcher / LED, which adversarially exposes vulnerable layers to generate diverse harmful data and then applies an unlearning strategy to patch them
  • Demonstrates reduced attack success rate and harmfulness across multiple LLMs (Llama2, Mistral) and four benchmark datasets while preserving utility on benign queries
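The patching step in the second bullet, unlearning restricted to the exposed layers, can be sketched as a toy update rule: ascend the loss gradient on harmful completions, but only for the identified toxic layers, leaving every other layer frozen so benign behavior is preserved. This numpy sketch is a hypothetical simplification (the paper fine-tunes actual LLM layers; `unlearn_step` and its signature are invented for illustration).

```python
import numpy as np

def unlearn_step(layer_weights, grads, toxic_layers, lr=0.01):
    """One layer-specific unlearning update (illustrative).

    layer_weights / grads: one weight matrix and one loss gradient
    (w.r.t. harmful completions) per layer.
    toxic_layers: set of layer indices flagged for patching.
    """
    patched = []
    for i, (W, g) in enumerate(zip(layer_weights, grads)):
        if i in toxic_layers:
            patched.append(W + lr * g)   # gradient *ascent* = unlearn
        else:
            patched.append(W.copy())     # frozen: benign utility kept
    return patched

# Toy demo: two layers, only layer 1 is patched.
Ws = [np.zeros((2, 2)), np.zeros((2, 2))]
gs = [np.ones((2, 2)), np.ones((2, 2))]
patched = unlearn_step(Ws, gs, toxic_layers={1}, lr=0.1)
```

Restricting the update to the flagged layers is what distinguishes this from whole-model unlearning: the untouched layers keep their weights bit-for-bit, which is how the method aims to avoid degrading benign queries.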

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
AdvBench, HarmBench
Applications
chatbot assistants, code generation, llm safety alignment