
TRYLOCK: Defense-in-Depth Against LLM Jailbreaks via Layered Preference and Representation Engineering

Scott Thornton

0 citations · 20 references · arXiv


Published on arXiv: 2601.03300

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves 88.0% relative attack success rate reduction (46.5% to 5.6%) on Mistral-7B-Instruct across five jailbreak attack families, with each defense layer providing unique non-redundant coverage.

TRYLOCK

Novel technique introduced


Large language models remain vulnerable to jailbreak attacks, and single-layer defenses often trade security for usability. We present TRYLOCK, the first defense-in-depth architecture that combines four heterogeneous mechanisms across the inference stack: weight-level safety alignment via DPO, activation-level control via Representation Engineering (RepE) steering, adaptive steering strength selected by a lightweight sidecar classifier, and input canonicalization to neutralize encoding-based bypasses. On Mistral-7B-Instruct evaluated against a 249-prompt attack set spanning five attack families, TRYLOCK achieves 88.0% relative ASR reduction (46.5% to 5.6%), with each layer contributing unique coverage: RepE blocks 36% of attacks that bypass DPO alone, while canonicalization catches 14% of encoding attacks that evade both. We discover a non-monotonic steering phenomenon -- intermediate strength (alpha=1.0) degrades safety below baseline -- and provide mechanistic hypotheses explaining RepE-DPO interference. The adaptive sidecar reduces over-refusal from 60% to 48% while maintaining identical attack defense, demonstrating that security and usability need not be mutually exclusive. We release all components -- trained adapters, steering vectors, sidecar classifier, preference pairs, and complete evaluation methodology -- enabling full reproducibility.
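The adaptive steering described in the abstract can be illustrated with a minimal sketch: a sidecar classifier maps a prompt-risk score to a steering strength alpha, and the hidden state is shifted along a refusal direction scaled by that alpha. All names, dimensions, and thresholds below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Minimal sketch of activation-level (RepE-style) steering with a
# sidecar-selected strength. Dimensions and thresholds are hypothetical.
HIDDEN_DIM = 64
rng = np.random.default_rng(0)

# Unit "refusal direction" standing in for a learned RepE steering vector.
refusal_direction = rng.normal(size=HIDDEN_DIM)
refusal_direction /= np.linalg.norm(refusal_direction)

def sidecar_alpha(risk_score: float) -> float:
    """Toy sidecar policy: map a prompt-risk score in [0, 1] to a steering
    strength. Per the paper's non-monotonicity finding, intermediate
    strengths can hurt, so this toy policy uses only 0.0 or a strong value."""
    return 0.0 if risk_score < 0.3 else 2.0

def steer(hidden: np.ndarray, alpha: float) -> np.ndarray:
    """RepE-style intervention: shift hidden activations along the
    refusal direction, scaled by alpha."""
    return hidden + alpha * refusal_direction

h = rng.normal(size=HIDDEN_DIM)           # stand-in hidden state
h_benign = steer(h, sidecar_alpha(0.1))   # benign prompt: left untouched
h_risky = steer(h, sidecar_alpha(0.9))    # risky prompt: pushed toward refusal
```

Skipping steering entirely on low-risk prompts is one way an adaptive sidecar can reduce over-refusal while keeping strong steering available for risky inputs, mirroring the 60% to 48% over-refusal result reported above.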


Key Contributions

  • First defense-in-depth architecture combining four heterogeneous layers (DPO weight alignment, RepE activation steering, adaptive sidecar classifier, input canonicalization) into a unified LLM safety stack
  • Discovery of non-monotonic RepE steering phenomenon where intermediate alpha=1.0 degrades safety below baseline, with mechanistic hypotheses for RepE-DPO interference
  • Adaptive sidecar classifier that dynamically selects steering strength, reducing over-refusal from 60% to 48% while maintaining identical attack defense (8.0% ASR)
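The input-canonicalization layer mentioned in the contributions can be sketched as a preprocessing pass that normalizes Unicode, decodes base64-looking tokens, and folds common leetspeak substitutions so encoding-based bypasses reach the model in plain form. This is a hypothetical illustration; the paper's actual canonicalization pipeline is not shown here, and the regex, length heuristics, and leet map are all assumptions.

```python
import base64
import binascii
import re
import unicodedata

# Hypothetical leetspeak fold: 0->o, 1->i, 3->e, 4->a, 5->s, 7->t, @->a, $->s
LEET = str.maketrans("013457@$", "oieastas")

def canonicalize(prompt: str) -> str:
    """Toy canonicalization: Unicode NFKC normalization, opportunistic
    base64 decoding of long alphanumeric tokens, then lowercasing and
    leetspeak folding."""
    text = unicodedata.normalize("NFKC", prompt)
    decoded = []
    for tok in text.split():
        # Heuristic: tokens that look like base64 get a decode attempt.
        if re.fullmatch(r"[A-Za-z0-9+/]{8,}={0,2}", tok) and len(tok) % 4 == 0:
            try:
                cand = base64.b64decode(tok).decode("ascii")
                if cand.isprintable():
                    tok = cand
            except (binascii.Error, UnicodeDecodeError):
                pass  # not valid base64; keep the original token
        decoded.append(tok)
    return " ".join(decoded).lower().translate(LEET)

print(canonicalize("1gn0r3 aWdub3Jl"))  # → "ignore ignore"
```

After canonicalization, the downstream DPO-aligned weights and RepE steering see the decoded intent directly, which is how this layer can catch encoding attacks that evade the other two.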

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
249-prompt attack set (5 attack families), AdvBench
Applications
large language model safety, chatbot, llm api deployment