Defense · 2026

AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks

Weiming Song 1, Xuan Xie 2, Ruiping Yin 1

0 citations · 73 references · arXiv (Cornell University)


Published on arXiv · 2602.13547

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

A 7B model using AISA achieves detector-level jailbreak detection competitive with strong proprietary baselines across 13 datasets and 12 LLMs while preserving helpfulness and reducing false refusals.

AISA

Novel technique introduced


Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency and can degrade helpfulness. We present AISA, a lightweight, single-pass defense that activates safety behaviors already latent inside the model rather than treating safety as an add-on. AISA first localizes intrinsic safety awareness via spatiotemporal analysis and shows that intent-discriminative signals are broadly encoded, with especially strong separability appearing in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Using a compact set of automatically selected heads, AISA extracts an interpretable prompt-risk score with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines on small (7B) models. AISA then performs logits-level steering: it modulates the decoding distribution in proportion to the inferred risk, ranging from normal generation for benign prompts to calibrated refusal for high-risk requests -- without changing model parameters, adding auxiliary modules, or requiring multi-pass inference. Extensive experiments spanning 13 datasets, 12 LLMs, and 14 baselines demonstrate that AISA improves robustness and transfer while preserving utility and reducing false refusals, enabling safer deployment even for weakly aligned or intentionally risky model variants.
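The risk-scoring step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the selected attention heads' scaled dot-product outputs at the final pre-generation token are available as a small array, and that a linear probe (weights learned offline, here random for illustration) maps them to a scalar risk in [0, 1]. The function name, shapes, and probe are all assumptions.

```python
import numpy as np

def prompt_risk_score(head_outputs, probe_w, probe_b):
    """Hypothetical prompt-risk scorer (illustrative sketch).

    head_outputs: (num_selected_heads, head_dim) array of the selected
    heads' scaled dot-product outputs at the final structural token
    before generation (an assumed shape, not the paper's exact API).
    probe_w, probe_b: parameters of an assumed linear probe.
    """
    pooled = head_outputs.reshape(-1)        # concatenate selected heads
    logit = pooled @ probe_w + probe_b       # linear probe over pooled features
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid -> risk score in [0, 1]

# Toy usage with random values (illustration only)
rng = np.random.default_rng(0)
heads = rng.normal(size=(4, 8))              # 4 selected heads, head dim 8
w = rng.normal(size=4 * 8)
risk = prompt_risk_score(heads, w, 0.0)
assert 0.0 <= risk <= 1.0
```

Because the score is read off intermediate activations of the same forward pass used for generation, the overhead beyond normal inference is a single small matrix-vector product.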


Key Contributions

  • Spatiotemporal analysis localizing intrinsic safety signals to specific attention heads in LLMs, enabling interpretable prompt-risk scoring without auxiliary modules or fine-tuning.
  • Logits-level steering mechanism that modulates decoding distributions proportionally to inferred prompt risk — ranging from normal generation to calibrated refusal — in a single forward pass.
  • Extensive evaluation across 13 datasets, 12 LLMs, and 14 baselines demonstrating improved robustness, utility preservation, and transfer to weakly aligned or uncensored model variants.
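The logits-level steering contribution can be illustrated with a short sketch. This is an assumed simplification of the mechanism: it boosts refusal-indicating token logits in proportion to the inferred risk score, so benign prompts (risk near 0) decode normally while high-risk prompts shift toward calibrated refusal. The function name, `refusal_ids`, and the strength knob `alpha` are hypothetical.

```python
import numpy as np

def steer_logits(logits, refusal_ids, risk, alpha=5.0):
    """Hypothetical risk-proportional logits steering (sketch).

    logits: next-token logits from a single forward pass.
    refusal_ids: assumed vocabulary indices of refusal-indicating tokens.
    risk: prompt-risk score in [0, 1] from the scoring step.
    alpha: assumed steering strength; risk=0 leaves decoding unchanged.
    """
    steered = logits.copy()
    steered[refusal_ids] += alpha * risk     # bonus scales with inferred risk
    return steered

# Toy usage: a 4-token vocabulary where index 3 is a refusal token
logits = np.array([2.0, 1.0, 0.5, 0.1])
high_risk = steer_logits(logits, refusal_ids=[3], risk=1.0)
benign = steer_logits(logits, refusal_ids=[3], risk=0.0)
assert np.allclose(benign, logits)           # benign prompts unaffected
assert high_risk[3] > logits[3]              # refusal boosted under high risk
```

Because the modulation acts only on the decoding distribution, no parameters change, no auxiliary model runs, and a single forward pass suffices, matching the single-pass constraint stated in the abstract.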

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
AdvBench, HarmBench
Applications
llm safety, jailbreak defense, content moderation