Attack · 2026

CRaFT: Circuit-Guided Refusal Feature Selection via Cross-Layer Transcoders

Su-Hyeon Kim, Hyundong Jin, Yejin Lee, Yo-Sub Han



Published on arXiv: 2604.01604

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Improves attack success rate from 6.7% to 48.2% on Gemma-3-1B-it by selecting refusal features based on circuit influence rather than activation magnitude

CRaFT

Novel technique introduced


As safety concerns around large language models (LLMs) grow, understanding the internal mechanisms underlying refusal behavior has become increasingly important. Recent work has studied this behavior by identifying internal features associated with refusal and manipulating them to induce compliance with harmful requests. However, existing refusal feature selection methods rely on how strongly features activate on harmful prompts, which tends to capture superficial signals rather than the causal factors underlying the refusal decision. We propose CRaFT, a circuit-guided refusal feature selection framework that ranks features by their influence on the model's refusal-compliance decision using prompts near the refusal boundary. On Gemma-3-1B-it, CRaFT improves attack success rate (ASR) from 6.7% to 48.2% and outperforms baseline methods across multiple jailbreak benchmarks. These results suggest that circuit influence is a more reliable criterion than activation magnitude for identifying features that causally mediate refusal behavior.


Key Contributions

  • Circuit-guided refusal feature selection (CRaFT) that ranks features by causal influence on refusal decisions rather than activation magnitude
  • Boundary-critical sampling technique that identifies prompts near the refusal-compliance boundary for controlled feature analysis
  • Demonstrates 48.2% ASR on Gemma-3-1B-it, improving over the 6.7% baseline by targeting causally relevant internal features
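The contrast between the two selection criteria can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses a synthetic linear refusal readout, and it stands in for circuit influence with a crude ablation-effect score computed on a set of boundary prompts. All names (`refusal_score`, the feature matrix, the weight vector) are hypothetical.

```python
# Hedged sketch: activation-magnitude ranking vs. a circuit-influence-style
# ranking of refusal features. Everything here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

n_prompts, n_features = 16, 32
# Hypothetical transcoder feature activations on prompts near the
# refusal-compliance boundary (boundary-critical sampling in the paper).
acts = rng.random((n_prompts, n_features))

# Toy linear readout: refusal score = acts @ w. Only a few features
# causally drive the decision; the rest activate but are inert.
w = np.zeros(n_features)
causal = [3, 7, 11]
w[causal] = [2.0, 1.5, 1.0]

def refusal_score(a):
    return a @ w

# Baseline criterion: rank features by mean activation magnitude.
# This can surface strongly-activating but causally irrelevant features.
magnitude_rank = np.argsort(-acts.mean(axis=0))

# Circuit-style criterion (crude proxy): rank by the causal effect of
# ablating each feature on the mean refusal score.
base = refusal_score(acts).mean()
influence = np.empty(n_features)
for j in range(n_features):
    ablated = acts.copy()
    ablated[:, j] = 0.0
    influence[j] = base - refusal_score(ablated).mean()
influence_rank = np.argsort(-influence)

# The influence ranking recovers exactly the causal features first,
# while the magnitude ranking need not.
print(sorted(influence_rank[:3].tolist()))  # → [3, 7, 11]
```

In this toy setting the ablation-based ranking isolates the three features that actually mediate refusal, which mirrors the paper's claim that circuit influence is a more reliable selection criterion than activation magnitude.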

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time
Datasets
jailbreak benchmarks (specific names not provided in excerpt)
Applications
llm safety, content moderation, aligned chatbots