RepIt: Steering Language Models with Concept-Specific Refusal Vectors

Vincent Siu 1, Nathan W. Henry 2, Nicholas Crispino 1, Yang Liu 1, Dawn Song 2, Chenguang Wang 1

Published on arXiv: 2509.13281

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RepIt selectively suppresses LLM refusal on WMD-related topics using as few as ~12 examples, with the corrective signal localized to 100-200 neurons, while the steered models still score as safe on standard safety benchmarks across five frontier LLMs.

RepIt

Novel technique introduced


While activation steering in large language models (LLMs) is a growing area of research, methods can often incur broader effects than desired. This motivates isolation of purer concept vectors to enable targeted interventions and understand LLM behavior at a more granular level. We present RepIt, a simple and data-efficient framework for isolating concept-specific representations. Across five frontier LLMs, RepIt enables precise interventions: it selectively suppresses refusal on targeted concepts while preserving refusal elsewhere, producing models that answer WMD-related questions while still scoring as safe on standard benchmarks. We further show that the corrective signal localizes to just 100-200 neurons and that robust target representations can be extracted from as few as a dozen examples on a single A6000. This efficiency raises a dual concern: manipulations can be performed with modest compute and data to extend to underrepresented data-scarce topics while evading existing benchmarks. By disentangling refusal vectors with RepIt, this work demonstrates that targeted interventions can counteract overgeneralization, laying the foundation for more granular control of model behavior.


Key Contributions

  • RepIt framework for extracting concept-specific refusal vectors from LLM activations using as few as ~12 examples on modest compute (single A6000)
  • Demonstrates selective suppression of refusal on targeted topics (WMD) across five frontier LLMs while preserving safety scores on standard benchmarks, showing benchmark evasion
  • Localizes the corrective signal to 100-200 neurons, revealing the granular and data-scarce nature of targeted safety bypass
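The core idea described above — extracting a concept-specific refusal direction and using it for targeted intervention — can be illustrated with a minimal sketch. This is not the authors' implementation: the activations here are synthetic placeholders (in practice they would come from a model's residual stream), and the difference-of-means extraction plus orthogonalization against a general refusal direction is an assumed simplification of how such a concept vector might be isolated and ablated.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden size; real models use thousands of dimensions

# Synthetic stand-ins for one layer's activations on three prompt sets.
# In practice these come from forward passes over real prompts.
acts_target = rng.normal(size=(12, d)) + 2.0   # ~12 target-concept harmful prompts
acts_general = rng.normal(size=(64, d)) + 1.0  # broad harmful prompts
acts_harmless = rng.normal(size=(64, d))       # harmless prompts

# Difference-of-means "refusal" directions.
v_general = acts_general.mean(axis=0) - acts_harmless.mean(axis=0)
v_target = acts_target.mean(axis=0) - acts_harmless.mean(axis=0)

# Isolate the concept-specific component: remove the shared
# general-refusal component from the target direction.
u = v_general / np.linalg.norm(v_general)
v_concept = v_target - (v_target @ u) * u  # orthogonal to u by construction

def ablate(h, v):
    """Remove the component of activation h along direction v."""
    w = v / np.linalg.norm(v)
    return h - (h @ w) * w

# Ablating the concept direction leaves no component along it,
# while the general-refusal direction is untouched elsewhere.
h_ablated = ablate(acts_target[0], v_concept)
```

Ablating only `v_concept` rather than `v_general` is what makes the intervention targeted: refusal tied to the isolated concept is suppressed, while the shared refusal direction, and hence refusal on other topics, is preserved.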

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Applications
llm safety mechanisms, refusal behavior, safety benchmarks