
Toward Understanding the Transferability of Adversarial Suffixes in Large Language Models

Sarah Ball 1, Niki Hasrati 2, Alexander Robey 2, Avi Schwarzschild 2, Frauke Kreuter 1,3, Zico Kolter 2, Andrej Risteski 2

0 citations · 38 references · arXiv


Published on arXiv · 2510.22014

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Refusal-direction geometry in LLM activation space — not prompt semantic similarity — is the primary driver of adversarial suffix transfer success across both prompts and models

GCG (Greedy Coordinate Gradient)

Attack technique analyzed (introduced by Zou et al., 2023)


Discrete optimization-based jailbreaking attacks on large language models aim to generate short, nonsensical suffixes that, when appended to input prompts, elicit disallowed content. Notably, these suffixes are often transferable -- succeeding on prompts and models for which they were never optimized. Yet although transferability is surprising and empirically well established, the field lacks a rigorous analysis of when and why transfer occurs. To fill this gap, we identify three statistical properties that strongly correlate with transfer success across numerous experimental settings: (1) how strongly a prompt without a suffix activates a model's internal refusal direction, (2) how strongly a suffix pushes activations away from this direction, and (3) how large these shifts are in directions orthogonal to refusal. By contrast, we find that prompt semantic similarity only weakly correlates with transfer success. These findings yield a more fine-grained understanding of transferability, which we use in interventional experiments to show how our statistical analysis translates into practical improvements in attack success.
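The three statistics above can be sketched as simple projections in activation space. This is an illustrative reconstruction, not the paper's code: the function name, the layer at which activations are taken, and the way the refusal direction is obtained (typically a difference-of-means over harmful vs. harmless prompts) are all assumptions.

```python
import numpy as np

def refusal_metrics(h_prompt, h_prompt_suffix, r):
    """Sketch of the three transfer-correlated statistics (hypothetical names).

    h_prompt:        activation vector of the bare prompt at some layer, shape (d,)
    h_prompt_suffix: activation vector with the adversarial suffix appended, shape (d,)
    r:               refusal direction in the same activation space, shape (d,)
    """
    r_hat = r / np.linalg.norm(r)                  # unit-norm refusal direction
    # (1) how much the bare prompt activates the refusal direction
    bare_refusal = float(h_prompt @ r_hat)
    # (2) suffix-induced shift along the refusal direction (negative = pushed away)
    delta = h_prompt_suffix - h_prompt
    refusal_push = float(delta @ r_hat)
    # (3) magnitude of the shift orthogonal to the refusal direction
    orth_shift = float(np.linalg.norm(delta - (delta @ r_hat) * r_hat))
    return bare_refusal, refusal_push, orth_shift
```

Per the paper's correlational findings, suffixes that transfer well tend to produce a strong negative push along the refusal direction while keeping orthogonal displacement limited.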


Key Contributions

  • Identifies three statistical properties in LLM activation space that strongly predict adversarial suffix transfer success: refusal direction activation of the bare prompt, suffix-induced push away from the refusal direction, and magnitude of orthogonal activation shifts
  • Shows that prompt semantic similarity is only a weak predictor of transfer success, challenging a common intuition in the field
  • Validates the statistical analysis through interventional experiments that translate mechanistic insights into measurable improvements in jailbreak attack success

🛡️ Threat Analysis

Input Manipulation Attack

Adversarial suffixes are token-level perturbations found by discrete optimization — the paper directly analyzes the transferability mechanics of these input manipulation attacks on LLMs, including interventional experiments that improve attack success rates.
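The coordinate-wise search structure behind such attacks can be illustrated with a toy loop. This is a deliberately simplified sketch: real GCG ranks candidate token swaps using gradients through one-hot token embeddings and evaluates a language-model loss, whereas here candidates are sampled at random and `loss_fn` is an arbitrary placeholder objective.

```python
import numpy as np

def coordinate_search_sketch(loss_fn, suffix, vocab_size,
                             n_steps=100, n_candidates=16, rng=None):
    """Toy greedy coordinate search mirroring the shape of GCG (illustrative only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    suffix = list(suffix)
    best_loss = loss_fn(suffix)
    for _ in range(n_steps):
        pos = int(rng.integers(len(suffix)))            # coordinate (position) to mutate
        cands = rng.integers(vocab_size, size=n_candidates)
        for tok in cands:                               # evaluate candidate token swaps
            trial = suffix.copy()
            trial[pos] = int(tok)
            trial_loss = loss_fn(trial)
            if trial_loss < best_loss:                  # keep greedy improvements only
                best_loss, suffix = trial_loss, trial
    return suffix, best_loss
```

In an actual attack, `loss_fn` would be the model's negative log-likelihood of an affirmative target response given the prompt plus suffix; the greedy accept-if-better structure is what the name "greedy coordinate gradient" refers to.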


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time, targeted
Applications
llm safety alignment, jailbreaking