
Beyond Suffixes: Token Position in GCG Adversarial Attacks on Large Language Models

Hicham Eddoubi 1,2, Umar Faruk Abdullahi 3, Fadi Hassan 3

0 citations · 23 references · arXiv (Cornell University)


Published on arXiv · 2602.03265

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Varying adversarial token placement (prefix vs. suffix) during optimization and evaluation yields non-trivial differences in jailbreak attack success rate (ASR) across five 7B-class LLMs, showing that current suffix-only evaluations underestimate real-world risk.

GCG-prefix

Novel technique introduced


Large Language Models (LLMs) have seen widespread adoption across multiple domains, creating an urgent need for robust safety alignment mechanisms. However, robustness remains challenging due to jailbreak attacks that bypass alignment via adversarial prompts. In this work, we focus on the prevalent Greedy Coordinate Gradient (GCG) attack and identify a previously underexplored attack axis in jailbreak attacks typically framed as suffix-based: the placement of adversarial tokens within the prompt. Using GCG as a case study, we show that both optimizing attacks to generate prefixes instead of suffixes and varying adversarial token position during evaluation substantially influence attack success rates. Our findings highlight a critical blind spot in current safety evaluations and underline the need to account for the position of adversarial tokens in the adversarial robustness evaluation of LLMs.


Key Contributions

  • Identifies adversarial token position (prefix vs. suffix) as a previously underexplored attack axis in GCG-style jailbreak attacks
  • Introduces GCG-prefix, a variant of GCG that prepends and optimizes adversarial tokens rather than appending them, achieving non-trivial ASR variation in both white-box and black-box cross-model settings
  • Shows that safety evaluations that fix adversarial tokens to a single position overestimate LLM robustness, highlighting a critical blind spot
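The prefix-vs.-suffix distinction above comes down to where the optimized token string is spliced into the prompt. A minimal sketch (the function name and string-joining scheme are illustrative assumptions, not the paper's implementation, which operates on token IDs):

```python
def build_adversarial_prompt(user_request: str, adv_tokens: list[str],
                             position: str = "suffix") -> str:
    """Splice optimized adversarial tokens into the prompt.

    position="suffix" is standard GCG (tokens appended after the request);
    position="prefix" is the GCG-prefix variant (tokens prepended).
    """
    adv = " ".join(adv_tokens)
    if position == "suffix":
        return f"{user_request} {adv}"
    if position == "prefix":
        return f"{adv} {user_request}"
    raise ValueError(f"unknown position: {position!r}")
```

The paper's point is that the same optimized tokens can yield different ASR depending on which branch is used at optimization time versus evaluation time.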

🛡️ Threat Analysis

Input Manipulation Attack

GCG is a gradient-based attack that optimizes adversarial tokens against an LLM at inference time. The paper extends it along a new axis: the placement of those tokens within the prompt (prefix vs. suffix), rather than restricting optimization and evaluation to the standard suffix position.
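The core of GCG is a greedy coordinate search: at each step, candidate single-token swaps are scored and the best one is kept. A model-free toy sketch (exhaustively scanning a tiny vocabulary stands in for the real gradient-guided top-k candidate selection, and `score_fn` stands in for the target-completion loss; both are simplifying assumptions):

```python
def greedy_coordinate_step(tokens, vocab, score_fn):
    """One simplified GCG-style step.

    For each token position, try every vocabulary token (a toy stand-in
    for the gradient-derived candidate set) and keep the single swap
    that most improves the attack score.
    """
    best = list(tokens)
    best_score = score_fn(best)
    for i in range(len(tokens)):
        for tok in vocab:
            cand = list(tokens)
            cand[i] = tok  # candidate with one coordinate swapped
            s = score_fn(cand)
            if s > best_score:
                best, best_score = cand, s
    return best, best_score
```

Note that nothing in this loop depends on whether the optimized tokens sit before or after the user request; the position choice enters only through how the scored prompt is assembled, which is what makes the prefix variant a drop-in change.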


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time, targeted
Datasets
AdvBench
Applications
llm safety alignment, chatbot jailbreaking