
Super Suffixes: Bypassing Text Generation Alignment and Guard Models Simultaneously

Andrew Adiletta, Kathryn Adiletta, Kemal Derya, Berk Sunar


Published on arXiv: 2512.11783

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

Joint optimization of Super Suffixes successfully bypasses Llama Prompt Guard 2 on five different LLMs, while DeltaGuard detects these attacks at nearly 100% accuracy using residual stream concept-direction fingerprinting.

Super Suffixes

Novel technique introduced

Abstract

The rapid deployment of Large Language Models (LLMs) has created an urgent need for enhanced security and privacy measures in Machine Learning (ML). LLMs are increasingly being used to process untrusted text inputs and even generate executable code, often while having access to sensitive system controls. To address these security concerns, several companies have introduced guard models, which are smaller, specialized models designed to protect text generation models from adversarial or malicious inputs. In this work, we advance the study of adversarial inputs by introducing Super Suffixes, suffixes capable of overriding multiple alignment objectives across various models with different tokenization schemes. We demonstrate their effectiveness, along with our joint optimization technique, by successfully bypassing the protection mechanisms of Llama Prompt Guard 2 on five different text generation models for malicious text and code generation. To the best of our knowledge, this is the first work to reveal that Llama Prompt Guard 2 can be compromised through joint optimization. Additionally, by analyzing the changing similarity of a model's internal state to specific concept directions during token sequence processing, we propose an effective and lightweight method to detect Super Suffix attacks. We show that the cosine similarity between the residual stream and certain concept directions serves as a distinctive fingerprint of model intent. Our proposed countermeasure, DeltaGuard, significantly improves the detection of malicious prompts generated through Super Suffixes. It increases the non-benign classification rate to nearly 100%, making DeltaGuard a valuable addition to the guard model stack and enhancing robustness against adversarial prompt attacks.


Key Contributions

  • Super Suffixes: gradient-optimized adversarial token suffixes that simultaneously bypass alignment objectives across multiple LLMs with different tokenization schemes
  • Joint optimization technique that is the first demonstrated method to compromise Llama Prompt Guard 2 across five text generation models
  • DeltaGuard: a lightweight detection countermeasure using cosine similarity between the residual stream and concept directions, achieving ~100% non-benign classification rate against Super Suffix attacks
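The DeltaGuard idea can be sketched in a few lines: track the cosine similarity between the residual-stream state at each token position and a fixed concept direction, and treat a large mid-sequence swing as the fingerprint of a suffix taking effect. This is a minimal illustrative sketch only; the vectors, extraction layer, and threshold below are stand-ins, not the paper's actual values.

```python
# Hypothetical DeltaGuard-style sketch: per-token cosine similarity to a
# "harmfulness" concept direction, plus successive deltas. All numbers and
# names here are illustrative, not taken from the paper.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def delta_fingerprint(residual_states, concept_dir):
    """Cosine similarity at each token position and the token-to-token deltas."""
    sims = [cosine(h, concept_dir) for h in residual_states]
    deltas = [b - a for a, b in zip(sims, sims[1:])]
    return sims, deltas

def flag_suspicious(deltas, threshold=0.5):
    # A large swing toward (or away from) the concept direction mid-sequence
    # is treated as the fingerprint of an adversarial suffix taking effect.
    return any(abs(d) > threshold for d in deltas)
```

For example, a sequence of hidden states that stays orthogonal to the concept direction produces near-zero deltas, while one that suddenly rotates toward it produces a large delta and is flagged.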

🛡️ Threat Analysis

Input Manipulation Attack

Super Suffixes are gradient-optimized token-level adversarial inputs appended at inference time to cause LLMs to produce malicious outputs — a direct adversarial suffix optimization attack (analogous to GCG), qualifying as input manipulation at the token level.
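The joint-optimization structure can be illustrated with a toy greedy coordinate search in the spirit of GCG-style attacks: a single suffix is optimized against the sum of two losses, one for the target model and one for the guard model, so the same token sequence must defeat both at once. The scoring functions below are arbitrary surrogates for demonstration, not real model losses, and the search is deliberately tiny.

```python
# Toy sketch of joint greedy suffix optimization (GCG-style coordinate search
# over a joint objective). The two loss functions are illustrative stand-ins
# for a target-LLM loss and a guard-model loss.
import random

VOCAB = ["a", "b", "c", "d"]

def loss_target(suffix):
    # Surrogate for the target model's loss: lower means "more jailbroken".
    return sum(ord(ch) for ch in suffix) % 7

def loss_guard(suffix):
    # Surrogate for the guard model's loss: lower means "classified benign".
    return suffix.count("a")

def joint_loss(suffix):
    # Joint objective: one suffix must fool both models simultaneously.
    return loss_target(suffix) + loss_guard(suffix)

def greedy_coordinate_search(length=4, iters=20, seed=0):
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(length)]
    for _ in range(iters):
        pos = rng.randrange(length)
        # Try every vocabulary token at this position; keep the best.
        best = min(
            VOCAB,
            key=lambda tok: joint_loss("".join(suffix[:pos] + [tok] + suffix[pos + 1:])),
        )
        suffix[pos] = best
    return "".join(suffix)
```

Because the current token is always among the candidates tried at each position, every step keeps the joint loss from increasing; the real attack replaces these surrogates with gradient-guided candidate selection over the models' actual token embeddings.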


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted, digital
Datasets
AdvBench, custom malicious code dataset
Applications
text generation, code generation, llm safety guardrails