
Universal Adversarial Suffixes for Language Models Using Reinforcement Learning with Calibrated Reward

Sampriti Soor 1, Suklav Ghosh 2, Arijit Sur 2

0 citations · 18 references · arXiv


Published on arXiv · 2512.08131

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

RL-trained adversarial suffixes achieve stronger and more transferable accuracy degradation across tasks and models than gradient-based and rule-based adversarial trigger baselines.

PPO-based adversarial suffix optimization with calibrated cross-entropy reward

Novel technique introduced


Language models are vulnerable to short adversarial suffixes that can reliably alter predictions. Prior work typically finds such suffixes with gradient search or rule-based methods, but these are brittle and often tied to a single task or model. This paper instead frames the suffix as a policy in a reinforcement learning problem and trains it with Proximal Policy Optimization (PPO) against a frozen model that serves as a reward oracle. Rewards are shaped using calibrated cross-entropy, which removes label bias and aggregates across label surface forms to improve transferability. The method is evaluated on five diverse NLP benchmarks, covering sentiment, natural language inference, paraphrase, and commonsense reasoning, using three distinct language models: Qwen2-1.5B Instruct, TinyLlama-1.1B Chat, and Phi-1.5. Results show that RL-trained suffixes consistently degrade accuracy and transfer more effectively across tasks and models than prior adversarial triggers of the same kind.


Key Contributions

  • Frames adversarial suffix generation as an RL problem using PPO with a frozen LM as a black-box reward oracle, replacing brittle gradient-based search
  • Designs a calibrated cross-entropy reward that removes label-surface bias and aggregates across label surface forms to improve cross-task and cross-model transferability
  • Demonstrates that RL-trained suffixes consistently degrade accuracy and transfer more effectively than prior adversarial trigger methods across five diverse NLP tasks and three LMs
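The calibrated cross-entropy reward can be sketched as follows. This is a hypothetical reconstruction, not the paper's code: the function names (`calibrated_ce_reward`, `logprob_fn`), the use of a content-free `"N/A"` input for calibration, and the specific surface-form aggregation are all assumptions; the paper only states that the reward removes label-surface bias and aggregates over label surface forms.

```python
import math

def calibrated_ce_reward(logprob_fn, prompt, suffix, true_label, surface_forms):
    """Hedged sketch of a calibrated cross-entropy attack reward.

    logprob_fn(context, continuation) -> log P(continuation | context)
    from the frozen victim model (queried as a black-box oracle).
    surface_forms maps each label to its verbalizer strings,
    e.g. {"pos": ["positive", "good"], "neg": ["negative", "bad"]}.
    """
    def label_scores(context):
        # Aggregate probability mass over each label's surface forms.
        return {
            label: math.log(sum(math.exp(logprob_fn(context, s)) for s in forms))
            for label, forms in surface_forms.items()
        }

    attacked = label_scores(prompt + " " + suffix)
    # Calibration (assumed here): subtract scores on a content-free input
    # so that label-surface priors cancel out.
    baseline = label_scores("N/A " + suffix)
    calibrated = {lbl: attacked[lbl] - baseline[lbl] for lbl in attacked}

    # Softmax over calibrated scores, then cross-entropy w.r.t. the true label.
    z = max(calibrated.values())
    denom = sum(math.exp(v - z) for v in calibrated.values())
    p_true = math.exp(calibrated[true_label] - z) / denom
    # Attacker's reward grows as the true label's probability drops.
    return -math.log(max(p_true, 1e-12))
```

Under this sketch, a suffix that shifts the model away from the true label yields a larger reward than a benign one, while any constant preference for a particular label string is cancelled by the content-free baseline.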

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a novel token-level adversarial suffix optimization method: suffix sequences are trained via PPO to cause misclassification across NLP tasks at inference time. This is a classic input manipulation attack using discrete token perturbations rather than natural-language prompt manipulation.
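To make the attack loop concrete, here is a minimal sketch of training a discrete suffix policy against a black-box reward. It deliberately simplifies the paper's PPO setup to plain REINFORCE with a moving-average baseline, and the tiny `VOCAB`, the tabular per-position logits, and `train_suffix` are illustrative inventions; a real attack would use the full tokenizer vocabulary and the calibrated reward described above.

```python
import math
import random

# Toy vocabulary standing in for the victim tokenizer's token set.
VOCAB = ["the", "xx", "zoq", "!!", "ok"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

def train_suffix(reward_fn, suffix_len=3, steps=400, lr=0.3, seed=0):
    """REINFORCE sketch of suffix-policy training (simplified from PPO).

    reward_fn(list_of_tokens) -> float, e.g. a calibrated cross-entropy
    score from the frozen victim model; only this scalar is observed,
    so the attack stays black-box.
    """
    rng = random.Random(seed)
    # The "policy": independent logits over VOCAB at each suffix position.
    logits = [[0.0] * len(VOCAB) for _ in range(suffix_len)]
    baseline = 0.0
    for _ in range(steps):
        # Sample one candidate suffix from the current policy.
        idxs = []
        for pos in range(suffix_len):
            p = softmax(logits[pos])
            idxs.append(rng.choices(range(len(VOCAB)), weights=p)[0])
        r = reward_fn([VOCAB[i] for i in idxs])
        baseline = 0.9 * baseline + 0.1 * r   # variance-reduction baseline
        adv = r - baseline
        for pos, i in enumerate(idxs):
            p = softmax(logits[pos])
            for j in range(len(VOCAB)):
                # grad of log pi wrt logit j is 1[j == i] - p[j]
                logits[pos][j] += lr * adv * ((1.0 if j == i else 0.0) - p[j])
    # Greedy decode of the trained suffix policy.
    return [VOCAB[max(range(len(VOCAB)), key=row.__getitem__)] for row in logits]
```

Because the victim model appears only inside `reward_fn`, the same loop applies to any frozen model, which is what makes the inference-time, black-box threat tags below apt.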


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time · untargeted · digital
Datasets
SST-2 · SNLI · QQP · HellaSwag · WinoGrande
Applications
text classification · natural language inference · paraphrase detection · commonsense reasoning · sentiment analysis