attack 2025

Universal and Transferable Adversarial Attack on Large Language Models Using Exponentiated Gradient Descent

Sajib Biswas, Mao Nishino, Samuel Jacob Chacko, Xiuwen Liu

0 citations

Published on arXiv: 2508.14853

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves higher jailbreak success rates and faster convergence than three state-of-the-art baselines across five open-source LLMs and four adversarial behavior datasets, with demonstrated transferability to different models

Novel Technique Introduced

Exponentiated Gradient Descent (EGD) adversarial suffix attack


As large language models (LLMs) are increasingly deployed in critical applications, ensuring their robustness and safety alignment remains a major challenge. Despite the overall success of alignment techniques such as reinforcement learning from human feedback (RLHF) on typical prompts, LLMs remain vulnerable to jailbreak attacks enabled by crafted adversarial triggers appended to user prompts. Most existing jailbreak methods rely either on inefficient searches over discrete token spaces or on direct optimization of continuous embeddings. While continuous embeddings can be given directly as input to selected open-source models, doing so is not feasible for proprietary models. On the other hand, projecting these embeddings back into valid discrete tokens introduces additional complexity and often reduces attack effectiveness. We propose an intrinsic optimization method that directly optimizes relaxed one-hot encodings of the adversarial suffix tokens using exponentiated gradient descent coupled with Bregman projection, ensuring that the optimized one-hot encoding of each token always remains within the probability simplex. We provide a theoretical proof of convergence for the proposed method and implement an efficient algorithm that effectively jailbreaks several widely used LLMs. Our method achieves higher success rates and faster convergence than three state-of-the-art baselines, evaluated on five open-source LLMs and four adversarial behavior datasets curated for evaluating jailbreak methods. Beyond attacks on individual prompts, we also generate universal adversarial suffixes effective across multiple prompts and demonstrate the transferability of optimized suffixes to different LLMs.
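
The update the abstract describes has a compact closed form. Below is a sketch of the canonical exponentiated gradient step with entropic (KL) Bregman projection, written for one relaxed one-hot row x on the vocabulary simplex; the paper's exact step-size schedule and regularization may differ from this textbook form:

```latex
% One EGD step on a relaxed one-hot vector x^{(t)} (a row of the suffix
% encoding). \eta is the step size and \mathcal{L} the jailbreak loss
% (e.g. NLL of the target response). The denominator renormalizes, which
% under the KL divergence is exactly the Bregman projection onto the simplex.
x_i^{(t+1)} \;=\;
  \frac{x_i^{(t)} \, \exp\!\big(-\eta \, \partial_i \mathcal{L}(x^{(t)})\big)}
       {\sum_{j} x_j^{(t)} \, \exp\!\big(-\eta \, \partial_j \mathcal{L}(x^{(t)})\big)}
```

Because the update is multiplicative and renormalized, every iterate stays on the probability simplex by construction, which is the "intrinsic" property the abstract refers to: no separate projection from continuous embeddings back to discrete tokens is needed during optimization.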


Key Contributions

  • Intrinsic optimization method that directly optimizes relaxed one-hot token encodings within the probability simplex using exponentiated gradient descent and Bregman projection, avoiding the discrete projection problem (see the sketch after this list)
  • Theoretical convergence proof for the proposed exponentiated gradient descent approach applied to adversarial suffix optimization
  • Demonstrates universal adversarial suffixes effective across multiple prompts and transferability across different LLMs, outperforming three state-of-the-art baselines on five open-source models
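
As a concrete illustration of the first contribution, here is a minimal PyTorch sketch of one optimization step. All names here (suffix_loss, egd_step, the soft-embedding loss construction) are hypothetical stand-ins consistent with the abstract's description, not the authors' published code; a GCG-style negative log-likelihood of a target affirmative response is assumed as the loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch; X has shape (suffix_len, vocab_size), each row a
# point on the probability simplex (a relaxed one-hot token encoding).

def suffix_loss(X, model, embed_matrix, prompt_embeds, target_ids):
    """NLL of the target response when the relaxed one-hot rows are mapped
    to 'soft' embeddings (assumed loss; the paper may define it differently).
    embed_matrix would be e.g. model.get_input_embeddings().weight."""
    suffix_embeds = X @ embed_matrix            # (suffix_len, d) soft embeddings
    target_embeds = embed_matrix[target_ids]    # teacher-forced target tokens
    inputs = torch.cat([prompt_embeds, suffix_embeds, target_embeds], dim=0)
    logits = model(inputs_embeds=inputs.unsqueeze(0)).logits[0]
    T = target_ids.numel()
    # logits at position i predict token i + 1, so score the target span.
    return F.cross_entropy(logits[-T - 1:-1], target_ids)

def egd_step(X, loss_fn, lr=0.1):
    """One exponentiated gradient descent step. The multiplicative update
    plus row-wise renormalization is the KL/Bregman projection back onto
    the simplex, so iterates never leave it."""
    X = X.clone().requires_grad_(True)
    loss_fn(X).backward()
    with torch.no_grad():
        X_new = X * torch.exp(-lr * X.grad)
        return X_new / X_new.sum(dim=-1, keepdim=True)
```

After convergence, each row would be discretized (e.g. by argmax) to recover actual suffix tokens; for a universal suffix, the same step applies to a loss averaged over a batch of prompts.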

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a gradient-based adversarial attack (exponentiated gradient descent with Bregman projection) that optimizes token-level adversarial suffixes — this is adversarial suffix optimization, not natural language prompt manipulation, placing it squarely in ML01 alongside LLM01.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted, digital
Datasets
AdvBench
Applications
large language model safety alignment, jailbreak attack evaluation