Published on arXiv

2510.02422

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DTA achieves 87%+ average attack success rate on safety-aligned LLMs within 200 iterations (white-box), exceeding SOTA baselines by 15%+ and reducing optimization time by 2–26x; 77.5% ASR in black-box transfer setting.

DTA (Dynamic Target Attack)

Novel technique introduced


Existing gradient-based jailbreak attacks typically optimize an adversarial suffix to induce a fixed affirmative response, e.g., "Sure, here is...". However, this fixed target usually resides in an extremely low-density region of a safety-aligned LLM's output distribution. Due to the substantial discrepancy between the fixed target and the output distribution, existing attacks require numerous iterations to optimize the adversarial prompt, which might still fail to induce the low-probability target response. To address this limitation, we propose Dynamic Target Attack (DTA), which leverages the target LLM's own responses as adaptive targets. In each optimization round, DTA samples multiple candidates from the output distribution conditioned on the current prompt, and selects the most harmful one as a temporary target for prompt optimization. Extensive experiments demonstrate that, under the white-box setting, DTA achieves over 87% average attack success rate (ASR) within 200 optimization iterations on recent safety-aligned LLMs, exceeding the state-of-the-art baselines by over 15% and reducing wall-clock time by 2–26x. Under the black-box setting, DTA employs a white-box LLM as a surrogate model for gradient-based optimization, achieving an average ASR of 77.5% against black-box models, exceeding prior transfer-based attacks by over 12%.
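The loop described in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: `sample_responses`, `harmfulness`, and `optimize_suffix` are hypothetical stand-ins for sampling from the target LLM, a harmfulness judge, and one round of gradient-based suffix optimization, respectively.

```python
import random

def sample_responses(prompt, n=4):
    # Stand-in: sample n candidate responses from the target LLM's
    # output distribution conditioned on the current prompt.
    return [f"candidate-{random.random():.3f} for: {prompt}" for _ in range(n)]

def harmfulness(response):
    # Stand-in: DTA would score candidates with a harmfulness judge.
    return random.random()

def optimize_suffix(prompt, target):
    # Stand-in: one round of GCG-style gradient optimization of the
    # adversarial suffix toward the temporary target response.
    return prompt + " <tok>"

def dta(base_prompt, rounds=5):
    prompt = base_prompt
    for _ in range(rounds):
        # 1. Sample candidates from the model's own output distribution.
        candidates = sample_responses(prompt)
        # 2. Select the most harmful candidate as the temporary target.
        target = max(candidates, key=harmfulness)
        # 3. Optimize the adversarial prompt toward that target.
        prompt = optimize_suffix(prompt, target)
    return prompt
```

Because the temporary target is drawn from the model's own output distribution, each optimization round pursues a high-density target rather than a fixed low-probability affirmative prefix.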


Key Contributions

  • Dynamic target selection: in each optimization round, samples multiple candidate responses from the LLM's own output distribution and selects the most harmful as the temporary optimization target, avoiding the low-density fixed-target problem.
  • Achieves 87%+ average ASR within 200 iterations under white-box setting, outperforming GCG-style baselines by 15%+ while reducing wall-clock time by 2–26x.
  • Demonstrates strong black-box transferability (77.5% average ASR) by using a white-box surrogate LLM for gradient optimization, exceeding prior transfer-based attacks by over 12%.

🛡️ Threat Analysis

Input Manipulation Attack

DTA optimizes adversarial suffixes via gradient-based token-level perturbations (GCG-style) to cause safety-aligned LLMs to generate harmful outputs — classic adversarial suffix optimization at inference time.
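To make the GCG-style mechanism concrete, here is a toy NumPy sketch of one greedy coordinate gradient step. Everything here is illustrative: the real attack differentiates the LLM's cross-entropy loss on the target response through the model, whereas this toy uses a random embedding table and a squared-distance loss in embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, L = 50, 8, 5                    # toy vocab size, embed dim, suffix length
E = rng.normal(size=(V, d))           # toy token embedding table
target = rng.normal(size=d)           # toy target direction in embedding space
suffix = rng.integers(0, V, size=L)   # current adversarial suffix (token ids)

def loss(tokens):
    # Toy surrogate loss: mean squared distance of suffix embeddings
    # to the target (real GCG uses the LLM's cross-entropy loss).
    return float(((E[tokens] - target) ** 2).mean())

def gcg_step(tokens, k=8):
    # Gradient of the toy loss w.r.t. the one-hot token selection is
    # linear in E, so grad[i, v] scores substituting token v at slot i.
    grad = (E[tokens] - target) @ E.T          # shape (L, V)
    best, best_loss = tokens.copy(), loss(tokens)
    for i in range(len(tokens)):
        # Evaluate the k most promising (most negative gradient)
        # substitutions at each position; keep any that improve the loss.
        for v in np.argsort(grad[i])[:k]:
            cand = tokens.copy()
            cand[i] = v
            l = loss(cand)
            if l < best_loss:
                best, best_loss = cand, l
    return best, best_loss
```

Iterating `gcg_step` greedily perturbs the suffix token-by-token toward the target; DTA's contribution is to re-select that target each round rather than fixing it.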


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time, targeted
Datasets
AdvBench
Applications
safety-aligned llms, chatbots