
RECAP: A Resource-Efficient Method for Adversarial Prompting in Large Language Models

Rishit Chugh

0 citations · 10 references · arXiv


Published on arXiv · 2601.15331

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Retrieval of semantically similar pre-trained adversarial suffixes achieves comparable or superior attack success rates to GCG without per-prompt retraining, dramatically reducing computational cost.

RECAP

Novel technique introduced


The deployment of large language models (LLMs) has raised security concerns due to their susceptibility to producing harmful or policy-violating outputs when exposed to adversarial prompts. While alignment and guardrails mitigate common misuse, they remain vulnerable to automated jailbreaking methods such as GCG, PEZ, and GBDA, which generate adversarial suffixes via training and gradient-based search. Although effective, these methods, particularly GCG, are computationally expensive, limiting their practicality for organisations with constrained resources. This paper introduces a resource-efficient adversarial prompting approach that eliminates the need for retraining by matching new prompts to a database of pre-trained adversarial prompts. A dataset of 1,000 prompts was classified into seven harm-related categories, and GCG, PEZ, and GBDA were evaluated on a Llama 3 8B model to identify the most effective attack method per category. Results reveal a correlation between prompt type and algorithm effectiveness. By retrieving semantically similar successful adversarial prompts, the proposed method achieves competitive attack success rates at significantly reduced computational cost. This work provides a practical framework for scalable red-teaming and security evaluation of aligned LLMs, including in settings where model internals are inaccessible.


Key Contributions

  • RECAP: a retrieval-augmented adversarial prompting framework that matches new prompts to semantically similar pre-trained adversarial suffixes, eliminating per-prompt gradient-based retraining
  • Empirical evaluation of GCG, PEZ, and GBDA across 7 harm categories on Llama 3 8B, revealing that prompt type correlates with which attack algorithm is most effective
  • A practical scalable red-teaming framework that achieves competitive jailbreak success rates at significantly reduced computational cost, including in settings with limited model access
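The retrieval step at the heart of RECAP amounts to a nearest-neighbour lookup over a database of (prompt, adversarial suffix) pairs. The minimal sketch below illustrates that shape only: the `embed` function is a toy hashed bag-of-words stand-in for whatever sentence-embedding model a real system would use, and the database contents are hypothetical placeholders, not suffixes from the paper.

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashed bag-of-words embedding (stand-in for a sentence embedder)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve_suffix(prompt: str, db: list[tuple[str, str]]) -> str:
    """Return the suffix paired with the most cosine-similar stored prompt."""
    q = embed(prompt)
    sims = [float(q @ embed(p)) for p, _ in db]
    return db[int(np.argmax(sims))][1]

# Hypothetical database of (training prompt, successful adversarial suffix) pairs.
db = [
    ("how to pick a lock", "SUFFIX_A"),
    ("write malware code", "SUFFIX_B"),
]
print(retrieve_suffix("steps to pick a door lock", db))
```

Because unit-normalised vectors make the dot product a cosine similarity, a semantically close new prompt reuses a previously optimised suffix with no gradient computation at all, which is where the cost saving comes from.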

🛡️ Threat Analysis

Input Manipulation Attack

The core attack pipeline builds on gradient-based adversarial suffix optimization (GCG, PEZ, GBDA) — token-level perturbations crafted via gradient search that bypass safety alignment at inference time. RECAP extends this by retrieving pre-computed adversarial suffixes rather than regenerating them.
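To make the baseline concrete, the toy below sketches the greedy coordinate search that GCG-style methods perform: rank candidate token substitutions by the loss gradient, try single-token swaps, keep the best. As an assumption purely for self-containedness, the LLM's loss on the target completion is replaced by a fixed linear score of token embeddings; a real attack backpropagates through the model instead.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SUF_LEN, TOPK, STEPS = 50, 8, 5, 20

# Stand-in "model": token embeddings and a fixed loss direction.
E = rng.normal(size=(VOCAB, 16))
w = rng.normal(size=(16,))

def loss(suffix: np.ndarray) -> float:
    """Toy adversarial loss: lower means a more successful suffix."""
    return float((E[suffix] @ w).sum())

suffix = rng.integers(0, VOCAB, size=SUF_LEN)
for _ in range(STEPS):
    g = E @ w                           # d(loss)/d(one-hot token), per vocab entry
    cand_tokens = np.argsort(g)[:TOPK]  # top-k most loss-decreasing substitutions
    best = (loss(suffix), suffix)
    for pos in range(SUF_LEN):          # evaluate each single-token swap
        for tok in cand_tokens:
            trial = suffix.copy()
            trial[pos] = tok
            l = loss(trial)
            if l < best[0]:
                best = (l, trial)
    suffix = best[1]                    # greedily keep the best swap

print(loss(suffix))
```

The nested evaluate-and-keep-best loop is the expensive part that GCG repeats per prompt; RECAP's retrieval replaces exactly this loop for prompts similar to ones already attacked.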


Details

Domains
nlp
Model Types
llm
Threat Tags
white_box · grey_box · inference_time · targeted · digital
Datasets
Custom 1,000-prompt harm dataset (7 categories) · Llama 3 8B
Applications
llm safety evaluation · red-teaming · jailbreaking aligned llms