Attack · 2025

Red-Bandit: Test-Time Adaptation for LLM Red-Teaming via Bandit-Guided LoRA Experts

Christos Ziakas, Nicholas Loo, Nishita Jain, Alessandra Russo

0 citations · 46 references · arXiv


Published on arXiv · 2510.07239

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Red-Bandit achieves state-of-the-art attack success rate (ASR@10) on AdvBench across open-source and proprietary LLMs while producing more human-readable prompts (lower perplexity) than competing methods.

Red-Bandit

Novel technique introduced


Automated red-teaming has emerged as a scalable approach for auditing Large Language Models (LLMs) prior to deployment, yet existing approaches lack mechanisms to efficiently adapt to model-specific vulnerabilities at inference. We introduce Red-Bandit, a red-teaming framework that adapts online to identify and exploit model failure modes under distinct attack styles (e.g., manipulation, slang). Red-Bandit post-trains a set of parameter-efficient LoRA experts, each specialized for a particular attack style, using reinforcement learning that rewards the generation of unsafe prompts via a rule-based safety model. At inference, a multi-armed bandit policy dynamically selects among these attack-style experts based on the target model's response safety, balancing exploration and exploitation. Red-Bandit achieves state-of-the-art results on AdvBench under sufficient exploration (ASR@10), while producing more human-readable prompts (lower perplexity). Moreover, Red-Bandit's bandit policy serves as a diagnostic tool for uncovering model-specific vulnerabilities by indicating which attack styles most effectively elicit unsafe behaviors.
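To make the test-time loop concrete, the sketch below simulates a bandit over attack-style experts. This is a minimal illustration under stated assumptions, not the paper's implementation: UCB1 stands in for whatever bandit policy Red-Bandit actually uses, `attack_once` and `HIDDEN_VULNERABILITY` are simulated stand-ins for the real LoRA-expert → target-LLM → rule-based-safety-model pipeline, and any style names beyond the abstract's examples are guesses.

```python
import math
import random

# Illustrative attack styles; "manipulation" and "slang" come from the
# abstract, the others are hypothetical placeholders.
ATTACK_STYLES = ["manipulation", "slang", "role_playing", "technical_framing"]

# Simulated target: hidden per-style probability that a prompt in that style
# elicits an unsafe response. Stand-in for querying the target LLM and
# scoring its response with a rule-based safety model.
HIDDEN_VULNERABILITY = {"manipulation": 0.45, "slang": 0.10,
                        "role_playing": 0.30, "technical_framing": 0.15}

def attack_once(style):
    """Reward = 1.0 if the (simulated) target produced an unsafe response."""
    return 1.0 if random.random() < HIDDEN_VULNERABILITY[style] else 0.0

def ucb1_red_team(budget=200, c=1.0):
    """UCB1 over attack-style experts: exploit the style with the best
    empirical success rate, plus an exploration bonus for rarely tried styles."""
    counts = {s: 0 for s in ATTACK_STYLES}
    wins = {s: 0.0 for s in ATTACK_STYLES}
    for t in range(1, budget + 1):
        untried = [s for s in ATTACK_STYLES if counts[s] == 0]
        if untried:
            style = untried[0]  # play every arm once before applying UCB
        else:
            style = max(ATTACK_STYLES,
                        key=lambda s: wins[s] / counts[s]
                        + c * math.sqrt(math.log(t) / counts[s]))
        wins[style] += attack_once(style)
        counts[style] += 1
    # Arm statistics double as a diagnostic: the most-pulled, highest
    # success-rate styles mark the target model's weak spots.
    return {s: (counts[s], wins[s] / counts[s]) for s in ATTACK_STYLES}

if __name__ == "__main__":
    random.seed(0)
    for style, (n, rate) in ucb1_red_team().items():
        print(f"{style:18s} pulls={n:3d}  empirical ASR={rate:.2f}")
```

The returned arm statistics illustrate the diagnostic use the abstract describes: after the budget is spent, the policy's pull counts and empirical success rates indicate which attack styles most effectively elicit unsafe behavior from that particular target.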


Key Contributions

  • Red-Bandit framework using multi-armed bandit to dynamically select among attack-style LoRA experts at test time, balancing exploration and exploitation of LLM vulnerabilities
  • RL post-training pipeline using GRPO to train diverse LoRA experts specialized for distinct attack styles (manipulation, slang, role-playing, etc.) with prompt-level safety rewards; a sketch of the group-relative advantage step follows this list
  • Diagnostic capability that reveals which attack styles are most effective against specific target LLMs, achieving state-of-the-art ASR@10 on AdvBench with lower perplexity (more human-readable) prompts
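On the training side, GRPO (Group Relative Policy Optimization) samples a group of outputs per input, scores each with the reward, and replaces a learned critic with within-group reward standardization. A minimal sketch of that advantage computation, assuming a scalar prompt-level reward of 1.0 when the rule-based safety model flags the elicited response as unsafe:

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages as in GRPO: standardize each sampled
    prompt's reward against its own group's mean and std, so no learned
    value function (critic) is needed."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four prompts sampled for one harmful behavior; two of the
# elicited responses were flagged unsafe (reward 1.0), two were not.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # ≈ [1.0, -1.0, -1.0, 1.0]
```

Successful prompts get positive advantages and failed ones negative, pushing each LoRA expert toward its own attack style's most effective phrasings without the cost of training a separate value model.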

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
AdvBench, HarmBench
Applications
llm safety auditing, automated red-teaming, jailbreaking