
Adversarial Distilled Retrieval-Augmented Guarding Model for Online Malicious Intent Detection

Yihao Guo 1, Haocheng Bian 1, Liutong Zhou 2, Ze Wang 1, Zhaoyi Zhang 3, Francois Kawala 1, Milan Dean 4, Ian Fischer 1, Yuantao Peng 1, Noyan Tokgozoglu 1, Ivan Barrientos 1, Riyaaz Shaik 1, Rachel Li 1, Chandru Venkataraman 1, Reza Shifteh Far 1, Moses Pawar 1, Venkat Sundaranatha 1, Michael Xu 1, Frank Chu 5


Published on arXiv: 2509.14622

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

A 149M-parameter model achieves 98.5% of WildGuard-7B performance and outperforms GPT-4 by 3.3% on OOD detection while sustaining under 6ms latency at 300 queries per second.

ADRAG (Adversarial Distilled Retrieval-Augmented Guard)

Novel technique introduced


With the deployment of Large Language Models (LLMs) in interactive applications, online malicious intent detection has become increasingly critical. However, existing approaches fall short of handling diverse and complex user queries in real time. To address these challenges, we introduce ADRAG (Adversarial Distilled Retrieval-Augmented Guard), a two-stage framework for robust and efficient online malicious intent detection. In the training stage, a high-capacity teacher model is trained on adversarially perturbed, retrieval-augmented inputs to learn robust decision boundaries over diverse and complex user queries. In the inference stage, a distillation scheduler transfers the teacher's knowledge into a compact student model, with a continually updated knowledge base collected online. At deployment, the compact student model leverages top-K similar safety exemplars retrieved from the online-updated knowledge base to enable both online and real-time malicious query detection. Evaluations across ten safety benchmarks demonstrate that ADRAG, with a 149M-parameter model, achieves 98.5% of WildGuard-7B's performance, surpasses GPT-4 by 3.3% and Llama-Guard-3-8B by 9.5% on out-of-distribution detection, while simultaneously delivering up to 5.6x lower latency at 300 queries per second (QPS) in real-time applications.
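The inference stage described above retrieves the top-K most similar safety exemplars from the online-updated knowledge base and feeds them to the compact student alongside the query. A minimal sketch of that retrieval step, assuming a cosine-similarity index and placeholder `embed`/`classify` functions standing in for the student model's encoder and classification head (the paper's actual API is not given in this excerpt):

```python
import numpy as np

def cosine_top_k(query_vec, kb_vecs, k=3):
    """Return indices of the k knowledge-base rows most similar to query_vec."""
    q = query_vec / np.linalg.norm(query_vec)
    kb = kb_vecs / np.linalg.norm(kb_vecs, axis=1, keepdims=True)
    sims = kb @ q
    return np.argsort(-sims)[:k]

def guard(query, embed, kb_vecs, kb_texts, classify, k=3):
    """Classify `query` using top-K retrieved safety exemplars.

    `embed` and `classify` are hypothetical stand-ins for the student's
    encoder and classification head, not the paper's interface.
    """
    idx = cosine_top_k(embed(query), kb_vecs, k)
    exemplars = [kb_texts[i] for i in idx]
    augmented = query + "\n" + "\n".join(exemplars)
    return classify(augmented)
```

Because the knowledge base is only a vector index plus exemplar texts, it can be appended to online without retraining the student, which is what enables the continual updates the abstract describes.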


Key Contributions

  • ADRAG framework combining Retrieval-Augmented Adversarial Fine-Tuning (RAFT) and Selective Knowledge Distillation (SKD) for robust, real-time malicious intent detection
  • A 149M-parameter student guard model that matches GPT-4 and WildGuard-7B accuracy while delivering sub-6ms latency at 300 QPS
  • Ablation studies showing RAFT and SKD are complementary: RAFT improves robustness on adversarial/OOD queries; SKD transfers knowledge efficiently into a deployable compact model
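SKD's transfer step ultimately optimizes the student against the teacher's output distribution. The excerpt does not specify ADRAG's selection criterion or distillation schedule, so the sketch below shows only the standard temperature-scaled knowledge-distillation objective such a student would minimize:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s))))
```

The loss is zero when the student matches the teacher exactly and grows as their softened distributions diverge; ADRAG's distillation scheduler would control when and on which examples this transfer is applied.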

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
WildGuard benchmark; ten safety benchmarks (unspecified in excerpt)
Applications
llm safety guardrail, malicious intent detection, jailbreak detection