
ExplainableGuard: Interpretable Adversarial Defense for Large Language Models Using Chain-of-Thought Reasoning

Shaowei Guan, Yu Zhai, Zhengyu Zhang, Yanze Wang, Hin Chi Kwok


Published on arXiv · 2511.13771

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

ExplainableGuard explanations outperform ablated variants in clarity, specificity, and actionability, with a 72.5% deployability-trust rating in human evaluation.

ExplainableGuard

Novel technique introduced


Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense mechanisms have been proposed, many operate as black boxes, lacking transparency in their decision-making. This paper introduces ExplainableGuard, an interpretable adversarial defense framework leveraging the chain-of-thought (CoT) reasoning capabilities of DeepSeek-Reasoner. Our approach not only detects and neutralizes adversarial perturbations in text but also provides step-by-step explanations for each defense action. We demonstrate how tailored CoT prompts guide the LLM to perform a multi-faceted analysis (character, word, structural, and semantic) and generate a purified output along with a human-readable justification. Preliminary results on the GLUE Benchmark and IMDB Movie Reviews dataset show promising defense efficacy. Additionally, a human evaluation study reveals that ExplainableGuard's explanations outperform ablated variants in clarity, specificity, and actionability, with a 72.5% deployability-trust rating, underscoring its potential for more trustworthy LLM deployments.


Key Contributions

  • ExplainableGuard framework that uses structured CoT prompts to guide DeepSeek-Reasoner through four-level adversarial analysis (character, word, structural, semantic) and produce purified text
  • Human-readable step-by-step explanations for each defense decision, achieving a 72.5% deployability-trust rating in human evaluation
  • Preliminary defense efficacy demonstrated on GLUE Benchmark and IMDB Movie Reviews against multiple adversarial attack types
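The four-level analysis above is driven by structured CoT prompts. The paper's exact prompt wording is not given in this summary, so the following is a minimal illustrative sketch of how such a prompt might be assembled; the level descriptions, function name, and output format are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a structured CoT defense prompt covering the four
# analysis levels named in the paper (character, word, structural, semantic).
# The hints and output format below are illustrative assumptions.
ANALYSIS_LEVELS = [
    ("character", "Look for homoglyphs, zero-width characters, and injected typos."),
    ("word", "Look for unusual synonym substitutions that shift meaning or sentiment."),
    ("structural", "Look for reordered clauses or embedded instructions."),
    ("semantic", "Check whether the overall meaning is coherent and benign."),
]

def build_cot_prompt(text: str) -> str:
    """Assemble a step-by-step defense prompt spanning all four levels."""
    steps = "\n".join(
        f"Step {i}: {level}-level analysis. {hint}"
        for i, (level, hint) in enumerate(ANALYSIS_LEVELS, start=1)
    )
    return (
        "You are an adversarial-input defense assistant. Reason step by step, "
        "then output a purified version of the input with a justification.\n"
        f"{steps}\n"
        f"Input: {text}\n"
        "Respond with: REASONING, PURIFIED_TEXT, JUSTIFICATION."
    )
```

The prompt text returned by `build_cot_prompt` would then be sent to the reasoning model (DeepSeek-Reasoner in the paper), whose structured answer supplies both the purified text and the human-readable explanation.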

🛡️ Threat Analysis

Input Manipulation Attack

Proposes an input purification defense against adversarial text perturbations (character-level homoglyphs/typos, word-level synonym substitutions) that cause LLMs to produce incorrect outputs at inference time — directly addressing evasion/input manipulation attacks on NLP classifiers evaluated on GLUE and IMDB.
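The character-level attacks described here (homoglyph swaps, invisible characters) can be partially neutralized by deterministic normalization even before any model-based reasoning. Below is a minimal sketch of such a purification step, assuming a small hand-written homoglyph table; the paper's actual defense is prompt-driven rather than rule-based, so this is an illustration of the attack surface, not the authors' method.

```python
import unicodedata

# Illustrative homoglyph map: a few Cyrillic look-alikes for Latin letters.
# This table is an assumption for demonstration, not the paper's.
HOMOGLYPHS = {"\u0430": "a", "\u0435": "e", "\u043e": "o", "\u0456": "i", "\u0455": "s"}

# Zero-width characters sometimes inserted to evade keyword filters.
ZERO_WIDTH = ("\u200b", "\u200c", "\u200d")

def purify_characters(text: str) -> str:
    """Normalize Unicode, strip zero-width characters, and map homoglyphs."""
    text = unicodedata.normalize("NFKC", text)
    for zw in ZERO_WIDTH:
        text = text.replace(zw, "")
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)
```

For example, `purify_characters("\u0430pple")` (with a Cyrillic "а") returns plain ASCII `"apple"`. Word-level synonym substitutions are harder to reverse mechanically, which is where the CoT-guided semantic analysis takes over.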


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, digital, black_box
Datasets
GLUE Benchmark, IMDB Movie Reviews
Applications
text classification, llm safety