
Towards Inclusive Toxic Content Moderation: Addressing Vulnerabilities to Adversarial Attacks in Toxicity Classifiers Tackling LLM-generated Content

Shaz Furniturewala, Arkaitz Zubiaga



Published on arXiv (2509.12672)

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Key Finding

Models have distinct attention heads for performance vs. vulnerability: suppressing the vulnerable heads improves adversarial robustness, and different heads are responsible for vulnerability across demographic groups.

Vulnerable Circuit Suppression via Mechanistic Interpretability

Novel technique introduced


The volume of machine-generated content online has grown dramatically due to the widespread use of Large Language Models (LLMs), leading to new challenges for content moderation systems. Conventional content moderation classifiers, which are usually trained on text produced by humans, suffer from misclassifications due to LLM-generated text deviating from their training data and adversarial attacks that aim to avoid detection. Present-day defence tactics are reactive rather than proactive, since they rely on adversarial training or external detection models to identify attacks. In this work, we aim to identify the vulnerable components of toxicity classifiers that contribute to misclassification, proposing a novel strategy based on mechanistic interpretability techniques. Our study focuses on fine-tuned BERT and RoBERTa classifiers, testing on diverse datasets spanning a variety of minority groups. We use adversarial attack techniques to identify vulnerable circuits. Finally, we suppress these vulnerable circuits, improving performance against adversarial attacks. We also provide demographic-level insights into these vulnerable circuits, exposing fairness and robustness gaps in model training. We find that models have distinct heads that are either crucial for performance or vulnerable to attack, and that suppressing the vulnerable heads improves performance on adversarial input. We also find that different heads are responsible for vulnerability across different demographic groups, which can inform more inclusive development of toxicity detection models.
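The suppression step described in the abstract can be illustrated with a toy sketch: in multi-head self-attention, zeroing a head's output removes its contribution to the layer output. The NumPy function below is a minimal illustration under assumed toy dimensions, not the paper's implementation; all names and values are illustrative.

```python
import numpy as np

def multi_head_attention(x, Wq, Wk, Wv, num_heads, head_mask):
    """Toy single-layer multi-head self-attention.

    head_mask[h] = 0.0 suppresses head h (its output is zeroed);
    head_mask[h] = 1.0 leaves it untouched.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    outputs = []
    for h in range(num_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_head)
        # numerically stable softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        outputs.append(head_mask[h] * (weights @ v[:, sl]))
    return np.concatenate(outputs, axis=-1)

rng = np.random.default_rng(0)
d_model, num_heads = 8, 2
x = rng.normal(size=(4, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
# Suppress head 1; its slice of the output becomes all zeros.
out = multi_head_attention(x, Wq, Wk, Wv, num_heads, head_mask=[1.0, 0.0])
```

In Hugging Face Transformers, fine-tuned BERT/RoBERTa models expose the same idea at inference via the `head_mask` argument to the model's forward pass, which scales each (layer, head) pair's attention output.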


Key Contributions

  • Framework to identify attention heads that are either crucial for model performance or vulnerable to adversarial attacks in fine-tuned BERT/RoBERTa toxicity classifiers
  • Demonstration that suppressing vulnerable (non-crucial) attention heads improves accuracy on adversarial inputs without harming clean performance
  • Demographic-level attribution of vulnerable circuits, revealing which heads are responsible for robustness gaps across different minority groups
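The first contribution, separating crucial from vulnerable heads, can be sketched as a single-head ablation loop. This is a hypothetical reconstruction of the criterion, not the paper's exact procedure: here a head counts as "crucial" if ablating it degrades clean accuracy, and "vulnerable" if ablating it improves adversarial accuracy. The threshold `tau` and the callable interface are assumptions.

```python
def classify_heads(acc_clean, acc_adv, num_layers, num_heads, tau=0.01):
    """Label each (layer, head) pair by single-head ablation (illustrative).

    acc_clean / acc_adv: callables mapping a set of ablated (layer, head)
    pairs to accuracy on clean / adversarial evaluation data.
    """
    base_clean = acc_clean(set())
    base_adv = acc_adv(set())
    crucial, vulnerable = [], []
    for layer in range(num_layers):
        for head in range(num_heads):
            ablated = {(layer, head)}
            if acc_clean(ablated) < base_clean - tau:   # performance head
                crucial.append((layer, head))
            if acc_adv(ablated) > base_adv + tau:       # vulnerability head
                vulnerable.append((layer, head))
    return crucial, vulnerable

# Toy stand-ins for real evaluation: head (0, 0) carries clean performance,
# head (1, 1) carries the adversarial vulnerability.
acc_clean = lambda abl: 0.90 - (0.10 if (0, 0) in abl else 0.0)
acc_adv = lambda abl: 0.50 + (0.10 if (1, 1) in abl else 0.0)
crucial, vulnerable = classify_heads(acc_clean, acc_adv, num_layers=2, num_heads=2)
```

The demographic-level attribution in the third contribution would follow the same loop, with `acc_adv` evaluated per demographic subset so that each group yields its own vulnerable-head list.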

🛡️ Threat Analysis

Input Manipulation Attack

Adversarial text perturbations (word substitution, case changes) cause misclassification of BERT/RoBERTa toxicity classifiers at inference time; the paper proposes a defense by identifying and suppressing vulnerable attention-head circuits using mechanistic interpretability techniques.
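A minimal sketch of the perturbation surface described above, combining word substitution with random case changes. The substitution table and flip probability are illustrative, not drawn from the paper.

```python
import random

# Illustrative leetspeak-style substitutions; real attacks search far larger spaces.
SUBSTITUTIONS = {"hate": "h8te", "stupid": "stup1d", "kill": "ki11"}

def perturb(text, seed=0):
    """Toy word-substitution / case-change attack on an input string."""
    rng = random.Random(seed)  # seeded for reproducibility
    words = []
    for w in text.split():
        w2 = SUBSTITUTIONS.get(w.lower(), w)
        # randomly upper-case ~30% of characters
        w2 = "".join(c.upper() if rng.random() < 0.3 else c for c in w2)
        words.append(w2)
    return " ".join(words)
```

Perturbations like these leave the text readable to humans while shifting the token sequence away from what the classifier saw in training, which is what the vulnerable-circuit analysis exploits to locate the heads that respond to them.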


Details

Domains
nlp
Model Types
transformer
Threat Tags
white_box, inference_time
Datasets
ToxiGen, Jigsaw Unintended Bias in Toxicity Classification, ETHOS, HatEval
Applications
toxic content moderation, text classification