Defense · 2026

Projecting Out the Malice: A Global Subspace Approach to LLM Detoxification

Zenghao Duan 1,2, Zhiyi Yin 1, Zhichao Shi 1,2, Liang Pang 1, Shaoling Jing 1, Zihe Huang 1, Jiayi Wu 3, Yu Yan 1,2, Jingcheng Deng 1, Huawei Shen 1, Xueqi Cheng 1

1 citation · arXiv


Published on arXiv: 2601.06226

Prompt Injection

OWASP LLM Top 10 (LLM01)

Key Finding

GLOSS achieves state-of-the-art detoxification on LLMs (including Qwen3) while preserving general capabilities and resisting adversarial reactivation of toxicity, without requiring large-scale labeled data or full retraining.

GLOSS (Global Toxic Subspace Suppression)

Novel technique introduced


Large language models (LLMs) exhibit exceptional performance but pose inherent risks of generating toxic content, restricting their safe deployment. While traditional methods (e.g., alignment) adjust output preferences, they fail to eliminate the underlying toxic regions in parameters, leaving models vulnerable to adversarial attacks. Prior mechanistic studies characterize toxic regions as "toxic vectors" or "layer-wise subspaces", yet our analysis identifies critical limitations: i) removed toxic vectors can be reconstructed via linear combinations of non-toxic vectors, demanding that the entire toxic subspace be targeted; ii) contrastive objectives over limited samples inject noise into layer-wise subspaces, hindering stable extraction. These findings highlight the challenge of identifying a robust toxic subspace and removing it. Therefore, we propose GLOSS (GLobal tOxic Subspace Suppression), a lightweight method that mitigates toxicity by identifying and eliminating this global subspace from FFN parameters. Experiments on LLMs (e.g., Qwen3) show GLOSS achieves SOTA detoxification while preserving general capabilities, without requiring large-scale retraining. WARNING: This paper contains content that is toxic in nature.
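Limitation i) above can be illustrated with a small linear-algebra sketch: if a "toxic" direction lies in the span of other, individually non-toxic directions, deleting that single vector does not make the direction unreachable. The vectors and mixing weights below are purely illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical setup: 4 "non-toxic" vectors in an 8-dim space, and a
# "toxic" vector that happens to be a linear combination of them.
rng = np.random.default_rng(0)
basis = rng.normal(size=(4, 8))            # non-toxic vectors (rows)
coeffs = np.array([0.5, -1.2, 0.3, 0.9])   # arbitrary mixing weights
toxic = coeffs @ basis                     # toxic vector = linear combination

# Deleting `toxic` alone is ineffective: least squares recovers the same
# direction exactly from the non-toxic vectors that remain.
recovered_coeffs, *_ = np.linalg.lstsq(basis.T, toxic, rcond=None)
reconstruction = recovered_coeffs @ basis
print(np.allclose(reconstruction, toxic))  # True: direction is still reachable
```

This is why the paper argues for suppressing the whole subspace rather than individual vectors: removing one spanning vector leaves the direction expressible through the others.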


Key Contributions

  • Identifies critical limitations of prior toxic-vector and layer-wise subspace approaches: removed toxic vectors are reconstructable via linear combinations of non-toxic vectors, and contrastive objectives inject noise into layer-wise subspaces
  • Proposes GLOSS, which identifies a shared global toxic subspace across transformer layers and projects it out of FFN parameters without large-scale retraining
  • Demonstrates SOTA detoxification on LLMs including Qwen3 while preserving general model capabilities
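The "project it out of FFN parameters" step can be sketched in a few lines of numpy. This is a minimal illustration under the assumption that the global toxic subspace has already been estimated as an orthonormal basis U; all names, shapes, and values are hypothetical and not taken from the GLOSS implementation.

```python
import numpy as np

# Assumed inputs: U is an orthonormal basis (d_model x k) for the estimated
# global toxic subspace; W_out stands in for an FFN output projection.
rng = np.random.default_rng(1)
d_model, d_ffn, k = 16, 64, 3
U, _ = np.linalg.qr(rng.normal(size=(d_model, k)))  # orthonormal toxic basis
W_out = rng.normal(size=(d_ffn, d_model))           # toy FFN output weights

# Project each output row onto the orthogonal complement of span(U):
# W' = W (I - U U^T), so the layer can no longer write into the toxic subspace.
P = np.eye(d_model) - U @ U.T
W_detox = W_out @ P

# Components of the edited weights along the toxic directions are now ~0.
print(np.abs(W_detox @ U).max() < 1e-10)  # True
```

Because the projection is a closed-form weight edit, it requires no gradient updates, which matches the paper's claim of avoiding large-scale retraining.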

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, training_time
Applications
llm safety, text generation, llm detoxification