
Rendering Data Unlearnable by Exploiting LLM Alignment Mechanisms

Ruihan Zhang , Jun Sun


Published on arXiv · 2601.03401

Data Poisoning Attack

OWASP ML Top 10 — ML02

Training Data Poisoning

OWASP LLM Top 10 — LLM03

Key Finding

Models fine-tuned on disclaimer-injected data exhibit substantial and systematic performance degradation across multiple benchmarks, fine-tuning strategies, and model families compared to standard fine-tuning, without requiring access to model parameters.

Disclaimer Injection

Novel technique introduced


Large language models (LLMs) are increasingly trained on massive, heterogeneous text corpora, raising serious concerns about the unauthorised use of proprietary or personal data during model training. In this work, we address the problem of data protection against unwanted model learning in a realistic black-box setting. We propose Disclaimer Injection, a novel data-level defence that renders text unlearnable to LLMs. Rather than relying on model-side controls or explicit data removal, our approach exploits the models' own alignment mechanisms: it injects carefully designed alignment-triggering disclaimers that prevent effective learning. Through layer-wise analysis, we find that fine-tuning on such protected data induces persistent activation of alignment-related layers, causing alignment constraints to override task learning even on common inputs. Consequently, models trained on such data exhibit substantial and systematic performance degradation compared to standard fine-tuning. Our results identify alignment behaviour as a previously unexplored lever for data protection and, to our knowledge, present the first practical method for restricting data learnability at LLM scale without requiring access to or modification of the training pipeline.


Key Contributions

  • Identifies LLM alignment behaviour as a novel lever for data protection, introducing alignment-triggered unlearnability as a new perspective
  • Proposes Disclaimer Injection: a black-box, model-agnostic method that injects alignment-triggering text into training data to prevent effective model learning without degrading human readability
  • Demonstrates via layer-wise causal analysis that training on protected data induces persistent activation of alignment-related layers, rerouting internal representations even for benign inputs

🛡️ Threat Analysis

Data Poisoning Attack

Disclaimer Injection modifies training data so that LLMs trained on it suffer systematic performance degradation — this is defensive data poisoning where the data owner corrupts their own data to impair a model trainer's learning. The attack vector is the training data itself, which is the defining characteristic of ML02.
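A minimal sketch of what this data-level defence could look like in practice, assuming a simple prepend-style injection (the disclaimer wording, the `inject_disclaimers` helper, and the injection position are illustrative assumptions, not the paper's actual templates or method):

```python
import random

# Hypothetical alignment-triggering disclaimers; the paper's real templates
# are not reproduced here -- these refusal-style strings are stand-ins.
DISCLAIMERS = [
    "I'm sorry, but I can't help with reproducing this protected text.",
    "As an AI assistant, I must decline to repeat proprietary content.",
]

def inject_disclaimers(samples, rate=1.0, seed=0):
    """Prepend an alignment-triggering disclaimer to each training sample.

    The data owner applies this to their own corpus before release, so a
    model later fine-tuned on it repeatedly sees refusal-style text and,
    per the paper's finding, keeps its alignment-related layers active.
    """
    rng = random.Random(seed)
    protected = []
    for text in samples:
        if rng.random() < rate:  # rate=1.0 protects every sample
            text = rng.choice(DISCLAIMERS) + "\n\n" + text
        protected.append(text)
    return protected

# Usage: protect a small corpus before publishing it.
corpus = [
    "Chapter 1. The quarterly report shows ...",
    "Recipe: combine flour and water ...",
]
protected = inject_disclaimers(corpus)
```

Note that the defence needs no access to model parameters: the only artefact modified is the released text itself, which is why the entry classifies it under the ML02 training-data attack surface.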


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, training_time
Applications
llm fine-tuning, text data protection, proprietary content protection