
Seed-Induced Uniqueness in Transformer Models: Subspace Alignment Governs Subliminal Transfer

Ayşe Selin Okatan , Mustafa İlhan Akbaş , Laxima Niure Kandel , Berker Peköz



Published on arXiv

2511.01023

Transfer Learning Attack

OWASP ML Top 10 — ML07

Key Finding

Different-seed transformer students show substantially reduced subliminal leakage (τ ≈ 0.12–0.13) versus same-seed students (τ ≈ 0.24) despite global CKA > 0.9, showing that trait-subspace alignment, rather than global representational similarity, governs covert transfer.

Subspace-level CKA diagnostic

Novel technique introduced


We analyze subliminal transfer in Transformer models, where a teacher embeds hidden traits that can be linearly decoded by a student without degrading main-task performance. Prior work often attributes transferability to global representational similarity, typically quantified with Centered Kernel Alignment (CKA). Using synthetic corpora with disentangled public and private labels, we distill students under matched and independent random initializations. We find that transfer strength hinges on alignment within a trait-discriminative subspace: same-seed students inherit this alignment and show higher leakage (τ ≈ 0.24), whereas different-seed students, despite global CKA > 0.9, exhibit substantially reduced excess accuracy (τ ≈ 0.12–0.13). We formalize this with a subspace-level CKA diagnostic and residualized probes, showing that leakage tracks alignment within the trait-discriminative subspace rather than global representational similarity. Security controls (projection penalty, adversarial reversal, right-for-the-wrong-reasons regularization) reduce leakage in same-base models without impairing public-task fidelity. These results establish seed-induced uniqueness as a resilience property and argue for subspace-aware diagnostics for secure multi-model deployments.
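As a concrete reference point for the global similarity measure the abstract relies on, here is a minimal NumPy sketch of linear CKA. The paper does not specify which CKA variant it uses; the linear form below is an assumption for illustration.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between representation matrices
    X (n x d1) and Y (n x d2) whose rows correspond to the same n inputs."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator / normalizers via cross-covariance Frobenius norms
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))
print(np.isclose(linear_cka(X, X), 1.0))  # True: identical reps score 1
```

Note that linear CKA is invariant to orthogonal transformations of either representation, which is precisely why a global score above 0.9 can coexist with misaligned trait subspaces.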


Key Contributions

  • Empirical demonstration that independent random initialization (seed-induced uniqueness) substantially reduces subliminal transfer (τ≈0.12–0.13 vs τ≈0.24) even when global CKA exceeds 0.9, establishing seed diversity as a resilience property
  • Subspace-level CKA diagnostic that predicts leakage by measuring alignment within the trait-discriminative subspace rather than global representational similarity
  • Evaluation of three security controls (projection penalty, adversarial gradient reversal, right-for-the-wrong-reasons regularization) that suppress trait-subspace leakage in same-seed scenarios without degrading main-task accuracy

🛡️ Threat Analysis

Transfer Learning Attack

The core threat is subliminal transfer through knowledge distillation — a teacher embeds hidden private traits in its representation subspace, and these traits survive the transfer learning process to become linearly decodable in the student. The paper evaluates how initialization choices affect whether this hidden behavior persists across the distillation boundary, and proposes defenses (projection penalty, adversarial reversal, RRWR regularization) to suppress trait-subspace alignment during transfer learning.
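Of the three defenses named above, the projection penalty is the simplest to sketch: add a loss term penalizing the energy of the student's hidden states inside an (estimated) trait subspace, so gradient descent pushes representations out of it. The function names, the λ parameter, and the plain-NumPy gradient step below are illustrative assumptions, not the paper's training code.

```python
import numpy as np

def projection_penalty(H, B, lam=1.0):
    """Penalty lam * ||H B||_F^2 / n: energy of hidden states H (n x d)
    inside the trait subspace spanned by orthonormal columns of B (d x k)."""
    n = H.shape[0]
    proj = H @ B                      # coordinates inside the trait subspace
    return lam * np.sum(proj ** 2) / n

def projection_penalty_grad(H, B, lam=1.0):
    """Gradient of the penalty w.r.t. H: (2*lam/n) * H B B^T."""
    n = H.shape[0]
    return 2.0 * lam / n * (H @ B) @ B.T

rng = np.random.default_rng(2)
H = rng.standard_normal((50, 8))
B, _ = np.linalg.qr(rng.standard_normal((8, 2)))  # orthonormal trait basis
before = projection_penalty(H, B)
H_new = H - 0.1 * projection_penalty_grad(H, B)   # one descent step
print(projection_penalty(H_new, B) < before)       # True: energy shrinks
```

In practice this term would be added to the distillation loss and backpropagated through the student encoder, trading off against public-task fidelity via λ.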


Details

Domains
nlp
Model Types
transformer
Threat Tags
training_time, inference_time, targeted, grey_box
Datasets
synthetic corpora with disentangled public and private labels
Applications
knowledge distillation, secure multi-model deployment, federated learning