
Hide and Seek in Embedding Space: Geometry-based Steganography and Detection in Large Language Models

Charles Westphal 1,2, Keivan Navaie 3, Fernando E. Rosas 4,5,6

0 citations · 47 references · arXiv


Published on arXiv · 2601.22818

Model Poisoning

OWASP ML Top 10 — ML10

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Geometry-based bucketing improves authorized exact recovery by up to 123% (9→19%) on Llama-70B-LoRA while reducing payload recoverability; linear probes on late-layer activations detect steganographic fine-tuning with 33% higher accuracy than distributional baselines.

Geometry-based low-recoverability LLM steganography

Novel technique introduced


Fine-tuned LLMs can covertly encode prompt secrets into outputs via steganographic channels. Prior work demonstrated this threat but relied on trivially recoverable encodings. We formalize payload recoverability via classifier accuracy and show previous schemes achieve 100% recoverability. In response, we introduce low-recoverability steganography, replacing arbitrary mappings with embedding-space-derived ones. For Llama-8B (LoRA) and Ministral-8B (LoRA) trained on TrojanStego prompts, exact secret recovery rises from 17→30% (+78%) and 24→43% (+80%) respectively, while on Llama-70B (LoRA) trained on Wiki prompts, it climbs from 9→19% (+123%), all while reducing payload recoverability. We then discuss detection. We argue that detecting fine-tuning-based steganographic attacks requires approaches beyond traditional steganalysis. Standard approaches measure distributional shift, which is an expected side-effect of fine-tuning. Instead, we propose a mechanistic interpretability approach: linear probes trained on later-layer activations detect the secret with up to 33% higher accuracy in fine-tuned models compared to base models, even for low-recoverability schemes. This suggests that malicious fine-tuning leaves actionable internal signatures amenable to interpretability-based defenses.
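The abstract's "embedding-space-derived" mappings are not spelled out in this summary, but the contributions describe random hyperplane projections in the token embedding space. A minimal SimHash-style sketch of that idea, assuming each token is assigned to a bucket by the sign pattern of its embedding's projections onto random hyperplanes (the function name, dimensions, and toy data below are illustrative, not the paper's implementation):

```python
import numpy as np

def bucket_ids(embeddings: np.ndarray, n_planes: int, seed: int = 0) -> np.ndarray:
    """Assign each token embedding to one of 2**n_planes buckets using the
    sign pattern of projections onto random hyperplanes (SimHash-style)."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, embeddings.shape[1]))
    bits = (embeddings @ planes.T) > 0            # (vocab, n_planes) sign pattern
    return bits @ (1 << np.arange(n_planes))      # pack sign bits into a bucket index

# Toy stand-in for a vocabulary: 1000 "token embeddings" in 64 dimensions.
emb = np.random.default_rng(1).standard_normal((1000, 64))
buckets = bucket_ids(emb, n_planes=4)             # 16 geometry-derived buckets

# Encoding a payload bit then means steering generation toward tokens whose
# bucket index matches the bit, rather than toward an arbitrary token list.
```

Because bucket membership follows the embedding geometry, nearby (interchangeable) tokens tend to share buckets, which is what makes the resulting token choices hard for an external classifier to separate from ordinary sampling.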


Key Contributions

  • Formalizes payload recoverability via classifier accuracy and shows prior TrojanStego scheme achieves 100% recoverability, motivating a harder-to-detect variant
  • Introduces geometry-based low-recoverability steganography using random hyperplane projections in the model's token embedding space, improving exact secret recovery by +78–123% while reducing unauthorized recoverability
  • Proposes mechanistic interpretability detection via linear probes on later-layer activations, outperforming distributional steganalysis by up to 33% accuracy even for low-recoverability schemes
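The detection contribution above can be illustrated with a toy linear probe. This is a hedged sketch, not the paper's setup: activations are synthetic, the probe is a closed-form least-squares fit rather than whatever training procedure the authors use, and the "leakage direction" is an assumption standing in for the internal signature that fine-tuning leaves behind:

```python
import numpy as np

def probe_accuracy(train_x, train_y, test_x, test_y):
    """Fit a least-squares linear probe on activations; score held-out data."""
    w, *_ = np.linalg.lstsq(train_x, 2.0 * train_y - 1.0, rcond=None)
    return float(((test_x @ w > 0) == (test_y == 1)).mean())

rng = np.random.default_rng(0)
n, d = 2000, 64
secret = rng.integers(0, 2, n)                # one hidden payload bit per prompt
noise = rng.standard_normal((n, d))

# Base-model stand-in: late-layer activations carry no secret signal.
base_acts = noise

# Fine-tuned stand-in: the secret leaks linearly along one fixed direction.
direction = rng.standard_normal(d)
ft_acts = noise + np.outer(2 * secret - 1, 0.5 * direction)

split = n // 2
base_acc = probe_accuracy(base_acts[:split], secret[:split],
                          base_acts[split:], secret[split:])
ft_acc = probe_accuracy(ft_acts[:split], secret[:split],
                        ft_acts[split:], secret[split:])
# base_acc should sit near chance, while ft_acc should be far higher:
# that gap is the probe-based detection signal.
```

The defense hinges exactly on this gap: distributional steganalysis cannot distinguish benign from malicious fine-tuning, but a probe that recovers the secret from internal activations only succeeds when the model has been trained to encode it.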

🛡️ Threat Analysis

Model Poisoning

The core attack embeds trigger-activated hidden behavior in a model via malicious LoRA fine-tuning: when the prompt contains a trigger pattern (e.g., 'secret:abcd'), the model silently encodes the secret into output tokens while generating fluent text — a canonical backdoor/trojan.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, targeted, grey_box
Datasets
TrojanStego prompts, Wikipedia prompts
Applications
llm deployment in air-gapped environments, sensitive enterprise llm infrastructure