An Invariant Latent Space Perspective on Language Model Inversion
Wentao Ye 1, Jiaqi Hu 1, Haobo Wang 1, Xinpeng Ti 1, Zhiqing Xiao 1, Hao Chen 1, Liyao Li 1, Lei Feng 2, Sai Wu 1, Junbo Zhao 1
Published on arXiv
2511.19569
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
Inv²A outperforms prior language model inversion baselines by an average of 4.77% BLEU score while reducing dependence on large inverse corpora, and shows that prevalent defenses provide limited protection.
Inv²A (Invariant Attacker)
Novel technique introduced
Language model inversion (LMI), i.e., recovering hidden prompts from outputs, is emerging as a concrete threat to user privacy and system security. We recast LMI as reusing the LLM's own latent space and propose the Invariant Latent Space Hypothesis (ILSH): (1) diverse outputs from the same source prompt should preserve consistent semantics (source invariance), and (2) input↔output cyclic mappings should be self-consistent within a shared latent space (cyclic invariance). Accordingly, we present Inv²A, which treats the LLM as an invariant decoder and learns only a lightweight inverse encoder that maps outputs to a denoised pseudo-representation. When multiple outputs are available, they are sparsely concatenated at the representation layer to increase information density. Training proceeds in two stages: contrastive alignment (source invariance) and supervised reinforcement (cyclic invariance). An optional training-free neighborhood search can further refine local performance. Across 9 datasets covering user- and system-prompt scenarios, Inv²A outperforms baselines by an average of 4.77% BLEU score while reducing dependence on large inverse corpora. Our analysis further shows that prevalent defenses provide limited protection, underscoring the need for stronger strategies. The source code and data involved in this paper are available at https://github.com/yyy01/Invariant_Attacker.
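The first training stage, contrastive alignment, enforces source invariance: representations of different outputs generated from the same hidden prompt are pulled together, while outputs of other prompts are pushed apart. The paper does not spell out the loss here, so the sketch below assumes a standard InfoNCE-style contrastive objective over toy embedding vectors; `tau` (the temperature) and the cosine-similarity choice are illustrative assumptions, not the authors' exact formulation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss (illustrative stand-in for stage-1 alignment).

    anchor:    representation of one output of the hidden prompt
    positive:  representation of another output of the SAME prompt
    negatives: representations of outputs of OTHER prompts
    Lower loss => anchor is closer to its positive than to negatives,
    i.e., source invariance holds in the learned latent space.
    """
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)                      # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Two outputs of the same prompt embed nearby; an unrelated output does not.
loss_aligned = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
loss_misaligned = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1]])
```

When the positive genuinely shares the anchor's semantics, `loss_aligned` is much smaller than `loss_misaligned`, which is exactly the gradient signal that shapes the inverse encoder's pseudo-representations in stage one.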
Key Contributions
- Invariant Latent Space Hypothesis (ILSH) that recasts language model inversion as reusing the LLM's own latent space under source and cyclic invariance constraints
- Inv²A: a lightweight inverse encoder trained in two stages (contrastive alignment + supervised reinforcement) with optional training-free neighborhood search and multi-output sparse concatenation
- Empirical demonstration across 9 datasets that prevalent defenses against prompt inversion provide limited protection, highlighting the need for stronger countermeasures
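When the attacker observes several outputs of the same hidden prompt, Inv²A concatenates them sparsely at the representation layer to raise information density. The exact sparsification rule is not given in this summary, so the sketch below assumes a simple top-k magnitude mask per output representation; `k` and the masking scheme are hypothetical illustrations of "sparse concatenation", not the paper's verified mechanism.

```python
def sparse_concat(reps, k):
    """Sparsely concatenate multiple output representations (toy sketch).

    For each representation, keep only its k largest-magnitude dimensions
    (zeroing the rest), then concatenate the masked vectors. More observed
    outputs thus add their most informative dimensions without also
    stacking up low-magnitude noise.
    """
    out = []
    for r in reps:
        top = sorted(range(len(r)), key=lambda i: abs(r[i]), reverse=True)[:k]
        keep = set(top)
        out.extend(v if i in keep else 0.0 for i, v in enumerate(r))
    return out

# Two 3-dim representations, keeping the 2 strongest dims of each.
combined = sparse_concat([[3.0, 0.1, -2.0], [0.2, 5.0, 1.0]], k=2)
# combined == [3.0, 0.0, -2.0, 0.0, 5.0, 1.0]
```

The resulting vector feeds the lightweight inverse encoder in place of a single-output representation; the decoder side (the frozen LLM) is unchanged, consistent with treating the LLM as an invariant decoder.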