Zero2Text: Zero-Training Cross-Domain Inversion Attacks on Textual Embeddings
Doohyun Kim 1, Donghwa Kang 1, Kyungjae Lee 2, Hyeongboo Baek 2, Brent ByungHoon Kang 1
Published on arXiv
2602.01757
Model Inversion Attack
OWASP ML Top 10 — ML03
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
Against the OpenAI victim embedding model on MS MARCO, Zero2Text achieves 1.8x higher ROUGE-L and 6.4x higher BLEU-2 than baselines, recovering sentences from unknown domains without any leaked data pairs.
Zero2Text
Novel technique introduced
The proliferation of retrieval-augmented generation (RAG) has established vector databases as critical infrastructure, yet they introduce severe privacy risks via embedding inversion attacks. Existing paradigms face a fundamental trade-off: optimization-based methods require a computationally prohibitive number of queries, while alignment-based approaches hinge on the unrealistic assumption of accessible in-domain training data. These constraints render them ineffective in strict black-box and cross-domain settings. To dismantle these barriers, we introduce Zero2Text, a novel training-free framework based on recursive online alignment. Unlike methods relying on static datasets, Zero2Text couples LLM priors with a dynamic ridge regression mechanism to iteratively align generation to the target embedding on the fly. We further demonstrate that standard defenses, such as differential privacy, fail to effectively mitigate this adaptive threat. Extensive experiments across diverse benchmarks validate Zero2Text; notably, on MS MARCO against the OpenAI victim model, it achieves 1.8x higher ROUGE-L and 6.4x higher BLEU-2 scores than baselines, recovering sentences from unknown domains without a single leaked data pair.
Key Contributions
- Zero2Text: a training-free embedding inversion framework using recursive online alignment with LLM priors and dynamic ridge regression, requiring no in-domain data
- Demonstrates effectiveness in strict black-box, cross-domain settings where prior optimization-based and alignment-based methods fail
- Shows that standard differential privacy defenses do not effectively mitigate this adaptive embedding inversion threat
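As a rough sketch of how a recursive online-alignment loop of this kind could operate, consider the toy below. Everything in it is an illustrative assumption rather than the paper's implementation: the function names, the surrogate embedder (a fixed linear image of the victim space, so a linear alignment map exists), and the static candidate pool standing in for LLM proposals. Each round, candidates are embedded by the black-box victim, a ridge map from the attacker's local space to the victim space is refit on all observations so far, and the candidate whose mapped embedding lands closest to the target wins.

```python
import hashlib
import numpy as np

def _seed(text):
    # deterministic per-text seed so the toy "embedders" are reproducible
    return int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")

def victim_embed(text):
    """Stand-in for the black-box victim embedding API (hypothetical)."""
    return np.random.default_rng(_seed(text)).normal(size=64)

def local_embed(text):
    """Attacker's local surrogate embedder (toy: a fixed linear image of the
    victim space, so a linear alignment map is guaranteed to exist)."""
    return victim_embed(text)[::-1] * 0.5

def ridge_fit(X, Y, lam=1e-3):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def invert(target_vec, propose, rounds=3):
    """Recursive online alignment loop: each round an LLM proposes candidates
    (here `propose` is a stand-in), all observations so far refit the
    local->victim ridge map, and the best-aligned candidate is kept."""
    seen, best = [], None
    for _ in range(rounds):
        for t in propose(best):          # LLM proposals, conditioned on best so far
            if t not in seen:
                seen.append(t)
        X = np.stack([local_embed(t) for t in seen])
        Y = np.stack([victim_embed(t) for t in seen])   # black-box queries
        W = ridge_fit(X, Y)                             # online re-alignment
        best = max(seen, key=lambda t: cosine(local_embed(t) @ W, target_vec))
    return best
```

With a small candidate pool and a target embedding produced from one of its sentences, the loop recovers that sentence; the real attack differs in that the proposal step is a full LLM and the alignment must bridge genuinely different embedding spaces.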
🛡️ Threat Analysis
Zero2Text is a textbook embedding inversion attack: an adversary reconstructs private text from embedding vectors returned by a black-box victim model (here, OpenAI's embedding model). The core contribution is recovering original sentences from unknown-domain embeddings without any leaked training pairs, which is data reconstruction from model outputs, the defining characteristic of ML03.
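The intuition behind the paper's claim that differential-privacy-style defenses fall short can be seen in a toy experiment (entirely illustrative, not the paper's evaluation): additive Gaussian noise at scales that still preserve retrieval utility leaves more than enough signal for an adversary to match a leaked vector back to its source document by nearest-neighbor search.

```python
import numpy as np

# Toy illustration (not from the paper): moderate additive noise on an
# embedding barely disturbs nearest-neighbor matching against a corpus.
rng = np.random.default_rng(1)

docs = rng.normal(size=(1000, 256))                 # stand-in corpus embeddings
docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # unit-normalize rows

secret = docs[42]                                   # the "private" embedding
noisy = secret + rng.normal(scale=0.1, size=256)    # DP-style perturbation

# cosine scores reduce to dot products since docs are unit-norm
recovered = int(np.argmax(docs @ noisy))
print(recovered)
```

An adaptive attacker like Zero2Text goes further than this matching step, since it reconstructs text for embeddings with no corpus counterpart, but the example shows why naive output perturbation leaves the underlying signal largely intact.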