defense 2026

Good-Enough LLM Obfuscation (GELO)

Anatoly Belikov 1, Ilya Fedotov 2


Published on arXiv (2603.05035)

Threat classification

Model Inversion Attack (OWASP ML Top 10 — ML03)
Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)

Key Finding

GELO preserves model outputs exactly while adding only ~20–30% latency overhead, and it defeats ICA/BSS and anchor-based prompt reconstruction attacks on Llama-2 7B.

GELO (Good-Enough LLM Obfuscation)

Novel technique introduced


Large Language Models (LLMs) are increasingly served on shared accelerators where an adversary with read access to device memory can observe KV caches and hidden states, threatening prompt privacy even for open-source models. Cryptographic protections such as MPC and FHE offer strong guarantees but remain one to two orders of magnitude too slow for interactive inference, while static obfuscation schemes break under multi-run statistical attacks once the model is known. We present GELO (Good-Enough LLM Obfuscation), a lightweight protocol for privacy-preserving inference that limits information leakage from untrusted accelerator observations by hiding hidden states with fresh, per-batch invertible mixing. For each offloaded projection, the TEE samples a random matrix $A$, forms $U = AH$, offloads $U$ and the weights $W$ to the accelerator, and then applies $A^{-1}$ on return, so that $A^{-1}((AH)W) = HW$ and outputs are unchanged. Because mixing is never reused across batches, the attacker faces only a single-batch blind source separation problem. We analyse information leakage and introduce two practical defences: (i) non-orthogonal mixing to mask Gram matrices, and (ii) orthogonal mixing augmented with a small fraction of high-energy "shield" vectors that pollute higher-order statistics. On Llama-2 7B, GELO preserves float32 outputs exactly, closely matches low-precision baselines, offloads the dominant matrix multiplications with about 20–30% latency overhead, and defeats a range of ICA/BSS and anchor-based attacks.
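The core identity behind GELO, $A^{-1}((AH)W) = HW$, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: dimensions and the choice of a Gaussian mixing matrix (invertible with probability 1) are assumptions, and the TEE/accelerator split is shown only as comments.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_in, d_out = 8, 64, 64           # tokens, hidden size, projection size
H = rng.normal(size=(n, d_in))       # hidden states (one row per token), TEE-private
W = rng.normal(size=(d_in, d_out))   # projection weights (public)

# TEE: sample a fresh invertible mixing matrix for this batch only.
A = rng.normal(size=(n, n))          # Gaussian => invertible almost surely

# Untrusted accelerator sees only U = A @ H and W, and computes U @ W.
U = A @ H
Z = U @ W                            # = (A H) W, done off-TEE

# TEE: unmix on return; A^{-1}((AH)W) = HW, so outputs are unchanged.
Y = np.linalg.solve(A, Z)            # solve A Y = Z instead of forming A^{-1}

assert np.allclose(Y, H @ W)         # exact recovery up to float rounding
```

Because $A$ is discarded after the batch, an observer of $U$ across runs never sees the same mixing twice, which is what reduces the attack to a single-batch blind source separation problem.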


Key Contributions

  • GELO protocol: per-batch invertible random matrix mixing of hidden states offloaded to untrusted accelerators, computed inside a TEE so mixing matrices are never exposed
  • Two defenses against statistical reconstruction: non-orthogonal mixing to mask Gram matrices, and orthogonal mixing with high-energy 'shield' vectors to pollute higher-order statistics
  • Empirical evaluation on Llama-2 7B showing exact float32 output preservation, ~20–30% latency overhead, and defeat of ICA/BSS and anchor-based inversion attacks
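The second defence listed above can also be sketched: orthogonal mixing with a few high-energy "shield" rows appended before mixing and dropped after unmixing. The shield scale, the QR-based sampling of the orthogonal matrix, and all variable names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, k = 8, 64, 2                   # tokens, hidden size, shield vectors
H = rng.normal(size=(n, d))          # TEE-private hidden states
W = rng.normal(size=(d, d))          # public projection weights

# Append k high-energy "shield" rows to pollute higher-order statistics
# of the offloaded view (the 10x scale is an illustrative choice).
S = 10.0 * rng.normal(size=(k, d))
Hs = np.vstack([H, S])

# Orthogonal mixing: QR of a Gaussian matrix yields a random orthogonal Q.
Q, _ = np.linalg.qr(rng.normal(size=(n + k, n + k)))

U = Q @ Hs                           # offloaded to the accelerator
Z = U @ W                            # accelerator computes (Q Hs) W

# TEE: Q^{-1} = Q^T for orthogonal Q; unmix, then discard shield rows.
Y = (Q.T @ Z)[:n]

assert np.allclose(Y, H @ W)         # real outputs are still exact
```

Orthogonal mixing keeps the Gram matrix of the offloaded rows visible, which is why the shield rows are needed here; the non-orthogonal variant masks the Gram matrix directly at the cost of some numerical conditioning.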

🛡️ Threat Analysis

Model Inversion Attack

The core adversarial threat is embedding/hidden-state inversion: an attacker with read access to accelerator memory observes KV caches and intermediate activations and attempts to reconstruct the input prompt from them. This is an embedding inversion attack (recovering text from embedding vectors), and GELO defends against it by obfuscating those representations before they leave the TEE.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, digital
Datasets
Llama-2 7B
Applications
llm inference, privacy-preserving inference on shared accelerators