
AlienLM: Alienization of Language for API-Boundary Privacy in Black-Box LLMs

Jaehee Kim, Pilsung Kang


Published on arXiv · 2601.22710

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

AlienLM retains over 81% of plaintext-oracle LLM performance while limiting adversarial token reconstruction to fewer than 0.22% under strong recovery attacks including model-weight and corpus-statistics access.

AlienLM / Alien Adaptation Training (AAT)

Novel technique introduced


Modern LLMs are increasingly accessed via black-box APIs, requiring users to transmit sensitive prompts, outputs, and fine-tuning data to external providers, creating a critical privacy risk at the API boundary. We introduce AlienLM, a deployable API-only privacy layer that protects text by translating it into an Alien Language via a vocabulary-scale bijection, enabling lossless recovery on the client side. Using only standard fine-tuning APIs, Alien Adaptation Training (AAT) adapts target models to operate directly on alienized inputs. Across four LLM backbones and seven benchmarks, AlienLM retains over 81% of plaintext-oracle performance on average, substantially outperforming random-bijection and character-level baselines. Under adversaries with access to model weights, corpus statistics, and learning-based inverse translation, recovery attacks reconstruct fewer than 0.22% of alienized tokens. Our results demonstrate a practical pathway for privacy-preserving LLM deployment under API-only access, substantially reducing plaintext exposure while maintaining task performance.


Key Contributions

  • AlienLM: a vocabulary-scale bijection scheme that translates plaintext to 'Alien Language' before transmission to external LLM APIs, enabling lossless client-side recovery while hiding plaintext from providers
  • Alien Adaptation Training (AAT): a fine-tuning procedure using only standard API access to adapt target models to operate on alienized inputs, retaining >81% of plaintext-oracle performance across 4 backbones and 7 benchmarks
  • Empirical evaluation of recovery attacks (model-weight-based, corpus-statistics-based, learning-based inverse translation) showing <0.22% token reconstruction success under strong adversary assumptions
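The core mechanism — a vocabulary-scale bijection over token IDs with client-side inversion — can be sketched as below. This is a minimal illustration of the bijection mechanics only: the function names and the keyed random permutation are assumptions for illustration, and note that the paper reports AlienLM substantially outperforming a purely random-bijection baseline, so AlienLM's actual mapping (paired with AAT fine-tuning) is more structured than this sketch.

```python
# Hedged sketch: a vocabulary-scale bijection ("alienization").
# Token IDs are permuted under a secret key; the client keeps the
# inverse permutation, so recovery is lossless. The provider only
# ever sees alienized token IDs, never plaintext tokens.
import random

def make_bijection(vocab_size: int, seed: int):
    """Build a keyed permutation over token IDs and its inverse."""
    rng = random.Random(seed)  # the seed acts as the client-held key
    forward = list(range(vocab_size))
    rng.shuffle(forward)
    inverse = [0] * vocab_size
    for plain_id, alien_id in enumerate(forward):
        inverse[alien_id] = plain_id
    return forward, inverse

def alienize(token_ids, forward):
    """Client-side: map plaintext token IDs to Alien Language IDs."""
    return [forward[t] for t in token_ids]

def dealienize(token_ids, inverse):
    """Client-side: recover plaintext token IDs from alienized IDs."""
    return [inverse[t] for t in token_ids]

# Round trip: alienize before transmission, de-alienize on receipt.
fwd, inv = make_bijection(vocab_size=32000, seed=1234)
prompt_ids = [101, 2054, 2003, 1996, 3007, 102]
sent_to_api = alienize(prompt_ids, fwd)
assert dealienize(sent_to_api, inv) == prompt_ids  # lossless recovery
```

Because the map is a bijection on the full vocabulary, every alienized sequence decodes uniquely; AAT's role in the paper is to fine-tune the target model, via standard APIs, to perform the task directly on these alienized inputs.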

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, training_time
Datasets
7 NLP benchmarks (unspecified in available text)
Applications
llm api privacy, privacy-preserving llm inference, privacy-preserving llm fine-tuning