attack · 2026

Language Model Inversion through End-to-End Differentiation

Kevin Yandoka Denamganaï, Kartic Subr

0 citations · 33 references · arXiv (Cornell University)


Published on arXiv · 2602.11044

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DLM-powered inversion reliably and efficiently finds prompts of lengths 10 and 80 that produce target output sequences of length 20 across several white-box LMs out-of-the-box.

DLM (Differentiable Language Model) inversion

Novel technique introduced


Despite emerging research on Language Models (LMs), few approaches analyse the invertibility of LMs. That is, given an LM and a desirable target output sequence of tokens, determining which input prompts would yield the target output remains an open problem. We formulate this as a classical gradient-based optimisation problem. First, we propose a simple algorithm to achieve end-to-end differentiability of a given (frozen) LM, and then find optimised prompts via gradient descent. Our central insight is to view LMs as functions operating on sequences of distributions over tokens (rather than the traditional view as functions on sequences of tokens). Our experiments and ablations demonstrate that our DLM-powered inversion can reliably and efficiently optimise prompts of lengths 10 and 80 for targets of length 20, for several white-box LMs (out-of-the-box).


Key Contributions

  • Reformulates LM inversion as gradient-based optimization by treating LMs as functions over token distribution sequences rather than discrete token sequences, achieving end-to-end differentiability
  • Proposes the DLM (Differentiable Language Model) algorithm that enables gradient flow through frozen LMs
  • Demonstrates reliable inversion of white-box LMs, recovering prompts of lengths 10 and 80 that produce target outputs of length 20
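The core relaxation above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's DLM algorithm: the frozen "LM" here is a hypothetical toy (mean-pooled embeddings plus a linear head) standing in for a real transformer, and all hyperparameters (vocabulary size, lengths, learning rate) are made up. The point it demonstrates is the central insight: representing each prompt position as a distribution over the vocabulary, so that gradients flow from the target-matching loss back to the prompt.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, PROMPT_LEN, TARGET_LEN = 50, 16, 10, 5

# Toy frozen "LM": embedding table + linear head (hypothetical stand-in
# for a real white-box transformer).
emb = torch.nn.Embedding(VOCAB, DIM)
head = torch.nn.Linear(DIM, VOCAB)
for p in list(emb.parameters()) + list(head.parameters()):
    p.requires_grad_(False)

def next_token_logits(ctx_embs):
    # Predict next-token logits from mean-pooled context embeddings.
    return head(ctx_embs.mean(dim=0))

target = torch.randint(0, VOCAB, (TARGET_LEN,))  # desired output tokens

# Relaxed prompt: one logit vector over the vocabulary per position,
# i.e. the prompt is a sequence of token *distributions*, not tokens.
prompt_logits = torch.randn(PROMPT_LEN, VOCAB, requires_grad=True)
opt = torch.optim.Adam([prompt_logits], lr=0.1)

losses = []
for _ in range(200):
    opt.zero_grad()
    # Expected embedding under each position's softmax distribution keeps
    # the prompt -> LM -> loss path fully differentiable.
    soft_prompt = torch.softmax(prompt_logits, dim=-1) @ emb.weight
    ctx, loss = soft_prompt, torch.zeros(())
    for t in range(TARGET_LEN):
        logits = next_token_logits(ctx)
        loss = loss + F.cross_entropy(logits.unsqueeze(0), target[t:t + 1])
        # Teacher-force the ground-truth target token into the context.
        ctx = torch.cat([ctx, emb(target[t:t + 1])], dim=0)
    loss.backward()
    opt.step()
    losses.append(loss.item())

# Discretise: take the arg-max token at each prompt position.
hard_prompt = prompt_logits.argmax(dim=-1)
```

The final arg-max step is the usual way to map the optimised distributions back to a discrete prompt; how the paper handles this discretisation gap is a detail of the actual DLM method, not shown here.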

🛡️ Threat Analysis

Input Manipulation Attack

The paper's primary contribution — DLM-powered end-to-end gradient optimization to find inputs that produce target outputs — is exactly adversarial suffix/prompt optimization via gradients (token-level perturbations), which falls squarely under ML01 per the OWASP definition. The method enables white-box adversarial input crafting against LLMs.
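To make the ML01 framing concrete, the same gradient machinery can be pointed at an adversarial-suffix objective: hold a benign user prompt fixed, append a few optimisable positions, and descend on a loss that forces an attacker-chosen continuation. The sketch below is a hypothetical illustration under the same toy-model assumptions as above (mean-pooled embedding + linear head in place of a real LM); names and sizes are invented.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(1)
VOCAB, DIM = 50, 16

# Same toy frozen "LM" as before (hypothetical stand-in for a real model).
emb = torch.nn.Embedding(VOCAB, DIM)
head = torch.nn.Linear(DIM, VOCAB)
for p in list(emb.parameters()) + list(head.parameters()):
    p.requires_grad_(False)

user_prompt = torch.randint(0, VOCAB, (8,))  # fixed, benign prefix
target_tok = torch.randint(0, VOCAB, (1,))   # attacker-chosen next token

# Only the appended suffix is optimised; the user's prompt stays untouched.
suffix_logits = torch.randn(4, VOCAB, requires_grad=True)
opt = torch.optim.Adam([suffix_logits], lr=0.1)

losses = []
for _ in range(200):
    opt.zero_grad()
    soft_suffix = torch.softmax(suffix_logits, dim=-1) @ emb.weight
    ctx = torch.cat([emb(user_prompt), soft_suffix], dim=0)
    logits = head(ctx.mean(dim=0))           # toy next-token predictor
    loss = F.cross_entropy(logits.unsqueeze(0), target_tok)
    loss.backward()
    opt.step()
    losses.append(loss.item())

adv_suffix = suffix_logits.argmax(dim=-1)    # discretised adversarial suffix
```

This is the white-box, inference-time, targeted pattern the threat tags below describe: the attacker needs gradients through the model but never modifies its weights.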


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Applications
language model prompt optimization, adversarial prompt crafting