
Bits Leaked per Query: Information-Theoretic Bounds on Adversarial Attacks against LLMs

Masahiro Kaneko, Timothy Baldwin

0 citations · 67 references · arXiv

Published on arXiv · 2510.17000

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Query cost scales inversely with I(Z;T): answer tokens alone require ~1,000 queries; adding logits reduces this to ~100; exposing the full chain-of-thought trims it to a few dozen, validated across seven LLMs.


Adversarial attacks by malicious users that threaten the safety of large language models (LLMs) can be viewed as attempts to infer a target property $T$ that is unknown when an instruction is issued, and becomes knowable only after the model's reply is observed. Examples of target properties $T$ include the binary flag that triggers an LLM's harmful response or rejection, and the degree to which information deleted by unlearning can be restored, both elicited via adversarial instructions. The LLM reveals an \emph{observable signal} $Z$ that potentially leaks hints for attacking through a response containing answer tokens, thinking process tokens, or logits. Yet the scale of information leaked remains anecdotal, leaving auditors without principled guidance and defenders blind to the transparency--risk trade-off. We fill this gap with an information-theoretic framework that computes how much information can be safely disclosed, and enables auditors to gauge how close their methods come to the fundamental limit. Treating the mutual information $I(Z;T)$ between the observation $Z$ and the target property $T$ as the leaked bits per query, we show that achieving error $\varepsilon$ requires at least $\log(1/\varepsilon)/I(Z;T)$ queries, scaling linearly with the inverse leak rate and only logarithmically with the desired accuracy. Thus, even a modest increase in disclosure collapses the attack cost from polynomial to logarithmic in the desired accuracy. Experiments on seven LLMs across system-prompt leakage, jailbreak, and relearning attacks corroborate the theory: exposing answer tokens alone requires about a thousand queries; adding logits cuts this to about a hundred; and revealing the full thinking process trims it to a few dozen. Our results provide the first principled yardstick for balancing transparency and security when deploying LLMs.
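The bound $N \geq \log(1/\varepsilon)/I(Z;T)$ is simple enough to sanity-check numerically. A minimal sketch in Python; the per-query leak rates below are hypothetical values chosen only to reproduce the orders of magnitude reported in the abstract, not measurements from the paper:

```python
import math

def min_queries(leak_bits_per_query: float, error: float) -> float:
    """Lower bound on queries to reach the target error:
    N >= log2(1/error) / I(Z;T), with I(Z;T) in bits per query."""
    return math.log2(1.0 / error) / leak_bits_per_query

# Hypothetical leak rates (bits/query) for the three disclosure levels.
rates = {
    "answer tokens only": 0.007,
    "answer + logits": 0.07,
    "full thinking process": 0.3,
}
for signal, rate in rates.items():
    n = min_queries(rate, error=0.01)
    print(f"{signal}: >= {n:.0f} queries for 1% error")
```

With a 1% error target, these rates give roughly 950, 95, and 22 queries respectively, matching the thousand/hundred/dozens scaling; note the cost is linear in 1/I(Z;T) but only logarithmic in 1/ε.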


Key Contributions

  • Information-theoretic framework modeling adversarial LLM attacks as inference problems, with mutual information I(Z;T) as a principled 'bits leaked per query' metric
  • Proof that minimum queries to achieve error ε scales as log(1/ε)/I(Z;T), revealing a sharp phase transition: hiding all signals requires O(1/ε) queries while leaking any fixed bits collapses cost to O(log(1/ε))
  • Empirical validation on 7 LLMs (GPT-4, DeepSeek-R1, OLMo-2, Llama-4) confirming near-perfect inverse correlation between query count and leakage rate across jailbreak, system-prompt, and relearning scenarios
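The "bits leaked per query" metric can be estimated from empirical (Z, T) samples with a standard plug-in mutual-information estimator. A minimal sketch with toy data (the samples and the `mutual_information` helper are illustrative, not the paper's estimator):

```python
from collections import Counter
import math

def mutual_information(pairs):
    """Plug-in estimate of I(Z;T) in bits from (z, t) sample pairs:
    sum over (z,t) of p(z,t) * log2( p(z,t) / (p(z) p(t)) )."""
    n = len(pairs)
    joint = Counter(pairs)
    pz = Counter(z for z, _ in pairs)
    pt = Counter(t for _, t in pairs)
    mi = 0.0
    for (z, t), c in joint.items():
        p_zt = c / n
        mi += p_zt * math.log2(p_zt * n * n / (pz[z] * pt[t]))
    return mi

# Toy data: T is the binary refusal flag, Z a coarse observable signal
# that tracks it imperfectly (80% agreement).
samples = (
    [("refuse", 1)] * 40 + [("comply", 1)] * 10
    + [("refuse", 0)] * 10 + [("comply", 0)] * 40
)
print(f"I(Z;T) ~= {mutual_information(samples):.3f} bits/query")
```

Richer observables (logits, thinking tokens) enlarge the support of Z and so can only raise this estimate, which is exactly the transparency-risk trade-off the framework quantifies.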

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time
Models Evaluated
GPT-4 · DeepSeek-R1 · OLMo-2 · Llama-4
Applications
llm jailbreak attacks · system prompt extraction · llm safety auditing · unlearning robustness evaluation