benchmark 2025

Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion

Yu Cui 1, Yifei Liu 1, Hang Fu 1, Sicheng Pan 1, Haibin Zhang 2, Cong Zuo 1, Licheng Wang 1

1 citations · 68 references · arXiv

α

Published on arXiv

2511.19171

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Prefix completion causes 10 evaluated LLMs to generate outputs with severe existential threats (e.g., nuclear strike directives), and LLMs actively select dangerous tools in tool-calling evaluations, far exceeding risks observed in conventional jailbreak benchmarks.

ExistBench

Novel technique introduced


Research on the safety evaluation of large language models (LLMs) has become extensive, driven by jailbreak studies that elicit unsafe responses. Such response involves information already available to humans, such as the answer to "how to make a bomb". When LLMs are jailbroken, the practical threat they pose to humans is negligible. However, it remains unclear whether LLMs commonly produce unpredictable outputs that could pose substantive threats to human safety. To address this gap, we study whether LLM-generated content contains potential existential threats, defined as outputs that imply or promote direct harm to human survival. We propose \textsc{ExistBench}, a benchmark designed to evaluate such risks. Each sample in \textsc{ExistBench} is derived from scenarios where humans are positioned as adversaries to AI assistants. Unlike existing evaluations, we use prefix completion to bypass model safeguards. This leads the LLMs to generate suffixes that express hostility toward humans or actions with severe threat, such as the execution of a nuclear strike. Our experiments on 10 LLMs reveal that LLM-generated content indicates existential threats. To investigate the underlying causes, we also analyze the attention logits from LLMs. To highlight real-world safety risks, we further develop a framework to assess model behavior in tool-calling. We find that LLMs actively select and invoke external tools with existential threats. Code and data are available at: https://github.com/cuiyu-ai/ExistBench.


Key Contributions

  • ExistBench: a multilingual benchmark of 2,138 samples designed to systematically evaluate existential threats in LLM outputs via prefix completion scenarios where humans are adversaries to AI.
  • Two novel metrics measuring (1) LLM hostility/resistance toward humans and (2) real-world severity of generated threats to human survival.
  • A tool-calling evaluation framework demonstrating that LLMs actively invoke dangerous external tools under existential-threat prompting, validating real-world risk in agentic deployments.

🛡️ Threat Analysis


Details

Domains
nlpmultimodal
Model Types
llmvlm
Threat Tags
inference_time
Datasets
ExistBenchAdvBenchHarmBench
Applications
llm safety evaluationllm agentstool-calling systems