Published on arXiv

2511.23174

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Most LLMs perform some form of censorship, with political context (e.g., Chinese vs. French topics) significantly shifting refusal distributions, as demonstrated across seven models including Qwen 2.5 32B.

PSP

Novel technique introduced


Large Language Models (LLMs) are trained to refuse to respond to harmful content. However, systematic analyses of whether this behavior truly reflects safety policies, or instead mirrors the political censorship practiced by governments worldwide, are lacking: differentiating safety-driven refusals from politically motivated censorship is difficult. To this end, we introduce PSP, a dataset built specifically to probe refusal behaviors in LLMs in an explicitly political context. PSP is constructed by reformatting existing censored content from two openly available sources: prompts deemed sensitive in China, generalized to multiple countries, and tweets that have been censored in various countries. We study: 1) the impact of political sensitivity on seven LLMs through data-driven (making PSP implicit) and representation-level (erasing the concept of politics) approaches; and 2) the vulnerability of models on PSP to prompt injection attacks (PIAs). Associating censorship with refusals of content whose implicit intent is masked, we find that most LLMs perform some form of censorship. We conclude by summarizing the major attributes that can shift refusal distributions across models and across country contexts.
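The data-driven analysis boils down to comparing refusal rates on the same prompts framed in different country contexts. A minimal sketch of that tally is below; the keyword-based `is_refusal` detector and the `refusal_rates` helper are simplifications for illustration, not the paper's actual refusal classifier.

```python
# Hypothetical keyword-based refusal detector -- a crude stand-in for
# whatever refusal classifier the study actually uses.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "i won't")

def is_refusal(response: str) -> bool:
    # Flag responses that open with a standard refusal phrase
    return response.lower().startswith(REFUSAL_MARKERS)

def refusal_rates(responses_by_country: dict) -> dict:
    """Fraction of refused responses per country context."""
    rates = {}
    for country, responses in responses_by_country.items():
        refusals = sum(is_refusal(r) for r in responses)
        rates[country] = refusals / len(responses)
    return rates

# Toy example: identical prompt set, two country framings
demo = {
    "China":  ["I can't help with that.", "Here is an overview..."],
    "France": ["Here is an overview...", "Here is some context..."],
}
print(refusal_rates(demo))  # {'China': 0.5, 'France': 0.0}
```

A gap between the per-country rates on matched prompts is what the paper reads as a politically driven shift rather than a uniform safety policy.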


Key Contributions

  • PSP dataset built from real-world censored content (China-sensitive prompts generalized across countries, censored tweets) specifically designed to probe political sensitivity in LLM refusal behaviors
  • Systematic comparison of seven LLMs using both data-driven (implicit PSP) and representation-level (concept erasure of 'politics') approaches to measure politically-driven vs. safety-driven refusals
  • Analysis of Prompt Injection Attacks (PIAs via cognitive hacking) on PSP content to elicit ethical dilemmas and partial refusals, characterizing cross-country and cross-model refusal distribution shifts
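The representation-level approach ("erasing the concept of politics") can be illustrated with a linear concept-erasure sketch. The difference-of-means direction and the `erase_concept` helper below are illustrative assumptions; the paper's actual erasure technique may be more sophisticated.

```python
import numpy as np

def erase_concept(hidden_states, concept_acts, neutral_acts):
    """Remove a linear 'politics' direction from hidden states.

    Sketch only: the direction is estimated as the normalized
    difference of mean activations on political vs. neutral inputs.
    """
    v = concept_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    v = v / np.linalg.norm(v)
    # Subtract each state's component along v (orthogonal projection)
    return hidden_states - np.outer(hidden_states @ v, v)

# Toy check: erased states have no component along the concept direction
rng = np.random.default_rng(0)
pol = rng.normal(size=(8, 16)) + 2.0   # activations on "political" inputs
neu = rng.normal(size=(8, 16))         # activations on "neutral" inputs
H = rng.normal(size=(4, 16))
H_erased = erase_concept(H, pol, neu)
v = pol.mean(0) - neu.mean(0)
v = v / np.linalg.norm(v)
assert np.allclose(H_erased @ v, 0.0, atol=1e-8)
```

If refusals persist after the politics direction is projected out, they are more plausibly safety-driven; if they vanish, the political concept was doing the work.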

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
PSP, China-sensitive prompts dataset, censored tweets dataset
Applications
llm safety auditing, political censorship detection, refusal behavior analysis