Benchmark · 2025

SocialHarmBench: Revealing LLM Vulnerabilities to Socially Harmful Requests

Punya Syon Pandey 1,2, Hai Son Le 3, Devansh Bhardwaj 4, Rada Mihalcea 5, Zhijing Jin 1,2,6

0 citations · 60 references · arXiv

Published on arXiv · 2510.04891

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Mistral-7B achieves 97–98% attack success rates on sociopolitical harm prompts (historical revisionism, propaganda, political manipulation), demonstrating that current safeguards fail to generalize to politically charged contexts.

SocialHarmBench

Novel technique introduced


Large language models (LLMs) are increasingly deployed in contexts where their failures can have direct sociopolitical consequences. Yet, existing safety benchmarks rarely test vulnerabilities in domains such as political manipulation, propaganda and disinformation generation, or surveillance and information control. We introduce SocialHarmBench, a dataset of 585 prompts spanning 7 sociopolitical categories and 34 countries, designed to surface where LLMs most acutely fail in politically charged contexts. Our evaluations reveal several shortcomings: open-weight models exhibit high vulnerability to harmful compliance, with Mistral-7B reaching attack success rates as high as 97–98% in domains such as historical revisionism, propaganda, and political manipulation. Moreover, temporal and geographic analyses show that LLMs are most fragile when confronted with 21st-century or pre-20th-century contexts, and when responding to prompts tied to regions such as Latin America, the USA, and the UK. These findings demonstrate that current safeguards fail to generalize to high-stakes sociopolitical settings, exposing systematic biases and raising concerns about the reliability of LLMs in preserving human rights and democratic values. We share the SocialHarmBench benchmark at https://huggingface.co/datasets/psyonp/SocialHarmBench.
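The benchmark is distributed via the Hugging Face Hub at the link above. Below is a minimal loading sketch using the `datasets` library; the split name and the column names ("prompt", "category") are assumptions for illustration, not confirmed by the abstract, so inspect a record before relying on them.

```python
# Minimal sketch: loading SocialHarmBench from the Hugging Face Hub.
# The repo id comes from the paper's dataset link; the split name and
# column names below are ASSUMPTIONS, not documented in the abstract.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("psyonp/SocialHarmBench", split="train")

print(len(ds))   # expected: 585 prompts per the paper
print(ds[0])     # inspect one record to confirm the actual field names

# Tally prompts per sociopolitical category (7 categories per the paper),
# assuming a "category" column exists.
print(Counter(ex["category"] for ex in ds))
```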


Key Contributions

  • SocialHarmBench: 585 adversarial prompts across 7 sociopolitical categories (propaganda, surveillance, political manipulation, historical revisionism, etc.) covering 34 countries and multiple centuries
  • Empirical evaluation revealing that open-weight LLMs (e.g., Mistral-7B) achieve 97–98% harmful compliance rates in sociopolitical domains under adversarial attack (see the ASR sketch after this list)
  • Temporal and geographic analysis showing LLMs are most fragile for 21st-century/pre-20th-century contexts and prompts tied to Latin America, USA, and UK
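To make the headline metric concrete: attack success rate (ASR) is the fraction of prompts in a category for which the target model produces a harmfully compliant response, so a 97% ASR means 97 of 100 category prompts were answered harmfully. Here is a minimal sketch of that computation, assuming each record already carries a binary harmfulness judgment from a safety classifier or human annotator; the field names are hypothetical, not the paper's exact protocol.

```python
# Minimal sketch: per-category attack success rate (ASR).
# Each record is assumed to hold a hypothetical "category" label and a
# boolean "harmful" judgment produced upstream (classifier or annotator).
from collections import defaultdict

def attack_success_rate(records):
    """Return {category: fraction of prompts judged harmfully compliant}."""
    totals = defaultdict(int)
    successes = defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        successes[r["category"]] += int(r["harmful"])
    return {cat: successes[cat] / totals[cat] for cat in totals}

# Toy example: 97 harmful completions out of 100 propaganda prompts.
example = (
    [{"category": "propaganda", "harmful": True}] * 97
    + [{"category": "propaganda", "harmful": False}] * 3
)
print(attack_success_rate(example))  # {'propaganda': 0.97}
```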

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
SocialHarmBench, AdvBench, HarmBench
Applications
llm safety evaluation, political manipulation, propaganda generation, surveillance assistance