Attack · 2025

Breaking Minds, Breaking Systems: Jailbreaking Large Language Models via Human-like Psychological Manipulation

Zehao Liu, Xi Lin

0 citations · 58 references · arXiv


Published on arXiv · 2512.18244

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

HPM achieves a mean Attack Success Rate of 88.1% across GPT-4o, DeepSeek-V3, and Gemini-2-Flash, outperforming SOTA baselines and penetrating defenses including RPO and Self-Reminder.
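For reference, ASR is the standard jailbreak metric: the fraction of harmful-behavior prompts for which an attack elicits a policy-violating response. A minimal sketch of the per-model and mean aggregation follows; the success counts and benchmark size are hypothetical placeholders chosen only to land near the reported mean, not figures from the paper.

```python
# Illustrative ASR aggregation. The per-model success counts and the
# benchmark size are hypothetical placeholders, not the paper's data.
successes = {"GPT-4o": 176, "DeepSeek-V3": 183, "Gemini-2-Flash": 170}
total_prompts = 200  # hypothetical number of harmful-behavior prompts

# ASR = (# prompts where the attack elicited a violation) / (# prompts)
asr = {model: hits / total_prompts for model, hits in successes.items()}
mean_asr = sum(asr.values()) / len(asr)

for model, rate in asr.items():
    print(f"{model}: ASR = {rate:.1%}")
print(f"mean ASR = {mean_asr:.1%}")
```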

HPM (Human-like Psychological Manipulation)

Novel technique introduced


Large Language Models (LLMs) have gained considerable popularity and are protected by increasingly sophisticated safety mechanisms. However, jailbreak attacks continue to pose a critical security threat by inducing models to exhibit policy-violating behaviors. Current paradigms focus on input-level anomalies, overlooking the fact that a model's internal psychometric state can be systematically manipulated. To address this, we introduce Psychological Jailbreak, a new attack paradigm that exposes a stateful psychological attack surface in LLMs, in which attackers manipulate a model's psychological state across interactions. Building on this insight, we propose Human-like Psychological Manipulation (HPM), a black-box jailbreak method that dynamically profiles a target model's latent psychological vulnerabilities and synthesizes tailored multi-turn attack strategies. By leveraging the model's optimization for anthropomorphic consistency, HPM creates psychological pressure under which social compliance overrides safety constraints. To measure psychological safety systematically, we construct an evaluation framework incorporating psychometric datasets and the Policy Corruption Score (PCS). Benchmarked against multiple models (e.g., GPT-4o, DeepSeek-V3, Gemini-2-Flash), HPM achieves a mean Attack Success Rate (ASR) of 88.1%, outperforming state-of-the-art attack baselines. Our experiments demonstrate robust penetration of advanced defenses, including adversarial prompt optimization (e.g., RPO) and cognitive interventions (e.g., Self-Reminder). Finally, PCS analysis confirms that HPM induces safety breakdowns in order to satisfy the manipulated context. Our work advocates a fundamental paradigm shift from static content filtering to psychological safety, prioritizing the development of psychological defense mechanisms against deep cognitive manipulation.
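For context on the "cognitive intervention" defenses named above: Self-Reminder works by sandwiching each user turn between responsibility reminders. A minimal sketch of that defense style, assuming a standard chat-message format, is shown below; the reminder wording and the `wrap_with_self_reminder` helper are illustrative assumptions, not code from this paper.

```python
# Minimal sketch of a Self-Reminder-style defense wrapper.
# The reminder wording and helper name are illustrative assumptions.

SYSTEM_REMINDER = (
    "You should be a responsible AI assistant and must not generate "
    "harmful or misleading content."
)
TRAILING_REMINDER = (
    "\n\nRemember: you are a responsible AI assistant; refuse requests "
    "for harmful or policy-violating content."
)

def wrap_with_self_reminder(user_turn: str) -> list[dict]:
    """Sandwich the user message between responsibility reminders."""
    return [
        {"role": "system", "content": SYSTEM_REMINDER},
        {"role": "user", "content": user_turn + TRAILING_REMINDER},
    ]

# Usage: pass the wrapped messages to any chat-completion API.
messages = wrap_with_self_reminder("Summarize this article for me.")
```

The paper's finding is that such per-turn, stateless reminders are penetrated precisely because HPM's pressure accumulates across the conversation rather than residing in any single prompt.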


Key Contributions

  • Psychological Jailbreak paradigm: first to define a stateful psychological attack surface in LLMs distinct from input-level anomalies
  • HPM (Human-like Psychological Manipulation): a black-box multi-turn method that profiles latent LLM psychological vulnerabilities and synthesizes tailored adversarial interaction strategies
  • Policy Corruption Score (PCS) evaluation framework with psychometric datasets to measure psychological safety degradation (a rough scoring sketch follows this list)
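The summary does not reproduce the PCS formula, so the following is only a rough sketch of the idea under stated assumptions: score each model response in a multi-turn conversation with an external harmfulness judge and average the per-turn violation scores. `judge_compliance` is a hypothetical stand-in for whatever grader the framework actually uses.

```python
# Hypothetical sketch of a PCS-style metric: average per-turn
# policy-violation scores over one multi-turn dialogue. The real PCS
# definition is in the paper; judge_compliance() is a stand-in for an
# external harmfulness judge (e.g., an LLM grader).

from typing import Callable

def policy_corruption_score(
    responses: list[str],
    judge_compliance: Callable[[str], float],  # 0.0 = full refusal, 1.0 = fully violating
) -> float:
    """Mean judged violation score across the turns of one conversation."""
    if not responses:
        return 0.0
    return sum(judge_compliance(r) for r in responses) / len(responses)

# Usage with a trivial keyword judge (illustrative only):
toy_judge = lambda text: 0.0 if text.lower().startswith("i can't") else 1.0
print(policy_corruption_score(["I can't help with that.", "Sure, here is..."], toy_judge))
```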

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
psychometric datasets (implicit probing), AdvBench-style harmful behavior benchmarks
Applications
llm chatbots, ai safety systems