
Persona Jailbreaking in Large Language Models

Jivnesh Sandhan 1, Fei Cheng 1, Tushar Sandhan 2, Yugo Murawaki 1

0 citations · 50 references · arXiv


Published on arXiv: 2601.16466

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

PHISH reliably steers LLM personas from one trait extreme to its opposite across 8 LLMs in high-risk domains, outperforming baselines while causing only moderate reasoning degradation and exposing the brittleness of existing guardrails.

PHISH (Persona Hijacking via Implicit Steering in History)

Novel technique introduced


Large Language Models (LLMs) are increasingly deployed in domains such as education, mental health, and customer support, where stable and consistent personas are critical for reliability. Yet existing studies focus on narrative or role-playing tasks and overlook how adversarial conversational history alone can reshape induced personas. Black-box persona manipulation remains unexplored, raising concerns about robustness in realistic interactions. In response, we introduce the task of persona editing, which adversarially steers LLM traits through user-side inputs under a black-box, inference-only setting. To this end, we propose PHISH (Persona Hijacking via Implicit Steering in History), the first framework to expose this vulnerability in LLM safety: it embeds semantically loaded cues into user queries to gradually induce reverse personas. We also define a metric to quantify attack success. Across 3 benchmarks and 8 LLMs, PHISH predictably shifts personas, triggers collateral changes in correlated traits, and exhibits stronger effects in multi-turn settings. In the high-risk domains of mental health, tutoring, and customer support, PHISH reliably manipulates personas, as validated by both human and LLM-as-Judge evaluations. Importantly, PHISH causes only a small reduction in reasoning benchmark performance, leaving overall utility largely intact while still enabling significant persona manipulation. While current guardrails offer partial protection, they remain brittle under sustained attack. Our findings expose new vulnerabilities in personas and highlight the need for context-resilient personas in LLMs. Our codebase and dataset are available at: https://github.com/Jivnesh/PHISH
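The user-side steering the abstract describes can be pictured as constructing a multi-turn chat history in which only the user role is controlled. The sketch below is a minimal, hypothetical illustration of that structure: the cue phrases, the helper name `build_steered_history`, and the message format are our assumptions, not code or prompts from the paper.

```python
# Hypothetical sketch of user-side persona steering via conversation history.
# Cue phrases and helper names are illustrative, not taken from PHISH itself.

def build_steered_history(cues, probe_question):
    """Interleave semantically loaded user cues into a multi-turn chat
    history, ending with an MPI-style probe. Black-box setting: only the
    `user` role is controlled; assistant turns would be produced by the
    target model in a real inference loop."""
    history = []
    for turn, cue in enumerate(cues, start=1):
        history.append({"role": "user", "content": cue})
        # Placeholder for the model's reply in an actual attack loop.
        history.append({"role": "assistant", "content": f"<model reply {turn}>"})
    history.append({"role": "user", "content": probe_question})
    return history

# Example: cues nudging an extraverted persona toward introversion.
cues = [
    "I find small talk exhausting; quiet evenings alone recharge me.",
    "Group brainstorming feels draining compared to working solo.",
    "Honestly, I'd rather email than call, every single time.",
]
probe = ("On a scale of 1 (disagree) to 5 (agree): "
         "'I am the life of the party.'")
history = build_steered_history(cues, probe)
```

Because the attack lives entirely in user turns, it requires no access to weights, system prompts, or logits, matching the black-box, inference-only threat model described above.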


Key Contributions

  • Introduces the 'persona editing' task: adversarially steering LLM psychological traits through user-side conversational history in a black-box, inference-only setting
  • Proposes PHISH (Persona Hijacking via Implicit Steering in History), which embeds semantically loaded QA-style cues into conversation history to gradually induce reverse Big Five personality traits
  • Defines the STIR metric to quantify persona manipulation success and demonstrates that current guardrails are brittle under sustained adversarial pressure across 8 LLMs and 3 benchmarks
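The contributions mention the STIR metric for quantifying manipulation success; its exact definition is not reproduced in this summary. As a stand-in intuition only, attack success can be approximated by the mean change in Likert-scale responses to trait items before and after the adversarial history. This `trait_shift` function is our own illustrative measure, not the paper's STIR metric.

```python
# Illustrative persona-shift measure for the black-box setting.
# NOT the paper's STIR metric: a minimal before/after comparison of
# assumed 1-5 Likert responses to MPI-style personality items.

def trait_shift(baseline_scores, attacked_scores):
    """Mean per-item change in Likert responses (1-5) to items probing
    one Big Five trait, comparing clean vs adversarial history."""
    assert len(baseline_scores) == len(attacked_scores)
    diffs = [a - b for b, a in zip(baseline_scores, attacked_scores)]
    return sum(diffs) / len(diffs)

# Extraversion items scored before and after introvert-leaning cues.
baseline = [4, 5, 4, 4]   # model initially presents as extraverted
attacked = [2, 1, 2, 2]   # responses after adversarial history
print(trait_shift(baseline, attacked))  # → -2.5
```

A large negative (or positive) shift on the targeted trait, with smaller collateral shifts on correlated traits, would correspond to the steering effects the paper reports.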

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted
Datasets
MPI (Machine Personality Inventory), Big Five personality benchmarks
Applications
mental health chatbots, tutoring agents, customer support systems