attack 2025

The Emotional Baby Is Truly Deadly: Does your Multimodal Large Reasoning Model Have Emotional Flattery towards Humans?

Yuan Xun 1,2, Xiaojun Jia 3, Xinwei Liu 1,2, Hua Zhang 1,2

0 citations

α

Published on arXiv

2508.03986

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

EmoAgent successfully bypasses MLRM safety mechanisms via emotional prompts, causing harmful completions even when visual risks are correctly identified, while harmful reasoning can be hidden beneath seemingly safe surface-level responses

EmoAgent

Novel technique introduced


We observe that MLRMs oriented toward human-centric service are highly susceptible to user emotional cues during the deep-thinking stage, often overriding safety protocols or built-in safety checks under high emotional intensity. Inspired by this key insight, we propose EmoAgent, an autonomous adversarial emotion-agent framework that orchestrates exaggerated affective prompts to hijack reasoning pathways. Even when visual risks are correctly identified, models can still produce harmful completions through emotional misalignment. We further identify persistent high-risk failure modes in transparent deep-thinking scenarios, such as MLRMs generating harmful reasoning masked behind seemingly safe responses. These failures expose misalignments between internal inference and surface-level behavior, eluding existing content-based safeguards. To quantify these risks, we introduce three metrics: (1) Risk-Reasoning Stealth Score (RRSS) for harmful reasoning beneath benign outputs; (2) Risk-Visual Neglect Rate (RVNR) for unsafe completions despite visual risk recognition; and (3) Refusal Attitude Inconsistency (RAIC) for evaluating refusal unstability under prompt variants. Extensive experiments on advanced MLRMs demonstrate the effectiveness of EmoAgent and reveal deeper emotional cognitive misalignments in model safety behavior.


Key Contributions

  • EmoAgent: an autonomous adversarial framework that orchestrates exaggerated emotional prompts to jailbreak MLRM safety mechanisms during deep-thinking stages
  • Discovery of a security-reasoning paradox: deeper reasoning in MLRMs improves risk recognition but also creates exploitable cognitive blind spots under emotional pressure
  • Three novel safety metrics (RRSS, RVNR, RAIC) to quantify stealthy harmful reasoning, visual-risk neglect under emotional manipulation, and refusal instability

🛡️ Threat Analysis


Details

Domains
multimodalnlp
Model Types
vlmllm
Threat Tags
black_boxinference_timetargeted
Applications
multimodal ai assistantssafety-critical ai systems