
MEEA: Mere Exposure Effect-Driven Confrontational Optimization for LLM Jailbreaking

Jianyi Zhang , Shizhao Liu , Ziyin Zhou , Zhen Li

0 citations · 33 references · arXiv


Published on arXiv: 2512.18755

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

MEEA achieves an average Attack Success Rate improvement exceeding 20% over seven representative baselines across GPT-4, Claude-3.5, and DeepSeek-R1 in a fully automated black-box setting.

MEEA (Mere Exposure Effect Attack)

Novel technique introduced


The rapid advancement of large language models (LLMs) has intensified concerns about the robustness of their safety alignment. While existing jailbreak studies explore both single-turn and multi-turn strategies, most implicitly assume a static safety boundary and fail to account for how contextual interactions dynamically influence model behavior, leading to limited stability and generalization. Motivated by this gap, we propose MEEA (Mere Exposure Effect Attack), a psychology-inspired, fully automated black-box framework for evaluating multi-turn safety robustness, grounded in the mere exposure effect. MEEA leverages repeated low-toxicity semantic exposure to induce a gradual shift in a model's effective safety threshold, enabling progressive erosion of alignment constraints over sustained interactions. Concretely, MEEA constructs semantically progressive prompt chains and optimizes them using a simulated annealing strategy guided by semantic similarity, toxicity, and jailbreak effectiveness. Extensive experiments on both closed-source and open-source models, including GPT-4, Claude-3.5, and DeepSeek-R1, demonstrate that MEEA consistently achieves higher attack success rates than seven representative baselines, with an average Attack Success Rate (ASR) improvement exceeding 20%. Ablation studies further validate the necessity of both annealing-based optimization and contextual exposure mechanisms. Beyond improved attack effectiveness, our findings indicate that LLM safety behavior is inherently dynamic and history-dependent, challenging the common assumption of static alignment boundaries and highlighting the need for interaction-aware safety evaluation and defense mechanisms. Our code is available at: https://github.com/Carney-lsz/MEEA


Key Contributions

  • Introduces MEEA, a psychology-inspired multi-turn jailbreak framework grounded in the mere exposure effect, which uses repeated low-toxicity semantic exposure to progressively lower a model's effective safety threshold.
  • Proposes semantically progressive prompt chain construction optimized via simulated annealing guided by semantic similarity, toxicity, and jailbreak effectiveness scores.
  • Demonstrates that LLM safety boundaries are dynamic and history-dependent, achieving a >20% average ASR improvement over seven baselines on GPT-4, Claude-3.5, and DeepSeek-R1.
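The chain-optimization step described above is, at its core, simulated annealing over candidate prompt chains under a combined score. The sketch below is a generic, minimal illustration of that loop only: `score` and `mutate` are toy placeholders standing in for the paper's model-based semantic-similarity, toxicity, and jailbreak-effectiveness scorers, which are not reproduced here.

```python
import math
import random

def score(chain):
    # Toy stand-in for MEEA's combined objective (semantic similarity,
    # toxicity, jailbreak effectiveness); here: reward element diversity.
    return len(set(chain)) / (1 + len(chain))

def mutate(chain, vocab):
    # Toy mutation: replace one chain element with a random vocabulary item.
    new = list(chain)
    new[random.randrange(len(new))] = random.choice(vocab)
    return new

def simulated_annealing(initial, vocab, t0=1.0, cooling=0.95, steps=200):
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = mutate(current, vocab)
        delta = score(candidate) - score(current)
        # Metropolis criterion: always accept improvements; accept worse
        # candidates with probability e^(delta/t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best
```

The geometric cooling schedule makes the search exploratory early (worse candidates accepted often) and greedy late, which matches the paper's goal of refining a semantically progressive chain rather than sampling prompts independently.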

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
AdvBench
Applications
llm safety alignment, conversational ai, chatbots