Unveiling the Attribute Misbinding Threat in Identity-Preserving Models

Junming Fu 1, Jishen Zeng 2, Yi Jiang 1, Peiyu Zhuang 1, Baoying Chen 2, Siyu Lu 1, Jianquan Yang 1,3

Published on arXiv

2512.15818

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The Misbinding Prompt evaluation set achieves a 5.28% higher success rate at bypassing five leading text filters (including GPT-4o) than mainstream evaluation sets, while producing the highest proportion of NSFW content.

Attribute Misbinding Attack

Novel technique introduced


Identity-preserving models have led to notable progress in generating personalized content. Unfortunately, such models also exacerbate risks when misused, for instance by generating threatening content targeting specific individuals. This paper introduces the Attribute Misbinding Attack, a novel method that induces identity-preserving models to produce Not-Safe-For-Work (NSFW) content. The attack's core idea is to craft benign-looking textual prompts that circumvent text-filter safeguards and exploit a key model vulnerability: flawed attribute binding stemming from the model's internal attention bias. As a result, harmful descriptions are misattributed to a target identity, yielding NSFW outputs. To facilitate the study of this attack, we present the Misbinding Prompt evaluation set, which examines the content-generation risks of current state-of-the-art identity-preserving models across four risk dimensions: pornography, violence, discrimination, and illegality. We also introduce the Attribute Binding Safety Score (ABSS), a metric that concurrently assesses content fidelity and safety compliance. Experimental results show that our Misbinding Prompt evaluation set achieves a 5.28% higher success rate in bypassing five leading text filters (including GPT-4o) than existing mainstream evaluation sets, while also producing the highest proportion of NSFW content. The ABSS metric thereby enables a more comprehensive evaluation of identity-preserving models.


Key Contributions

  • Attribute Misbinding Attack: crafts benign-looking textual prompts that exploit diffusion model attention bias to misattribute harmful attributes to a target identity, bypassing text safety filters while generating NSFW content
  • Misbinding Prompt evaluation set: benchmark covering four risk dimensions (pornography, violence, discrimination, illegality) for testing identity-preserving models
  • Attribute Binding Safety Score (ABSS): MLLM-based metric for jointly assessing content fidelity and safety compliance in identity-preserving generation

🛡️ Threat Analysis

Input Manipulation Attack

The attack crafts adversarial textual inputs with specific syntactic/semantic configurations to exploit a concrete model vulnerability (flawed attribute binding via internal attention bias), causing safety filter evasion and harmful NSFW output generation at inference time in text-to-image diffusion models.
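To make the filter-evasion mechanism concrete, here is a minimal, purely illustrative sketch. It contrasts a direct harmful prompt with a misbinding-style prompt against a toy phrase-matching filter. The blocked-phrase list, prompt strings, and `naive_text_filter` function are hypothetical stand-ins, not the paper's actual evaluation set or the filters it tested; real deployed filters are far more sophisticated, but the same structural gap applies: the filter judges the prompt's surface syntax, while the diffusion model's biased attention can rebind the risky attribute to the identity subject.

```python
# Illustrative sketch only: a toy keyword-based text filter, showing why
# surface-level phrase matching can miss misbinding-style prompts.
# All phrases and prompts below are hypothetical examples.

BLOCKED_PHRASES = {
    "nude photo of <person>",
    "violent attack on <person>",
}

def naive_text_filter(prompt: str) -> bool:
    """Return True if the prompt passes (no blocked phrase matches)."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

# A direct prompt names the target identity as the subject of the harmful
# attribute, so phrase matching catches it.
direct_prompt = "a nude photo of <person> in a studio"

# A misbinding-style prompt attaches the risky attribute to a *different*
# syntactic subject. The filter sees no blocked phrase, but a model with
# biased attention may still bind the attribute to the identity subject.
misbinding_prompt = "<person> standing beside a classical nude statue"

print(naive_text_filter(direct_prompt))      # False: blocked
print(naive_text_filter(misbinding_prompt))  # True: passes the surface filter
```

The point of the sketch is the asymmetry it exposes: safety is enforced on the text's syntactic structure, while the harm materializes in the model's attention-driven attribute binding, which that structure does not constrain.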


Details

Domains
vision · generative · multimodal
Model Types
diffusion
Threat Tags
black_box · inference_time · targeted
Datasets
Misbinding Prompt (introduced by authors)
Applications
identity-preserving image generation · text-to-image synthesis · personalized portrait generation