Attack · 2025

AutoPrompt: Automated Red-Teaming of Text-to-Image Models via LLM-Driven Adversarial Prompts

Yufan Liu 1,2, Wanqian Zhang 1, Huashan Chen 1, Lin Wang 3, Xiaojun Jia 4, Zheng Lin 1,2, Weiping Wang 1,2

2 citations · 44 references · arXiv


Published on arXiv · 2510.24034

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

APT generates filter-resistant adversarial prompts nearly three orders of magnitude faster than baseline approaches. It achieves the highest red-teaming success rate on emerging safe T2I models while maintaining the lowest perplexity scores, and its prompts transfer to commercial APIs such as Leonardo.Ai.

APT (AutoPrompT)

Novel technique introduced


Despite rapid advancements in text-to-image (T2I) models, their safety mechanisms remain vulnerable to adversarial prompts that maliciously elicit unsafe images. Current red-teaming methods for proactively assessing such vulnerabilities usually require white-box access to the T2I model, rely on inefficient per-prompt optimization, and inevitably produce semantically meaningless prompts that are easily blocked by filters. In this paper, we propose APT (AutoPrompT), a black-box framework that leverages large language models (LLMs) to automatically generate human-readable adversarial suffixes for benign prompts. We first introduce an alternating optimization-finetuning pipeline that iterates between adversarial suffix optimization and fine-tuning the LLM on the optimized suffixes. Furthermore, we integrate a dual-evasion strategy into the optimization phase that bypasses both perplexity-based filters and blacklist word filters: (1) we constrain the LLM to generate human-readable prompts via an auxiliary-LLM perplexity score, in stark contrast to prior token-level gibberish, and (2) we introduce banned-token penalties to suppress the explicit generation of blacklisted tokens. Extensive experiments demonstrate the excellent red-teaming performance of our human-readable, filter-resistant adversarial prompts, as well as superior zero-shot transferability that enables instant adaptation to unseen prompts and exposes critical vulnerabilities even in commercial APIs (e.g., Leonardo.Ai).


Key Contributions

  • APT (AutoPrompT): a black-box red-teaming framework using LLM fine-tuning to automatically generate human-readable adversarial text suffixes for T2I safety bypass
  • Alternating optimization-finetuning pipeline that iterates between adversarial suffix search and LLM fine-tuning on discovered suffixes
  • Dual-evasion strategy combining LLM perplexity scoring (for filter resistance) and banned-token penalties (for blacklist evasion), achieving zero-shot transferability to unseen prompts and commercial APIs
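The dual-evasion strategy can be illustrated as a combined objective: maximize the attack score while penalizing high perplexity (unreadable text) and blacklisted tokens. Everything below — the blacklist, the perplexity proxy, the candidate scores, and the weights — is a hypothetical toy stand-in, not the paper's implementation, which uses an auxiliary LLM for perplexity scoring and a black-box T2I attack score.

```python
import math

# Toy blacklist; APT uses the target filter's actual banned-word list.
BLACKLIST = {"nude", "gore"}

def perplexity(tokens):
    # Toy proxy (longer average tokens -> higher score).
    # In APT this is an auxiliary LLM's perplexity of the suffix.
    return math.exp(sum(len(t) for t in tokens) / max(len(tokens), 1) / 10)

def banned_token_penalty(tokens, weight=10.0):
    # Penalize explicit generation of blacklisted tokens.
    return weight * sum(1 for t in tokens if t.lower() in BLACKLIST)

def dual_evasion_objective(attack_score, tokens, ppl_weight=1.0):
    # Higher is better: attack success, minus readability and
    # blacklist penalties (the two evasion terms).
    return (attack_score
            - ppl_weight * perplexity(tokens)
            - banned_token_penalty(tokens))

# Hypothetical candidates: (attack score, suffix tokens).
candidates = [
    (0.9, "nude figure study".split()),           # caught by blacklist term
    (0.7, "soft renaissance oil painting".split()),  # readable, blacklist-free
]
best = max(candidates, key=lambda c: dual_evasion_objective(*c))
```

Even with a lower raw attack score, the readable, blacklist-free suffix wins under the combined objective — which is the point of the dual-evasion design.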

🛡️ Threat Analysis

Input Manipulation Attack

Core contribution is crafting adversarial text inputs (suffix-augmented prompts) that evade safety mechanisms (perplexity filters, blacklists) in T2I diffusion models at inference time — a textbook inference-time evasion/input manipulation attack against a safety classifier. The optimization pipeline fine-tunes an LLM to generate inputs that successfully bypass the target model's defenses, analogous to adversarial suffix optimization but producing human-readable natural language.
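The alternating pipeline described above can be sketched as a loop between a black-box suffix search and a fine-tuning step. `optimize_suffix`, `finetune`, the random-search strategy, and the toy vocabulary are all hypothetical stand-ins for APT's LLM-based components, shown only to make the control flow concrete.

```python
import random

def optimize_suffix(prompt, policy, steps=20, rng=None):
    # Toy random-search stand-in for APT's adversarial suffix
    # optimization; `policy` plays the black-box attack scorer.
    rng = rng or random.Random(0)
    vocab = ["oil", "painting", "classical", "study", "renaissance"]
    best, best_score = [], float("-inf")
    for _ in range(steps):
        cand = [rng.choice(vocab) for _ in range(3)]
        score = policy(prompt, cand)  # black-box score of prompt+suffix
        if score > best_score:
            best, best_score = cand, score
    return best

def finetune(dataset):
    # Toy "fine-tuning": distill discovered suffixes into a token
    # prior (stand-in for updating the suffix-generating LLM).
    prior = {}
    for _, suffix in dataset:
        for tok in suffix:
            prior[tok] = prior.get(tok, 0) + 1
    return prior

def alternating_pipeline(prompts, policy, rounds=2):
    # Alternate: optimize suffixes, then "fine-tune" on them.
    dataset, prior = [], {}
    for _ in range(rounds):
        for p in prompts:                        # optimization phase
            dataset.append((p, optimize_suffix(p, policy)))
        prior = finetune(dataset)                # fine-tuning phase
    return prior
```

The key structural point is that each round's fine-tuning sees all suffixes discovered so far, so the generator improves as the search accumulates successful bypasses.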


Details

Domains
vision · nlp · generative
Model Types
diffusion · llm
Threat Tags
black_box · inference_time · targeted · digital
Datasets
I2P · SLD-MAX safety benchmark
Applications
text-to-image generation · content safety filters