Patronus: Safeguarding Text-to-Image Models against White-Box Adversaries
Xinfeng Li 1,2, Shengyuan Pang 2, Jialin Wu 2, Jiangyi Deng 2, Huanlong Zhong 2, Yanjiao Chen 2, Jie Zhang 3, Wenyuan Xu 2
Published on arXiv: 2510.16581
Transfer Learning Attack
OWASP ML Top 10 — ML07
Key Finding
Patronus maintains safe content generation performance while remaining resilient against various white-box fine-tuning attacks that bypass standard safety measures in T2I models.
Patronus
Novel technique introduced
Text-to-image (T2I) models, though exhibiting remarkable creativity in image generation, can be exploited to produce unsafe images. Existing safety measures, e.g., content moderation or model alignment, fail against white-box adversaries who know and can adjust model parameters, e.g., by fine-tuning. This paper presents a novel defensive framework, named Patronus, which equips T2I models with holistic protection against white-box adversaries. Specifically, we design an internal moderator that decodes unsafe input features into zero vectors while preserving the decoding performance of benign input features. Furthermore, we strengthen model alignment with a carefully designed non-fine-tunable learning mechanism, ensuring that the T2I model cannot be compromised by malicious fine-tuning. Extensive experiments confirm that performance on safe content generation remains intact, that unsafe content generation is effectively rejected, and that Patronus stays resilient against various fine-tuning attacks by white-box adversaries.
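The internal-moderator idea can be illustrated with a minimal toy sketch: a detector flags unsafe feature vectors, and flagged features are replaced with the zero vector before decoding, so the downstream decoder has nothing unsafe to render. The threshold-based detector and plain-list features below are illustrative assumptions, not the paper's actual architecture.

```python
def is_unsafe(features, unsafe_threshold=0.8):
    """Toy stand-in for a learned unsafe-concept detector (hypothetical):
    flags a feature vector whose mean activation exceeds a threshold."""
    return sum(features) / len(features) > unsafe_threshold

def moderate(features):
    """Decode-side moderation: unsafe features become the zero vector,
    benign features pass through unchanged."""
    if is_unsafe(features):
        return [0.0] * len(features)
    return features

benign = [0.1, 0.2, 0.3]
unsafe = [0.9, 0.95, 0.99]

print(moderate(benign))  # benign features are preserved
print(moderate(unsafe))  # unsafe features are zeroed out
```

In the actual framework the moderation happens inside the model's decoding path rather than as a separable wrapper, which is what makes it harder for a white-box adversary to simply strip it out.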
Key Contributions
- Internal moderator that projects unsafe input features to zero vectors while preserving benign feature decoding performance
- Non-fine-tunable learning mechanism that makes safety alignment resistant to malicious fine-tuning by white-box adversaries
- Holistic defensive framework (Patronus) validated against multiple fine-tuning attack strategies on T2I models
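The non-fine-tunable learning idea can be sketched on a one-parameter toy problem: while optimizing the benign objective, simulate one adversarial fine-tuning step on an "unsafe" objective and penalize how well the adversary would do after that step. The scalar quadratic losses, learning rates, and penalty weight below are illustrative assumptions, not the paper's loss design.

```python
def grad(f, theta, eps=1e-5):
    """Central-difference numeric gradient of a scalar function."""
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

def safe_loss(theta):
    """Stand-in for the benign training objective (optimum at theta = 1)."""
    return (theta - 1.0) ** 2

def unsafe_loss(theta):
    """Stand-in for the adversary's fine-tuning objective (optimum at theta = -1)."""
    return (theta + 1.0) ** 2

def combined(theta, ft_lr=0.1, penalty=0.5):
    """Benign loss minus a penalty-weighted reward for the adversary:
    minimizing this keeps benign performance while making a simulated
    fine-tuning step land as far from the adversary's optimum as possible."""
    theta_ft = theta - ft_lr * grad(unsafe_loss, theta)  # simulated attack step
    return safe_loss(theta) - penalty * unsafe_loss(theta_ft)

def train(theta=0.0, lr=0.05, steps=300):
    """Gradient descent on the combined objective (toy outer loop)."""
    for _ in range(steps):
        theta -= lr * grad(combined, theta)
    return theta

theta = train()
print(round(theta, 3))  # settles near the benign optimum, pushed away from the adversary's
```

The real mechanism operates on high-dimensional model parameters and many simulated fine-tuning trajectories, but the structure is the same: the outer objective differentiates through a simulated inner fine-tuning step so that malicious fine-tuning starts from a deliberately unhelpful point.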
🛡️ Threat Analysis
The primary adversarial threat is a white-box adversary who fine-tunes the T2I model to circumvent its safety alignment, a classic transfer learning attack. The core defense contribution, the non-fine-tunable learning mechanism, directly addresses backdoors and safety bypasses that exploit the fine-tuning process, which is the defining characteristic of ML07.