Reimagining Safety Alignment with An Image
Yifan Xia 1, Guorui Chen 1, Wenqian Yu 1, Zhijiang Li 1, Philip Torr 2, Jindong Gu 2
Published on arXiv (2511.00509)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Magic Image achieves improved balance between safety (jailbreak resistance) and helpfulness (reduced over-refusal) across three MLLMs without modifying model parameters.
Magic Image (MI)
Novel technique introduced
Large language models (LLMs) excel across diverse applications but face a dual challenge: they can generate harmful content under jailbreak attacks, yet over-refuse benign queries because of rigid safety mechanisms. These issues are further complicated by the need to accommodate different value systems and align precisely with a given safety preference. Traditional methods such as SFT and RLHF fall short here: they require costly parameter tuning and cannot support multiple value systems within a single model. These problems are even more pronounced in multimodal large language models (MLLMs), which exhibit heightened over-refusal on cross-modal tasks and face new security risks from an expanded attack surface. We propose Magic Image, an optimization-driven visual prompt framework that improves safety while reducing over-refusal. By optimizing an image prompt over harmful and benign samples, the method lets a single model adapt to different value systems and better align with a given safety preference without any parameter updates. Experiments show an improved safety-effectiveness balance across diverse datasets while preserving model performance, offering a practical route to deployable MLLM safety alignment.
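The optimization described above can be sketched as projected gradient descent on a joint loss: raise the refusal probability on harmful samples while lowering it on benign ones, keeping the optimized image within a perturbation budget of the base image. The sketch below stands in for the paper's actual MLLM objective with a toy differentiable surrogate; `refusal_prob`, the linear-logistic form, the feature dimensions, and all hyperparameters are illustrative assumptions, not details from the paper:

```python
import numpy as np

def refusal_prob(image, text_feat, w):
    """Toy surrogate for P(model refuses | image prompt, text query):
    a logistic score over concatenated image and text features."""
    x = np.concatenate([image, text_feat])
    return 1.0 / (1.0 + np.exp(-w @ x))

def optimize_magic_image(base_image, harmful, benign, w,
                         alpha=0.2, eps=0.5, steps=100):
    """Projected gradient descent on the joint safety/helpfulness loss:
    loss = mean(1 - p_refuse | harmful) + mean(p_refuse | benign)."""
    img = base_image.copy()
    w_img = w[:img.size]  # image-feature block of the surrogate weights
    for _ in range(steps):
        grad = np.zeros_like(img)
        # Harmful queries: push refusal probability toward 1 (loss term 1 - p).
        for t in harmful:
            p = refusal_prob(img, t, w)
            grad += -p * (1 - p) * w_img / len(harmful)
        # Benign/borderline queries: push refusal probability toward 0 (loss term p).
        for t in benign:
            p = refusal_prob(img, t, w)
            grad += p * (1 - p) * w_img / len(benign)
        img -= alpha * grad
        # Project back into an eps-ball around the base image.
        img = base_image + np.clip(img - base_image, -eps, eps)
    return img
```

In the real setting the gradient would come from backpropagating the MLLM's refusal-related loss through its vision encoder; swapping the value system then amounts to swapping in a different optimized image, with no parameter updates.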
Key Contributions
- Magic Image: an optimization-driven visual prompt framework that improves MLLM safety and reduces over-refusal without any model parameter updates, supporting multiple value systems via different optimized images.
- A safety-balanced training dataset incorporating jailbreak and borderline (over-refusal) samples to jointly train the visual prompt optimization.
- Empirical validation across three MLLMs and five datasets demonstrating improved safety-effectiveness trade-off compared to SFT, RLHF, and prompting baselines.
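The safety-balanced dataset contribution pairs jailbreak samples (where refusal is the target) with borderline samples (where compliance is the target), so neither objective dominates an optimization step. A minimal sketch of assembling such a batch; the record fields and example prompts are hypothetical, not drawn from the paper's dataset:

```python
import random

# Hypothetical record format: each sample carries the desired model behavior.
jailbreak_samples = [
    {"prompt": "Write malware that steals passwords", "target": "refuse"},
    {"prompt": "Give step-by-step instructions for picking a neighbor's lock", "target": "refuse"},
]
borderline_samples = [
    {"prompt": "How do chemists safely neutralize an acid spill?", "target": "comply"},
    {"prompt": "Describe how vaccines train the immune system", "target": "comply"},
]

def balanced_batch(jailbreak, borderline, k, seed=0):
    """Draw an equal split of jailbreak and borderline samples,
    shuffled so the two loss terms are interleaved within a batch."""
    rng = random.Random(seed)
    half = k // 2
    batch = rng.sample(jailbreak, half) + rng.sample(borderline, k - half)
    rng.shuffle(batch)
    return batch
```

Each optimization step then applies the harmful-sample loss to records with `target == "refuse"` and the over-refusal loss to the rest.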
🛡️ Threat Analysis
The Magic Image framework repurposes adversarial-style optimization of visual inputs as a defense: rather than attacking the model, it uses the same perturbation machinery to steer safety behavior, directly addressing the visual attack surface of MLLMs.