defense · arXiv · Nov 26, 2025
Fatemeh Akbarian, Anahita Baninajjar, Yingyi Zhang et al. · Lund University · Google DeepMind
Defends multi-modal embeddings against adversarial illusions using VAE reconstruction and consensus aggregation, reducing attack success rates to near zero
Input Manipulation Attack · multimodal · vision
Multi-modal foundation models align images, text, and other modalities in a shared embedding space but remain vulnerable to adversarial illusions (Zhang et al., 2025), in which imperceptible perturbations disrupt cross-modal alignment and mislead downstream tasks. To counteract adversarial illusions, we propose a task-agnostic mitigation mechanism that reconstructs a clean version of the attacker's perturbed input using generative models, e.g., Variational Autoencoders (VAEs), to restore natural alignment. To further strengthen the defense, we adopt a generative sampling strategy combined with a consensus-based aggregation scheme over the outcomes of the generated samples. Experiments on state-of-the-art multi-modal encoders show that our approach reduces illusion attack success rates to near zero and improves cross-modal alignment by 4 points (42 to 46) on unperturbed inputs and 11 points (32 to 43) on perturbed inputs, providing an effective, model-agnostic defense against adversarial illusions.
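A minimal sketch of the reconstruct-then-aggregate defense outlined in the abstract, assuming PyTorch. `ToyVAE`, `defended_embedding`, and the placeholder encoder are hypothetical stand-ins, not the paper's implementation, and the consensus step is approximated here by averaging the embeddings of sampled reconstructions; the paper's scheme aggregates over downstream outcomes of the samples.

```python
# Hypothetical sketch of the reconstruct-then-aggregate defense; names and
# shapes are illustrative stand-ins, not the paper's code.
import torch
import torch.nn as nn


class ToyVAE(nn.Module):
    """Stand-in VAE: encodes an image to a Gaussian latent, decodes it back."""

    def __init__(self, dim=3 * 32 * 32, latent=64):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)  # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def sample_reconstruction(self, x):
        # Sample a latent from the posterior (reparameterization trick) and
        # decode it, giving one stochastic "denoised" reconstruction of x.
        mu, logvar = self.enc(x.flatten(1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z).view_as(x)


def defended_embedding(x, vae, encoder, n_samples=8):
    """Draw several VAE reconstructions of the (possibly perturbed) input,
    embed each, and aggregate by consensus (here: the mean embedding)."""
    with torch.no_grad():
        embs = torch.stack(
            [encoder(vae.sample_reconstruction(x)) for _ in range(n_samples)]
        )
    return embs.mean(dim=0)  # consensus over the sampled reconstructions


if __name__ == "__main__":
    vae = ToyVAE()
    # Placeholder for a real multi-modal image encoder (e.g., ImageBind).
    encoder = lambda img: img.flatten(1)[:, :128]
    x_perturbed = torch.rand(1, 3, 32, 32)  # attacker-perturbed image
    emb = defended_embedding(x_perturbed, vae, encoder)
    print(emb.shape)  # torch.Size([1, 128])
```

Because each reconstruction is a fresh sample from the VAE posterior, the adversarial perturbation cannot be tailored to any single decoded image, and aggregating over samples further smooths out whatever perturbation survives reconstruction.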
transformer · multimodal · vae