defense 2025

Breaking the Illusion: Consensus-Based Generative Mitigation of Adversarial Illusions in Multi-Modal Embeddings

Fatemeh Akbarian 1, Anahita Baninajjar 1, Yingyi Zhang 1, Ananth Balashankar 2, Amir Aminifar 1

0 citations · 35 references · arXiv


Published on arXiv · 2511.21893

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Reduces adversarial illusion attack success rate from 62%/90% to 0%/2% (Top-1/Top-5) and restores cross-modal alignment from 32% to 43%, outperforming prior state-of-the-art defenses.

Consensus-Based Generative Mitigation

Novel technique introduced


Multi-modal foundation models align images, text, and other modalities in a shared embedding space but remain vulnerable to adversarial illusions (Zhang et al., 2025), where imperceptible perturbations disrupt cross-modal alignment and mislead downstream tasks. To counteract the effects of adversarial illusions, we propose a task-agnostic mitigation mechanism that reconstructs a clean input from the attacker's perturbed input through generative models, e.g., Variational Autoencoders (VAEs), to maintain natural alignment. To further strengthen this defense, we adopt a generative sampling strategy combined with a consensus-based aggregation scheme over the outcomes of the generated samples. Our experiments on state-of-the-art multi-modal encoders show that our approach reduces the illusion attack success rates to near-zero and improves cross-modal alignment by 4% (42 to 46) and 11% (32 to 43) in the unperturbed and perturbed input settings, respectively, providing an effective and model-agnostic defense against adversarial illusions.


Key Contributions

  • Post-hoc, task-agnostic mitigation mechanism using generative models (VAEs, autoencoders, diffusion models) to reconstruct adversarially perturbed inputs back onto the natural data manifold
  • Consensus-based aggregation scheme over multiple stochastic generative samples to enhance robustness against adversarial illusions
  • Reduces adversarial illusion attack success rate from 62%/90% (Top-1/Top-5) to 0%/2% while improving cross-modal alignment from 32% to 43% in perturbed settings
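The reconstruction-plus-consensus pipeline described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `encode` and `vae_reconstruct` below are hypothetical stand-ins for a frozen multi-modal encoder (e.g., CLIP or ImageBind) and a stochastic VAE decode, and the class embeddings are random unit vectors. The essential structure — draw several stochastic reconstructions of the input, classify each by nearest class embedding, then take a majority vote — matches the consensus-based aggregation scheme the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-class text embeddings: random unit vectors.
D = 16
class_embeds = rng.normal(size=(5, D))
class_embeds /= np.linalg.norm(class_embeds, axis=1, keepdims=True)

def encode(x):
    # Toy "encoder": just L2-normalize (placeholder for a real multi-modal encoder).
    return x / np.linalg.norm(x)

def vae_reconstruct(x, rng):
    # Placeholder for a stochastic VAE reconstruction: latent sampling makes each
    # decode slightly different, which is what the consensus vote exploits.
    return x + 0.05 * rng.normal(size=x.shape)

def consensus_classify(x, n_samples=8):
    # Reconstruct the input several times, classify each sample by nearest
    # class embedding, and return the majority-vote label.
    votes = []
    for _ in range(n_samples):
        z = encode(vae_reconstruct(x, rng))
        votes.append(int(np.argmax(class_embeds @ z)))
    return max(set(votes), key=votes.count)

# An input lying near class 2 in embedding space is voted back to class 2.
clean = class_embeds[2] + 0.01 * rng.normal(size=D)
print(consensus_classify(clean))
```

In the paper's setting the reconstruction step pulls the perturbed input back toward the natural data manifold, so the per-sample predictions of an attacked input tend to scatter back to the true class and the vote suppresses the illusion.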

🛡️ Threat Analysis

Input Manipulation Attack

Adversarial illusions are imperceptible gradient-based perturbations applied at inference time to disrupt cross-modal alignment in shared embedding spaces (CLIP, ALIGN, ImageBind). The paper's primary contribution is a defense (generative reconstruction + consensus aggregation) against this input manipulation attack.
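To make the threat concrete, the gradient-based perturbation can be sketched as a PGD-style loop that drags an input's embedding toward an attacker-chosen target embedding while keeping the perturbation inside a small L-infinity ball. This is a toy illustration under stated assumptions: the "encoder" below is a random linear map followed by normalization, not CLIP/ALIGN/ImageBind, and the closed-form gradient holds only for this toy encoder (a real attack would backpropagate through the model).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "encoder" standing in for a multi-modal embedding model.
D_in, D_out = 32, 8
W = rng.normal(size=(D_out, D_in))

def embed(x):
    z = W @ x
    return z / np.linalg.norm(z)

x = rng.normal(size=D_in)               # benign input
target = embed(rng.normal(size=D_in))   # embedding of the attacker's target concept

def illusion_attack(x, target, eps=0.3, steps=100, lr=0.05):
    # PGD-style ascent on cosine similarity to the target embedding,
    # with the perturbation projected back into an L-inf ball of radius eps.
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = W @ (x + delta)
        n = np.linalg.norm(z)
        # Gradient of cos(embed(x+delta), target) for this linear+normalize encoder.
        grad = W.T @ (target / n - (z @ target) * z / n**3)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return x + delta

adv = illusion_attack(x, target)
print(float(embed(x) @ target), float(embed(adv) @ target))
```

The attacked input's embedding ends up far more aligned with the target than the benign input's, which is exactly the cross-modal misalignment the paper's reconstruction defense is designed to undo.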


Details

Domains
multimodal, vision
Model Types
transformer, multimodal, vae
Threat Tags
white_box, inference_time, targeted, digital
Datasets
ImageNet
Applications
multi-modal retrieval, zero-shot image classification, cross-modal alignment