
Diffusion-Guided Adversarial Perturbation Injection for Generalizable Defense Against Facial Manipulations

Yue Li 1, Linying Xue 1, Kaiqing Lin 2, Hanyu Quan 1, Dongdong Lin 1, Hui Tian 1, Hongxia Wang 3, Bin Wang 4


Published on arXiv: 2604.01635

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves robust manipulation disruption in white-box settings and strong cross-model transferability in black-box settings against both GAN- and diffusion-based deepfake generators

Novel Technique

AEGIS


Recent advances in GAN and diffusion models have significantly improved the realism and controllability of facial deepfake manipulation, raising serious concerns regarding privacy, security, and identity misuse. Proactive defenses attempt to counter this threat by injecting adversarial perturbations into images before manipulation takes place. However, existing approaches remain limited in effectiveness due to suboptimal perturbation injection strategies, and they are typically designed under white-box assumptions, targeting only simple GAN-based attribute editing. These constraints hinder their applicability in practical real-world scenarios. In this paper, we propose AEGIS, the first diffusion-guided paradigm in which AdvErsarial facial images are Generated for Identity Shielding. We observe that the limited defense capability of existing approaches stems from a peak-clipping constraint: perturbations are forcibly truncated by a fixed $L_\infty$ bound. To overcome this limitation, instead of directly modifying pixels, AEGIS injects adversarial perturbations into the latent space along the DDIM denoising trajectory, thereby decoupling the perturbation magnitude from pixel-level constraints and allowing perturbations to adaptively amplify where they are most effective. The extensible design of AEGIS allows the defense to be expanded from purely white-box use to black-box scenarios through a gradient-estimation strategy. Extensive experiments across GAN- and diffusion-based deepfake generators show that AEGIS consistently delivers strong defense effectiveness while maintaining high perceptual quality. In white-box settings, it achieves robust manipulation disruption; in black-box settings, it demonstrates strong cross-model transferability.
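The core idea can be illustrated with a toy sketch (not the authors' code). It contrasts a pixel-space defense, where the perturbation is hard-clipped by a fixed $L_\infty$ bound, with latent-space injection along a DDIM-like denoising trajectory, where the perturbation magnitude is decoupled from pixel constraints. The functions `denoise_step` and `disruption_grad` are illustrative stand-ins for a real DDIM step and a real manipulation-disruption loss gradient.

```python
import numpy as np

def denoise_step(z, t):
    # Stand-in for one deterministic DDIM denoising step (not a real scheduler).
    return 0.9 * z + 0.1 * np.tanh(z) * (t / 10.0)

def disruption_grad(z):
    # Stand-in for the gradient of a manipulation-disruption loss.
    return np.sign(z)

def pixel_space_defense(x, eps=8 / 255, steps=10, alpha=2 / 255):
    # Baseline: the perturbation is "peak-clipped" by a fixed L_inf bound.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * disruption_grad(x_adv)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # hard truncation
    return x_adv

def latent_space_defense(z, timesteps=10, alpha=0.05):
    # AEGIS-style idea: inject perturbations into the latent at each step of
    # the denoising trajectory; no pixel-level clipping is applied, so the
    # perturbation can grow where the disruption gradient says it helps most.
    for t in range(timesteps, 0, -1):
        z = z + alpha * disruption_grad(z)  # unclipped injection
        z = denoise_step(z, t)
    return z

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(4, 4))
x_adv = pixel_space_defense(x)            # perturbation can never exceed eps
z_adv = latent_space_defense(rng.standard_normal((4, 4)))
```

The pixel-space branch makes the peak-clipping limitation concrete: however many optimization steps run, `np.clip` caps the perturbation at `eps`, while the latent-space branch carries no such cap.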


Key Contributions

  • First diffusion-guided adversarial defense framework (AEGIS) that injects perturbations in latent space along the DDIM denoising trajectory, bypassing pixel-level peak-clipping constraints
  • Extensible design supporting both white-box and black-box scenarios through a gradient-estimation strategy
  • Demonstrates strong defense effectiveness against both GAN- and diffusion-based deepfake generators while maintaining high perceptual quality
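The paper does not detail its gradient-estimation strategy here, but black-box extensions of this kind typically rely on zeroth-order estimators. The sketch below shows a generic NES-style estimator with antithetic Gaussian sampling, which recovers a usable gradient from loss queries alone; the quadratic `loss` and all parameter choices are illustrative assumptions, not AEGIS specifics.

```python
import numpy as np

def nes_gradient(loss_fn, z, n_samples=500, sigma=0.1, rng=None):
    # Zeroth-order (NES-style) gradient estimate at z, using only
    # black-box evaluations of loss_fn and antithetic Gaussian probes.
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.standard_normal(z.shape)
        grad += (loss_fn(z + sigma * u) - loss_fn(z - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

# Sanity check on a known function: for loss(z) = ||z||^2 the true
# gradient is 2z, so the estimate should point the same way.
loss = lambda z: float(np.sum(z ** 2))
z0 = np.ones(8)
g_est = nes_gradient(loss, z0)
g_true = 2 * z0
```

In a black-box defense, `loss_fn` would wrap queries to the target deepfake generator, letting the latent-space injection loop run without access to the model's internals.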

🛡️ Threat Analysis

Input Manipulation Attack

AEGIS injects adversarial perturbations into facial images to disrupt deepfake generation models at inference time, causing manipulation failures—a defensive application of adversarial example techniques against generative models (GANs and diffusion models).


Details

Domains
vision, generative
Model Types
diffusion, gan
Threat Tags
white_box, black_box, inference_time
Applications
facial manipulation prevention, deepfake defense, identity protection