
Architecture-Agnostic Feature Synergy for Universal Defense Against Heterogeneous Generative Threats

Bingxue Zhang 1, Yang Gao 1, Feida Zhu 2, Yanyan Shen 3, Yang Shi 1


Published on arXiv (arXiv:2603.14860)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves state-of-the-art protection against heterogeneous generative threats (Diffusion + GAN), reaching over 90% of peak performance within 40 iterations under a perturbation budget of ε = 2/255.

ATFS

Novel technique introduced


Generative AI deployment poses unprecedented challenges to content safety and privacy. However, existing defense mechanisms are often tailored to specific architectures (e.g., Diffusion Models or GANs), creating fragile "defense silos" that fail against heterogeneous generative threats. This paper identifies a fundamental optimization barrier in naive pixel-space ensemble strategies: due to divergent objective functions, pixel-level gradients from heterogeneous generators become statistically orthogonal, causing destructive interference. To overcome this, we observe that despite disparate low-level mechanisms, high-level feature representations of generated content exhibit alignment across architectures. Based on this, we propose the Architecture-Agnostic Targeted Feature Synergy (ATFS) framework. By introducing a target guidance image, ATFS reformulates multi-model defense as a unified feature space alignment task, enabling intrinsic gradient alignment without complex rectification. Extensive experiments show ATFS achieves SOTA protection in heterogeneous scenarios (e.g., Diffusion+GAN). It converges rapidly, reaching over 90% performance within 40 iterations, and maintains strong attack potency even under tight perturbation budgets. The framework seamlessly extends to unseen architectures (e.g., VQ-VAE) by switching the feature extractor, and demonstrates robust resistance to JPEG compression and scaling. Being computationally efficient and lightweight, ATFS offers a viable pathway to dismantle defense silos and enable universal generative security. Code and models are open-sourced for reproducibility.
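The core idea — optimizing an imperceptible perturbation so that the protected image's high-level features move toward those of a target guidance image, under an L∞ budget — can be sketched as a small optimization loop. This is an illustrative toy only: the feature extractor here is a random linear projection, and the target image, step size, and loss are stand-ins, not the paper's actual implementation.

```python
import numpy as np

# Toy stand-in for a shared high-level feature extractor (assumption:
# any fixed differentiable map suffices for the sketch; here, a random
# linear projection of the flattened image).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 3 * 8 * 8)) / 8.0

def features(x):
    return W @ x.ravel()

def feature_grad(x, target_feat):
    # Gradient of 0.5 * ||features(x) - target_feat||^2 w.r.t. x.
    return (W.T @ (features(x) - target_feat)).reshape(x.shape)

eps = 2 / 255                        # perturbation budget from the paper
step = 0.5 / 255                     # step size (assumed, not from the paper)
x0 = rng.random((3, 8, 8))           # image to protect, values in [0, 1]
target = rng.random((3, 8, 8))       # target guidance image (stand-in)
target_feat = features(target)

delta = np.zeros_like(x0)
for _ in range(40):                  # paper reports >90% performance by 40 iterations
    g = feature_grad(x0 + delta, target_feat)
    delta -= step * np.sign(g)       # signed gradient step toward the target features
    delta = np.clip(delta, -eps, eps)  # project back onto the L_inf budget

dist_before = np.linalg.norm(features(x0) - target_feat)
dist_after = np.linalg.norm(features(x0 + delta) - target_feat)
print(dist_after < dist_before)
```

The point of the sketch is structural: because every generator is attacked through one shared feature-space objective, the loop needs no per-architecture gradient rectification.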


Key Contributions

  • Architecture-agnostic defense framework (ATFS) that works across heterogeneous generative models (Diffusion, GAN, VQ-VAE) by aligning feature representations rather than pixel-space gradients
  • Diagnosis of gradient orthogonality problem in naive pixel-space ensemble defenses and solution via feature-space alignment with target guidance
  • Achieves >90% protection performance within 40 iterations, maintains effectiveness under tight perturbation budgets (ε=2/255), and demonstrates robustness to JPEG compression and scaling
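The gradient-orthogonality diagnosis can be illustrated with a quick back-of-the-envelope check: in high-dimensional pixel space, gradients from unrelated objectives behave roughly like independent random vectors, whose cosine similarity concentrates near zero — so naive averaging mostly cancels. The random vectors below are stand-ins for actual diffusion/GAN loss gradients, purely for illustration.

```python
import numpy as np

# Assumption: divergent objectives yield approximately independent
# pixel-space gradients, modeled here as random Gaussian vectors.
rng = np.random.default_rng(1)
d = 3 * 256 * 256                      # pixel dimension of a 256x256 RGB image
g_diffusion = rng.standard_normal(d)   # stand-in gradient from a diffusion loss
g_gan = rng.standard_normal(d)         # stand-in gradient from a GAN loss

cos = g_diffusion @ g_gan / (
    np.linalg.norm(g_diffusion) * np.linalg.norm(g_gan)
)
print(abs(cos))  # concentrates near 0 at this dimensionality
```

A cosine similarity near zero means the two gradients share almost no direction, which is the destructive-interference failure mode that motivates moving the ensemble objective into an aligned feature space.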

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes adversarial perturbations embedded in images to disrupt generative models at inference time. ATFS crafts imperceptible perturbations that cause generative models (Diffusion, GAN, VQ-VAE) to fail or produce degraded outputs when attempting to edit protected images. This is a defense mechanism using adversarial examples to prevent unauthorized manipulation.


Details

Domains
vision, generative
Model Types
diffusion, gan
Threat Tags
inference_time, digital
Applications
image protection, facial image privacy, deepfake prevention