Survey (2025)

Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems

Zhongliang Guo, Yifei Qian, Yanli Li, Weiye Li, Chun Tong Lei, Shuai Zhao, Lei Fang, Ognjen Arandjelović, Chun Pong Lau


Published on arXiv: 2508.01845

OWASP ML Top 10 mappings: Input Manipulation Attack (ML01); Output Integrity Attack (ML09)

Key Finding

Adversarial techniques serve a dual role as threats and defenses in computer vision, with critical research gaps identified in neural style transfer protection and efficient physically realizable attacks.


Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools. We provide a systematic analysis of adversarial attack methodologies across three primary domains: pixel-space attacks, physically realizable attacks, and latent-space attacks. Our investigation traces the technical evolution from early gradient-based methods such as FGSM and PGD to sophisticated optimization techniques incorporating momentum, adaptive step sizes, and advanced transferability mechanisms. We examine how physically realizable attacks have successfully bridged the gap between digital vulnerabilities and real-world threats through adversarial patches, 3D textures, and dynamic optical perturbations. Additionally, we explore the emergence of latent-space attacks that leverage semantic structure in internal representations to create more transferable and meaningful adversarial examples. Beyond traditional offensive applications, we investigate the constructive use of adversarial techniques for vulnerability assessment in biometric authentication systems and protection against malicious generative models. Our analysis reveals critical research gaps, particularly in neural style transfer protection and computational efficiency requirements. This survey contributes a comprehensive taxonomy, evolution analysis, and identification of future research directions, aiming to advance understanding of adversarial vulnerabilities and inform the development of more robust and trustworthy computer vision systems.


Key Contributions

  • Systematic taxonomy of adversarial attack methodologies across three domains: pixel-space, physically realizable, and latent-space attacks
  • Analysis of the dual-use nature of adversarial techniques as both security threats and defensive tools (e.g., biometric vulnerability assessment, generative model protection)
  • Identification of critical research gaps including neural style transfer protection and computational efficiency requirements

🛡️ Threat Analysis

Input Manipulation Attack

The core topic of the survey is adversarial attacks on computer vision systems — gradient-based methods (FGSM, PGD), physically realizable attacks (adversarial patches, 3D textures), and latent-space attacks — all of which manipulate model inputs to cause misclassification or other erroneous behavior at inference time.
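To make the gradient-based family concrete, here is a minimal sketch of FGSM: perturb the input by a fixed budget in the direction of the sign of the loss gradient. This is an illustrative toy (a linear model standing in for a deep network, with made-up weights), not code from the survey.

```python
import numpy as np

def fgsm_attack(x, grad, epsilon):
    """FGSM: step the input by epsilon along the sign of the
    loss gradient, then clip back to the valid pixel range."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy linear "classifier": logit = w . x, so d(logit)/dx = w.
# A real attack would backpropagate through a deep network instead.
w = np.array([0.5, -1.2, 0.8])      # hypothetical weights
x = np.array([0.3, 0.7, 0.5])       # hypothetical "image"
x_adv = fgsm_attack(x, grad=w, epsilon=0.1)
print(x_adv)  # each component moved by +/- 0.1: [0.4, 0.6, 0.6]
```

PGD, the other method the survey names, iterates this same sign-gradient step with a smaller step size and re-projects onto the epsilon-ball after each step.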

Output Integrity Attack

The survey explicitly covers the constructive, defensive use of adversarial techniques: protecting content against malicious generative models and unauthorized neural style transfer. Framing adversarial perturbations as content-integrity protections places this under output integrity and content provenance.
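The protective use typically reuses the same machinery as attacks: an iterative, projected sign-gradient loop (PGD-style) that maximizes a surrogate loss of the generative model so that the protected image resists mimicry. The sketch below is a hedged illustration under toy assumptions — the surrogate gradient, target vector, and all numbers are hypothetical, not the survey's method.

```python
import numpy as np

def pgd_perturb(x, grad_fn, epsilon, alpha, steps):
    """Iterative sign-gradient ascent with L-inf projection.
    Protective framing: maximize a surrogate loss of a generative
    model's encoder so the perturbed image resists imitation."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the epsilon-ball around x, then valid range
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy surrogate loss ||x - t||^2 with gradient 2(x - t); a real
# protector would differentiate through the generative model.
target = np.array([0.0, 1.0, 0.0])  # hypothetical embedding proxy
grad_fn = lambda x: 2.0 * (x - target)
x = np.array([0.5, 0.5, 0.5])
x_prot = pgd_perturb(x, grad_fn, epsilon=0.05, alpha=0.02, steps=10)
assert np.all(np.abs(x_prot - x) <= 0.05 + 1e-9)  # imperceptibility budget
```

The epsilon-ball projection is what keeps the protection imperceptible: however many steps run, the perturbation can never exceed the budget.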


Details

Domains
vision
Model Types
CNN, Transformer, GAN, Diffusion
Threat Tags
white_box, black_box, inference_time, digital, physical
Applications
image classification, biometric authentication, neural style transfer, generative model protection