Defense · 2025

SIFT-Graph: Benchmarking Multimodal Defense Against Image Adversarial Attacks With Robust Feature Graph

Jingjie He, Weijie Liang, Zihan Shan, Matthew Caesar


Published on arXiv (2511.08810)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

SIFT-Graph improves robustness of ViT and CNN models against gradient-based white-box adversarial attacks with only a marginal drop in clean accuracy.

SIFT-Graph

Novel technique introduced


Adversarial attacks expose a fundamental vulnerability in modern deep vision models by exploiting their dependence on dense, pixel-level representations that are highly sensitive to imperceptible perturbations. Traditional defense strategies typically operate within this fragile pixel domain and lack mechanisms to incorporate inherently robust visual features. In this work, we introduce SIFT-Graph, a multimodal defense framework that enhances the robustness of traditional vision models by aggregating structurally meaningful features extracted from raw images using both handcrafted and learned modalities. Specifically, we integrate Scale-Invariant Feature Transform keypoints with a Graph Attention Network to capture scale- and rotation-invariant local structures that are resilient to perturbations. These robust feature embeddings are then fused with traditional vision models, such as Vision Transformers and Convolutional Neural Networks, to form a unified, structure-aware, perturbation-defensive model. Preliminary results demonstrate that our method effectively improves visual model robustness against gradient-based white-box adversarial attacks, while incurring only a marginal drop in clean accuracy.
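The core idea in the abstract can be sketched in a few lines: build a graph over keypoint locations, then aggregate keypoint descriptors with attention-weighted message passing. This is a minimal NumPy sketch of a GAT-style layer under assumed shapes, not the paper's implementation; the functions, dimensions, and the k-NN graph construction are illustrative.

```python
import numpy as np

def knn_graph(points, k=3):
    """Build a symmetric k-nearest-neighbour adjacency over 2-D keypoint
    locations (a simple stand-in for the paper's graph construction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # no self-edges
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, np.argsort(d[i])[:k]] = True
    return adj | adj.T                       # symmetrise

def graph_attention(h, adj, W, a):
    """One simplified graph-attention layer.

    h:   (n, f)  node features, e.g. SIFT descriptors
    W:   (f, f') shared linear projection
    a:   (2*f',) attention vector
    """
    z = h @ W                                # project node features
    n = len(z)
    scores = np.full((n, n), -np.inf)        # -inf => zero attention
    for i in range(n):
        for j in np.where(adj[i])[0]:
            e = np.concatenate([z[i], z[j]]) @ a
            scores[i, j] = np.maximum(0.2 * e, e)   # LeakyReLU(0.2)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax over neighbours
    return alpha @ z                         # attention-weighted aggregation
```

A real pipeline would run this over SIFT keypoints/descriptors (e.g. from OpenCV) with learned `W` and `a`; here random values suffice to show the data flow.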


Key Contributions

  • Introduces SIFT-Graph, a multimodal defense framework that fuses Scale-Invariant Feature Transform keypoints with a Graph Attention Network to produce scale/rotation-invariant, perturbation-resilient feature embeddings
  • Integrates these robust structural features with conventional vision architectures (ViT and CNN) into a unified, structure-aware classifier
  • Demonstrates improved robustness against gradient-based white-box adversarial attacks with only marginal clean accuracy degradation
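The fusion step in the second contribution admits a simple late-fusion reading: concatenate the backbone (ViT/CNN) embedding with the pooled graph embedding and classify the joint vector. The sketch below is a hypothetical illustration of that design, not the paper's architecture; function name, dimensions, and the linear head are assumptions.

```python
import numpy as np

def fuse_and_classify(backbone_emb, graph_emb, W_cls, b_cls):
    """Hypothetical late-fusion head: concatenate the backbone embedding
    with the SIFT-graph embedding, then apply a linear classifier."""
    fused = np.concatenate([backbone_emb, graph_emb])
    logits = fused @ W_cls + b_cls
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()                       # class probabilities
```

Because the graph branch is resilient to pixel-level perturbations, the fused representation degrades more gracefully under attack than the backbone embedding alone.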

🛡️ Threat Analysis

Input Manipulation Attack

Directly proposes a defense against inference-time adversarial examples — specifically gradient-based white-box attacks — by replacing fragile pixel-domain representations with structurally robust SIFT-Graph feature embeddings fused with ViT and CNN backbones.
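For context on the threat model, a gradient-based white-box attack perturbs the input along the sign of the loss gradient. Below is a minimal FGSM sketch against a toy logistic "classifier" with an analytic gradient; it illustrates the attack class named above, not any model from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One Fast Gradient Sign Method step against a toy logistic model.

    x: input in [0, 1]; y: true label (0 or 1); w, b: model parameters;
    eps: L-infinity perturbation budget.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w                      # d(binary cross-entropy)/dx
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

Applying `fgsm` moves each pixel by at most `eps` in the direction that increases the loss, so the model's confidence in the true class drops while the perturbation stays visually small.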


Details

Domains
vision
Model Types
cnn · transformer · gnn
Threat Tags
white_box · inference_time
Applications
image classification