
Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features

Yunrui Gu, Zhenzhe Gao, Cong Kong, Zhaoxia Yin

0 citations · 27 references · arXiv


Published on arXiv · 2601.07056

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The proposed attacks substantially degrade classification accuracy in tumor regions while remaining visually imperceptible, outperforming existing adversarial methods designed for remote-sensing HSI when applied to medical contexts.

Local Pixel Dependency Attack / Multiscale Information Attack

Novel technique introduced


Medical hyperspectral imaging (HSI) enables accurate disease diagnosis by capturing rich spectral-spatial tissue information, but recent advances in deep learning have exposed its vulnerability to adversarial attacks. In this work, we identify two fundamental causes of this fragility: the reliance on local pixel dependencies for preserving tissue structure and the dependence on multiscale spectral-spatial representations for hierarchical feature encoding. Building on these insights, we propose a targeted adversarial attack framework for medical HSI, consisting of a Local Pixel Dependency Attack that exploits spatial correlations among neighboring pixels, and a Multiscale Information Attack that perturbs features across hierarchical spectral-spatial scales. Experiments on the Brain and MDC datasets demonstrate that our attacks significantly degrade classification performance, especially in tumor regions, while remaining visually imperceptible. Compared with existing methods, our approach reveals the unique vulnerabilities of medical HSI models and underscores the need for robust, structure-aware defenses in clinical applications.
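The paper's exact perturbation objective is not given in this summary. As a rough, hypothetical illustration of the Local Pixel Dependency idea, the sketch below spatially averages a sign-gradient perturbation so it follows the correlations among neighboring pixels, then rescales it into an L-infinity budget. The function name, kernel size, and epsilon are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smooth_perturbation(grad_sign, epsilon=0.03, kernel=3):
    """Hypothetical sketch: spatially average a sign-gradient perturbation
    over a kernel x kernel neighborhood so it aligns with local pixel
    dependencies, then rescale into the epsilon L-inf ball."""
    H, W, B = grad_sign.shape
    pad = kernel // 2
    # pad spatially only; spectral bands (last axis) are left untouched
    padded = np.pad(grad_sign, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros((H, W, B), dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            out += padded[dy:dy + H, dx:dx + W, :]
    out /= kernel * kernel
    # clip and rescale so the perturbation stays within the epsilon budget
    return np.clip(out, -1.0, 1.0) * epsilon

# usage on a toy hyperspectral cube (height x width x bands)
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 16))
grad_sign = np.sign(rng.standard_normal(cube.shape))  # stand-in gradient sign
delta = smooth_perturbation(grad_sign)
adv = np.clip(cube + delta, 0.0, 1.0)  # adversarial example in valid range
```

Smoothing the perturbation is one plausible way to keep it consistent with tissue structure; the actual attack presumably optimizes a loss that targets those dependencies directly.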


Key Contributions

  • Identification of two fundamental fragility causes in medical HSI models: reliance on local pixel spatial dependencies and multiscale spectral-spatial feature hierarchies
  • Local Pixel Dependency Attack that crafts perturbations exploiting spatial correlations among neighboring pixels to disrupt tissue structure representations
  • Multiscale Information Attack that perturbs hierarchical spectral-spatial features across multiple scales to degrade model classification
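The multiscale idea in the contributions above can likewise be sketched generically: inject perturbations at several spatial resolutions and sum them, so that both coarse and fine hierarchical features are disturbed. Everything here (function name, scale set, epsilon, the use of random noise rather than true gradients) is an assumption for illustration only.

```python
import numpy as np

def multiscale_perturbation(shape, scales=(1, 2, 4), epsilon=0.03, seed=0):
    """Hypothetical sketch: combine perturbations generated at several
    spatial scales (stride s, upsampled by nearest neighbor), mimicking
    an attack on hierarchical spectral-spatial features."""
    H, W, B = shape
    rng = np.random.default_rng(seed)
    delta = np.zeros(shape)
    for s in scales:
        # coarse noise on a grid subsampled by factor s
        coarse = rng.standard_normal(
            (int(np.ceil(H / s)), int(np.ceil(W / s)), B)
        )
        # nearest-neighbor upsample back to full resolution, then crop
        up = np.kron(coarse, np.ones((s, s, 1)))[:H, :W, :]
        delta += up / len(scales)
    # rescale into the epsilon L-inf budget
    return np.clip(delta, -1.0, 1.0) * epsilon

# usage: perturb a toy 8x8 cube with 16 spectral bands
delta = multiscale_perturbation((8, 8, 16))
```

A real attack would optimize the per-scale components against the model's intermediate features rather than sampling them at random.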

🛡️ Threat Analysis

Input Manipulation Attack

Proposes two new targeted adversarial attacks (Local Pixel Dependency Attack and Multiscale Information Attack) that craft imperceptible perturbations causing misclassification in deep-learning medical HSI models — core input manipulation at inference time.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, targeted, digital
Datasets
Brain, MDC
Applications
medical image classification, tumor detection, hyperspectral image classification