Enhancing Adversarial Transferability through Block Stretch and Shrink
Quan Liu, Feng Ye, Chenhao Lu, Shuming Zhen, Guanliang Huang, Lunzhe Chen, Xudong Ke
Published on arXiv (2511.17688)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
BSS outperforms existing input transformation-based adversarial attack methods in black-box transferability on a subset of ImageNet.
Block Stretch and Shrink (BSS)
Novel technique introduced
Adversarial attacks introduce small, deliberately crafted perturbations that mislead neural networks, and their transferability from white-box surrogate models to black-box target models remains a critical research focus. Input transformation-based attacks form a subfield of adversarial attacks that enhances input diversity through transformations of the input in order to improve the transferability of adversarial examples. However, existing input transformation-based attacks tend to exhibit limited cross-model transferability. Previous studies have shown that high transferability is associated with diverse attention heatmaps and the preservation of global semantics in transformed inputs. Motivated by this observation, we propose Block Stretch and Shrink (BSS), a method that divides an image into blocks and applies stretch and shrink operations to these blocks, thereby diversifying attention heatmaps in transformed inputs while maintaining their global semantics. Empirical evaluations on a subset of ImageNet demonstrate that BSS outperforms existing input transformation-based attack methods in terms of transferability. Furthermore, we examine the impact of the number scale (the number of transformed inputs) in input transformation-based attacks, and advocate evaluating these methods under a unified number scale to enable fair and comparable assessments.
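The block-wise stretch and shrink idea can be sketched as follows. This is a hypothetical re-implementation under stated assumptions: the paper's exact block layout, resampling method, and ratio ranges are not given here, so horizontal blocks, nearest-neighbour row resampling, and a ±30% factor are illustrative choices only.

```python
import numpy as np

def _resize_rows(block, new_h):
    """Nearest-neighbour row resampling (a stand-in for bilinear resize)."""
    idx = np.clip((np.arange(new_h) * block.shape[0] / new_h).astype(int),
                  0, block.shape[0] - 1)
    return block[idx]

def block_stretch_shrink(image, num_blocks=4, max_ratio=0.3, rng=None):
    """Split an image into horizontal blocks, resample each block to a
    randomly stretched or shrunk height, then resize the stitched result
    back to the original shape so global semantics are preserved while
    local geometry (and hence attention heatmaps) is perturbed.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = image.shape[0]
    # Split the image into roughly equal horizontal blocks.
    edges = np.linspace(0, h, num_blocks + 1).astype(int)
    blocks = []
    for top, bottom in zip(edges[:-1], edges[1:]):
        block = image[top:bottom]
        # Draw a random stretch/shrink factor for this block.
        factor = 1.0 + rng.uniform(-max_ratio, max_ratio)
        new_h = max(1, int(round(block.shape[0] * factor)))
        blocks.append(_resize_rows(block, new_h))
    stitched = np.concatenate(blocks, axis=0)
    # Resize back to the original height so the output shape is unchanged.
    return _resize_rows(stitched, h)
```

Because the output always matches the input shape, the transform can be dropped in front of any surrogate model when computing attack gradients.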
Key Contributions
- Proposes Block Stretch and Shrink (BSS), an input transformation method that divides images into blocks and applies stretch/shrink operations to diversify attention heatmaps while preserving global semantics, improving adversarial transferability.
- Empirically demonstrates that BSS outperforms existing input transformation-based attacks on a subset of ImageNet in cross-model transferability.
- Introduces and analyzes the 'number scale' variable in input transformation attacks, advocating for a unified scale for fair comparison across methods.
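The role the number scale plays can be illustrated with a minimal sketch: input transformation-based attacks average the input gradient over several transformed copies before taking an FGSM-style step, so the number of copies directly controls compute and comparability. All names here are illustrative, and the toy linear "surrogate" and Gaussian stand-in transform are assumptions, not the paper's setup.

```python
import numpy as np

def averaged_gradient(x, grad_fn, transform, num_scale=5, rng=None):
    """Average the input gradient over `num_scale` transformed copies of x.
    This is the quantity an input transformation-based attack feeds into
    its update rule; `num_scale` is the 'number scale' being analyzed."""
    rng = np.random.default_rng() if rng is None else rng
    grads = [grad_fn(transform(x, rng)) for _ in range(num_scale)]
    return np.mean(grads, axis=0)

# Toy white-box surrogate: loss(x) = w . x, whose input gradient is w.
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x: w                      # analytic gradient of the toy loss
transform = lambda x, rng: x + rng.normal(scale=0.01, size=x.shape)  # stand-in for BSS

x = np.zeros(3)
g = averaged_gradient(x, grad_fn, transform, num_scale=8)
x_adv = x + 0.03 * np.sign(g)              # one FGSM-style step, budget 0.03
```

Two methods run with different `num_scale` values spend different gradient budgets per step, which is why the paper argues for evaluating all methods under one unified number scale.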
🛡️ Threat Analysis
Directly proposes a new input transformation-based adversarial attack (BSS) that crafts perturbations on a white-box surrogate model and transfers them to fool black-box target models at inference time, a classic instance of adversarial evasion attack research.