Rethinking Transferable Adversarial Attacks on Point Clouds from a Compact Subspace Perspective

Keke Tang 1, Xianheng Liu 1, Weilong Peng 1, Xiaofei Wang 2, Daizong Liu 3, Peican Zhu 4, Can Lu 1, Zhihong Tian 1

Published on arXiv · 2601.23102

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

CoSA outperforms state-of-the-art transferable adversarial attacks on point clouds across multiple architectures while remaining competitive on imperceptibility and robustness against defenses.

CoSA

Novel technique introduced


Transferable adversarial attacks on point clouds remain challenging, as existing methods often rely on model-specific gradients or heuristics that limit generalization to unseen architectures. In this paper, we rethink adversarial transferability from a compact subspace perspective and propose CoSA, a transferable attack framework that operates within a shared low-dimensional semantic space. Specifically, each point cloud is represented as a compact combination of class-specific prototypes that capture shared semantic structure, while adversarial perturbations are optimized within a low-rank subspace to induce coherent and architecture-agnostic variations. This design suppresses model-dependent noise and constrains perturbations to semantically meaningful directions, thereby improving cross-model transferability without relying on surrogate-specific artifacts. Extensive experiments on multiple datasets and network architectures demonstrate that CoSA consistently outperforms state-of-the-art transferable attacks, while maintaining competitive imperceptibility and robustness under common defense strategies. Code will be made public upon paper acceptance.
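The "compact combination of class-specific prototypes" can be pictured as solving for a low-dimensional code over a small prototype dictionary. The following is a minimal numpy sketch of that idea only; the prototype count, flattening scheme, and least-squares fit are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Hypothetical sketch: encode a point cloud as a compact combination of
# class prototypes. Shapes and the least-squares fit are assumptions.
rng = np.random.default_rng(0)

num_prototypes, num_points = 8, 1024
# Each prototype is a point cloud flattened to a vector of length 3 * num_points.
prototypes = rng.normal(size=(num_prototypes, 3 * num_points))

# A query point cloud, flattened the same way.
x = rng.normal(size=3 * num_points)

# Least-squares coefficients: the compact (num_prototypes-dimensional) code for x.
coeffs, *_ = np.linalg.lstsq(prototypes.T, x, rcond=None)

# Reconstruction of x from the prototype subspace.
x_hat = prototypes.T @ coeffs
print(coeffs.shape)  # (8,)
```

The point of the sketch is dimensionality: the cloud lives in a 3·N-dimensional space, but its semantic content is summarized by just a handful of prototype coefficients.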


Key Contributions

  • Compact subspace perspective: represents point clouds as combinations of class-specific prototypes capturing shared semantic structure to improve adversarial transferability
  • Low-rank perturbation optimization that suppresses model-specific gradient noise and constrains perturbations to architecture-agnostic, semantically meaningful directions
  • CoSA consistently outperforms state-of-the-art transferable attacks across multiple datasets and network architectures while maintaining competitive imperceptibility and robustness under common defense strategies
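The low-rank perturbation optimization described above can be sketched as parameterizing the perturbation through a fixed low-rank basis, so every gradient update stays in that subspace. This is a toy numpy illustration with a quadratic stand-in loss; the basis construction, rank, and loss are assumptions and not CoSA's actual objective.

```python
import numpy as np

# Hypothetical sketch of low-rank perturbation optimization: the perturbation
# is parameterized as basis @ z, so updates never leave the subspace.
rng = np.random.default_rng(1)

dim, rank = 3 * 1024, 16
# Orthonormal low-rank basis spanning the allowed perturbation directions.
basis, _ = np.linalg.qr(rng.normal(size=(dim, rank)))

x = rng.normal(size=dim)        # clean (flattened) point cloud
target = rng.normal(size=dim)   # stand-in for a surrogate model's adversarial target

z = np.zeros(rank)              # low-dimensional perturbation coefficients
lr = 0.1
for _ in range(50):
    delta = basis @ z
    grad_full = (x + delta) - target   # gradient of the toy quadratic loss
    z -= lr * (basis.T @ grad_full)    # project the gradient into the subspace

delta = basis @ z
# The final perturbation lies entirely in the span of `basis`.
residual = delta - basis @ (basis.T @ delta)
print(np.linalg.norm(residual) < 1e-8)  # True
```

Restricting updates to a shared low-rank basis is what filters out high-frequency, surrogate-specific gradient noise: any component of the gradient orthogonal to the basis is discarded before it can enter the perturbation.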

🛡️ Threat Analysis

Input Manipulation Attack

Proposes CoSA, a framework for crafting transferable adversarial perturbations on point clouds at inference time, causing misclassification across unseen model architectures — a direct input manipulation attack.


Details

Domains
vision
Model Types
cnn, transformer, gnn
Threat Tags
black_box, inference_time, digital
Datasets
ModelNet40, ShapeNet
Applications
3d point cloud classification