
Solving adversarial examples requires solving exponential misalignment

Alessandro Salvatore, Stanislav Fort, Surya Ganguli

Published on arXiv (2603.03507)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Neural network perceptual manifolds occupy ~3060 of 3072 dimensions on CIFAR-10 versus ~20 dimensions for human concepts; PM dimension negatively predicts robust accuracy across all 18 tested networks, and even the most robust models remain exponentially misaligned.

Perceptual Manifold (PM) dimensionality analysis

Novel technique introduced


Adversarial attacks (input perturbations imperceptible to humans that fool neural networks) remain both a persistent failure mode in machine learning and a phenomenon with mysterious origins. To shed light on this, we define and analyze a network's perceptual manifold (PM) for a class concept as the space of all inputs confidently assigned to that class by the network. We find, strikingly, that the dimensionalities of neural network PMs are orders of magnitude higher than those of natural human concepts. Since volume typically grows exponentially with dimension, this suggests exponential misalignment between machines and humans, with exponentially many inputs confidently assigned to concepts by machines but not by humans. Furthermore, this provides a natural geometric hypothesis for the origin of adversarial examples: because a network's PM fills such a large region of input space, any input will be very close to every class concept's PM. Our hypothesis thus suggests that adversarial robustness cannot be attained without dimensional alignment of machine and human PMs, and therefore makes strong predictions: both robust accuracy and distance to any PM should be negatively correlated with PM dimension. We confirmed these predictions across 18 different networks of varying robust accuracy. Crucially, we find that even the most robust networks are still exponentially misaligned, and only the few PMs whose dimensionality approaches that of human concepts exhibit alignment to human perception. Our results connect the fields of alignment and adversarial examples, and suggest the curse of high dimensionality of machine PMs is a major impediment to adversarial robustness.
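The abstract's core quantitative argument is that volume grows exponentially with dimension, so a ~3060-dimensional PM contains unimaginably more distinguishable inputs than a ~20-dimensional human concept. A minimal back-of-the-envelope sketch (the cell-counting model, the cube side, and the resolution `eps` are illustrative assumptions, not the paper's method):

```python
import math

# Count (in log10) how many eps-sized cells tile a unit cube of a given
# dimension: (side / eps)^dim cells, so log10 count = dim * log10(side / eps).
# Each extra dimension multiplies the number of distinguishable inputs by a
# constant factor -- the "exponential misalignment" the paper describes.

def log10_cell_count(dim, side=1.0, eps=0.1):
    """log10 of the number of eps-sized cells tiling a dim-dimensional unit cube."""
    return dim * math.log10(side / eps)

human_dim = 20      # approximate dimensionality of a human visual concept (from the paper)
machine_dim = 3060  # approximate network PM dimensionality on CIFAR-10 (from the paper)

print(log10_cell_count(human_dim))    # roughly 10^20 distinguishable inputs
print(log10_cell_count(machine_dim))  # roughly 10^3060 -- exponentially more
```

Under this toy model, the machine PM holds on the order of 10^3040 times more distinguishable inputs than the human concept, which is why "exponentially many" spurious confident inputs follow from the dimension gap alone.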


Key Contributions

  • Defines 'perceptual manifolds' (PM) as the set of all inputs confidently assigned to a class by a network, and measures their effective dimensionality across 18 networks
  • Identifies 'exponential misalignment': neural network PMs occupy ~3060 of 3072 dimensions on CIFAR-10 vs. ~20 dimensions for human visual concepts, implying exponentially many spurious inputs confidently assigned to each class
  • Shows PM dimensionality is strongly negatively correlated with robust accuracy and distance-to-PM, connecting AI alignment and adversarial robustness as the same underlying problem

🛡️ Threat Analysis

Input Manipulation Attack

The paper's central contribution is a geometric explanation for adversarial vulnerability: because perceptual manifold (PM) dimensionality is orders of magnitude higher than that of human concepts, any input lies very close to every class's PM, making adversarial attacks inevitable. The hypothesis is validated across 18 networks of varying robust accuracy from RobustBench.
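The paper's predicted correlate, distance to a class's PM, can be operationalized as the smallest perturbation that makes a model confidently predict that class. A hedged sketch on a toy linear "network" (the confidence threshold, step size, and the linear model itself are illustrative assumptions; the paper evaluates real CIFAR-10 networks):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # toy 3-class linear classifier on 8-dim inputs
x = rng.normal(size=8)        # an arbitrary starting input
target = 2                    # class whose PM we measure the distance to

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distance_to_pm(x, target, conf=0.9, lr=0.05, steps=2000):
    """Gradient ascent on target-class confidence; returns ||delta|| on entry to the PM.

    The PM is the region where the model assigns the target class probability >= conf,
    so the norm of the first perturbation reaching it approximates distance-to-PM.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = softmax(W @ (x + delta))
        if p[target] >= conf:          # reached the confident region (the PM)
            return np.linalg.norm(delta)
        # gradient of log p[target] w.r.t. the input, exact for a linear model
        grad = W[target] - p @ W
        delta += lr * grad
    return np.linalg.norm(delta)       # best-effort estimate if conf never reached

d = distance_to_pm(x, target)
```

The paper's prediction is that this distance shrinks as PM dimensionality grows: for high-dimensional PMs, `d` is small for essentially any starting input, which is exactly the condition that makes adversarial examples cheap to find.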


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
inference_time, digital
Datasets
CIFAR-10, ImageNet, RobustBench
Applications
image classification