arXiv · Oct 2, 2025
Camilo Andrés García Trillos, Nicolás García Trillos · University College London · University of Wisconsin-Madison
Derives sharp, efficiently computable lower bounds on adversarial risk for multiclass classifiers under cross-entropy and other general losses
Input Manipulation Attack · Vision
We consider adversarially robust classification in a multiclass setting under arbitrary loss functions and derive dual and barycentric reformulations of the corresponding learner-agnostic robust risk minimization problem. We provide explicit characterizations for important cases such as the cross-entropy loss, loss functions with a power form, and the quadratic loss, thereby extending available results for the 0-1 loss. These reformulations enable efficient computation of sharp lower bounds for adversarial risks and facilitate the design of robust classifiers beyond the 0-1 loss setting. Our paper uncovers interesting connections between adversarial robustness, $\alpha$-fair packing problems, and generalized barycenter problems for arbitrary positive measures where Kullback-Leibler and Tsallis entropies are used as penalties. Our theoretical results are accompanied by illustrative numerical experiments in which we obtain tighter lower bounds for adversarial risks under the cross-entropy loss function.
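As a hedged, illustrative companion to the abstract's notion of adversarial risk under the cross-entropy loss (this is not the paper's method): for a linear classifier, the worst-case cross-entropy loss under an $\ell_\infty$ perturbation of radius eps has a simple closed form, since the adversary can shift the signed margin by at most eps times the $\ell_1$ norm of the weights. The risk of any fixed classifier upper-bounds the optimal adversarial risk, so such a computation gives a point of comparison for lower bounds like those the paper derives. The function name, toy data, and eps below are assumptions for illustration only.

```python
import numpy as np

def adversarial_ce_risk(w, b, X, y, eps):
    """Exact worst-case cross-entropy (logistic) risk of a linear
    binary classifier under l_inf perturbations of radius eps.
    Labels y are in {-1, +1}; the worst-case attack reduces each
    signed margin by eps * ||w||_1."""
    margins = y * (X @ w + b)                  # clean signed margins
    adv_margins = margins - eps * np.abs(w).sum()
    # logistic loss log(1 + exp(-margin)), averaged over the sample
    return np.log1p(np.exp(-adv_margins)).mean()

# toy data: two well-separated points, one per class
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 0.0])
b = 0.0

clean_risk = adversarial_ce_risk(w, b, X, y, eps=0.0)
adv_risk = adversarial_ce_risk(w, b, X, y, eps=0.1)
```

Setting eps to 0 recovers the clean risk, and any positive eps can only increase the loss, so `adv_risk >= clean_risk` always holds for this construction.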