
Shapes are not enough: CONSERVAttack and its use for finding vulnerabilities and uncertainties in machine learning applications

Philip Bechtle 1, Lucie Flek, Philipp Alexander Jung 2, Akbar Karimi, Timo Saala 1, Alexander Schmidt 2, Matthias Schott 1, Philipp Soldin 2, Christopher Wiebusch 2, Ulrich Willemsen 2


Published on arXiv: 2603.13970

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Adversarial perturbations successfully fool ML models while remaining within statistical uncertainty bounds of marginal distributions and correlations, demonstrating a previously unexplored source of systematic uncertainty in HEP analyses

CONSERVAttack

Novel technique introduced


In High Energy Physics, as in many other fields of science, machine learning techniques have been crucial in advancing our understanding of fundamental phenomena. Increasingly, deep learning models are applied to analyze both simulated and experimental data. In most experiments, a rigorous regime of testing for physically motivated systematic uncertainties is in place. The numerical evaluation of these tests quantifies the effect of potential sources of mismodelling, i.e. differences between data and simulation, on the machine learning output. In addition, thorough comparisons of marginal distributions and (linear) feature correlations between data and simulation are performed in "control regions". However, guidance by physical motivation, together with the need to constrain comparisons to specific regions, does not guarantee that all possible sources of deviation have been accounted for. We therefore propose a new adversarial attack, the CONSERVAttack, designed to exploit the space of hypothetical deviations between simulation and data that remains after the above-mentioned tests. The resulting adversarial perturbations are consistent within the uncertainty bounds, evading standard validation checks, while successfully fooling the underlying model. We further propose strategies to mitigate such vulnerabilities and argue that robustness to adversarial effects must be considered when interpreting results from deep learning in particle physics.


Key Contributions

  • Proposes CONSERVAttack, an adversarial attack that exploits high-dimensional correlations while preserving marginal distributions and linear correlations to evade standard validation checks
  • Demonstrates that traditional physics validation procedures (marginal distributions, pairwise correlations) are insufficient to detect adversarial vulnerabilities in ML models
  • Provides a workflow for estimating upper bounds on model susceptibility to adversarial perturbations in scientific applications
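The "standard validation checks" that the second bullet calls insufficient can be illustrated with a minimal sketch: a two-sample Kolmogorov-Smirnov comparison per marginal plus an element-wise comparison of Pearson correlation matrices. The function names, thresholds, and toy data below are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def passes_standard_checks(data, perturbed, ks_crit=1.358, corr_tol=0.02):
    """Illustrative stand-in for 'marginals + linear correlations' validation
    (thresholds are assumptions). ks_crit=1.358 is the standard alpha=0.05
    critical coefficient for the two-sample KS test."""
    n, m = len(data), len(perturbed)
    thresh = ks_crit * np.sqrt((n + m) / (n * m))
    # Per-feature check of the marginal distributions.
    for j in range(data.shape[1]):
        if ks_stat(data[:, j], perturbed[:, j]) > thresh:
            return False
    # Pairwise (linear) Pearson correlation matrices must agree element-wise.
    c0 = np.corrcoef(data, rowvar=False)
    c1 = np.corrcoef(perturbed, rowvar=False)
    return bool(np.max(np.abs(c0 - c1)) <= corr_tol)

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 4))
# A small, structured perturbation can slip past both checks
# while still moving events across a classifier's decision boundary.
eps = 0.01 * rng.normal(size=x.shape)
print(passes_standard_checks(x, x + eps))
```

The point of the sketch: both checks look only at one- and two-dimensional projections of the data, so any perturbation confined to higher-dimensional structure is invisible to them by construction.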

🛡️ Threat Analysis

Input Manipulation Attack

CONSERVAttack is an adversarial perturbation attack designed to fool ML classifiers at inference time by crafting inputs that remain within uncertainty bounds of marginal distributions and correlations, but exploit high-dimensional decision boundaries to cause misclassification.
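A CONSERVAttack-style loop can be sketched, under heavy assumptions, as white-box signed-gradient ascent with a projection step standing in for the "stay within uncertainty bounds" constraint. The linear classifier, step size, and L-infinity box below are illustrative choices, not the authors' setup; a real attack would project onto the far subtler constraint that marginals and correlations stay within their uncertainties.

```python
import numpy as np

def conservattack_sketch(X, w, b, steps=50, lr=0.05, eps=0.1):
    """Hedged sketch of a CONSERVAttack-style loop (not the authors' code):
    white-box access to a linear classifier s(x) = w.x + b, FGSM-style
    signed steps, and a small per-event box projection as a crude proxy
    for 'perturbations within statistical uncertainty bounds'."""
    orig_pos = (X @ w + b) > 0              # original predictions (fixed targets)
    flip = np.where(orig_pos, -1.0, 1.0)    # push each event across the boundary
    Xa = X.copy()
    for _ in range(steps):
        # For a linear score the input gradient is just w, so the signed
        # step toward the opposite class is flip * sign(w) per feature.
        Xa = Xa + lr * flip[:, None] * np.sign(w)[None, :]
        # Project back into a small per-feature box so that marginal
        # distributions and correlations of the dataset barely move.
        Xa = np.clip(Xa, X - eps, X + eps)
    return Xa

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
w, b = np.array([1.0, -0.5, 0.3, 0.8]), 0.0
Xa = conservattack_sketch(X, w, b)
flipped = ((X @ w + b > 0) != (Xa @ w + b > 0)).mean()
print(f"fraction of events flipped: {flipped:.2%}")
```

Only events close to the decision boundary can be flipped within the box, which is exactly why the dataset-level summaries stay consistent: each individual perturbation is tiny, yet the classifier output changes for a non-negligible fraction of events.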


Details

Domains
tabular
Model Types
traditional_ml, cnn
Threat Tags
inference_time, targeted, digital, white_box
Applications
particle physics, high energy physics event classification