Colliding with Adversaries at ECML-PKDD 2025: Adversarial Attack Competition 1st Prize Solution
Dimitris Stefanopoulos, Andreas Voskou
Published on arXiv: 2510.16440
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieved first place in Task 1 of the ECML-PKDD 2025 adversarial attack competition with the best combined fooling ratio and L1 perturbation score
This report presents the winning solution for Task 1 of "Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery" at ECML-PKDD 2025. The task required designing an adversarial attack against a provided classification model that maximizes misclassification while minimizing perturbations. Our approach employs a multi-round gradient-based strategy that leverages the differentiable structure of the model, augmented with random initialization and sample-mixing techniques to enhance effectiveness. The resulting attack achieved the best results in perturbation size and fooling success rate, securing first place in the competition.
Key Contributions
- Dual-objective loss that switches between fooling loss (negative BCE) when prediction has not yet flipped and L1 minimization once a flip is achieved, decoupling the two objectives to reduce variance
- Multi-round optimization with 150 rounds × 20 parallel runs and a decaying random step-size schedule to escape local optima and reduce perturbation size
- Sample-mixing augmentation to further enhance attack effectiveness across the tabular high-energy physics dataset
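The dual-objective switch in the first contribution can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the per-sample return, and the sigmoid/binary setup are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_objective_loss(logits, y_true, x_adv, x_orig):
    # Sketch of the switching loss (names hypothetical): while the model's
    # prediction still matches the label, minimize the fooling loss
    # (negative BCE), which drives the sample across the decision boundary;
    # once a sample has flipped, minimize its L1 perturbation instead.
    p = sigmoid(logits)
    flipped = (p > 0.5) != (y_true > 0.5)        # per-sample flip test
    eps = 1e-12                                   # numerical safety for log
    bce = -(y_true * np.log(p + eps) + (1.0 - y_true) * np.log(1.0 - p + eps))
    fooling = -bce                                # minimizing this maximizes BCE
    l1 = np.abs(x_adv - x_orig).sum(axis=1)       # perturbation size per sample
    # Returned per sample; average over the batch before taking a gradient step.
    return np.where(flipped, l1, fooling)
```

Decoupling the objectives this way means a sample is never pulled back toward the clean input before it has actually crossed the boundary, which is the variance reduction the bullet above refers to.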
🛡️ Threat Analysis
Proposes a gradient-based adversarial attack against a tabular neural network classifier, maximizing misclassification while minimizing L1 perturbation — a direct input manipulation attack at inference time using a dual-objective optimization strategy.
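The overall attack loop described above can be sketched against a toy differentiable model. Everything here is illustrative: the logistic "model", its weights, and the hyperparameter defaults stand in for the competition's provided network and the paper's tuned settings (150 rounds × 20 runs with a decaying random step size).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the provided classifier: logistic regression with
# hypothetical weights. The real target is a tabular neural network.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    # P(class = 1) for a single feature vector x.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def attack(x_orig, y_true, rounds=150, runs=20, lr0=0.5, decay=0.97):
    # Multi-round sketch: several random restarts, gradient steps on the
    # dual-objective loss, and a decaying randomized step size to escape
    # local optima. Keep the flipped candidate with the smallest L1.
    best, best_l1 = x_orig.copy(), np.inf
    for _ in range(runs):
        x = x_orig + rng.normal(scale=0.05, size=x_orig.shape)  # random init
        for t in range(rounds):
            p = predict(x)
            flipped = (p > 0.5) != (y_true > 0.5)
            if flipped:
                l1 = np.abs(x - x_orig).sum()
                if l1 < best_l1:                  # record best flipped candidate
                    best, best_l1 = x.copy(), l1
            # Decaying step size with random jitter.
            lr = lr0 * (decay ** t) * rng.uniform(0.5, 1.5)
            if flipped:
                grad = np.sign(x - x_orig)        # gradient of L1 w.r.t. x
            else:
                grad = -(p - y_true) * w          # gradient of -BCE w.r.t. x
            x -= lr * grad
    return best, best_l1
```

A run on a confidently classified point, e.g. `attack(np.array([1.0, -1.0, 0.5]), y_true=1.0)`, returns a perturbed input whose prediction has flipped, together with its L1 distance from the original.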