attack · arXiv · Oct 18, 2025
Dimitris Stefanopoulos, Andreas Voskou · Aristotle University of Thessaloniki · Cyprus University of Technology
Wins the adversarial attack competition with a dual-objective gradient descent that alternates between a fooling loss and L1-norm minimization over 150 rounds
Input Manipulation Attack · tabular
This report presents the winning solution for Task 1 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery at ECML-PKDD 2025. The task required designing an adversarial attack against a provided classification model that maximizes misclassification while minimizing perturbations. Our approach employs a multi-round gradient-based strategy that leverages the differentiable structure of the model, augmented with random initialization and sample-mixing techniques to enhance effectiveness. The resulting attack achieved the best results in perturbation size and fooling success rate, securing first place in the competition.
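The summary above describes an attack that alternates between two objectives: pushing samples across the decision boundary, then shrinking the L1 norm of the perturbation for samples that are already fooled. The sketch below illustrates that alternating scheme on a toy linear softmax classifier. It is an illustration under stated assumptions, not the authors' implementation: the model, the step sizes, the random-initialization range, and the soft-thresholding L1 step are all hypothetical stand-ins, and the paper's sample-mixing technique is omitted.

```python
import numpy as np

def dual_objective_attack(W, b, x, y, rounds=150, step=0.01, l1_step=0.005):
    """Alternating-objective attack on a linear softmax classifier
    (hypothetical stand-in for the competition model).

    Even rounds: gradient ascent on the cross-entropy (fooling) loss.
    Odd rounds: shrink the perturbation's L1 norm, but only for
    samples that the model already misclassifies.
    """
    rng = np.random.default_rng(0)
    # Random initialization of the perturbation (assumed range).
    delta = rng.uniform(-0.01, 0.01, size=x.shape)
    for r in range(rounds):
        logits = (x + delta) @ W + b
        z = logits - logits.max(axis=1, keepdims=True)  # stable softmax
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        if r % 2 == 0:
            # d(cross-entropy)/d(input) for softmax: (p - onehot(y)) @ W.T
            g = p.copy()
            g[np.arange(len(y)), y] -= 1.0
            delta += step * np.sign(g @ W.T)  # signed ascent step
        else:
            fooled = p.argmax(axis=1) != y
            # Soft-threshold the perturbation toward zero where fooled,
            # reducing its L1 norm without touching unfooled samples.
            delta[fooled] = np.sign(delta[fooled]) * np.maximum(
                np.abs(delta[fooled]) - l1_step, 0.0)
    return x + delta
```

Splitting the two objectives across rounds, rather than summing them into one loss, lets the fooling steps run unimpeded while the shrinkage steps only act on samples where misclassification is already secured.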
traditional_ml