Defense · 2025

An Empirical Study of Accuracy-Robustness Tradeoff and Training Efficiency in Self-Supervised Learning

Fatemeh Ghofrani, Pooyan Jamshidi



Published on arXiv: 2501.03507

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

CF-AMC-SSL achieves a superior balance between clean accuracy and adversarial robustness compared to prior robust SSL baselines while reducing the number of required training epochs via free adversarial training.

CF-AMC-SSL

Novel technique introduced


Self-supervised learning (SSL) has significantly advanced image representation learning, yet efficiency challenges persist, particularly with adversarial training. Many SSL methods require extensive epochs to achieve convergence, a demand further amplified in adversarial settings. To address this inefficiency, we revisit the robust EMP-SSL framework, emphasizing the importance of increasing the number of crops per image to accelerate learning. Unlike traditional contrastive learning, robust EMP-SSL leverages multi-crop sampling, integrates an invariance term and regularization, and reduces training epochs, enhancing time efficiency. Evaluated with both standard linear classifiers and multi-patch embedding aggregation, robust EMP-SSL provides new insights into SSL evaluation strategies. Our results show that robust crop-based EMP-SSL not only accelerates convergence but also achieves a superior balance between clean accuracy and adversarial robustness, outperforming multi-crop embedding aggregation. Additionally, we extend this approach with free adversarial training in Multi-Crop SSL, introducing the Cost-Free Adversarial Multi-Crop Self-Supervised Learning (CF-AMC-SSL) method. CF-AMC-SSL demonstrates the effectiveness of free adversarial training in reducing training time while simultaneously improving clean accuracy and adversarial robustness. These findings underscore the potential of CF-AMC-SSL for practical SSL applications. Our code is publicly available at https://github.com/softsys4ai/CF-AMC-SSL.
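The abstract's core mechanism, many crops per image pulled together by an invariance term, can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: it uses a 1-D "image", a hand-written two-feature encoder, and an EMP-SSL-style invariance loss (mean squared distance of each crop embedding to the mean embedding of all crops); `random_crop`, `encode`, and `invariance_loss` are hypothetical names introduced here.

```python
import random

def random_crop(image, crop_size):
    """Take a random contiguous patch from a 1-D 'image' (toy stand-in for an image crop)."""
    start = random.randrange(0, len(image) - crop_size + 1)
    return image[start:start + crop_size]

def encode(crop):
    """Toy 'encoder': (mean, range) of the patch as a 2-D embedding."""
    return (sum(crop) / len(crop), max(crop) - min(crop))

def invariance_loss(embeddings):
    """EMP-SSL-style invariance term: mean squared distance of each crop
    embedding to the mean embedding, so all crops of one image embed nearby."""
    dim = len(embeddings[0])
    mean = [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]
    return sum(
        sum((e[d] - mean[d]) ** 2 for d in range(dim)) for e in embeddings
    ) / len(embeddings)

random.seed(0)
image = [random.random() for _ in range(32)]
crops = [random_crop(image, 8) for _ in range(16)]   # many crops per image
loss = invariance_loss([encode(c) for c in crops])
print(loss)
```

In the real framework the encoder is a deep network and a regularization term prevents the trivial collapse (all embeddings identical) that would make this loss zero; driving this invariance term down across many crops per image is what lets training converge in far fewer epochs.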


Key Contributions

  • Revisits the robust EMP-SSL framework showing that increasing the number of crops per image accelerates convergence and improves the accuracy-robustness tradeoff in adversarial SSL
  • Introduces CF-AMC-SSL (Cost-Free Adversarial Multi-Crop Self-Supervised Learning) by applying free adversarial training to multi-crop SSL, reducing training cost while simultaneously improving clean accuracy and adversarial robustness
  • Empirically compares standard linear classifiers vs. multi-patch embedding aggregation as SSL evaluation strategies under adversarial settings
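The "free" adversarial training idea behind CF-AMC-SSL can be illustrated with a minimal sketch, assuming the standard free-AT recipe: each minibatch is replayed m times, and the single backward pass per replay supplies both the weight gradient (model update) and the input gradient (perturbation update), so adversarial examples cost no extra passes and the epoch budget is divided by m. This toy uses a scalar logistic model with hand-derived gradients rather than the paper's SSL encoder; `free_adv_train` and its parameters are illustrative names.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def free_adv_train(data, epochs, m, eps, lr):
    """Free adversarial training on a toy logistic model p = sigmoid(w*x + b).

    Per replay, one gradient computation yields dL/dw (SGD step on the model)
    and dL/dx (sign step on the stored perturbation delta), clipped to [-eps, eps].
    """
    w, b = 0.0, 0.0
    delta = [0.0] * len(data)          # per-example perturbation, reused across replays
    for _ in range(max(1, epochs // m)):   # epoch budget shrunk by the replay factor
        for i, (x, y) in enumerate(data):
            for _ in range(m):             # minibatch replay
                x_adv = x + delta[i]
                p = sigmoid(w * x_adv + b)
                g = p - y                  # dL/dz for logistic loss
                w -= lr * g * x_adv        # model update from dL/dw
                b -= lr * g
                gx = g * w                 # dL/dx, obtained "for free"
                step = eps if gx > 0 else -eps
                delta[i] = max(-eps, min(eps, delta[i] + step))
    return w, b

# Toy separable data: positive x -> label 1.
data = [(1.5, 1), (2.0, 1), (-1.5, 0), (-2.0, 0)]
w, b = free_adv_train(data, epochs=12, m=4, eps=0.1, lr=0.5)
print(w)
```

The design point the paper exploits is that the replay loop makes robustness essentially free in wall-clock terms: the same total number of forward/backward passes now produces both a trained model and on-the-fly adversarial examples, which is what reduces training epochs while preserving the accuracy-robustness balance.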

🛡️ Threat Analysis

Input Manipulation Attack

Proposes adversarial training as a defense against adversarial examples in the SSL setting. The core contribution is improving a model's resistance to adversarial input perturbations at inference time while studying the tradeoff between clean accuracy and adversarial robustness.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, untargeted
Datasets
CIFAR-10, CIFAR-100, STL-10
Applications
image classification, image representation learning