defense 2025

CODED-SMOOTHING: Coding Theory Helps Generalization

Parsa Moradi, Tayyebeh Jahaninezhad, Mohammad Ali Maddah-Ali

0 citations · 49 references · arXiv


Published on arXiv: 2510.00253

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Coded-smoothing achieves state-of-the-art robustness against gradient-based adversarial attacks while simultaneously improving generalization on supervised and unsupervised tasks.

Coded-Smoothing

Novel technique introduced


Abstract

We introduce the coded-smoothing module, which can be seamlessly integrated into standard training pipelines, both supervised and unsupervised, to regularize learning and improve generalization with minimal computational overhead. In addition, it can be incorporated into the inference pipeline to randomize the model and enhance robustness against adversarial perturbations. The design of coded-smoothing is inspired by general coded computing, a paradigm originally developed to mitigate straggler and adversarial failures in distributed computing by processing linear combinations of the data rather than the raw inputs. Building on this principle, we adapt coded computing to machine learning by designing an efficient and effective regularization mechanism that encourages smoother representations and more generalizable solutions. Extensive experiments on both supervised and unsupervised tasks demonstrate that coded-smoothing consistently improves generalization and achieves state-of-the-art robustness against gradient-based adversarial attacks.


Key Contributions

  • Coded-smoothing module that generates linear combinations of batch inputs (coded samples) and uses a decoding step to reconstruct outputs, inducing local smoothness and reducing model complexity
  • Auxiliary training penalty that encourages decoded outputs to match true targets, regularizing models toward smoother and more generalizable solutions
  • Inference-time deployment mode that randomizes the model via coded inputs to achieve state-of-the-art robustness against gradient-based adversarial attacks
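The training-time mechanism in the first two bullets can be illustrated with a minimal sketch. This is not the paper's construction: a random convex mixing matrix stands in for the paper's coded-computing encoder/decoder, and `coded_smoothing_penalty` is a name invented here for illustration. The sketch only shows the core idea that the penalty vanishes when the model acts linearly on the batch's span and grows with local non-smoothness.

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_smoothing_penalty(f, X, Y, num_coded=4, rng=rng):
    """Hypothetical sketch of a coded-smoothing-style auxiliary penalty.

    Encode: coded samples are random convex combinations of the batch inputs.
    Decode: the same combination weights applied to the true targets give the
    reference the model's outputs on coded samples are compared against.
    """
    n = X.shape[0]
    # Encoder matrix: each row is a set of convex mixing weights over the batch.
    G = rng.random((num_coded, n))
    G /= G.sum(axis=1, keepdims=True)
    X_coded = G @ X            # coded inputs: linear combinations of raw inputs
    Y_coded = G @ Y            # decoded reference: same combinations of targets
    # The penalty is zero iff f commutes with the mixing, i.e. behaves
    # linearly on the batch; it penalizes sharply non-smooth behavior.
    return np.mean((f(X_coded) - Y_coded) ** 2)

# Toy check: an exactly linear model incurs (numerically) zero penalty,
# while a nonlinear one does not.
W = rng.standard_normal((3, 2))
f_linear = lambda X: X @ W
X = rng.standard_normal((8, 3))
Y = f_linear(X)
print(coded_smoothing_penalty(f_linear, X, Y))  # ~0 for a linear model
```

In a real pipeline this term would be added, with a weight, to the ordinary task loss on the uncoded batch.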

🛡️ Threat Analysis

Input Manipulation Attack

The coded-smoothing module is explicitly designed to enhance robustness against gradient-based adversarial perturbations at inference time by randomizing the model via linear combinations of inputs — a direct defense against adversarial example attacks. The paper claims SOTA adversarial robustness and specifically frames inference-time deployment as an adversarial defense mechanism, not merely an incidental evaluation.
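The inference-time defense described above can be sketched as follows. Again this is an assumption-laden illustration, not the paper's method: a fresh near-identity random mixing matrix plays the role of the coded-computing encoder, `randomized_predict` is an invented name, and decoding is done by direct matrix inversion. The point is that the model only ever sees coded inputs, and the randomness is redrawn per query, which destabilizes gradient-based attacks.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_predict(f, X, rng=rng):
    """Hypothetical inference-time randomization in the spirit of coded-smoothing.

    A fresh random, well-conditioned mixing matrix G encodes the test batch;
    the model is evaluated only on the coded inputs G @ X, and per-sample
    predictions are recovered by decoding with G's inverse.
    """
    n = X.shape[0]
    # Near-identity perturbation keeps G invertible and well-conditioned.
    G = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    return np.linalg.solve(G, f(G @ X))

# For a linear model the decode is exact; for a nonlinear model each call
# yields a slightly different, randomized prediction.
W = rng.standard_normal((3, 2))
f_linear = lambda X: X @ W
X = rng.standard_normal((5, 3))
print(np.allclose(randomized_predict(f_linear, X), f_linear(X)))  # True
```

Because G is resampled on every forward pass, an attacker querying the deployed model cannot compute gradients through a fixed computation graph, which is the property the paper exploits against gradient-based attacks.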


Details

Domains: vision
Model Types: cnn, transformer
Threat Tags: white_box, inference_time, digital
Applications: image classification