
Orthogonium: A Unified, Efficient Library of Orthogonal and 1-Lipschitz Building Blocks

Thibaut Boissin, Franck Mamalet, Valentin Lafargue, Mathieu Serrurier



Published on arXiv: 2601.13776

Input Manipulation Attack (OWASP ML Top 10: ML01)

Key Finding

Orthogonium reduces the computational overhead of orthogonal and 1-Lipschitz layers on ImageNet-scale benchmarks while maintaining strict Lipschitz constraints, and its test suite uncovered critical correctness bugs in existing implementations.



Orthogonal and 1-Lipschitz neural network layers are essential building blocks in robust deep learning architectures, crucial for certified adversarial robustness, stable generative models, and reliable recurrent networks. Despite significant advancements, existing implementations remain fragmented, limited, and computationally demanding. To address these issues, we introduce Orthogonium, a unified, efficient, and comprehensive PyTorch library providing orthogonal and 1-Lipschitz layers. Orthogonium provides access to standard convolution features, including support for strides, dilation, grouping, and transposed convolutions, while maintaining strict mathematical guarantees. Its optimized implementations reduce overhead on large-scale benchmarks such as ImageNet. Moreover, rigorous testing within the library has uncovered critical errors in existing implementations, emphasizing the importance of standardized and reliable tools. Orthogonium thus significantly lowers adoption barriers, enabling scalable experimentation and integration across diverse applications requiring orthogonality and robust Lipschitz constraints. Orthogonium is available at https://github.com/deel-ai/orthogonium.
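As a concrete illustration of the property the abstract refers to (not Orthogonium's own API), the sketch below builds an orthogonal weight matrix in NumPy via a QR decomposition and verifies the two guarantees that matter for robust layers: W^T W = I, and exact norm preservation, which makes the linear map x -> Wx precisely 1-Lipschitz.

```python
import numpy as np

rng = np.random.default_rng(0)

# Obtain an orthogonal matrix from the QR decomposition of a random
# Gaussian matrix (one classical construction; Orthogonium implements
# its own differentiable parameterizations for training).
A = rng.standard_normal((64, 64))
W, _ = np.linalg.qr(A)

# Orthogonality: W^T W = I, so every singular value of W equals 1.
assert np.allclose(W.T @ W, np.eye(64), atol=1e-10)

# Consequence: the map x -> W x preserves L2 distances exactly,
# hence is 1-Lipschitz -- the guarantee certified robustness builds on.
x, y = rng.standard_normal(64), rng.standard_normal(64)
assert np.isclose(np.linalg.norm(W @ x - W @ y), np.linalg.norm(x - y))
```

Orthogonal layers are the norm-preserving special case; 1-Lipschitz layers only require that distances never expand.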


Key Contributions

  • Unified PyTorch library consolidating fragmented orthogonal and 1-Lipschitz layer implementations into a single, standardized API
  • Optimized implementations supporting strides, dilation, grouping, and transposed convolutions with strict mathematical guarantees
  • Rigorous test suite that uncovered critical errors in existing third-party implementations
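One way such a test suite can catch correctness bugs is an oracle check: materialize a small convolution as an explicit matrix and compare its exact spectral norm, i.e. its true L2 Lipschitz constant, against the claimed bound. The helper below is a hypothetical illustration of that idea for a strided 1D convolution, not code from the library.

```python
import numpy as np

def conv1d_matrix(kernel, n_in, stride=1):
    """Explicit matrix of a 1D 'valid' cross-correlation with the given
    stride -- a brute-force oracle for small sizes."""
    k = len(kernel)
    n_out = (n_in - k) // stride + 1
    M = np.zeros((n_out, n_in))
    for i in range(n_out):
        M[i, i * stride : i * stride + k] = kernel
    return M

# A toy kernel; any 1-Lipschitz-constrained layer must keep the
# spectral norm of this operator at or below 1.
kernel = np.array([0.5, -0.25, 0.25])
M = conv1d_matrix(kernel, n_in=11, stride=2)

# Largest singular value = exact L2 Lipschitz constant of the layer.
lip = np.linalg.svd(M, compute_uv=False).max()
assert lip <= 1.0
```

Stride, dilation, grouping, and transposition all change the structure of this matrix, which is why supporting them while keeping the constraint exact is nontrivial.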

🛡️ Threat Analysis

Input Manipulation Attack

1-Lipschitz and orthogonal layers provide tight certified robustness bounds against adversarial input perturbations at inference time; enabling such certified defenses is the library's core stated use case.
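To make the certification concrete, here is the standard margin-based L2 certificate for a 1-Lipschitz classifier (a well-known bound from the certified-robustness literature, sketched here as an assumption, not Orthogonium-specific code): no perturbation with L2 norm below the top-2 logit margin divided by sqrt(2) can change the predicted class.

```python
import numpy as np

def certified_radius(logits):
    """L2 certified radius for a 1-Lipschitz classifier: since each
    pairwise logit difference is sqrt(2)-Lipschitz, a perturbation of
    norm r changes the margin by at most sqrt(2) * r."""
    top2 = np.sort(logits)[-2:]
    return (top2[1] - top2[0]) / np.sqrt(2.0)

logits = np.array([2.1, 0.3, -1.0])  # example 1-Lipschitz network output
r = certified_radius(logits)
# Any input perturbation with ||delta||_2 < r provably cannot flip
# the prediction -- this is the ML01 defense mechanism.
print(f"certified radius = {r:.3f}")  # prints: certified radius = 1.273
```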


Details

Domains
vision, generative
Model Types
CNN, RNN, GAN
Threat Tags
inference_time
Datasets
ImageNet
Applications
certified adversarial robustness, generative models, recurrent networks, image classification