RESQ: A Unified Framework for REliability- and Security Enhancement of Quantized Deep Neural Networks
Ali Soltan Mohammadi 1, Samira Nazari 1, Ali Azarpeyvand 1, Mahdi Taheri 2,3, Milos Krstic 4, Michael Huebner 2, Christian Herglotz 2, Tara Ghasempouri 3
Published on arXiv
2603.15413
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves up to 10.35% improvement in attack resilience and 12.47% in fault resilience on quantized networks while maintaining competitive accuracy across multiple architectures
RESQ
Novel technique introduced
This work proposes a unified three-stage framework that produces a quantized DNN with balanced fault and attack robustness. The first stage improves attack resilience via fine-tuning that desensitizes feature representations to small input perturbations. The second stage reinforces fault resilience through fault-aware fine-tuning under simulated bit-flip faults. Finally, a lightweight post-training adjustment integrates quantization to enhance efficiency and further mitigate fault sensitivity without degrading attack resilience. Experiments with ResNet18, VGG16, EfficientNet, and Swin-Tiny on CIFAR-10, CIFAR-100, and GTSRB show consistent gains of up to 10.35% in attack resilience and 12.47% in fault resilience, while maintaining competitive accuracy on the quantized networks. The results also highlight an asymmetric interaction: improvements in fault resilience generally increase resilience to adversarial attacks, whereas enhanced adversarial resilience does not necessarily lead to higher fault resilience.
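The second stage fine-tunes under simulated bit-flip faults. The paper does not publish its fault-injection code, but the core operation can be sketched as follows: a minimal numpy illustration (function name and interface are my own) that flips random bits in an int8 quantized weight tensor at a given bit-error rate, which is the standard way such hardware faults are modeled.

```python
import numpy as np

def inject_bitflips(weights_q, bit_error_rate, rng):
    """Simulate hardware bit-flip faults in int8 quantized weights.

    weights_q      : np.int8 array of quantized weights.
    bit_error_rate : probability that any single stored bit is flipped.
    rng            : np.random.Generator for reproducibility.
    """
    # Reinterpret the int8 bytes as uint8 so bitwise XOR is well-defined.
    flat = weights_q.view(np.uint8).ravel().copy()
    n_bits = flat.size * 8
    # Draw the number of faulty bits, then flip each at a random position.
    n_flips = rng.binomial(n_bits, bit_error_rate)
    for _ in range(n_flips):
        byte_idx = rng.integers(flat.size)
        bit_idx = rng.integers(8)
        flat[byte_idx] ^= np.uint8(1 << bit_idx)
    return flat.view(np.int8).reshape(weights_q.shape)
```

During fault-aware fine-tuning, such a corrupted copy of the weights would be used in the forward pass so that the loss gradient steers the network toward parameters whose predictions degrade gracefully under bit flips.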
Key Contributions
- Three-stage unified framework (RESQ) that sequentially enhances both adversarial and fault resilience in quantized DNNs
- Demonstrates asymmetric interaction where fault resilience improvements increase adversarial robustness, but not vice versa
- Validates approach across CNNs (ResNet18, VGG16, EfficientNet) and Vision Transformers (Swin-Tiny) with up to 10.35% attack resilience and 12.47% fault resilience gains
🛡️ Threat Analysis
The paper defends against adversarial input perturbations (FGSM and MIM attacks) through adversarial fine-tuning that desensitizes feature representations to small input perturbations, achieving up to a 10.35% improvement in attack resilience.
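The FGSM attack referenced above perturbs each input feature by one epsilon-step along the sign of the loss gradient, maximizing loss under an L-infinity budget. A minimal numpy sketch (not the paper's code; the toy linear model and its analytic gradient are assumptions for illustration):

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """FGSM step: x' = x + eps * sign(dL/dx), bounded by eps in L-infinity."""
    return x + eps * np.sign(grad_x)

# Toy example: squared-error loss of a linear model, gradient taken w.r.t. x.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 3.0])
y = 0.0
grad_x = (w @ x - y) * w          # d/dx of 0.5 * (w.x - y)^2
x_adv = fgsm_perturb(x, grad_x, eps=0.1)
# x_adv == [1.1, 1.9, 3.1]: each feature moved by exactly +/- eps
```

Adversarial fine-tuning, as used in the paper's first stage, trains on such perturbed inputs so the learned features become insensitive to them; MIM differs from FGSM mainly in iterating this step with gradient momentum.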