Time-Efficient Evaluation and Enhancement of Adversarial Robustness in Deep Neural Networks
Published on arXiv
arXiv:2512.20893
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Proposes computationally efficient methods for both red-team evaluation and blue-team enhancement of adversarial robustness, addressing the scalability limits of existing approaches on large-scale DNNs.
With deep neural networks (DNNs) increasingly embedded in modern society, ensuring their safety has become a critical and urgent issue. In response, substantial efforts have been dedicated to the red-blue adversarial framework, where the red team focuses on identifying vulnerabilities in DNNs and the blue team on mitigating them. However, existing approaches from both teams remain computationally intensive, constraining their applicability to large-scale models. To overcome this limitation, this thesis endeavours to provide time-efficient methods for the evaluation and enhancement of adversarial robustness in DNNs.
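The red-team side of this framework centers on crafting adversarial examples, inputs with small perturbations that flip a model's prediction. As a minimal illustration (not the paper's method), the classic one-step fast gradient sign method (FGSM) on a logistic model can be sketched as follows; the model weights and input values here are invented for demonstration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, b, x, y, eps):
    """One-step FGSM on a logistic model p = sigmoid(w @ x + b)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w           # gradient of the logistic loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# illustrative fixed model and input (hypothetical values)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1.0

x_adv = fgsm_attack(w, b, x, y, eps=0.5)
clean_pred = sigmoid(w @ x + b) > 0.5      # correct on the clean input
adv_pred = sigmoid(w @ x_adv + b) > 0.5    # flipped on the perturbed input
```

A red-team evaluation in the paper's sense would run attacks like this (and far stronger ones) at scale, which is precisely where the computational cost the thesis targets comes from.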
Key Contributions
- Time-efficient methods for evaluating adversarial robustness of DNNs (red team)
- Time-efficient methods for enhancing adversarial robustness of DNNs (blue team)
- Scalable adversarial robustness framework applicable to large-scale models
🛡️ Threat Analysis
The paper addresses adversarial vulnerabilities in DNNs from both the attack side (red-team evaluation, i.e. finding adversarial examples) and the defense side (blue-team enhancement, i.e. mitigating those vulnerabilities), which places it squarely within the scope of ML01: Input Manipulation Attack and its defenses.
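On the blue-team side, a standard enhancement baseline is adversarial training: fit the model on attacked inputs rather than clean ones. The sketch below, a toy logistic-regression loop on synthetic data with a one-step FGSM inner attack, is an assumed illustration of the general idea, not the thesis's specific (and more time-efficient) method:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 2-D dataset (synthetic): class 1 near (1, 1), class 0 near (-1, -1)
X = np.vstack([rng.normal(1.0, 0.3, (50, 2)),
               rng.normal(-1.0, 0.3, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_batch(w, b, X, y, eps):
    """One-step FGSM for every row of X against a logistic model."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w        # per-sample loss gradient w.r.t. the input
    return X + eps * np.sign(grad_x)

w, b, eps, lr = np.zeros(2), 0.0, 0.3, 0.5
for _ in range(200):
    X_adv = fgsm_batch(w, b, X, y, eps)  # inner step: craft perturbed inputs
    p = sigmoid(X_adv @ w + b)
    w -= lr * ((p - y) @ X_adv) / len(y)  # outer step: fit the perturbed batch
    b -= lr * (p - y).mean()

X_test_adv = fgsm_batch(w, b, X, y, eps)
robust_acc = ((sigmoid(X_test_adv @ w + b) > 0.5) == y).mean()
```

The inner attack step is exactly what makes adversarial training expensive for large models: each training update requires extra gradient computations to craft the perturbed batch, which motivates the time-efficient alternatives the thesis pursues.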