Verifying DNN-based Semantic Communication Against Generative Adversarial Noise
Thanh Le 1, Hai Duong 2, ThanhVu Nguyen 2, Takeshi Matsumura 1
Published on arXiv
2602.08801
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
VSCAN provides formal robustness guarantees for 44% of 600 adversarial verification properties while matching attack methods in vulnerability discovery, and reveals that compact 16-dimensional latent spaces achieve 50% verified robustness compared to 64-dimensional spaces.
VSCAN
Novel technique introduced
Safety-critical applications like autonomous vehicles and industrial IoT are adopting semantic communication (SemCom) systems using deep neural networks to reduce bandwidth and increase transmission speed by transmitting only task-relevant semantic features. However, adversarial attacks against these DNN-based SemCom systems can cause catastrophic failures by manipulating transmitted semantic features. Existing defense mechanisms rely on empirical approaches and provide no formal guarantees against the full spectrum of adversarial perturbations. We present VSCAN, a neural network verification framework that provides mathematical robustness guarantees by formulating adversarial noise generation as mixed integer programming and verifying end-to-end properties across multiple interconnected networks (encoder, decoder, and task model). Our key insight is that realistic adversarial constraints (power limitations and statistical undetectability) can be encoded as logical formulae to enable efficient verification using state-of-the-art DNN verifiers. Our evaluation on 600 verification properties characterizing various attacker capabilities shows that VSCAN matches attack methods in finding vulnerabilities while providing formal robustness guarantees for 44% of properties -- a significant achievement given the complexity of multi-network verification. Moreover, we reveal a fundamental security-efficiency tradeoff: compact 16-dimensional latent spaces achieve 50% verified robustness compared to 64-dimensional spaces.
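To make the end-to-end verification idea concrete: a robustness property asks whether *every* power-bounded noise vector added to the transmitted latent features leaves the task model's decision unchanged. VSCAN answers this exactly via a MIP encoding; the sketch below substitutes interval bound propagation (IBP), a simpler sound-but-incomplete method, purely to illustrate the shape of such a check. All weights, dimensions, and the per-coordinate noise bound `eps` are invented for illustration and are not taken from the paper.

```python
# Toy end-to-end robustness check via interval bound propagation (IBP).
# Note: VSCAN uses an exact MIP formulation; IBP is a simpler stand-in
# used here only to illustrate what "verified robust" means.

def relu_interval(lo, hi):
    """Interval image of ReLU applied coordinate-wise."""
    return max(0.0, lo), max(0.0, hi)

def affine_bounds(W, b, lows, his):
    """Propagate per-coordinate intervals through y = Wx + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, l, h in zip(row, lows, his):
            # A positive weight maps [l, h] monotonically; a negative
            # weight flips the interval's endpoints.
            lo += w * l if w >= 0 else w * h
            hi += w * h if w >= 0 else w * l
        out_lo.append(lo)
        out_hi.append(hi)
    return out_lo, out_hi

def verify_robust(latent, eps, W1, b1, W2, b2, target):
    """Return True iff every noise n with |n_i| <= eps added to the
    latent keeps the `target` logit strictly largest (verified robust)."""
    lows = [x - eps for x in latent]
    his = [x + eps for x in latent]
    lows, his = affine_bounds(W1, b1, lows, his)          # decoder layer
    pairs = [relu_interval(l, h) for l, h in zip(lows, his)]
    lows, his = [p[0] for p in pairs], [p[1] for p in pairs]
    lows, his = affine_bounds(W2, b2, lows, his)          # task-model head
    # Robust if the target's worst case beats every rival's best case.
    return all(lows[target] > his[j]
               for j in range(len(lows)) if j != target)

# Hypothetical 2-dimensional latent and identity-like weights:
W1 = [[1.0, 0.0], [0.0, 1.0]]; b1 = [0.0, 0.0]
W2 = [[1.0, 0.0], [0.0, 1.0]]; b2 = [0.0, 0.0]
print(verify_robust([1.0, 0.0], 0.1, W1, b1, W2, b2, 0))  # small noise: verified
print(verify_robust([1.0, 0.0], 0.6, W1, b1, W2, b2, 0))  # large noise: not verified
```

Because IBP over-approximates, a `False` here means "could not verify", not "attack exists"; the paper's MIP formulation avoids this incompleteness at higher solver cost.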
Key Contributions
- VSCAN framework that formulates adversarial noise generation as mixed integer programming (MIP) to enable formal end-to-end verification across encoder, decoder, and task model networks
- Encoding of realistic adversarial constraints (power limitations, statistical undetectability) as logical formulae for efficient DNN verifier integration
- Empirical demonstration of a security-efficiency tradeoff: 16-dimensional latent spaces achieve 50% verified robustness versus 64-dimensional spaces, across 600 verification properties
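The MIP formulation in the first contribution hinges on encoding each ReLU exactly with a binary variable. The sketch below shows the standard big-M encoding that exact DNN verifiers rely on; the constant `M` and the helper functions are illustrative assumptions, not details from the VSCAN paper.

```python
# Hedged sketch: standard big-M mixed-integer encoding of y = max(0, x).
# M is an assumed valid bound on |x| (pre-activation range); in a real
# MIP these would be linear constraints handed to a solver, and the
# binary variable a would be chosen by the solver, not enumerated.

M = 100.0  # assumed bound: |x| <= M must hold for the encoding to be exact

def relu_mip_constraints(x, y, a):
    """Linear constraints that, together with binary a, force y == max(0, x).
    When a == 1 (active phase) they pin y == x; when a == 0, y == 0."""
    return [
        y >= 0.0,               # output never negative
        y >= x,                 # output dominates input
        y <= x + M * (1 - a),   # a == 1  =>  y <= x, so y == x
        y <= M * a,             # a == 0  =>  y <= 0, so y == 0
    ]

def feasible(x, y, a):
    """Check whether a candidate assignment satisfies the encoding."""
    return all(relu_mip_constraints(x, y, a))

print(feasible(3.0, 3.0, 1))   # active phase: consistent
print(feasible(-2.0, 0.0, 0))  # inactive phase: consistent
print(feasible(3.0, 0.0, 0))   # wrong phase for positive input: rejected
```

Stacking these constraints layer by layer across encoder, decoder, and task model, plus linear constraints for the noise power budget, yields the single MIP whose infeasibility certifies robustness.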
🛡️ Threat Analysis
Paper defends against adversarial input perturbations that manipulate transmitted semantic features to cause task-model misclassification; VSCAN provides certified robustness guarantees (formal verification) against the full spectrum of bounded adversarial perturbations at inference time — a direct ML01 defense contribution.