defense 2025

Understanding and Improving Adversarial Robustness of Neural Probabilistic Circuits

Weixin Chen, Han Zhao

0 citations · 56 references · arXiv


Published on arXiv: 2509.20549

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

RNPC achieves provably improved adversarial robustness over NPC and empirically outperforms existing concept bottleneck models, while maintaining clean accuracy on benign inputs.

RNPC (Robust Neural Probabilistic Circuit)

Novel technique introduced


Neural Probabilistic Circuits (NPCs), a new class of concept bottleneck models, comprise an attribute recognition model and a probabilistic circuit for reasoning. By integrating the outputs from these two modules, NPCs produce compositional and interpretable predictions. While offering enhanced interpretability and high performance on downstream tasks, the neural-network-based attribute recognition model remains a black box. This vulnerability allows adversarial attacks to manipulate attribute predictions by introducing carefully crafted subtle perturbations to input images, potentially compromising the final predictions. In this paper, we theoretically analyze the adversarial robustness of NPC and demonstrate that it only depends on the robustness of the attribute recognition model and is independent of the robustness of the probabilistic circuit. Moreover, we propose RNPC, the first robust neural probabilistic circuit against adversarial attacks on the recognition module. RNPC introduces a novel class-wise integration for inference, ensuring a robust combination of outputs from the two modules. Our theoretical analysis demonstrates that RNPC exhibits provably improved adversarial robustness compared to NPC. Empirical results on image classification tasks show that RNPC achieves superior adversarial robustness compared to existing concept bottleneck models while maintaining high accuracy on benign inputs.
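The composition described in the abstract can be sketched concretely. The snippet below is a minimal illustrative sketch, not the paper's implementation: it assumes NPC-style inference marginalizes the circuit's class-conditional table over the attribute configurations predicted by the recognition model. All numbers are hypothetical. The point it illustrates is the paper's theoretical claim: an attacker who shifts only the attribute posterior p(a|x) shifts the final prediction, while the circuit's table p(y|a) is never touched.

```python
import numpy as np

# Hypothetical attribute posterior p(a|x) from the neural recognition model,
# over 4 discrete attribute configurations.
p_attr = np.array([0.7, 0.1, 0.1, 0.1])

# Hypothetical probabilistic-circuit table p(y|a): 3 classes per configuration.
p_y_given_a = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8,  0.1],
    [0.2, 0.2,  0.6],
    [0.3, 0.3,  0.4],
])

# NPC-style composition: p(y|x) = sum_a p(a|x) * p(y|a).
p_y = p_attr @ p_y_given_a          # benign prediction

# An attack on the recognition module only has to reshuffle p(a|x);
# the circuit table above is untouched, yet the predicted class flips.
p_attr_adv = np.array([0.1, 0.7, 0.1, 0.1])
p_y_adv = p_attr_adv @ p_y_given_a  # adversarial prediction
```

Because the circuit is applied as a fixed lookup over attribute configurations, the end-to-end robustness in this sketch hinges entirely on how stable p(a|x) is under input perturbations, matching the paper's finding that NPC robustness depends solely on the recognition module.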


Key Contributions

  • Theoretical analysis showing NPC adversarial robustness depends solely on the attribute recognition module and is independent of the probabilistic circuit component
  • RNPC: a novel class-wise integration mechanism for inference that provably improves adversarial robustness over standard NPCs
  • Empirical demonstration that RNPC outperforms existing concept bottleneck models in adversarial robustness while preserving benign accuracy

🛡️ Threat Analysis

Input Manipulation Attack

The threat model consists of adversarial perturbations applied to input images at inference time that manipulate the attribute predictions of Neural Probabilistic Circuits, and thereby the final output. The paper proposes RNPC as a defense with provable adversarial robustness guarantees — a direct defense against input manipulation attacks.
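To make this threat model concrete, the sketch below applies an FGSM-style (gradient-sign) perturbation to a toy linear attribute recognizer, a stand-in for the paper's CNN/transformer backbone. The model, weights, and step size here are all hypothetical; the sketch only shows the lever such attacks pull: a small input step against the gradient of the predicted attribute's logit suppresses that attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear attribute recognizer: 8 input features -> 3 attribute logits.
# (Hypothetical stand-in for a CNN/transformer recognition model.)
W = rng.normal(size=(8, 3))
x = rng.normal(size=8)               # benign input

logits = x @ W
target_attr = int(np.argmax(logits)) # attribute predicted on benign input

# FGSM-style step: for a linear model, the gradient of logits[target_attr]
# w.r.t. x is simply W[:, target_attr]; step against its sign.
eps = 0.5
x_adv = x - eps * np.sign(W[:, target_attr])
logits_adv = x_adv @ W               # the target attribute's logit drops
```

For a linear model this step provably lowers the attacked logit by eps times the L1 norm of the corresponding weight column; against a deep recognizer the same recipe uses backpropagated gradients, which is why the recognition module is the attack surface the paper hardens.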


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, digital
Applications
image classification, concept bottleneck models