eXIAA: eXplainable Injections for Adversarial Attack
Leonardo Pesce 1, Jiawen Wei 1, Gianmarco Mengaldo 1,2
Published on arXiv (arXiv:2511.10088)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Single-step black-box perturbations produce dramatically different XAI explanations (saliency maps, integrated gradients, DeepLIFT SHAP) on ResNet-18 and ViT-B16 while preserving predicted class probabilities and remaining imperceptible as measured by SSIM.
eXIAA
Novel technique introduced
Post-hoc explainability methods are a family of Machine Learning (ML) techniques that aim to explain why a model behaves in a certain way. In this paper, we present a new black-box, model-agnostic adversarial attack on post-hoc explainable Artificial Intelligence (XAI), particularly in the image domain. The goal of the attack is to modify the original explanations while remaining undetectable to the human eye and maintaining the same predicted class. In contrast to previous methods, we do not require any access to the model or its weights, only to the model's computed predictions and explanations. Additionally, the attack is accomplished in a single step while significantly changing the provided explanations, as demonstrated by empirical evaluation. The low requirements of our method expose a critical vulnerability in current explainability methods, raising concerns about their reliability in safety-critical applications. We systematically generate attacks based on the explanations produced by post-hoc explainability methods (saliency maps, integrated gradients, and DeepLIFT SHAP) for pretrained ResNet-18 and ViT-B16 on ImageNet. The results show that our attacks can lead to dramatically different explanations without changing the predictive probabilities. We validate the effectiveness of our attack, quantify the induced change in the explanations with the mean absolute difference, and verify the closeness of the original image and the corrupted one with the Structural Similarity Index Measure (SSIM).
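The single-step, black-box setting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes query access to two oracles, `predict` and `explain`, and the perturbation rule (a small `eps` step along the sign of the centered explanation map) is a hypothetical stand-in for whatever update the authors derive from the explanations.

```python
import numpy as np

def exiaa_style_attack(image, predict, explain, eps=2 / 255):
    """Hypothetical single-step, black-box perturbation sketch.

    Only the `predict` and `explain` oracles are queried; no model
    weights or gradients are accessed, matching the paper's threat model.
    The direction rule below is an assumption for illustration.
    """
    base_expl = explain(image)
    # Step away from the explanation's mean attribution (assumed rule).
    direction = np.sign(base_expl - base_expl.mean())
    adv = np.clip(image + eps * direction, 0.0, 1.0)
    # Enforce the class-preservation constraint: reject the perturbation
    # if the predicted class changes.
    if predict(adv).argmax() != predict(image).argmax():
        return image
    return adv
```

Because the step size is bounded by `eps`, the perturbed image stays visually close to the original, which is what the SSIM check in the evaluation then quantifies.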
Key Contributions
- Novel black-box, model-agnostic, single-step adversarial attack (eXIAA) that corrupts XAI explanation outputs without accessing the model architecture or weights, requiring only the model's predictions and explanations
- Systematic evaluation across three post-hoc explainability methods (saliency maps, integrated gradients, DeepLIFT SHAP) on ResNet-18 and ViT-B16 with ImageNet
- Demonstrates that dramatically different explanations can be induced without altering predictive probabilities, quantified via mean absolute difference and SSIM
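The two evaluation metrics named above can be made concrete. A minimal sketch follows: mean absolute difference between explanation maps, and a simplified single-window SSIM (the paper's evaluation would presumably use the standard windowed variant, e.g. `skimage.metrics.structural_similarity`, so treat this as an approximation of the formula rather than the exact measurement pipeline).

```python
import numpy as np

def mean_abs_diff(expl_a, expl_b):
    """Mean absolute difference between two explanation maps."""
    return float(np.mean(np.abs(expl_a - expl_b)))

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM over whole images (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

A successful attack in this framing shows a large `mean_abs_diff` between the original and post-attack explanations while `ssim_global` between the original and perturbed images stays close to 1.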
🛡️ Threat Analysis
The core contribution is crafting adversarial perturbations on input images (black-box, model-agnostic, single-step) that corrupt post-hoc explanations (saliency maps, integrated gradients, DeepLIFT SHAP) while keeping the predicted class unchanged. This is fundamentally an input manipulation attack that targets the explanation output rather than the class label.