Defense · 2025

SHLIME: Foiling adversarial attacks fooling SHAP and LIME

Sam Chauhan, Estelle Duguet, Karthik Ramakrishnan, Hugh Van Deventer, Jack Kruger, Ranjan Subbaraman


Published on arXiv (2508.11053)

Output Integrity Attack (OWASP ML Top 10: ML09)

Key Finding

Certain LIME-SHAP ensemble configurations substantially improve detection of adversarially concealed model biases compared to either explanation method alone, across classifiers spanning a range of F1 scores

SHLIME

Novel technique introduced


Abstract

Post hoc explanation methods, such as LIME and SHAP, provide interpretable insights into black-box classifiers and are increasingly used to assess model biases and generalizability. However, these methods are vulnerable to adversarial manipulation, potentially concealing harmful biases. Building on the work of Slack et al. (2020), we investigate the susceptibility of LIME and SHAP to biased models and evaluate strategies for improving robustness. We first replicate the original COMPAS experiment to validate prior findings and establish a baseline. We then introduce a modular testing framework enabling systematic evaluation of augmented and ensemble explanation approaches across classifiers of varying performance. Using this framework, we assess multiple LIME/SHAP ensemble configurations on out-of-distribution models, comparing their resistance to bias concealment against the original methods. Our results identify configurations that substantially improve bias detection, highlighting their potential for enhancing transparency in the deployment of high-stakes machine learning systems.


Key Contributions

  • Replication of Slack et al. (2020) COMPAS adversarial LIME/SHAP attack experiment to validate prior findings
  • Modular testing framework for systematic evaluation of augmented and ensemble LIME/SHAP configurations on out-of-distribution classifiers
  • Identification of ensemble LIME-SHAP configurations that substantially improve bias detection and resistance to adversarial bias concealment (a minimal sketch of one such ensemble follows this list)
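
The paper's exact ensemble configurations are not detailed in this summary, but the core idea can be sketched with the public `lime` and `shap` APIs: query both explainers on the same instance and flag a sensitive feature if either method ranks it highly, so an attacker must defeat both explainers at once. The toy model, function names, top-k threshold, and union rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the audited black-box classifier.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

def lime_top_features(x, top_k=3):
    """Features ranked by absolute LIME weight for one instance."""
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, mode="classification",
        discretize_continuous=False)
    exp = explainer.explain_instance(x, model.predict_proba,
                                     num_features=X.shape[1])
    # as_map() yields (feature_index, weight) pairs, most important first.
    return [feature_names[i] for i, _ in exp.as_map()[1]][:top_k]

def shap_top_features(x, top_k=3):
    """Features ranked by absolute Kernel SHAP value for one instance."""
    f = lambda data: model.predict_proba(data)[:, 1]  # scalar output per row
    explainer = shap.KernelExplainer(f, X[:50])       # small background set
    sv = explainer.shap_values(x.reshape(1, -1))[0]   # shape (n_features,)
    order = np.argsort(np.abs(sv))[::-1]
    return [feature_names[i] for i in order[:top_k]]

def ensemble_flags_bias(x, sensitive_feature="f0", top_k=3):
    """Union ensemble: report the sensitive feature as influential if EITHER
    explainer ranks it in its top-k. To conceal a bias, an attacker must
    fool both explainers simultaneously."""
    top = set(lime_top_features(x, top_k)) | set(shap_top_features(x, top_k))
    return sensitive_feature in top

print(ensemble_flags_bias(X[0]))
```

Richer aggregation rules (for example, averaging normalized importance scores or voting across multiple perturbation seeds) fit the same interface.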

🛡️ Threat Analysis

Output Integrity Attack

The adversarial attack corrupts output integrity: LIME and SHAP produce misleading or incorrect explanations because the deliberately biased model exploits the gap between its behavior on in-distribution inputs and on the out-of-distribution (OOD) perturbations that the explainers generate. The paper defends against this by proposing ensemble explanation configurations that restore the integrity of model explanations. The sketch below illustrates the underlying attack.
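
For context, here is a minimal sketch of the scaffolding attack from Slack et al. (2020) that the paper replicates: a wrapper routes inputs an OOD detector flags as explainer perturbations to an innocuous decoy model, while real inputs hit the biased model. The synthetic data, class names, and single-column decoy models are illustrative assumptions, not the original implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "real" data with correlated features; explainer perturbations
# that sample features independently break this correlation.
X_real = rng.normal(size=(500, 4))
X_real[:, 3] = X_real[:, 0] + 0.1 * rng.normal(size=500)
X_pert = np.column_stack([rng.permutation(col) for col in X_real.T])

# OOD detector: learns to separate real rows (0) from perturbation-like rows (1).
detector = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_real, X_pert]),
    np.r_[np.zeros(len(X_real)), np.ones(len(X_pert))],
)

class RuleModel:
    """Tiny stand-in classifier whose output depends on a single column."""
    def __init__(self, col):
        self.col = col
    def predict_proba(self, X):
        p = (np.asarray(X)[:, self.col] > 0).astype(float)
        return np.column_stack([1.0 - p, p])

class AdversarialScaffold:
    """Routes real-looking inputs to the biased model and OOD-looking inputs
    (i.e., explainer perturbations) to an innocuous decoy, so post hoc
    explanations rarely observe the bias."""
    def __init__(self, biased, innocuous, detector):
        self.biased, self.innocuous, self.detector = biased, innocuous, detector
    def predict_proba(self, X):
        X = np.asarray(X)
        ood = self.detector.predict(X).astype(bool)
        out = np.empty((len(X), 2))
        if (~ood).any():
            out[~ood] = self.biased.predict_proba(X[~ood])
        if ood.any():
            out[ood] = self.innocuous.predict_proba(X[ood])
        return out

# Biased model keys on "sensitive" column 0; the decoy keys on column 1.
scaffold = AdversarialScaffold(RuleModel(0), RuleModel(1), detector)
# Pointing LIME/SHAP at scaffold.predict_proba mostly explains the decoy.
```

Because LIME's tabular perturbations and Kernel SHAP's background substitutions both tend to break feature correlations present in real data, such a detector can separate them from genuine inputs with high accuracy, which is what makes the concealment effective.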


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
black_box, inference_time, targeted
Datasets
COMPAS, Communities and Crime, German Credit
Applications
recidivism risk prediction, criminal justice ML, fairness auditing