CLIMAX: An exploration of Classifier-Based Contrastive Explanations

07/02/2023
by Praharsh Nanavati, et al.

Explainable AI is an evolving area that deals with understanding the decision-making of machine learning models so that these models are more transparent, accountable, and understandable for humans. In particular, post-hoc model-agnostic interpretable AI techniques explain the decisions of a black-box ML model for a single instance locally, without knowledge of the intrinsic nature of the ML model. Despite their simplicity and capability to provide valuable insights, existing approaches fail to deliver consistent and reliable explanations. Moreover, in the context of black-box classifiers, existing approaches justify the predicted class, but do not ensure that the explanation scores differ strongly from those of other classes. In this work we propose a novel post-hoc, model-agnostic XAI technique that provides contrastive explanations justifying the classification of a black-box classifier, along with reasoning as to why another class was not predicted. Our method, which we refer to as CLIMAX (short for Contrastive Label-aware Influence-based Model-Agnostic XAI), is based on local classifiers. To ensure model fidelity of the explainer, we require the perturbations to yield a class-balanced surrogate dataset. Towards this, we employ a label-aware surrogate data generation method based on random oversampling and Gaussian Mixture Model sampling. Further, we propose influence subsampling in order to retain effective samples and thus control sample complexity. We show that we achieve better consistency compared to baselines such as LIME, BayLIME, and SLIME. We also present results on textual and image-based datasets, generating contrastive explanations for any black-box classification model where one is only able to query the class probabilities for an instance of interest.
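The pipeline the abstract outlines (perturb around an instance, class-balance the surrogate dataset via Gaussian Mixture Model sampling, fit a local classifier, and contrast the predicted "fact" class against a "foil" class) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses a single-component scikit-learn `GaussianMixture` for the oversampling step, a proximity-weighted logistic regression as the local classifier, a toy random forest as the black box, and it omits the influence-subsampling step entirely.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy black-box classifier: we only ever query its class probabilities,
# matching the setting described in the abstract.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_contrastive(x, n_samples=600, sigma=2.0):
    """Contrastive local surrogate: predicted (fact) class vs. runner-up (foil)."""
    probs = black_box.predict_proba(x.reshape(1, -1))[0]
    order = np.argsort(probs)
    fact, foil = order[-1], order[-2]

    # 1) Gaussian perturbations around the instance of interest.
    Z = x + sigma * rng.standard_normal((n_samples, x.size))
    p = black_box.predict_proba(Z)
    labels = np.where(p[:, fact] >= p[:, foil], fact, foil)

    # Robustness guard for this toy demo: if one side is nearly absent,
    # relabel the samples closest to the fact/foil boundary.
    for c in (fact, foil):
        if np.sum(labels == c) < 2:
            border = np.argsort(np.abs(p[:, fact] - p[:, foil]))[:10]
            labels[border] = c

    # 2) Class-balance the surrogate dataset by GMM-sampling the minority class.
    n_fact, n_foil = np.sum(labels == fact), np.sum(labels == foil)
    minority = fact if n_fact < n_foil else foil
    need = abs(n_fact - n_foil)
    if need > 0:
        gmm = GaussianMixture(n_components=1, random_state=0)
        gmm.fit(Z[labels == minority])
        extra, _ = gmm.sample(need)
        Z = np.vstack([Z, extra])
        labels = np.concatenate([labels, np.full(need, minority)])

    # 3) Local surrogate *classifier* (not a regressor), weighted by proximity.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    surrogate = LogisticRegression(max_iter=1000).fit(Z, labels, sample_weight=w)

    # 4) Contrastive scores: positive values push toward the fact class.
    scores = surrogate.coef_[0]
    if surrogate.classes_[1] != fact:
        scores = -scores
    return scores, fact, foil
```

Calling `explain_contrastive(X[0])` returns one score per feature indicating how strongly that feature pushes the local decision toward the predicted class and away from the foil class, which is the contrastive reading the abstract argues plain LIME-style explanations lack.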


Related research

- LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees (05/04/2020)
- bLIMEy: Surrogate Prediction Explanations Beyond LIME (10/29/2019)
- Consistent Explanations by Contrastive Learning (10/01/2021)
- How to Explain Individual Classification Decisions (12/06/2009)
- Influential Sample Selection: A Graph Signal Processing Approach (11/15/2017)
- Towards Global Explanations for Credit Risk Scoring (11/19/2018)
- LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding (06/15/2019)
