Post-hoc explanation of black-box classifiers using confident itemsets

05/05/2020
by Milad Moradi, et al.

It is difficult to trust decisions made by black-box Artificial Intelligence (AI) methods because their inner workings and decision logic are hidden from the user. Explainable Artificial Intelligence (XAI) refers to systems that try to explain how a black-box AI model produces its outcomes. Post-hoc XAI methods approximate the behavior of a black-box by extracting relationships between feature values and predictions. Some post-hoc explanators randomly perturb data records and build local linear models to explain individual predictions. Other types of explanators use frequent itemsets to extract feature values that frequently appear in samples belonging to a particular class. However, both approaches have limitations. Random perturbations do not take into account the distribution of feature values in different subspaces, which leads to misleading approximations. Frequent itemsets only capture feature values that appear often and therefore miss many important correlations between features and class labels that could accurately represent the decision boundaries of the model. In this paper, we address these challenges by proposing an explanation method named Confident Itemsets Explanation (CIE). We introduce confident itemsets, sets of feature values that are highly correlated with a specific class label. CIE uses confident itemsets to discretize the whole decision space of a model into smaller subspaces. By extracting important correlations between the features and the outcomes of the black-box in different subspaces, CIE produces instance-wise and class-wise explanations that accurately approximate the behavior of the target black-box classifier.
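
To make the notion of a confident itemset concrete, the sketch below mines (feature, value) combinations whose confidence for a predicted class label exceeds a threshold, using labels obtained from the black-box itself. This is a minimal illustration of the idea described in the abstract, not the authors' implementation; the function name mine_confident_itemsets, the parameters min_confidence and max_len, and the toy black-box rule are assumptions introduced here, and samples are assumed to be already discretized.

```python
from itertools import combinations
from collections import Counter


def mine_confident_itemsets(samples, black_box_predict, min_confidence=0.9, max_len=2):
    """Mine (feature, value) itemsets that are highly correlated with a class label.

    samples:            list of dicts mapping feature name -> discretized value
    black_box_predict:  callable returning the black-box's class label for one sample
    min_confidence:     minimum P(class | itemset) for an itemset to be kept
    max_len:            maximum number of (feature, value) pairs per itemset
    """
    # Label every sample with the black-box's prediction.
    labels = [black_box_predict(s) for s in samples]

    # Count how often each itemset appears overall and together with each class label.
    itemset_counts = Counter()
    itemset_class_counts = Counter()
    for sample, label in zip(samples, labels):
        items = tuple(sorted(sample.items()))
        for k in range(1, max_len + 1):
            for itemset in combinations(items, k):
                itemset_counts[itemset] += 1
                itemset_class_counts[(itemset, label)] += 1

    # Keep itemsets whose confidence P(class | itemset) reaches the threshold.
    confident = []
    for (itemset, label), co_count in itemset_class_counts.items():
        confidence = co_count / itemset_counts[itemset]
        if confidence >= min_confidence:
            confident.append((itemset, label, confidence))
    return sorted(confident, key=lambda x: -x[2])


if __name__ == "__main__":
    # Toy example; the decision rule stands in for a trained black-box classifier.
    data = [
        {"age": "30-40", "income": "high"},
        {"age": "30-40", "income": "low"},
        {"age": "20-30", "income": "high"},
    ]

    def black_box(sample):
        return "approve" if sample["income"] == "high" else "reject"

    for itemset, label, conf in mine_confident_itemsets(data, black_box, min_confidence=0.8):
        print(itemset, "->", label, f"(confidence={conf:.2f})")
```

In practice one would also impose a minimum support threshold, as in standard itemset mining, so that itemsets backed by very few samples are not reported as confident.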

Related research

10/21/2020 - Explaining black-box text classifiers for disease-treatment information extraction
Deep neural networks and other intricate Artificial Intelligence (AI) mo...

02/15/2023 - Streamlining models with explanations in the learning loop
Several explainable AI methods allow a Machine Learning user to get insi...

06/18/2021 - Rational Shapley Values
Explaining the predictions of opaque machine learning algorithms is an i...

06/22/2022 - Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
With the rapid development of eXplainable Artificial Intelligence (XAI),...

06/27/2022 - Thermodynamics of Interpretation
Over the past few years, different types of data-driven Artificial Intel...

10/04/2019 - Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
For AI systems to garner widespread public acceptance, we must develop m...

01/04/2022 - McXai: Local model-agnostic explanation as two games
To this day, a variety of approaches for providing local interpretabilit...
