Contextual Local Explanation for Black Box Classifiers

10/02/2019
by Zijian Zhang et al.

We introduce CLE, a new model-agnostic explanation technique that explains the prediction of any classifier. CLE produces a faithful and interpretable explanation of a prediction by approximating the model locally with an interpretable model. We demonstrate the flexibility of CLE by explaining different models for text, tabular, and image classification, and its fidelity through simulated user experiments.
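The abstract describes the core mechanism: fit an interpretable model to the black box's behavior in a neighborhood of the instance being explained. A minimal sketch of that general local-surrogate idea is below, using Gaussian perturbations and a weighted linear fit; the function name, noise scale, and kernel are illustrative assumptions, not the authors' CLE implementation.

```python
import numpy as np

def local_surrogate(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around instance x.

    predict_fn: black-box function returning a score (e.g. probability of
    the target class) for a batch of inputs. This is a hypothetical helper
    illustrating local approximation in general, not the paper's CLE.
    """
    rng = np.random.default_rng(seed)
    # Perturb around x with Gaussian noise to probe local behavior.
    Z = x + rng.normal(scale=0.1, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Proximity weights: perturbations closer to x count more.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares; coefficients act as local feature attributions.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # drop intercept, keep per-feature importance
```

On a black box that is itself linear, the surrogate recovers the true coefficients; on a nonlinear model it returns a local linear approximation around `x`.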

research
02/18/2020

A Modified Perturbed Sampling Method for Local Interpretable Model-agnostic Explanation

Explainability is a gateway between Artificial Intelligence and society ...
research
02/16/2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

Despite widespread adoption, machine learning models remain mostly black...
research
10/05/2018

Local Interpretable Model-agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections

We introduce a method, KL-LIME, for explaining predictions of Bayesian p...
research
06/30/2016

A Model Explanation System: Latest Updates and Extensions

We propose a general model explanation system (MES) for "explaining" the...
research
12/14/2020

Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification

Explaining decisions of black-box classifiers is paramount in sensitive ...
research
07/26/2023

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers

Interpretable part-prototype models are computer vision models that are ...
research
12/09/2021

Model Doctor: A Simple Gradient Aggregation Strategy for Diagnosing and Treating CNN Classifiers

Recently, Convolutional Neural Network (CNN) has achieved excellent perf...
