Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives

02/21/2018
by Amit Dhurandhar, et al.

In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black-box classifier such as a deep neural network. Given an input, we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification and, analogously, what should be minimally and necessarily absent (viz. certain background pixels). We argue that such explanations are natural for humans and are commonly used in domains such as health care and criminology. What is minimally but critically absent is an important part of an explanation which, to the best of our knowledge, has not been addressed by current methods that attempt to explain predictions of neural networks. We validate our approach on three real datasets from diverse domains: the handwritten-digits dataset MNIST, a large procurement fraud dataset, and an fMRI brain imaging dataset. In all three cases, our approach generates precise explanations that are also easy for human experts to understand and evaluate.
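The abstract describes the pertinent-negative search as finding what is minimally and necessarily absent from an input. The sketch below is a hypothetical, minimal PyTorch illustration of that idea, assuming a differentiable classifier `model` and an input `x` scaled to [0, 1]; the function name `find_pertinent_negative`, the hinge-style class-change loss, and the elastic-net penalty weights are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def find_pertinent_negative(model, x, beta=0.1, c=1.0, kappa=0.0,
                            lr=0.01, steps=500):
    """Illustrative sketch: search for a small, sparse, non-negative
    perturbation delta such that the prediction on x + delta differs
    from the prediction on x, i.e. what is "minimally absent" from x."""
    model.eval()
    with torch.no_grad():
        orig_class = model(x).argmax(dim=1)          # original prediction

    delta = torch.zeros_like(x, requires_grad=True)  # candidate addition
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = model(torch.clamp(x + delta, 0.0, 1.0))

        # Logit of the original class vs. the best competing class.
        orig_logit = logits.gather(1, orig_class.unsqueeze(1)).squeeze(1)
        mask = torch.nn.functional.one_hot(orig_class, logits.size(1)).bool()
        other_logit = logits.masked_fill(mask, float("-inf")).max(dim=1).values

        # Hinge loss: push some other class above the original one by kappa.
        attack_loss = torch.clamp(orig_logit - other_logit + kappa, min=0.0).sum()
        # Elastic-net penalty keeps the addition small and sparse.
        reg = beta * delta.abs().sum() + (delta ** 2).sum()

        loss = c * attack_loss + reg
        opt.zero_grad()
        loss.backward()
        opt.step()

        with torch.no_grad():
            delta.clamp_(min=0.0)  # only allow adding what was absent in x

    return delta.detach()
```

Per the abstract, an analogous search with the roles reversed, looking for what must be minimally and sufficiently present, would yield the pertinent positives.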


Related research

Explaining NLP Models via Minimal Contrastive Editing (MiCE) (12/27/2020)
Humans give contrastive explanations that explain why an observed event ...

Model Agnostic Contrastive Explanations for Structured Data (05/31/2019)
Recently, a method [7] was proposed to generate contrastive explanations...

Two-Stage Holistic and Contrastive Explanation of Image Classification (06/10/2023)
The need to explain the output of a deep neural network classifier is no...

Explaining Chemical Toxicity using Missing Features (09/23/2020)
Chemical toxicity prediction using machine learning is important in drug...

TbExplain: A Text-based Explanation Method for Scene Classification Models with the Statistical Prediction Correction (07/19/2023)
The field of Explainable Artificial Intelligence (XAI) aims to improve t...

Explain Any Concept: Segment Anything Meets Concept-Based Explanation (05/17/2023)
EXplainable AI (XAI) is an essential topic to improve human understandin...

Semantic Explanations of Predictions (05/27/2018)
The main objective of explanations is to transmit knowledge to humans. T...
