Explainable Image Classification with Evidence Counterfactual

04/16/2020

by Tom Vermeire, et al.

The complexity of state-of-the-art modeling techniques for image classification impedes the ability to explain model predictions in an interpretable way. Existing explanation methods generally create importance rankings in terms of pixels or pixel groups. However, the resulting explanations lack an optimal size, do not consider feature dependence, and relate to only one class. Counterfactual explanation methods are considered promising for explaining complex model decisions, since they are associated with a high degree of human interpretability. In this paper, SEDC is introduced as a model-agnostic, instance-level explanation method for image classification that produces visual counterfactual explanations. For a given image, SEDC searches for a small set of segments whose removal alters the classification. As image classification tasks are typically multiclass problems, SEDC-T is proposed as an alternative method that allows specifying a target counterfactual class. We compare SEDC(-T) with popular feature importance methods such as LRP, LIME and SHAP, and describe how the aforementioned importance-ranking issues are addressed. Moreover, concrete examples and experiments illustrate the potential of our approach (1) to obtain trust and insight, and (2) to obtain input for model improvement by explaining misclassifications.
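The abstract describes SEDC's core idea: find a small set of image segments whose removal flips the model's prediction. A minimal greedy sketch of that search is below, assuming a `classify` function that returns a vector of class scores and a precomputed segmentation mask (in practice produced by an algorithm such as quickshift). All names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sedc_sketch(image, segments, classify, replace_value=0, max_segments=None):
    """Greedy evidence-counterfactual search (sketch).

    Iteratively removes (replaces with `replace_value`) the segment whose
    removal most reduces the score of the originally predicted class,
    stopping as soon as the predicted class changes.

    image    : numpy array of pixel values
    segments : integer array, same spatial shape as `image`, labeling segments
    classify : callable mapping an image to a 1-D array of class scores
    Returns (removed_segment_ids, counterfactual_image), or (None, image')
    if no counterfactual is found within the budget.
    """
    orig_class = int(np.argmax(classify(image)))
    remaining = list(np.unique(segments))
    removed = []
    current = image.copy()
    budget = max_segments or len(remaining)

    for _ in range(budget):
        # Score every candidate removal and keep the most effective one.
        best_seg, best_score = None, None
        for s in remaining:
            candidate = current.copy()
            candidate[segments == s] = replace_value
            score = classify(candidate)[orig_class]
            if best_score is None or score < best_score:
                best_seg, best_score = s, score

        current[segments == best_seg] = replace_value
        removed.append(best_seg)
        remaining.remove(best_seg)

        if int(np.argmax(classify(current))) != orig_class:
            return removed, current  # counterfactual found

    return None, current  # no class change within the segment budget
```

A SEDC-T variant would stop only when `argmax` equals a user-specified target class instead of any class other than the original. On a toy 4x4 "image" whose classifier simply compares the summed intensity of the top and bottom halves, the search removes the two top-half quadrants and flips the prediction.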

