Explainable Image Classification with Evidence Counterfactual

04/16/2020
by Tom Vermeire, et al.

The complexity of state-of-the-art modeling techniques for image classification impedes the ability to explain model predictions in an interpretable way. Existing explanation methods generally create importance rankings in terms of pixels or pixel groups. However, the resulting explanations lack an optimal size, do not account for feature dependence, and relate to only one class. Counterfactual explanation methods are considered promising for explaining complex model decisions, since they are associated with a high degree of human interpretability. In this paper, SEDC is introduced as a model-agnostic, instance-level explanation method for image classification that yields visual counterfactual explanations. For a given image, SEDC searches for a small set of segments that, when removed, alter the classification. As image classification tasks are typically multiclass problems, SEDC-T is proposed as an alternative method that allows specifying a target counterfactual class. We compare SEDC(-T) with popular feature importance methods such as LRP, LIME, and SHAP, and describe how our approach addresses the aforementioned issues with importance rankings. Moreover, concrete examples and experiments illustrate the potential of our approach (1) to build trust and provide insight, and (2) to obtain input for model improvement by explaining misclassifications.
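The abstract describes SEDC as a search for a small set of segments whose removal changes the prediction. The snippet below is a minimal sketch of that idea, not the paper's exact algorithm: the quickshift segmentation, the mean-color notion of "removal", the greedy one-segment-at-a-time scoring, and the `sedc_sketch`/`predict_proba` names are all illustrative assumptions, with `predict_proba` standing in for any black-box classifier. Passing `target_class` mimics the SEDC-T variant.

```python
import numpy as np
from skimage.segmentation import quickshift

def sedc_sketch(image, predict_proba, max_segments=10, target_class=None):
    """Greedily remove segments until the predicted class changes.

    image         -- (H, W, 3) array
    predict_proba -- maps an image to a 1-D array of class probabilities
    target_class  -- if given, stop only when this class wins (SEDC-T style);
                     otherwise stop at any class change (SEDC style)
    """
    segments = quickshift(image, kernel_size=4, max_dist=200, ratio=0.2)
    original_class = int(np.argmax(predict_proba(image)))
    mean_color = image.mean(axis=(0, 1))  # "removal" = replace with mean color
    removed = []                          # segment ids removed so far
    perturbed = image.copy()

    for _ in range(max_segments):
        candidates = [s for s in np.unique(segments) if s not in removed]
        if not candidates:
            break
        best_seg, best_score = None, -np.inf
        for s in candidates:
            trial = perturbed.copy()
            trial[segments == s] = mean_color
            probs = predict_proba(trial)
            # Score pushes probability toward the target class (SEDC-T)
            # or away from the original class (SEDC).
            score = probs[target_class] if target_class is not None \
                    else -probs[original_class]
            if score > best_score:
                best_seg, best_score = s, score
        removed.append(best_seg)
        perturbed[segments == best_seg] = mean_color
        new_class = int(np.argmax(predict_proba(perturbed)))
        flipped = (new_class == target_class) if target_class is not None \
                  else (new_class != original_class)
        if flipped:
            return removed, perturbed  # counterfactual found
    return None, perturbed             # no counterfactual within the budget
```

Mean-color replacement is only one possible way to "remove" evidence; blurring or inpainting the segment are common alternatives in perturbation-based explanation methods.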


Related research

01/21/2020
Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach
Lack of understanding of the decisions made by model-based AI systems is...

06/28/2021
Contrastive Counterfactual Visual Explanations With Overdetermination
A novel explainable AI method called CLEAR Image is introduced in this p...

01/21/2023
Counterfactual Explanation and Instance-Generation using Cycle-Consistent Generative Adversarial Networks
The image-based diagnosis is now a vital aspect of modern automation ass...

11/13/2020
Structured Attention Graphs for Understanding Deep Image Classifications
Attention maps are a popular way of explaining the decisions of convolut...

04/13/2021
Fast Hierarchical Games for Image Explanations
As modern complex neural networks keep breaking records and solving hard...

09/02/2021
Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study
Existing interpretation algorithms have found that, even deep models mak...
