Explaining Deep Convolutional Neural Networks for Image Classification by Evolving Local Interpretable Model-agnostic Explanations

11/28/2022
by Bin Wang, et al.

Deep convolutional neural networks have proven highly effective and are widely acknowledged as the dominant method for image classification. However, a severe drawback of deep convolutional neural networks is their poor explainability. In many real-world applications, users need to understand the rationale behind a deep convolutional neural network's predictions in order to decide whether to trust them. To address this issue, a novel genetic algorithm-based method is proposed, for the first time, to automatically evolve local explanations that help users assess the rationality of the predictions. Furthermore, the proposed method is model-agnostic, i.e., it can be used to explain any deep convolutional neural network model. In the experiments, ResNet is used as an example model to be explained, and ImageNet is selected as the benchmark dataset. DenseNet and MobileNet are also explained to demonstrate the model-agnostic character of the proposed method. The evolved local explanations on four images randomly selected from ImageNet are presented, showing that the explanations are straightforward for humans to recognise. Moreover, the evolved explanations account for the predictions of the deep convolutional neural networks on all four images very well, successfully capturing meaningful interpretable features of the sample images. Further analysis of 30 runs of the experiments shows that the evolved local explanations can also improve the probabilities/confidences of the deep convolutional neural network models in making their predictions. The proposed method obtains a local explanation within one minute, more than ten times faster than LIME, the state-of-the-art method.
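The abstract gives no implementation details, but the core idea it describes, evolving a sparse local explanation for a single prediction, can be sketched as a genetic algorithm over binary masks of superpixels, in the spirit of LIME's interpretable components. The sketch below is hypothetical and not the authors' implementation: the predict_proba callable (a batch-of-images-to-class-probabilities function), the SLIC segmentation, and the GA operators (truncation selection, one-point crossover, bit-flip mutation) and all parameter values are illustrative assumptions.

import numpy as np
from skimage.segmentation import slic

def evolve_explanation(image, predict_proba, target_class,
                       n_segments=50, pop_size=30, generations=40,
                       sparsity_weight=0.01, seed=None):
    """Evolve a binary superpixel mask that keeps the model confident
    in `target_class` while hiding as much of the image as possible.

    image: float array in [0, 1] with shape (H, W, 3).
    predict_proba: hypothetical callable mapping a batch of images of
        shape (B, H, W, 3) to class probabilities of shape (B, C).
    Returns a boolean pixel mask of shape (H, W).
    """
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments, start_label=0)
    n = segments.max() + 1  # actual number of superpixels found

    def fitness(pop):
        # Build masked images: kept superpixels stay, the rest is greyed out.
        masked = np.stack([
            np.where(mask[segments][..., None], image, 0.5)
            for mask in pop.astype(bool)
        ])
        probs = predict_proba(masked)[:, target_class]
        # Reward model confidence, penalise the number of kept superpixels.
        return probs - sparsity_weight * pop.sum(axis=1)

    pop = rng.integers(0, 2, size=(pop_size, n))  # random binary masks
    for _ in range(generations):
        fit = fitness(pop)
        order = np.argsort(fit)[::-1]
        elite = pop[order[: pop_size // 2]]             # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        cut = rng.integers(1, n, size=pop_size)         # one-point crossover
        children = np.where(np.arange(n) < cut[:, None],
                            parents[:, 0], parents[:, 1])
        flip = rng.random(children.shape) < 1.0 / n     # bit-flip mutation
        pop = np.where(flip, 1 - children, children)

    best = pop[np.argmax(fitness(pop))].astype(bool)
    return best[segments]  # pixel-level mask of the evolved explanation

Given the returned mask, the explanation can be visualised as image * mask[..., None], which shows only the evolved region; the fitness function directly mirrors the abstract's observation that good explanations can raise the model's confidence in its prediction while remaining small enough to be human-recognisable.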
