An Extension of LIME with Improvement of Interpretability and Fidelity

04/26/2020 ∙ by Sheng Shi, et al. ∙ Lenovo

While deep learning makes significant achievements in Artificial Intelligence (AI), the lack of transparency has limited its broad application in various vertical domains. Explainability is not only a gateway between AI and the real world, but also a powerful tool for detecting flaws in models and biases in data. Local Interpretable Model-agnostic Explanation (LIME) is a widely accepted technique that explains the prediction of any classifier faithfully by learning an interpretable model locally around the predicted instance. As an extension of LIME, this paper proposes a high-interpretability and high-fidelity local explanation method, known as Local Explanation using feature Dependency Sampling and Nonlinear Approximation (LEDSNA). Given an instance being explained, LEDSNA enhances interpretability by feature sampling with intrinsic dependency. Besides, LEDSNA improves the fidelity of local explanations by approximating the nonlinear boundary of the local decision. We evaluate our method with classification tasks in both the image domain and the text domain. Experiments show that LEDSNA's explanations of black-box models achieve much better performance than the original LIME in terms of interpretability and fidelity.


I Introduction

In recent years, people have witnessed the fast development of Artificial Intelligence (AI) [1, 4, 8]. Compared to traditional machine learning methods, deep learning has achieved superior performance in many challenging tasks. There has been an increasing interest in leveraging deep learning methods to aid decision makers in critical domains such as healthcare and criminal justice. However, because of their nested, complicated structure, deep learning models remain mostly black boxes, which are extremely weak in explaining the reasoning process and prediction results. This makes it challenging for decision makers to understand and trust their functionality. Therefore, the explainability and transparency of deep learning models are essential to ensure their broad application in various vertical domains.

Recently, techniques for the explainability and transparency of deep learning models have received much attention in the research community [6, 11, 15]. Among them, post-hoc techniques that explain black-box models in a human-understandable manner are particularly popular [10, 3, 9]. These techniques generate perturbed samples of a given instance in the feature space and observe the effect of these perturbed samples on the output of the black-box classifier. Due to their generality, they have been used to explain neural networks and complex ensemble models in domains ranging from medicine and law to finance [16, 12]. The most representative system in this category is LIME [10]. Since LIME assumes that the local area of the classification boundary near the input instance is linear, it uses a self-explanatory linear regression model to locally represent the decision and pinpoints important features based on the regression coefficients. Related works [5, 13, 2] have proposed using other models, such as decision trees, to approximate the target decision boundaries.

There are two drawbacks in current local explanation methods such as LIME. First, perturbed samples are generated from a uniform distribution, ignoring the intrinsic correlation between features. This may discard much of the information needed to learn the local explanation model; a proper sampling operation is especially essential in natural language processing and image recognition. Second, most existing methods assume the decision boundary is locally linear, which may produce serious errors because, in most complex networks, the local decision boundary is nonlinear.

In this paper, we design and develop a novel, high-interpretability and high-fidelity local explanation method to address the above challenges. First, we design a unique local sampling process which incorporates a feature clustering method to handle feature dependency. Then, we adopt Support Vector Regression (SVR) with a kernel function to approximate the locally nonlinear boundary. In this way, by simultaneously preserving feature dependency and local nonlinearity, our method produces high-interpretability and high-fidelity explanations. For convenience, we refer to our method as Local Explanation using feature Dependency Sampling and Nonlinear Approximation (LEDSNA).

II Method

In this section, we first introduce the two core characteristics of a local explanation method: interpretability and fidelity. We then introduce feature sampling with intrinsic dependency and the nonlinear boundary of the local decision. Finally, we present the framework of the LEDSNA algorithm.

An explainable model with good interpretability should be faithful to the original model, understandable to the observer, and graspable in a short time so that the end-user can make wise decisions. A local explanation method learns a model $g \in G$ from a set of samples drawn around the instance $x$ being explained. The dissimilarity between the true label $f(z)$ and the predicted label $g(z')$ is defined as the loss function $\mathcal{L}(f, g, \pi_x)$, which is a measure of how unfaithful $g$ is in approximating $f$ in the locality defined by $\pi_x$. In order to ensure both local fidelity and understandability, we add a regularization term $\Omega(g)$ to the loss function:

$$\xi(x) = \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_x) + \Omega(g) \qquad (1)$$

The regularization term $\Omega(g)$ is a measure of the complexity of the explainable model $g$. The smaller the regularization term is, the sparser the model $g$, which leads to better understandability. This is the general framework of LIME [10].
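To make equation (1) concrete, here is a minimal sketch of the baseline LIME step, not the authors' code: fit a locally weighted linear surrogate to the perturbed samples. The inputs `Z`, `y`, and `weights`, and the choice of a ridge penalty standing in for $\Omega(g)$, are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_local_linear_surrogate(Z, y, weights, alpha=1.0):
    """Fit a locally weighted linear surrogate g to black-box outputs.

    Z       : (n_samples, n_features) binary perturbation matrix z'
    y       : (n_samples,) black-box predictions f(z)
    weights : (n_samples,) proximity weights pi_x(z) around the instance
    alpha   : ridge penalty playing the role of the complexity term Omega(g)
    """
    g = Ridge(alpha=alpha)
    g.fit(np.asarray(Z), np.asarray(y), sample_weight=np.asarray(weights))
    return g.coef_  # per-feature contributions used to rank important features
```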

II-A Feature Sampling with Intrinsic Dependency

In current local explanation methods, sampling is performed on each feature independently, ignoring the intrinsic correlation between features. A proper sampling operation is essential because independent sampling may discard much of the information needed to learn the local explanation model. In some cases, when most uniformly generated samples are unrealistic with respect to the actual data distribution, these false contributors lead to a poorly fitting local explanation model. In this section, we design a unique local sampling process which incorporates a feature clustering method to activate a subset of features for better local exploration.

II-A1 Feature Dependency Sampling for Images

A proper sampling operation is especially essential in natural image recognition because the visual features of natural objects exhibit a strong correlation in the spatial neighborhood. For image classification, we adopt a superpixel-based interpretable representation. Each superpixel segment, a group of connected pixels with similar colors or gray levels, is the primary processing unit. We denote by $x \in \mathbb{R}^{d}$ the original representation of an image and by the binary vector $x' \in \{0, 1\}^{d'}$ its interpretable representation, which indicates the presence or absence of each superpixel segment; here $d$ is the number of pixels and $d'$ is the number of superpixels. For images, especially natural images, superpixel segments often correspond to coherent regions of visual objects, showing strong correlation in a spatial neighborhood. In order to learn the local behavior of the image classifier $f$, we generate a group of perturbed samples of a given instance by activating a subset of the superpixels in $x'$. First, we convert the superpixel segments into an undirected graph: the segments are represented as vertices whose edges connect only adjacent segments. Considering a graph $G = (V, E)$, where $V$ and $E$ are the sets of vertices and undirected edges with cardinalities $|V|$ and $|E|$, a subset of $V$ can be represented by a binary vector $z' \in \{0, 1\}^{|V|}$, where $z'_i = 1$ indicates that vertex $i$ is in the subset. The perturbed sampling operation is formalized as finding a clique $C$, in which every two vertices are adjacent. We use the Depth-First Search (DFS) method to obtain the clique $C$; some samples in the clique are shown in Fig. 2. Since there is a strong correlation between adjacent superpixel segments, the clique-set construction takes into full account the various types of neighborhood correlation.

Fig. 1: (a) Pixel-based image; (b) Superpixel image; (c) Constructing a graph of all superpixel blocks
Fig. 2: Some samples in the clique $C$, where every two vertices are adjacent (marked green)
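One plausible way to realize this sampling step (an illustrative sketch, not the authors' implementation) is to segment the image with SLIC, build the superpixel adjacency graph by scanning neighbouring pixel labels, and grow connected groups of adjacent superpixels with a depth-first expansion, which is a relaxation of the clique construction described above. The parameters `n_segments`, `max_size`, and the 0.5 inclusion probability are assumptions made for illustration.

```python
import random
import numpy as np
from skimage.segmentation import slic

def superpixel_adjacency(labels):
    """Build an undirected adjacency map between superpixel labels."""
    adj = {int(l): set() for l in np.unique(labels)}
    # horizontally and vertically neighbouring pixels with different labels share an edge
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        boundary = a != b
        for u, v in zip(a[boundary], b[boundary]):
            adj[int(u)].add(int(v))
            adj[int(v)].add(int(u))
    return adj

def sample_connected_subset(adj, max_size):
    """Depth-first growth of a connected group of adjacent superpixels."""
    start = random.choice(list(adj))
    chosen, stack = {start}, [start]
    while stack and len(chosen) < max_size:
        node = stack.pop()
        for nb in adj[node]:
            if nb not in chosen and random.random() < 0.5:  # randomized exploration
                chosen.add(nb)
                stack.append(nb)
    return chosen

# usage (illustrative): labels = slic(image, n_segments=50, compactness=10)
# adj = superpixel_adjacency(labels); subset = sample_connected_subset(adj, max_size=8)
```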

II-A2 Feature Dependency Sampling for Text

A proper sampling operation is also essential for natural language processing. For text classification, we let the interpretable representation be a bag of words. As with images, $x$ denotes the original representation of a text and the binary vector $x'$ denotes its interpretable representation. In order to learn the local behavior of the text classifier, we generate a group of perturbed samples of a given instance by activating a subset of features. Fig. 3 shows two natural-language sentences, in Chinese and in English; we can see that there are strong semantic dependencies between words, especially in Chinese. If the activated features are obtained by a sampling process in which features are independent of each other, we may lose much of the information needed to learn the local explanation model. In the sampling process, semantically dependent words play the role of adjacent superpixels in an image: they should be selected or unselected at the same time. There are many methods to analyze the semantic dependencies of natural language; here, we incorporate the Stanford CoreNLP [7] tools into the sampling process to obtain the perturbed samples.

Fig. 3: Semantic dependency of Chinese natural language and English natural language
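The sketch below illustrates one way to obtain such groupings in Python using the stanza package (a Python interface to Stanford NLP tooling), rather than the CoreNLP Java toolkit used in the paper. The grouping rule, which attaches a word to its syntactic head for a few chosen relations, and the relation list itself are illustrative assumptions, not the paper's exact procedure.

```python
import stanza

# download the English models once with: stanza.download('en')
nlp = stanza.Pipeline('en', processors='tokenize,pos,lemma,depparse')

def dependency_groups(text, relations=('compound', 'amod', 'det', 'case')):
    """Group words with their syntactic heads so that semantically dependent
    words are activated or deactivated together during perturbation."""
    doc = nlp(text)
    groups = []
    for sent in doc.sentences:
        assigned = {}
        for word in sent.words:
            # tie the word to its head for the chosen dependency relations
            anchor = word.head if word.deprel in relations and word.head > 0 else word.id
            assigned.setdefault(anchor, []).append(word.text)
        groups.extend(assigned.values())
    return groups

# e.g. dependency_groups("The movie was not good at all")
# might yield groups such as ['The', 'movie'], ['was'], ['not'], ['good'], ...
```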

II-B Nonlinear Boundary of Local Decision

Most existing local explanation methods assume the decision boundary is locally linear. These methods may produce serious errors because, in most complex networks, the local decision boundary is nonlinear, and experiments show that a simple linear approximation significantly degrades explanation fidelity. In this section, we adopt Support Vector Regression (SVR) with a kernel function to approximate the nonlinear boundary. In the approximation process, when the data are not distributed linearly in the current feature space, we use a kernel function to project the data points into a higher-dimensional feature space and find the optimal hyperplane there.

The perturbed samples of a given instance often cannot be well fitted by a linear model. Our way to tackle this problem is to apply a kernel mapping $\phi$ that brings the data into a higher-dimensional feature space, where the local model takes the linear form

$$g(z') = w^{\top}\phi(z') + b \qquad (2)$$

After projecting the data points into the higher-dimensional feature space, we search for a hyperplane using an $\epsilon$-insensitive error measure. Specifically, we introduce slack variables for data points that violate the $\epsilon$-insensitive error:

$$E_{\epsilon}\big(g(z'_n) - y_n\big) = \begin{cases} 0, & |g(z'_n) - y_n| < \epsilon \\ |g(z'_n) - y_n| - \epsilon, & \text{otherwise} \end{cases} \qquad (3)$$
Fig. 4: Two slack variables are required to measure the distance between a point and the tube

For each data point $z'_n$ with black-box label $y_n$, two slack variables $\xi_n \ge 0$ and $\hat{\xi}_n \ge 0$ are required to measure whether the point lies above or below the tube:

$$y_n \le g(z'_n) + \epsilon + \xi_n \qquad (4)$$
$$y_n \ge g(z'_n) - \epsilon - \hat{\xi}_n \qquad (5)$$

The learning is performed by solving the optimization problem

$$\min_{w,\, b,\, \xi,\, \hat{\xi}} \; C \sum_{n=1}^{N} (\xi_n + \hat{\xi}_n) + \frac{1}{2}\|w\|^2 \qquad (6)$$

subject to the constraints (4), (5), and $\xi_n \ge 0$, $\hat{\xi}_n \ge 0$ for $n = 1, \dots, N$.

This is the well-known support vector regression method, which can be solved by constructing the Lagrangian function.
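A minimal sketch of this approximation step with scikit-learn, under the assumption that a Gaussian (RBF) kernel is used as in the later experiments; `Z` and `y` are the perturbed samples and their black-box outputs:

```python
import numpy as np
from sklearn.svm import SVR

def fit_local_svr(Z, y, C=1.0, epsilon=0.01, gamma='scale'):
    """Approximate the local decision boundary with epsilon-insensitive kernel SVR.

    Z : (n_samples, n_features) interpretable (binary) perturbations
    y : (n_samples,) black-box predictions f(z)
    """
    svr = SVR(kernel='rbf', C=C, epsilon=epsilon, gamma=gamma)
    svr.fit(np.asarray(Z), np.asarray(y))
    return svr

# usage (illustrative): svr = fit_local_svr(Z, y)
# g_x = svr.predict(x_prime.reshape(1, -1))[0]
```

Note that `svr.coef_` exists only for a linear kernel; with an RBF kernel, a per-feature score has to be recovered indirectly, for instance by measuring how `svr.predict` changes when a single interpretable feature is toggled. This is one possible reading of the final step of Algorithm 1, not a detail confirmed by the text.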

Algorithm 1 shows a simplified workflow of LEDSNA. First, LEDSNA incorporates the feature clustering method into the sampling process to activate subsets of features. Then, LEDSNA uses a kernel function to project the data points into a higher-dimensional feature space. Finally, LEDSNA uses support vector regression to search for a hyperplane and obtain the coefficients of the important features.

0:  Classifier $f$, Instance $x$, Number of perturbed samples $N$
1:  get the interpretable representation $x'$ of $x$ (e.g., a superpixel image for images, a bag of words for text)
2:  get $f(x)$ by the classifier $f$
3:  incorporate the feature clustering method into the sampling process to activate subsets of features, yielding perturbed samples $\{z'_i\}_{i=1}^{N}$
4:  initialize $Z \leftarrow \{\}$
5:  for $i \in \{1, \dots, N\}$ do
6:      get $z_i$ by recovering $z'_i$ into the original feature space
7:      $Z \leftarrow Z \cup \{(z'_i, f(z_i))\}$
8:  end for
9:  use the kernel function to project the data points into a higher-dimensional feature space: $z' \mapsto \phi(z')$
10:  use support vector regression over $Z$ to search for a hyperplane
11:  return  feature coefficients $w$
Algorithm 1 Local Explanation using feature Dependency Sampling and Nonlinear Approximation (LEDSNA)
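Putting the pieces together, below is a hedged end-to-end sketch of the workflow in Algorithm 1 for the image case. Names such as `superpixel_adjacency`, `sample_connected_subset`, and `fit_local_svr` refer to the illustrative helpers sketched above, and `classifier_fn` is a user-supplied black-box prediction function assumed to return the probability of the explained class; none of these are the authors' API.

```python
import numpy as np

def explain_instance(image, labels, classifier_fn, n_samples=200, max_size=8):
    """LEDSNA-style local explanation sketch for a single image instance."""
    adj = superpixel_adjacency(labels)               # step 3: dependency-aware sampling graph
    segments = sorted(adj)
    Z, y = [], []
    for _ in range(n_samples):
        subset = sample_connected_subset(adj, max_size)
        z_prime = np.array([1 if s in subset else 0 for s in segments])
        # step 6: recover a perturbed image by zeroing out inactive superpixels
        # (a common alternative is filling them with the segment's mean colour)
        z_img = image.copy()
        z_img[~np.isin(labels, list(subset))] = 0
        Z.append(z_prime)
        y.append(classifier_fn(z_img))               # step 7: query the black box
    svr = fit_local_svr(np.array(Z), np.array(y))    # steps 9-10: kernel SVR surrogate
    return svr
```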

III Experiments

In this section, we first introduce the evaluation criteria for explanation methods. We then perform experiments explaining image classifications made by Google's pre-trained Inception neural network [14] on the ImageNet database, followed by experiments on sentiment analysis of Chinese natural language. The experimental results show the flexibility of LEDSNA.

III-A Evaluation Criteria

A good explainable model requires some essential characteristics. One essential criterion is interpretability: the explanation must appear in a form understandable to the observer, i.e., it should provide a visual explanation that lists the most significant features contributing to the prediction.

Another essential criterion is local fidelity: the explanation must be faithful to the model in the vicinity of the instance being predicted. The Local Approximation Error ($Err$) and R-squared ($R^2$) are two important measurements of the accuracy of our local approximation with respect to the original decision boundary. The Local Approximation Error reflects the prediction accuracy:

$$Err = |f(x) - g(x)| \qquad (7)$$

where $f(x)$ is a single prediction obtained from the target deep learning classifier and $g(x)$ is the value predicted by the explanation model.

$R^2$ is the "percent of variance explained" by the explanation model; that is, $R^2$ is the fraction by which the variance of the errors is less than the variance of the dependent variable. It is calculated from the Total Sum of Squares ($SST$) and the Error Sum of Squares ($SSE$):

$$R^2 = 1 - \frac{SSE}{SST}, \qquad SST = \sum_{i}(y_i - \bar{y})^2, \qquad SSE = \sum_{i}(y_i - \hat{y}_i)^2 \qquad (8)$$

where $y_i$ is the label of the perturbed sample $z_i$ obtained from the target deep learning classifier, $\hat{y}_i$ is the predicted value, and $\bar{y}$ is the mean value of $y_i$. Moreover, $R^2$ can be expressed in terms of the Mean Square Error ($MSE$) and the Variance ($Var$), which are familiar to us:

$$R^2 = 1 - \frac{MSE}{Var} \qquad (9)$$

$R^2$ is a relative measure conveniently scaled between 0 and 1, the best value being $R^2 = 1$: the closer the score is to 1, the better the fidelity of the explainer.
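Both criteria are straightforward to compute from the perturbed samples; a small sketch follows (scikit-learn's `r2_score` implements the same $1 - SSE/SST$ definition):

```python
import numpy as np
from sklearn.metrics import r2_score

def local_fidelity_report(f_x, g_x, y_true, y_pred):
    """Err = |f(x) - g(x)| at the explained instance; R^2 = 1 - SSE/SST over the
    perturbed samples' black-box labels y_true and surrogate outputs y_pred."""
    err = abs(f_x - g_x)
    r2 = r2_score(np.asarray(y_true), np.asarray(y_pred))
    return err, r2
```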

III-B Experiment on Image Classifiers

In this section, LEDSNA and LIME explain image classification predictions made by Google's pre-trained Inception neural network [14]. Fig. 5 shows the two original images to be processed. Fig. 6 and Fig. 7 list some visual explanations by LEDSNA and LIME: the first row shows the superpixel explanations by LIME (K = 1, 2, 3, 4) and the second row shows the superpixel explanations by LEDSNA (K = 1, 2, 3, 4). The explanations highlight the top K superpixel segments, i.e., those with the largest positive weights towards the prediction. We can see that LEDSNA effectively captures the correlation between adjacent superpixel segments, which provides a better understanding to users.

Fig. 5: Original images and superpixel images
Fig. 6: Explaining image classification predictions made by Google's Inception neural network. The first row shows the superpixel explanations by LIME (K = 1, 2, 3, 4); the second row shows the superpixel explanations by LEDSNA (K = 1, 2, 3, 4).
Fig. 7: Explaining image classification predictions made by Google's Inception neural network. The first row shows the superpixel explanations by LIME (K = 1, 2, 3, 4); the second row shows the superpixel explanations by LEDSNA (K = 1, 2, 3, 4).
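As an illustration of how such visual explanations can be rendered (not the authors' plotting code), one can keep the K superpixels with the largest positive weights and black out the rest; the `weights` dictionary mapping superpixel label to importance score is an assumed input:

```python
import numpy as np

def highlight_top_k(image, labels, weights, k=3):
    """Show only the k superpixels with the largest positive weights."""
    positive = [s for s, w in weights.items() if w > 0]
    top_k = sorted(positive, key=lambda s: weights[s], reverse=True)[:k]
    mask = np.isin(labels, top_k)
    out = image.copy()
    out[~mask] = 0      # black out everything outside the top-k segments
    return out
```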

In addition, Table I lists the local approximation error and $R^2$ of the two algorithms for several instances. Compared to LIME, LEDSNA provides better predictive accuracy, and the $R^2$ of LEDSNA is much higher than that of LIME. By comparing the two criteria, we conclude that LEDSNA has better fidelity than LIME. In terms of interpretability and fidelity, LEDSNA therefore performs better than LIME at explaining image classifications.

Method   g(x)     Err      R²
LIME     0.8129   0.2053   0.4662
LEDSNA   0.6066   0.0010   0.9803
LIME     0.9857   0.2211   0.3219
LEDSNA   0.7633   0.0012   0.8960
LIME     0.5133   0.2248   0.4644
LEDSNA   0.2880   0.0005   0.5890
LIME     1.5995   0.6194   0.5939
LEDSNA   0.9025   0.0010   0.8407
LIME     1.2854   0.2655   0.3602
LEDSNA   0.9450   0.0010   0.7955
LIME     1.2422   0.2753   0.6341
LEDSNA   0.9657   0.0012   0.8414
TABLE I: Comparison of LIME and LEDSNA in the task of image classification; each LIME/LEDSNA pair corresponds to one explained instance, g(x) is the surrogate prediction, Err the local approximation error, and R² the fidelity score

III-C Experiment on Sentiment Analysis of Text

III-C1 Experiment on a Chinese Natural Language Database

Simplified Chinese Text Processing (SnowNLP) is a sentiment analysis tool especially for Chinese natural language. In this section we use LEDSNA and LIME to explain the predictions made by SnowNLP on the Public Comment Dataset. As there is a strong semantic dependency between words in Chinese, we incorporate the Stanford Word Segmenter [7] into the sampling process to obtain the perturbed samples. In the nonlinear approximation, we use a Gaussian kernel function to compute the similarity between the data points in a much higher-dimensional space.
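The Gaussian kernel referred to here has the standard form $k(z_i, z_j) = \exp(-\lVert z_i - z_j \rVert^2 / (2\sigma^2))$; a one-function sketch, with $\sigma$ treated as a free parameter:

```python
import numpy as np

def gaussian_kernel(z_i, z_j, sigma=1.0):
    """Similarity between two perturbation vectors, implicitly mapping them
    into a much higher-dimensional feature space."""
    diff = np.asarray(z_i, dtype=float) - np.asarray(z_j, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))
```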

Fig. 8: Sentiment analysis (p = 0.9843): explanations by (a) LEDSNA and (b) LIME
Fig. 9: Sentiment analysis (p = 0.022): explanations by (a) LEDSNA and (b) LIME

Fig. 8 and Fig. 9 show the visual explanations of LEDSNA and LIME; we can see that the explanations of LEDSNA offer more useful information than those of LIME. Table II lists the local approximation error and $R^2$ for six instances. Compared to LIME, LEDSNA achieves better performance across the board, with a local approximation error roughly an order of magnitude smaller on average; a similar observation holds for $R^2$.

Method   f(x)     g(x)     R²       Err
LIME     0.1788   0.1755   0.4795   0.0033
LEDSNA   0.1788   0.1765   0.9973   0.0023
LIME     0.1224   0.1136   0.4969   0.0088
LEDSNA   0.1224   0.1209   0.8710   0.0016
LIME     0.2298   0.3082   0.4823   0.0784
LEDSNA   0.2298   0.2283   0.9790   0.0015
LIME     0.4839   0.3526   0.5876   0.1313
LEDSNA   0.4839   0.4756   0.9822   0.0083
LIME     0.6489   0.6901   0.4449   0.0419
LEDSNA   0.6489   0.6482   0.9473   0.0008
LIME     0.9052   0.8717   0.5779   0.0335
LEDSNA   0.9052   0.9050   0.9533   0.0001
TABLE II: Comparison of LIME and LEDSNA in the task of text classification; each LIME/LEDSNA pair explains the same instance, so f(x) is shared within a pair
Fig. 10: The R² of the test cases: (a) the number of test cases in which LEDSNA's R² is larger than LIME's; (b) the proportion of test cases in which LEDSNA's R² is larger than LIME's.
Fig. 11: The Err of the test cases: (a) the number of test cases in which LEDSNA's Err is smaller than LIME's; (b) the proportion of test cases in which LEDSNA's Err is smaller than LIME's.

Moreover, we randomly selected 1000 data samples to constitute a testing database. For each test sample, we use LEDSNA and LIME to explain SnowNLP and compute $Err$ and $R^2$. The results (Fig. 10 and Fig. 11) report the number and proportion of test samples for which LEDSNA's $Err$ is smaller than LIME's, and for which LEDSNA's $R^2$ is larger than LIME's. In conclusion, LEDSNA exhibits stronger interpretability and fidelity than LIME.

IV Conclusion

There are two drawbacks in current local explanation methods. First, perturbed samples are generated from a uniform distribution, ignoring the intrinsic correlation between features, which may discard much of the information needed to learn the local explanation model. Second, most existing methods assume the decision boundary is locally linear, which may produce serious errors because, in most complex networks, the local decision boundary is nonlinear.

In this paper, we design and develop a novel, high-fidelity local explanation method to address the above challenges. First, we design a unique local sampling process which incorporates a feature clustering method to handle feature dependency. Then, we adopt SVR with a kernel function to approximate the locally nonlinear boundary. In this way, by simultaneously preserving feature dependency and local nonlinearity, our method produces high-fidelity and high-interpretability explanations.

References

  • [1] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT Press. Note: http://www.deeplearningbook.org Cited by: §I.
  • [2] W. Guo, D. Mu, J. Xu, P. Su, G. Wang, and X. Xing (2018) LEMNA: explaining deep learning based security applications. See DBLP:conf/ccs/2018, pp. 364–379. External Links: Link, Document Cited by: §I.
  • [3] I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett (Eds.) (2017) Advances in neural information processing systems 30: annual conference on neural information processing systems 2017, 4-9 december 2017, long beach, ca, USA. Cited by: §I.
  • [4] T. Hastie, R. Tibshirani, and J. Friedman (2009) The elements of statistical learning. Springer. Note: www.web.stanford.edu/~hastie/ElemStatLearn Cited by: §I.
  • [5] H. Lakkaraju, S. H. Bach, and J. Leskovec (2016) Interpretable decision sets: A joint framework for description and prediction. See DBLP:conf/kdd/2016, pp. 1675–1684. External Links: Link, Document Cited by: §I.
  • [6] Y. Lou, R. Caruana, and J. Gehrke (2012) Intelligible models for classification and regression. See DBLP:conf/kdd/2012, pp. 150–158. External Links: Link, Document Cited by: §I.
  • [7] C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky (2014) The stanford corenlp natural language processing toolkit. See DBLP:conf/acl/2014-d, pp. 55–60. External Links: Link, Document Cited by: §II-A2, §III-C1.
  • [8] S. Ren, K. He, R. Girshick, and J. Sun (2017-06) Faster r-cnn: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), pp. 1137–1149. External Links: Document, ISSN 0162-8828 Cited by: §I.
  • [9] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Model-agnostic interpretability of machine learning. CoRR abs/1606.05386. External Links: Link, 1606.05386 Cited by: §I.
  • [10] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Why should I trust you?: explaining the predictions of any classifier. See DBLP:conf/kdd/2016, pp. 1135–1144. External Links: Link, Document Cited by: §I, §II.
  • [11] S. Ruping (2006) Learning interpretable models (phd thesis). In Technical University of Dortmund, Cited by: §I.
  • [12] R. E. Shawi, M. H. Al-Mallah, and S. Sakr (2019) On the interpretability of machine learning-based model for predicting hypertension. BMC Med. Inf. & Decision Making 19 (1), pp. 146:1–146:32. External Links: Link, Document Cited by: §I.
  • [13] S. Shi, X. Zhang, H. Li, and W. Fan (2019) Explaining the predictions of any image classifier via decision trees. CoRR abs/1911.01058. External Links: Link, 1911.01058 Cited by: §I.
  • [14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015-06) Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9. External Links: Document Cited by: §III-B, §III.
  • [15] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. See DBLP:conf/iclr/2014, External Links: Link Cited by: §I.
  • [16] S. Tan, R. Caruana, G. Hooker, and Y. Lou (2018) Distill-and-compare: auditing black-box models using transparent model distillation. See DBLP:conf/aies/2018, pp. 303–310. External Links: Link, Document Cited by: §I.