1 Introduction and related work
Machine learning models with high predictive performance, such as deep neural networks, are difficult to interpret and opaque, and are hence considered black box models. Deep neural networks have gained significant prominence in the healthcare domain and are increasingly used in critical tasks such as disease diagnosis and survival analysis. As a result, there is a pressing need to understand these models to ensure that they are correct, fair, unbiased, and ethical.
Currently, there is no precise definition of interpretability, and the definitions tend to depend upon the application. Some of the most common types of model explanations are [10]:

Local. These methods derive explanations by fitting an interpretable model locally around the test instance of interest [12].
Interpretable and explainable methods can also be grouped by other criteria, such as i) model-agnostic or model-specific, ii) intrinsic or post-hoc, iii) perturbation-based or saliency-based, etc. Recently, post-hoc explainable methods, such as LIME [12], have gained a lot of interest, since post-hoc explanations may be the only option for already trained models. These methods are model-agnostic and hence do not require understanding of the inner workings of the trained model.
Some of the desired characteristics of interpretable models are consistency and stability of explanations, and local fidelity, i.e., faithfulness of the surrogate to the black box model locally. Local post-hoc methods, such as LIME, are lacking in this regard. In this paper, we propose a modification to LIME [12], the popular framework for generating local explanations, that improves both the stability and the local fidelity of explanations.
Since our focus is on post-hoc and local interpretable methods, we restrict our literature review to such methods. As mentioned above, LIME [12] is one of the most popular methods for local interpretability. In LIME, artificial data points are first generated by random perturbation around the instance to be explained, and a linear model is then fit around the instance. The same authors extended LIME to global explanations in [13]. Most modifications to LIME concern selecting an appropriate kind of data for training the local interpretable model. In [5], the authors use K-means clustering to partition the training dataset and use the resulting clusters for the local models instead of perturbation-based data generation. In another work, DLIME [15], the authors employ Hierarchical Clustering (HC) to group the training data and then use the cluster nearest to the instance for training the local interpretable model. These modifications aim to address the lack of "stability", one of the serious issues with local interpretable methods. The instability in LIME arises from data generation by random perturbation: for the same selected instance, LIME produces different explanations (local interpretable models with different feature weights) at every run. Works such as [15] use clusters from the training data itself to address this problem: since the samples are always taken from the same cluster of the training set, the feature weights do not vary across runs for a particular instance. However, the black box model (the model for which we seek explanations) may overfit the training set, effectively memorizing the mapping from each training instance to its class. Training the local interpretable model on these instances can therefore produce misleading results, since the instance to be explained comes from the test set, and we have no information about how the black box model behaves on new data points such as those in the test set. Thus, we ask the following question:
Can we improve the stability of local interpretable model while sampling randomly generated points?
In other words, we wish to see whether we can improve stability while following the same sampling paradigm as LIME, and therefore without using the training set for the local interpretable model as done in [5, 15].
Another important issue in local interpretable methods is locality, which refers to the neighbourhood around which the local surrogate is trained; in [8], the authors show that it is non-trivial to define the right neighbourhood and that this choice can impact the local fidelity of the surrogate. A straightforward way to improve stability is to simply generate a large set of points and use it to train the local surrogate. Although doing so would improve stability, it also decreases local fidelity (local accuracy). Thus, we ask another question:
Can we improve the stability while simultaneously maintaining the local fidelity?
In this paper, we focus on answering the questions above by introducing an autoencoder-based local interpretability model, ALIME. Our contributions can be summarized as follows:

we propose a novel weighting function as a modification to LIME to address the issues of stability and local fidelity, and

we perform extensive experiments on three different healthcare datasets to study the effects and compare with LIME.
2 Methods
Since our model builds upon LIME, we begin with a short introduction to LIME and then describe our proposed modifications.
2.1 LIME
Local surrogate models use interpretable models (such as ridge regression) to explain individual predictions of an already trained machine learning model, which may be a black box. Local interpretable model-agnostic explanations (LIME) is a popular recent method that, instead of training a global surrogate model, trains a local surrogate model for each individual prediction. LIME generates a new dataset by first sampling from a distribution and then perturbing the samples. The predictions of the black box model on this generated dataset serve as ground truth. On these pairs of generated samples and corresponding black box predictions, an interpretable model is trained around the point of interest, with the sampled instances weighted by their proximity to it. The learned model is constrained to be a good approximation of the black box predictions locally, but it does not have to be a good global approximation. Formally, the local surrogate model with an interpretability constraint is written as follows:
ξ(x) = argmin_{g ∈ G} L(f, g, π_x) + Ω(g)    (1)
The explanation model for instance x is the model g (e.g., a linear regression model) minimizing the loss L (e.g., mean squared error), a measure of how close the explanation is to the prediction of the original model f (e.g., a deep neural network). The model complexity is denoted by Ω(g), and G is the family of possible explanations, which in our case is the family of linear ridge regression models. The proximity measure π_x defines how large the neighborhood around instance x is that we consider for the explanation. In practice, LIME only optimizes the loss part. The algorithm for training local surrogate models is as follows:
Select the instance of interest for which an explanation is desired for a black box machine learning model.

Perturb the dataset and use the black box to make predictions for these new points.

Weight the new samples according to their proximity to the instance of interest, using a proximity metric such as Euclidean distance.

Train a weighted, interpretable linear model, such as ridge regression, on the dataset.

Explain the prediction by analyzing the coefficients of the local linear model.
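The steps above can be sketched as follows for tabular data. This is a simplified illustration rather than the reference LIME implementation: the Gaussian perturbation scale, the kernel width, and the closed-form weighted ridge solve are our assumptions.

```python
import numpy as np

def lime_explain(black_box_predict, x, n_samples=1000, kernel_width=0.75, alpha=1.0):
    """Minimal sketch of the five LIME steps for tabular data.

    black_box_predict maps an (n, d) array to a vector of predicted scores.
    Returns the surrogate's coefficients and intercept.
    """
    d = x.shape[0]
    # Step 2: perturb around the instance and query the black box.
    Z = x + np.random.normal(0.0, 1.0, size=(n_samples, d))
    y = black_box_predict(Z)
    # Step 3: weight samples by proximity (exponential kernel on Euclidean distance).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Step 4: weighted ridge regression in closed form,
    #   beta = (Z'WZ + alpha*I)^(-1) Z'Wy, with an intercept column appended.
    Zb = np.hstack([Z, np.ones((n_samples, 1))])
    beta = np.linalg.solve(Zb.T @ (Zb * w[:, None]) + alpha * np.eye(d + 1),
                           Zb.T @ (w * y))
    # Step 5: the coefficients (beta[:-1]) explain the prediction.
    return beta[:-1], beta[-1]
```

For an exactly linear black box the surrogate essentially recovers its weights; for a nonlinear one, the coefficients describe the local behaviour around x.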
2.2 ALIME
The high-level block diagram of our approach is shown in Figure 1 and described in Algorithm 2. Once the black box model is trained, we need to train a local interpretable model. Our first focus is on improving the stability of the local interpretable model. Instead of generating data by perturbation every time an instance is explained (as done in LIME), we generate a large number of data points beforehand by sampling from a Gaussian distribution. This has the added advantage of reducing time complexity, since the sampling operations are performed only once. However, since we need to train a local model, we must ensure that, for a particular instance, only the generated data around that instance is used for training the interpretable model. For this we use an autoencoder [1, 14]; the most important change thus comes from the introduction of the autoencoder.
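The one-time pre-sampling and per-instance selection can be sketched as follows. The pool size, distance threshold, kernel width, and the random linear map standing in for the encoder are all illustrative assumptions; in ALIME the encoder is the encoder half of the trained denoising autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a small training matrix and a linear "encoder".
X_train = rng.normal(size=(500, 8))
W_enc = rng.normal(size=(8, 3))
encode = lambda X: X @ W_enc

# Standardize, then pre-generate one large Gaussian pool that is reused
# for every explanation (instead of perturbing anew per instance as in LIME).
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
pool = rng.normal(size=(20000, X_train.shape[1])) * sigma + mu

def select_and_weight(x, threshold=5.0, kernel_width=1.0):
    """Keep only pool points near x in the latent space, weighted by an
    exponential kernel on the latent-space distance."""
    dist = np.linalg.norm(encode(pool) - encode(x[None, :]), axis=1)
    keep = dist <= threshold                      # discard far-away points
    return pool[keep], np.exp(-dist[keep] / kernel_width)
```

The selected points and weights would then feed the same weighted ridge fit that LIME uses.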
An autoencoder [1] is a neural network used to compress high-dimensional data into latent representations. It consists of two parts: an encoder and a decoder. The encoder learns to map the high-dimensional input space to a latent vector space, and the decoder maps the latent vector space back to the original uncompressed input space. We use a variant called the denoising autoencoder [14], in which the input is corrupted by adding a small amount of noise and the network is trained to reconstruct the uncorrupted input. Viewed another way, denoising serves as a proxy task for learning latent representations.
We train the autoencoder on the same training data used for building the black box model. We first standardize the training data and then corrupt it by adding a small amount of additive white Gaussian noise. The autoencoder is then trained to reconstruct the uncorrupted version of the input using the standard loss. Once trained, we employ the autoencoder as a weighting function: instead of computing Euclidean distances between the generated data and the instance to be explained in the original input space, we compute the distances in the latent vector space. For this, we compute the latent embeddings of all the generated points and of the instance to be explained, and compute the distances in the embedded space. We discard the points whose distance exceeds a predefined threshold, and weight the remaining points using an exponential kernel as a function of distance. This ensures locality and, since autoencoders have been shown to learn the data manifold well, it also improves local fidelity.
3 Experiments and Results
For our experiments, we use three datasets belonging to the healthcare domain from the UCI repository [4]:

Breast Cancer dataset [9]. A widely used dataset that consists of 699 patient observations and 11 features used to study breast cancer.

Hepatitis Patients dataset [3]. Dataset consisting of 20 features and 155 patient observations.

Liver Patients dataset [11]. Indian liver patient dataset consisting of 583 patient observations and 11 features used to study liver disease.
As a black box model, we train a simple feed-forward neural network with a single hidden layer and an output layer for the two classes, and train the network using binary cross-entropy loss. For all three datasets, we split the data into training and test sets, and evaluate the trained network's test accuracy on each dataset. Sample results for instances from the three datasets are shown in Figure 2. The red bars in the figure show the negative coefficients of the linear regression model, and the green bars show the positive coefficients; positive coefficients indicate positive correlation between the dependent and independent attributes, while negative coefficients indicate negative correlation.
Currently, there exists no standard metric for comparing two different interpretable models. Since our focus is on local fidelity and stability, we define and employ suitable metrics for these two issues. For local fidelity, the local surrogate model should fit the global black box model locally. To test this, we compute the score of the local surrogate model using the predictions of the black box model as ground truth; this tells us how well the surrogate fits the generated data points. We compute the mean score over all points in the test set. We also test local fidelity by computing the mean squared error (MSE) between the local model's prediction and the black box model's prediction for the instance of interest being explained, again averaged over all points in the test set for each of the three datasets. Additionally, to study the effect of the dataset size used for the local surrogate model, we vary the number of generated data points used for training it. The results of the local fidelity experiments are shown in Figure 3. In terms of both metrics, ALIME clearly outperforms LIME by providing a better local fit.
The results seem consistent across the three datasets.
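The per-instance MSE fidelity measurement described above can be sketched as follows; representing each fitted surrogate as a (coefficients, intercept) pair and the helper's name are illustrative assumptions.

```python
import numpy as np

def mean_fidelity_mse(black_box, surrogates, X_test):
    """MSE between each local surrogate's prediction and the black box
    prediction at the explained instance, averaged over the test set.
    surrogates[i] is the (coefficients, intercept) pair fit for X_test[i]."""
    errors = []
    for x, (coef, intercept) in zip(X_test, surrogates):
        g = float(x @ coef + intercept)           # local surrogate prediction
        f = float(black_box(x[None, :])[0])       # black box prediction
        errors.append((g - f) ** 2)
    return float(np.mean(errors))
```

A perfectly faithful surrogate gives zero MSE at the explained instance; larger values indicate the surrogate drifts from the black box locally.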
It is even more difficult to define a suitable metric for interpretable model stability. Since the explanations are based on the surrogate model's coefficients, we compare the change in coefficient values over multiple iterations. We randomly select a particular instance from the test set and run both LIME and ALIME for a number of iterations. By their nature, both methods sample a different set of points at every iteration, and because of the different dataset used at each iteration, the coefficient values change. As one measure of stability, we compare the standard deviations of the coefficients: for each feature, we compute the standard deviation across iterations and then average the standard deviations over all features. As another stability metric, we compute the ratio of standard deviation to mean; the division by the mean serves as normalization, since the coefficients tend to have varied ranges. As above, we study the effect of the dataset size used for the local surrogate model by varying the number of generated data points, and for every size we compute the average of the two stability metrics across all features. The results are plotted in Figure 4. Note that we consider only the absolute values of the coefficients when computing the means and standard deviations. Again, ALIME outperforms LIME in terms of both metrics and across all three datasets.
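The two stability metrics can be sketched as follows, assuming the coefficients from repeated runs for one instance have been collected into a matrix (the function name and data layout are illustrative).

```python
import numpy as np

def stability_metrics(coef_runs):
    """coef_runs: (n_iterations, n_features) surrogate coefficients gathered
    over repeated runs for one instance. Returns the two metrics above:
    the mean per-feature standard deviation, and the mean ratio of
    standard deviation to mean (on absolute values, as in the text)."""
    C = np.abs(np.asarray(coef_runs, dtype=float))
    std = C.std(axis=0)           # per-feature std across iterations
    mean = C.mean(axis=0)         # per-feature mean, used as normalizer
    return float(std.mean()), float((std / mean).mean())
```

Perfectly stable explanations yield zero for both metrics; the more the coefficients vary across runs, the larger both values grow.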
4 Conclusion
In this paper, we proposed a novel approach for explaining model predictions on tabular data. We built upon the LIME [12] framework and proposed modifications that employ an autoencoder as the weighting function to improve both stability and local fidelity. With the help of extensive experiments, we showed that our method yields better stability as well as better local fidelity. Although we have shown the results empirically, a more thorough analysis is needed to substantiate the improvements. In the future, we will work on a theoretical analysis as well as an exhaustive empirical analysis spanning different types of data.
References
 [1] Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: Advances in neural information processing systems. pp. 153–160 (2007)
 [2] Bien, J., Tibshirani, R., et al.: Prototype selection for interpretable classification. The Annals of Applied Statistics 5(4), 2403–2424 (2011)
 [3] Diaconis, P., Efron, B.: Computer-intensive methods in statistics. Scientific American 248(5), 116–131 (1983)
 [4] Dua, D., Graff, C.: UCI machine learning repository. http://archive.ics.uci.edu/ml (2017)
 [5] Hall, P., Gill, N., Kurka, M., Phan, W.: Machine learning interpretability with H2O driverless AI. http://docs.h2o.ai (February 2019)
 [6] Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: Proceedings of the 34th International Conference on Machine Learning, Volume 70. pp. 1885–1894. JMLR.org (2017)
 [7] Lakkaraju, H., Bach, S.H., Leskovec, J.: Interpretable decision sets: A joint framework for description and prediction. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (2016)
 [8] Laugel, T., Renard, X., Lesot, M.J., Marsala, C., Detyniecki, M.: Defining locality for surrogates in post-hoc interpretability. arXiv preprint arXiv:1806.07498 (2018)

 [9] Mangasarian, O.L., Street, W.N., Wolberg, W.H.: Breast cancer diagnosis and prognosis via linear programming. Operations Research 43(4), 570–577 (1995)
 [10] Molnar, C.: Interpretable machine learning. https://christophm.github.io/interpretablemlbook/ (2019)
 [11] Ramana, B.V., Babu, M.S.P., Venkateswarlu, N., et al.: A critical study of selected classification algorithms for liver disease diagnosis. International Journal of Database Management Systems 3(2), 101–114 (2011)

 [12] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. pp. 1135–1144. ACM (2016)

 [13] Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: High-precision model-agnostic explanations. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
 [14] Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A.: Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on Machine learning. pp. 1096–1103. ACM (2008)
 [15] Zafar, M.R., Khan, N.M.: DLIME: A deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. In: Proceedings of the ACM SIGKDD Workshop on Explainable AI/ML (XAI) for Accountability, Fairness, and Transparency. ACM, Anchorage, Alaska (2019)