The recent advances of machine learning methods are partly due to the widespread use of very complicated models, for instance deep neural networks. As an example, the Inception Network (Sze_Liu_Jia:2015) depends on millions of parameters. While these models achieve and sometimes surpass human-level performance on certain tasks (image classification being one of the most famous), they are often perceived as black boxes, with little understanding of how they make individual predictions.
This lack of understanding is a problem for several reasons. First, it can be a source of catastrophic errors when these models are deployed in the wild. For instance, for any safety system recognizing cars in images, we want to be absolutely certain that the algorithm is using features related to cars, and not exploiting some artifact of the images. Second, this opacity prevents these models from being socially accepted. Some basic understanding of the decision-making process is needed for users to accept it.
Model-agnostic explanation techniques aim to solve this interpretability problem by providing qualitative or quantitative help to understand how black-box algorithms make decisions. Since the global complexity of the black-box models is hard to understand, they often rely on a local point of view, and produce an interpretation for a specific instance. In this article, we focus on such an explanation technique: Local Interpretable Model-Agnostic Explanations (LIME, Rib_Sin_Gue:2016).
Our main goal in this paper is to provide theoretical guarantees for LIME. On the way, we shed light on some interesting behavior of the algorithm in a simple setting. Our analysis is based on the Euclidean version of LIME, called “tabular LIME.” Our main results are the following:
When the model to explain is linear, we compute in closed form the average coefficients of the surrogate linear model obtained by TabularLIME.
In particular, these coefficients are proportional to the partial derivatives of the black-box model at the instance to explain. This implies that TabularLIME indeed highlights important features.
On the negative side, using the closed-form expressions we show that it is possible to make some important features disappear in the interpretation, just by changing a parameter of the method.
We also compute the local error of the surrogate model, and show that it is bounded away from zero in general.
2 LIME: Outline and notation
From now on, we consider a particular model, encoded as a function f, and a particular instance to explain. We make no assumptions on this function, e.g., on how it might have been learned. We simply consider f as a black-box model giving us predictions for all points of the input space. Our goal is to explain the decision that this model makes for one particular instance ξ.
As soon as f is too complicated, it is hopeless to try and fit an interpretable model globally, since the interpretable model will be too simple to capture all the complexity of f. Thus a reasonable course of action is to consider a local point of view and to explain f in the neighborhood of some fixed instance ξ. This is the main idea behind LIME: to explain a decision for some fixed input ξ, sample other examples around ξ, use these samples to build a simple interpretable model in the neighborhood of ξ, and use this surrogate model to explain the decision for ξ.
One additional idea that sets LIME apart from other existing methods is to use discretized features of smaller dimension to build the local model. These new features are easier to interpret, since they are categorical. In the case of images, they are built by splitting the image into superpixels (Ren_Mal:2003). See Figure 1 for an example of LIME output in the case of image classification. In this situation, the surrogate model highlights the superpixels of the image that are the most “active” in predicting a given label.
Whereas LIME is most famous for its results on images, it is easier to understand how it operates, and to analyze it theoretically, on tabular data. In the case of tabular data, LIME works essentially in the same way, with one main difference: tabular LIME requires a training set, and each feature is discretized according to the empirical quantiles of this training set.
We now describe the general operation of LIME on Euclidean data, which we call TabularLIME. We provide a concise description of TabularLIME in Algorithm 1, and we refer to Figure 2 for a depiction of our setting along a given coordinate. Suppose that we want to explain the prediction of the model f at the instance ξ. TabularLIME has an intricate way to sample points in a local neighborhood of ξ. First, TabularLIME computes empirical quantiles of the training set along each dimension, for a given number of bins. These quantile boxes are then used to construct a discrete representation of the data: if a coordinate falls between two consecutive quantiles, it receives the index of the corresponding quantile box. We thus obtain a discrete version of ξ. The next step is to sample discrete examples uniformly at random: for instance, TabularLIME may sample an encoding such that the first coordinate falls into the first quantile box, the second coordinate into the third, and so on. TabularLIME subsequently un-discretizes these encodings by sampling from a normal distribution truncated to the corresponding quantile boxes, obtaining new examples x_1, …, x_n. For the example above, the first coordinate is sampled from a normal distribution restricted to the first quantile box, the second coordinate from the third quantile box, and so on. This sampling procedure ensures that we have samples in each part of the space. The next step is to convert these sampled points to binary features, indicating for each coordinate whether the new example falls into the same quantile box as ξ. Finally, an interpretable model (say linear) is learned using these binary features.
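To make the procedure concrete, here is a minimal Python sketch of this sampling step. The helper names, the stand-in training data, and the use of NumPy/SciPy are our own assumptions, not the reference implementation:

```python
# Minimal sketch of TabularLIME's sampling step (hypothetical names,
# quartile discretization, synthetic Gaussian train set).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
train = rng.normal(size=(10_000, 2))      # stand-in training set
xi = np.array([0.5, -0.3])                # instance to explain
p, n = 4, 1000                            # number of bins, number of samples

# 1. Empirical quantile boundaries along each dimension (p + 1 edges).
qs = np.quantile(train, np.linspace(0, 1, p + 1), axis=0)   # shape (p + 1, d)
qs[0], qs[-1] = -np.inf, np.inf           # open outer bins

# 2. Discretize xi: bin index of each coordinate.
xi_bins = np.array([np.searchsorted(qs[1:-1, j], xi[j]) for j in range(2)])

# 3. Sample bin indices uniformly, then un-discretize with truncated Gaussians.
bins = rng.integers(0, p, size=(n, 2))
mu, sigma = train.mean(axis=0), train.std(axis=0)
x = np.empty((n, 2))
for j in range(2):
    lo, hi = qs[bins[:, j], j], qs[bins[:, j] + 1, j]
    a, b = (lo - mu[j]) / sigma[j], (hi - mu[j]) / sigma[j]
    x[:, j] = truncnorm.rvs(a, b, loc=mu[j], scale=sigma[j], random_state=rng)

# 4. Binary interpretable features: does the sample share xi's bin?
z = (bins == xi_bins).astype(float)       # shape (n, d), entries in {0, 1}
```

Sampling the bin indices uniformly before un-discretizing is what guarantees samples in every part of the space, as noted above.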
2.2 Implementation choices and notation
LIME is a quite general framework and leaves some freedom to the user regarding each component of the algorithm. We now discuss each step of TabularLIME in more detail, presenting our implementation choices and introducing our notation along the way.
Discretization. As said previously, the first step of TabularLIME is to create a partition of the input space using a training set. Intuitively, TabularLIME produces interpretable features by discretizing each dimension. Formally, given a fixed number of bins p, the empirical quantiles of the training set are computed along each dimension. Thus, along each dimension, there is a mapping associating each real number to the index of the quantile box it belongs to. For any point x, the interpretable features are then defined as a binary vector indicating, coordinate by coordinate, whether the discretization of x is the same as the discretization of ξ. Intuitively, these categorical features correspond to the absence or presence of interpretable components. The discretization process makes a huge difference with respect to other methods: we lose the obvious link with the gradient of the function, and it is much more complicated to see how the local properties of f influence the result of the LIME algorithm, even in a simple setting. In all our experiments, we took p = 4 (quartile discretization, the default setting).
Sampling strategy. Along with the discretization, TabularLIME defines an un-discretization procedure: given a coordinate and a bin index, it samples a Gaussian truncated to the corresponding bin, with parameters computed from the training set. The TabularLIME sampling strategy for a new example then amounts to (i) sampling bin indices independently and uniformly at random along each dimension, and (ii) applying the un-discretization step to these indices. We denote by x_1, …, x_n these new examples, and by z_1, …, z_n their discretized counterparts. Note that it is possible to take other bin boxes than those given by the empirical quantiles; the bin indices are then sampled according to the frequencies observed in the dataset. The sampling step of TabularLIME helps to explore the values of f in the neighborhood of the instance to explain. Thus it is not so important to sample according to the distribution of the data, and a Gaussian sampling that mimics it is enough.
Assuming that we know the distribution of the training data, it is possible to use the theoretical quantiles instead of the empirical ones. For a large number of examples, they are arbitrarily close (see, for instance, Lemma 21.2 in Van:2000). See Figure 3 for an illustration. It is this approach that we will take from now on; we use a distinct notation for the discretization step and the theoretical quantiles to mark this slight difference. Also note that, for every coordinate j, we consider the quantiles immediately to the left and right of ξ_j (see Figure 2).
TabularLIME requires a training set, which is left free to the user. In spirit, one should sample according to the distribution of the training set used to fit the model f. Nevertheless, this training set is rarely available, and from now on we choose to consider draws from a Gaussian distribution. The parameters of this Gaussian can be estimated from the training data that was used to fit f, if available. Thus, in our setting, along each dimension j, the bin boundaries are the (rescaled) quantiles of the normal distribution. In particular, they are identical for all features. A fundamental consequence is that sampling the new examples x_i first and then discretizing them yields the same distribution as sampling the bin indices first and then un-discretizing.
Weights. We choose to give each example x_i the weight
\[
\pi_i = \exp\left( \frac{-\lVert x_i - \xi \rVert^2}{2\nu^2} \right),
\]
where ‖·‖ is the Euclidean norm on R^d and ν > 0 is a bandwidth parameter. It should be clear that ν is a hard parameter to tune:
if ν is very large, then all the examples receive weights close to one: we are trying to build a simple model that captures the complexity of f at a global scale. This cannot work if f is too complicated.
if ν is too small, then only examples in the immediate neighborhood of ξ receive non-negligible weights. Given the discretization step, these examples share all their binary features with ξ. Thus the linear model built on top of them would just be a constant fit, missing all the relevant information.
Note that distances other than the Euclidean distance can be used, for instance the cosine distance for text data. The default implementation of LIME measures the distance between the binary features of x_i and those of ξ rather than the distance between x_i and ξ themselves, with a default choice of bandwidth. We choose to use the true Euclidean distance between ξ and the new examples, as it can be seen as a smoothed version of the distance between the binary features and has the same behavior.
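As a small illustration, the exponential weights above can be computed as follows (the function name and example values are ours, not LIME's API):

```python
import numpy as np

def lime_weights(x, xi, nu):
    """Gaussian kernel weights pi_i = exp(-||x_i - xi||^2 / (2 nu^2))."""
    d2 = np.sum((x - xi) ** 2, axis=1)      # squared Euclidean distances to xi
    return np.exp(-d2 / (2.0 * nu ** 2))

xi = np.zeros(2)
x = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
w = lime_weights(x, xi, nu=1.0)
# w[0] is exactly 1 (the sample at xi); weights decay with distance
```

With a large bandwidth all weights approach one, and with a small one only samples very close to ξ keep non-negligible weight, matching the two regimes described above.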
The final step of TabularLIME is to build a local interpretable model. Given a class G of simple, interpretable models, TabularLIME selects the best of these models by solving
\[
\widehat{g} \in \operatorname*{arg\,min}_{g \in G} \; L(g) + \Omega(g),
\]
where L is a local loss function evaluated on the new examples and Ω is a regularizer function. For instance, a natural choice for the local loss function is the weighted squared loss
\[
L(g) = \sum_{i=1}^{n} \pi_i \left( f(x_i) - g(z_i) \right)^2.
\]
We saw in Section 1.1 different possibilities for G. In this paper, we focus exclusively on linear models, in our opinion the easiest models to interpret. Namely, we set g(z) = β_0 + β^⊤z, with β_0 ∈ R and β ∈ R^d. To get rid of the intercept β_0, we use the standard approach of introducing a phantom coordinate equal to 1 in each z_i, so that g(z) = β^⊤z with β ∈ R^{d+1}. We also stack the z_i together to obtain the design matrix Z.
The regularization term Ω is added to ensure further interpretability of the model by reducing the number of non-zero coefficients in the linear model given by TabularLIME. Typically, one uses ℓ2 regularization (ridge regression is the default setting of LIME) or ℓ1 regularization (the Lasso). To simplify the analysis, we set Ω = 0 in the following. We believe that many of the results of Section 3 stay true in a regularized setting, especially the switch-off phenomenon that we describe below: coefficients are even more likely to be set to zero when Ω ≠ 0.
In other words, in our case TabularLIME performs weighted linear regression on the interpretable features z_i, and outputs a vector β̂ such that
\[
\widehat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{d+1}} \sum_{i=1}^{n} \pi_i \left( f(x_i) - \beta^\top z_i \right)^2.
\]
Note that β̂ is a random quantity, with randomness coming from the sampling of the new examples x_1, …, x_n. From a theoretical point of view, a big hurdle for the analysis is the discretization process (going from the x_i to the binary features z_i).
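The unregularized weighted regression above has a closed-form solution. A minimal sketch, with hypothetical names and synthetic binary features (this is our illustration, not the LIME code):

```python
import numpy as np

def weighted_least_squares(z, y, w):
    """Solve argmin_beta sum_i w_i (y_i - beta^T z_i)^2 in closed form."""
    zw = z * w[:, None]                           # each row scaled by its weight
    return np.linalg.solve(zw.T @ z, zw.T @ y)    # (Z^T W Z) beta = Z^T W y

rng = np.random.default_rng(1)
n, d = 500, 3
# Design matrix: phantom all-ones coordinate plus d binary features.
z = np.hstack([np.ones((n, 1)), rng.integers(0, 2, size=(n, d)).astype(float)])
w = rng.uniform(0.1, 1.0, size=n)
beta_true = np.array([0.5, 2.0, -1.0, 0.0])
y = z @ beta_true                                 # noiseless linear response
beta_hat = weighted_least_squares(z, y, w)        # recovers beta_true
```

On a noiseless linear response the weighted estimator recovers the true coefficients up to numerical precision, whatever the (positive) weights.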
Regression vs. classification.
To conclude, let us note that TabularLIME can be used both for regression and for classification. Here we focus on the regression mode: the outputs of the model are real numbers, and not discrete elements. In some sense, this is a more general setting than the classification case, since the classification mode operates as TabularLIME for regression, but with f chosen as the function that gives the likelihood of belonging to a certain class according to the model.
2.3 Related work
Let us mention a few other model-agnostic methods that share some characteristics with LIME. We refer to Gui_Mon_Rug:2019 for a thorough review.
Shapley values. Following Sha:1953, the idea is to estimate, for each subset of features, the expected difference in prediction when the values of these features are fixed to those of the example to explain. The contribution of the j-th feature is then set to an average of its contributions over all possible coalitions (subsets of features not containing j). Shapley values are used in some recent interpretability work, see Lun_Lee:2017 for instance. They are extremely costly to compute, and do not provide much information as soon as the number of features is high. Shapley values share with LIME the idea of quantifying how much a feature contributes to the prediction for a given example.
Also related to LIME, gradient-based methods such as Bae_Sch_Har:2010 provide local explanations without knowledge of the model. Essentially, these methods compute the partial derivatives of f at a given example. For images, this can yield satisfying plots where, for instance, the contours of the object appear: a saliency map (Zei_Fer:2014). Shr_Gre_Shc:2016; Shr_Gre_Kun:2017 propose to use the “input × derivative” product, showing advantages over plain gradient methods. But in any case, the output of these gradient-based methods is not so interpretable, since the number of features is as high as the input dimension. LIME gets around this problem by using a local dictionary of much smaller dimension than the input space.
3 Theoretical value of the coefficients of the surrogate model
We are now ready to state our main result. Let us denote by β̂ the coefficients of the linear surrogate model obtained by TabularLIME. In a nutshell, when the underlying model f is linear, we can derive the average value of these coefficients. In particular, we will see that the β̂_j are proportional to the partial derivatives of f at ξ. The exact form of the proportionality coefficients is given in the formal statement below; it essentially depends on the bandwidth ν and on the quantiles immediately to the left and right of the coordinates of ξ.
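Informally, the result can be summarized as follows. This is our paraphrase, with the constants deliberately simplified; see the formal statement for the precise claim:

```latex
% Informal paraphrase of Theorem 3.1 (our simplification of the constants).
f(x) = a + \sum_{j=1}^{d} b_j x_j
\quad\Longrightarrow\quad
\mathbb{E}\bigl[\widehat{\beta}_j\bigr]
  = c_j \, b_j
  = c_j \, \frac{\partial f}{\partial x_j}(\xi),
\qquad 1 \le j \le d,
% where c_j depends only on the bandwidth \nu and on the quantiles
% immediately left and right of \xi_j, and can vanish for particular
% values of \nu (the switch-off phenomenon of Section 4).
```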
Theorem 3.1 (Coefficients of the surrogate model, theoretical values).
Assume that f is linear, that is, of the form f(x) = a + b^⊤x, and set
where, for any coordinate j, we defined
Then, with high probability, it holds that
A precise statement with the accurate dependencies in the dimension and the constants hidden in the result can be found in the Appendix (Theorem 10.1). Before discussing the consequences of Theorem 3.1 in the next section, remark that, since ξ is encoded by the all-ones binary vector, the prediction of the local surrogate model at ξ is just the sum of the coordinates of β̂. According to Theorem 3.1, this prediction will therefore be close to the sum of the coordinates of β, with high probability. Thus we also obtain a statement about the error made by the surrogate model at ξ.
Corollary 3.1 (Local error of the surrogate model).
Under the assumptions of Theorem 3.1, with high probability, it holds that
with hidden constants depending on ν and on the quantiles surrounding the coordinates of ξ.
Obviously, the goal of TabularLIME is not to produce a very accurate model, but to provide interpretability. Still, the error of the local model can be seen as a hint of how reliable the interpretation might be.
4 Consequences of our main results
Dependency in the partial derivatives.
A first consequence of Theorem 3.1 is that the coefficients of the linear model given by TabularLIME are approximately proportional to the partial derivatives of f at ξ, with proportionality constants depending on our assumptions. An interesting follow-up is that, if f depends only on a few features, then the partial derivatives along the other coordinates are zero, and so are the corresponding coefficients given by TabularLIME. For instance, if f depends only on the first two coordinates, as in Figure 4, then the partial derivatives along all remaining coordinates vanish, and the associated surrogate coefficients vanish as well. In this simple setting, we thus showed that TabularLIME does not produce interpretations with additional erroneous feature dependencies. Indeed, when the number of samples is high, the coordinates which do not influence the prediction have a coefficient close to the theoretical value zero in the surrogate linear model. For a bandwidth not too large, this dependency on the partial derivatives seems to hold to some extent for more general functions. See for instance Figure 6, where we demonstrate this phenomenon for a kernel regressor.
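This zero-coefficient behavior is easy to check with a small simulation (our own sketch, not the authors' experimental code; the model, seed, and tolerances are assumptions). Since, with theoretical Gaussian quantiles, sampling Gaussian points and then discretizing is equivalent to sampling bins and un-discretizing, we sample the points directly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, nu = 3, 20_000, 1.0
xi = np.array([0.2, -0.4, 0.1])

def f(x):
    # Black-box model depending on the first coordinate only.
    return 3.0 * x[:, 0]

# Theoretical N(0, 1) quartile edges; discretize samples and xi.
edges = np.array([-np.inf, -0.6745, 0.0, 0.6745, np.inf])
x = rng.normal(size=(n, d))
z = (np.searchsorted(edges[1:-1], x)
     == np.searchsorted(edges[1:-1], xi)).astype(float)

# Exponential weights and weighted least squares with a phantom coordinate.
w = np.exp(-np.sum((x - xi) ** 2, axis=1) / (2 * nu ** 2))
Z = np.hstack([np.ones((n, 1)), z])
Zw = Z * w[:, None]
beta = np.linalg.solve(Zw.T @ Z, Zw.T @ f(x))
# beta[2] and beta[3] (the unused coordinates) come out close to zero,
# while beta[1] (the used coordinate) is clearly non-zero.
```

With many samples, only the coordinate actually used by f receives a sizeable coefficient, illustrating the first consequence above.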
Robustness of the explanations. Theorem 3.1 means that, for large , TabularLIME outputs coefficients that are very close to with high probability, where is a vector that can be computed explicitly as per Eq. (3.1). Still without looking too closely at the values of , this is already interesting and hints that there is some robustness in the interpretations provided by TabularLIME: given enough samples, the explanation will not jump from one feature to the other. This is a desirable property for any interpretable method, since the user does not want explanations to change randomly with different runs of the algorithm. We illustrate this phenomenon in Figure 5.
Influence of the bandwidth.
Unfortunately, Theorem 3.1 does not directly provide a principled way to pick ν, which would, for instance, minimize the variance for a given level of noise. The quest for a principled heuristic is still open. However, we gain some interesting insights on the role of ν. Namely, for fixed ξ, f, and d, the multiplicative constants appearing in Eq. (3.1) depend essentially on ν.
Without looking too closely into these constants, one can already see that they regulate the magnitude of the coefficients of the surrogate model in a non-trivial way. For instance, in the experiment depicted in Figure 4, the partial derivatives of f along the first two coordinates have the same magnitude, whereas the interpretable coefficient is much larger for the first coordinate than for the second. Thus we believe that the precise values of the coefficients in the obtained linear model should not be taken too much into account.
More disturbing, it is possible to artificially (or by accident) set a coefficient β̂_j to zero, therefore forgetting about feature j in the explanation, whereas it could play an important role in the prediction. To see why, we have to return to the definition of the proportionality constants: by construction, a constant can vanish for particular values of ν, in which case the corresponding β̂_j is set to zero. We demonstrate this switching-off phenomenon in Figure 7. An interesting takeaway is that ν decides not only the scale at which the explanation is made, but also the magnitude of the coefficients in the interpretable model, even for small changes of ν.
Error of the surrogate model.
A simple consequence of Corollary 3.1 is that, unless some cancellation happens between the terms of the bound, the local error of the surrogate model is bounded away from zero; this is the general situation. Therefore, the surrogate model produced by TabularLIME is not accurate in general. We show some experimental results in Figure 8.
Finally, we discuss briefly the limitations of Theorem 3.1.
Linearity of f.
The linearity of f is quite a restrictive assumption, but we think that it is useful to consider, for two reasons.
First, the weighted nature of the procedure means that TabularLIME does not consider examples that are too far away from ξ with respect to the scaling parameter ν. Thus linearity is truly a local assumption on f, which could be replaced by a boundedness assumption on the Hessian of f in the neighborhood of ξ, at the price of more technicalities and of assuming that ν is not too large. See, in particular, Lemma 11.3 in the Appendix, after which we discuss an extension of the proof to the case where f is linear with a second-order perturbative term. We show in Figure 6 how our theoretical predictions behave for a non-linear function (a kernel ridge regressor).
Second, our main concern is to know whether TabularLIME operates correctly in a simple setting, and not to provide bounds for the most general f possible. Indeed, since we can already exhibit imperfect behavior of TabularLIME when f is linear, as seen earlier, our guess is that such behavior will only worsen for more complicated f.
In our derivation, we use the theoretical quantiles of the Gaussian distribution along each axis, and not prescribed quantiles. We believe that the proof could eventually be adapted, but that the result would lose in clarity. Indeed, the computations for a truncated Gaussian distribution are far more convoluted than for a Gaussian distribution. For instance, in the proof of Lemma 8.1 in the Appendix, some complicated quantities depending on the prescribed quantiles would appear when computing the expected covariance matrix.
5 Proof of Theorem 3.1
In this section, we explain how Theorem 3.1 is obtained. All formal statements and proofs are in the Appendix.
The main idea underlying the proof is to realize that β̂ is the solution of a weighted least squares problem. Denote by W the diagonal matrix of the weights, with W_ii = π_i, and let y be the response vector with y_i = f(x_i). Then, taking the gradient of Eq. (5.1), one obtains the key equation
Let us define Σ̂ = (1/n) Z^⊤WZ and Γ̂ = (1/n) Z^⊤Wy, as well as their population counterparts Σ = E[Σ̂] and Γ = E[Γ̂]. Intuitively, if we can show that Σ̂ and Γ̂ are close to Σ and Γ, and if Σ is invertible, then we can show that β̂ is close to β = Σ⁻¹Γ.
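In this notation, the weighted least squares problem and the resulting key equation can be written as follows (a sketch consistent with the definitions above; W is the diagonal weight matrix and Z the design matrix of binary features with phantom coordinate):

```latex
\widehat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{d+1}}
  \sum_{i=1}^{n} \pi_i \bigl( y_i - \beta^\top z_i \bigr)^2
\qquad\Longrightarrow\qquad
\underbrace{\frac{1}{n} Z^\top W Z}_{\widehat{\Sigma}} \; \widehat{\beta}
  = \underbrace{\frac{1}{n} Z^\top W y}_{\widehat{\Gamma}} .
```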
The main difficulties in the proof come from the non-linear nature of the new features z_i, which introduces challenging integrals. Fortunately, the Gaussian sampling of LIME allows us to compute these integrals (at the price of heavy computations).
The first part of our analysis is thus concerned with the study of the empirical covariance matrix Σ̂. Perhaps surprisingly, it is possible to compute its population version Σ in closed form:
Since the quantile probabilities involved are always distinct from 0 and 1, the special structure of Σ makes it possible to invert it in closed form. We show in Lemma 8.2 that
We then achieve control of ‖Σ̂ − Σ‖ via standard concentration inequalities, since the new samples are Gaussian and the binary features are bounded (see Proposition 8.1).
Right-hand side of Eq. (5.1).
Again, despite the non-linear nature of the new features, it is possible to compute the expected value Γ of Γ̂ in our setting. In this case, we show in Lemma 9.1 that
where the constants were defined in Section 3. They play a role analogous to that of the proportionality constants but, as noted before, they are signed quantities. As with the analysis of the covariance matrix, since the weights and the new features are bounded, it is possible to show a concentration result for Γ̂ (see Lemma 9.3).
Concluding the proof.
We can now conclude, first upper bounding ‖β̂ − β‖ by
and then controlling each of these terms using the previous concentration results. The expression of β is simply obtained by multiplying Σ⁻¹ and Γ.
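One standard decomposition achieving this, consistent with β = Σ⁻¹Γ and β̂ = Σ̂⁻¹Γ̂ (we state it here as a sketch), is:

```latex
\widehat{\beta} - \beta
  = \bigl( \widehat{\Sigma}^{-1} - \Sigma^{-1} \bigr) \widehat{\Gamma}
    + \Sigma^{-1} \bigl( \widehat{\Gamma} - \Gamma \bigr),
\qquad\text{hence}\qquad
\bigl\lVert \widehat{\beta} - \beta \bigr\rVert
  \le \bigl\lVert \widehat{\Sigma}^{-1} - \Sigma^{-1} \bigr\rVert_{\mathrm{op}}
      \bigl\lVert \widehat{\Gamma} \bigr\rVert
    + \bigl\lVert \Sigma^{-1} \bigr\rVert_{\mathrm{op}}
      \bigl\lVert \widehat{\Gamma} - \Gamma \bigr\rVert .
```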
6 Conclusion and future directions
In this paper we provide the first theoretical analysis of LIME, with some good news (LIME discovers interesting features) and bad news (LIME might forget some important features and the surrogate model is not faithful). All our theoretical results are verified by simulations.
For future work, we would like to complement these results in various directions. Our main goal is to extend the current proof to any f by replacing f by its Taylor expansion at ξ. On a more technical side, we would like to extend our proof to other distance functions (e.g., distances computed between the binary features, which is the default setting of LIME), to non-isotropic sampling of the x_i (that is, with parameters that are not constant across the dimensions), and to ridge regression.
The authors would like to thank Christophe Biernacki for getting them interested in the topic, as well as Leena Chennuru Vankadara for her careful proofreading. This work has been supported by the German Research Foundation DFG (SFB 936/ Z3), and the Institutional Strategy of the University of Tübingen (DFG ZUK 63).
Supplementary material for:
Explaining the Explainer: A First Theoretical Analysis of LIME
In this supplementary material, we provide the proof of Theorem 3.1 of the main paper. It is a simplified version of Theorem 10.1. We first recall our setting in Section 7. Then, following Section 5 of the main paper, we study the covariance matrix in Section 8, and the right-hand side of the key equation (5.1) in Section 9. Finally, we state and prove Theorem 10.1 in Section 10. Some technical results (mainly Gaussian integrals computation) and external concentration results are collected in Section 11.
H1 (Linear f).
The black-box model can be written f(x) = a + b^⊤x, with a ∈ R and b ∈ R^d fixed.
H2 (Gaussian sampling).
The random variables x_1, …, x_n are i.i.d. Gaussian, as described in Section 2.2.
Also recall that, for any 1 ≤ i ≤ n, we set the weights to
We will need the following scaling constant:
which does not play any role in the final result. One can check that this constant tends to 1 when ν → +∞, regardless of the dimension.
Finally, for any coordinate j, recall that we defined
where the quantiles are those bounding the corresponding coordinate of ξ. These coefficients are discussed in Section 5 of the main paper. Note that all the expected values are taken with respect to the randomness in the sampling of the new examples.
8 Covariance matrix
In this section, we state and prove the intermediate results used to control the covariance matrix Σ̂. The goal of this section is to control ‖Σ̂ − Σ‖ in probability. Intuitively, if this quantity is small enough, then we can invert Eq. (5.1) and make very precise statements about β̂.
We first show that it is possible to compute the expected covariance matrix Σ in closed form. Without this result, a concentration result would still hold, but it would be much harder to gain precise insights on the coefficients β̂_j.
Lemma 8.1 (Expected covariance matrix).
Under Assumption 2, the expected value of Σ̂ is given by
Elementary computations yield
Reading the coefficients of this matrix, we have essentially three computations to complete: the expected weight E[π_i], the first-order terms E[π_i z_{i,j}], and the second-order terms E[π_i z_{i,j} z_{i,k}].
Computation of E[π_i].
By independence across coordinates, the last display amounts to
We then apply Lemma 11.1 to each of the integrals within the product to obtain
We recognize the definition of the scaling constant (Eq. (7.2)): we have proved that E[π_i] equals this constant.
Computation of E[π_i z_{i,j}].
By independence across coordinates, the last display amounts to
Using Lemma 11.1, we obtain
Computation of E[π_i z_{i,j} z_{i,k}].
By independence across coordinates, the last display amounts to
Using Lemma 11.1, we obtain
As it turns out, we show that it is possible to invert Σ in closed form, therefore simplifying tremendously our quest for the control of Σ̂⁻¹. Indeed, in most cases, even if concentration could be shown, one would not have a precise idea of the coefficients of Σ⁻¹.
Lemma 8.2 (Inverse of the covariance matrix).
If the quantile probabilities are distinct from 0 and 1 along every coordinate, then Σ is invertible, and
Define the vector of the s. Set , , , and
Then is a block matrix that can be written . We notice that
Note that, since the Gaussian cumulative distribution function is increasing, these probabilities are always distinct from 0 and 1. Thus the corresponding block is an invertible matrix, and we can use the block matrix inversion formula to obtain the claimed result. ∎
As a direct consequence of the computation of Σ⁻¹, we can control its largest eigenvalue.
Lemma 8.3 (Control of ‖Σ⁻¹‖).
We have the following bound on the operator norm of the inverse covariance matrix:
We control the operator norm of Σ⁻¹ by its Frobenius norm; namely,
where we used the closed-form expression of Lemma 8.2 in the last step of the derivation. ∎
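The operator-norm-by-Frobenius-norm bound used here is easy to check numerically (a quick sanity check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
op = np.linalg.norm(A, 2)          # operator norm = largest singular value
fro = np.linalg.norm(A, "fro")     # Frobenius norm = l2 norm of all entries
# op <= fro always, since fro**2 is the sum of ALL squared singular values
svals = np.linalg.svd(A, compute_uv=False)
```

The bound is tight exactly when the matrix has rank one, which is why Lemma 8.3 is likely improvable for the arrowhead matrix discussed next.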
Better bounds can without doubt be obtained. A step in this direction is to notice that Σ is an arrowhead matrix (OLe_Ste:1996). Thus the eigenvalues of Σ are solutions of the secular equation
Further study of this equation could yield an improved statement for Lemma 8.3.
We now show that the empirical covariance matrix Σ̂ concentrates around Σ. It is interesting to see that the non-linear nature of the new coordinates (the binary features z_i) calls for complicated computations, but allows us to use simple concentration tools, since the z_i are, in essence, Bernoulli random variables.
Lemma 8.4 (Concentration of the empirical covariance matrix).
Let Σ̂ and Σ be defined as before. Then, for every t > 0,
Recall that the operator norm is bounded by the Frobenius norm: it suffices to show the result for the Frobenius norm. Next, we notice that the summands appearing in the entries of Σ̂ are all bounded. Indeed, by the definition of the weights and the definition of the new features, they all take values in [0, 1]. Moreover, within a given entry, they are independent random variables across samples. Thus we can apply Hoeffding's inequality (Theorem 11.1) to each entry. For any given t > 0, we obtain
We conclude by a union bound on the entries of the matrix. ∎
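For reference, the standard form of Hoeffding's inequality used here (our restatement; cf. Theorem 11.1 in Section 11) reads: for independent random variables X_1, …, X_n taking values in [a, b],

```latex
\mathbb{P}\left( \left| \frac{1}{n} \sum_{i=1}^{n}
    \bigl( X_i - \mathbb{E}[X_i] \bigr) \right| \ge t \right)
\le 2 \exp\left( \frac{-2 n t^2}{(b - a)^2} \right),
\qquad t > 0 .
```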
As a consequence of the two preceding lemmas, we can control the largest eigenvalue of .
Lemma 8.5 (Control of ‖Σ̂⁻¹‖).
For every t > 0, with high probability,