The ability to correctly interpret a prediction model is increasingly important as we move to widespread adoption of machine learning methods, in particular within safety-critical domains such as health care (Holzinger et al., 2017; Gade et al., 2020). In this paper, we consider the task of attributing the features of a complex machine learning model, abstracted as a function that predicts a response given a test instance, with only black-box access to the model. We especially focus on model-agnostic local explanation methods and the two most popular representatives of this group, namely LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017). As these methods are often described as fitting a local surrogate model to the black box (Roscher et al., 2020), a natural question is: how ‘local’ are local explanation methods?
As a simple motivating example of why this question matters, consider a black box model defined via an indicator function. When attributing local feature importance at a test instance, with the other feature fixed at 2, we would expect Feature-1 to receive a higher absolute attribution the closer the instance lies to the decision boundary. In Figure 1 we report the results on this example for LIME and SHAP, as well as for our proposed ‘Neighbourhood SHAP’ approach. We observe that Neighbourhood SHAP assigns Feature-1 a smaller attribution the further Feature-1 lies from the boundary. SHAP and LIME, however, assign Feature-1 an attribution that is constant on either side of the boundary, which illustrates that these methods capture global model behaviour. The figure also shows that training a local linear approximation to the black box (Rasouli and Yu, 2019; Botari et al., 2020) is misleading, since Feature-2 receives a significantly positive attribution in a region where it clearly contributes negatively to the model outcome.
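The exact analytic form of the black box in this example is not reproduced here. As a purely illustrative stand-in (our own construction, not necessarily the authors' exact choice), a model of the same flavour – one feature's contribution gated by an indicator on the other – can be sketched as:

```python
import numpy as np

def black_box(x):
    """Toy black box in the spirit of the motivating example: Feature-2's
    contribution switches sign at the decision boundary Feature-1 = 0, so
    Feature-1 only matters near that boundary. Illustrative stand-in, not
    the paper's exact f."""
    x = np.asarray(x, dtype=float)
    return float(x[1] * (x[0] > 0) - x[1] * (x[0] <= 0))

print(black_box([1.5, 2.0]))   # 2.0: Feature-2 contributes positively here
print(black_box([-1.5, 2.0]))  # -2.0: the sign flips across the boundary
```

With the second coordinate fixed at 2, the output jumps from -2 to 2 as Feature-1 crosses 0, so a faithful local attribution of Feature-1 should shrink as the instance moves away from the boundary.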
This motivates the following contributions:
- We propose Neighbourhood SHAP (Section 3), which considers local reference populations for prediction points as a complementary approach to SHAP. In doing so, we show that the Nadaraya-Watson estimator can be interpreted as an importance sampling estimator in which the expectation is taken over the proposed neighbourhood. Empirically, we find that greater locality increases the number of model evaluations on the data manifold and, with this, the robustness of the attributions against adversarial attacks.
- We consider how smoothing can also be used to stabilise SHAP values (Section 4). We quantify the loss in information incurred by our smoothing procedure and characterise its Lipschitz continuity.
We begin with a short introduction to Shapley values – the quantity of interest of the SHAP optimisation procedure. For a pre-defined value function $v$ that takes a set of features $S$ as input, the Shapley value of feature $j$ measures the expected change in the value function from including feature $j$ into a random subset $S$ of features (without $j$),
$$\phi_j = \mathbb{E}_S\left[v(S \cup \{j\}) - v(S)\right],$$
where the expectation is taken over the feature coalitions $S \subseteq \{1, \dots, D\} \setminus \{j\}$ whose distribution is defined by $P(S) = |S|!\,(D-1-|S|)!/D!$. This choice of probability distribution ensures that sampling a set of size $s$ has the same probability as sampling one of size $s'$, for any $s, s' \in \{0, \dots, D-1\}$.
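For small feature sets, the definition above can be evaluated exactly by enumerating coalitions. The following sketch, with an illustrative additive value function of our own choosing, makes the coalition weighting concrete:

```python
import itertools
import math
import numpy as np

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions.

    value_fn maps a frozenset of feature indices to a scalar v(S).
    Tractable only for small n_features (2**n_features coalitions).
    """
    phi = np.zeros(n_features)
    features = range(n_features)
    for j in features:
        others = [k for k in features if k != j]
        for size in range(len(others) + 1):
            # |S|! (D-1-|S|)! / D! -- each coalition size is equally likely,
            # and subsets of the same size are equally likely within a size
            weight = (math.factorial(size) * math.factorial(n_features - size - 1)
                      / math.factorial(n_features))
            for S in itertools.combinations(others, size):
                S = frozenset(S)
                phi[j] += weight * (value_fn(S | {j}) - value_fn(S))
    return phi

# illustrative additive value function: v(S) = sum of included feature values
x = np.array([2.0, -1.0, 0.5])
v = lambda S: sum(x[list(S)])
print(shapley_values(v, 3))  # for an additive game, phi_j = x_j
```

For an additive game the Shapley value of each feature equals its own contribution, which makes this a convenient sanity check for any approximate estimator.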
The choice of value function for explanation-based modelling of feature attributions at a given instance has been the subject of recent debate (Aas et al., 2019; Janzing et al., 2020; Merrick and Taly, 2020). The consensus is to take the expectation of the black box output at the observation over the not-included features using a reference distribution, such that
with the operation denoting the concatenation of its two arguments. Marginal Shapley values (Lundberg and Lee, 2017; Janzing et al., 2020) define the reference distribution as the marginal data distribution. Conditional Shapley values (Aas et al., 2019) set the reference distribution equal to the conditional distribution given the included features. All in all, the Shapley value is characterised by the expected change in model output, comparing the output when we include a feature in the model, i.e. integrate out some randomly sampled features, with the model output when that feature is not included, i.e. we integrate out some randomly sampled features including it,
As we see, Shapley values are computed by estimating the change in model outcome when some features are integrated out over the reference distribution, which has so far been defined as either the marginal or conditional global population. For marginal Shapley values, the interpretation simplifies: the Shapley value of a feature is the expected change in model outcome when we sample a random individual from the global statistical population and set this feature equal to its value at the test instance (after a random set of other features has already been set to the test instance's values). This motivates our proposal of neighbourhood distributions in Section 3, where we instead sample a random individual from the immediate neighbourhood of the test instance.
Computing Shapley values is challenging in high-dimensional feature spaces, which motivates the widely adopted KernelSHAP approach (Lundberg and Lee, 2017) that estimates the Shapley values of all features jointly by empirical risk minimisation of
where the explanation model is linear with the Shapley values as its coefficients, the references are i.i.d. draws from the respective global reference distributions, a set of coalitions is sampled, and the weights are given by the KernelSHAP weights (Lundberg and Lee, 2017). LIME optimises a similar generalised expectation – also sampling references from a global distribution. To improve the local fidelity of Tabular LIME, Ribeiro et al. (2016) propose to define the weights via an exponential kernel with a bandwidth. While this weighting increases the importance of a perturbed sample in proportion to the size of its coalition, it does not ensure that higher weights are assigned to model evaluations at observations closer to the test instance.
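The KernelSHAP regression weights referred to above have a known closed form. A minimal sketch of the standard Shapley kernel weight, for $D$ features and a coalition of size $s$:

```python
import math

def shapley_kernel_weight(D, s):
    """Standard Shapley kernel weight for a coalition of size s out of D
    features: (D - 1) / (C(D, s) * s * (D - s)).

    The weight is infinite at s = 0 and s = D; in practice those two
    coalitions are handled as constraints, so we only return the finite
    weights for 0 < s < D.
    """
    return (D - 1) / (math.comb(D, s) * s * (D - s))

D = 5
for s in range(1, D):
    print(s, shapley_kernel_weight(D, s))
```

Note the weights are symmetric in coalition size, `weight(D, s) == weight(D, D - s)`, and grow as the coalition approaches either the empty or the full set, which is what makes the weighted regression recover Shapley values rather than an arbitrary linear fit.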
A simple solution to the locality problem is to fit a local linear approximation in the form of a tangent line that predicts the black box in a small neighbourhood around the test instance, as in (Rasouli and Yu, 2019; Botari et al., 2020; Visani et al., 2020; White and Garcez, 2019). Such an approach has several drawbacks compared to SHAP (and thus Neighbourhood SHAP), however, such as higher instability, less interpretability, and the assumption of a fixed parametric form. While SHAP (and Neighbourhood SHAP) does not make any assumptions on the form of the black box in the feature space, local linear approximations assume linearity of the black box in a neighbourhood. As a consequence, this may result in misleading attributions, as was demonstrated in Figure 1. See Supplement LABEL:sec:suppl_linear for a detailed discussion of local approximating models versus local reference populations.
3 Neighbourhood SHAP
Shapley values – like other feature removal methods – employ a global reference distribution when computing attributions. This can lead to surprising artefacts, as illustrated in Figure 2. To increase the local fidelity of Shapley values, we propose to sample from a well-defined local reference population instead. Having selected a distance metric, such as the Euclidean distance or the more powerful supervised neighbourhoods of random forests (Bloniarz et al., 2016), we define a distance-based distribution centred around the test instance, such as an exponential kernel. Further, we define the local neighbourhood distribution as the product of this kernel and a reference distribution, divided by a normalising constant, where the reference can be any marginal or conditional reference distribution. This choice ensures that we sample neighbourhood values considering not only the metric space but also the data distribution. This leads to a proposed change of the optimisation problem of eq. (1) to the following Neighbourhood SHAP minimisation
Instead of estimating the neighbourhood distribution directly, we approximate the expectation of the model outcome in the neighbourhood around the test instance using self-normalised importance sampling (Doucet et al., 2001), with the global reference distribution as proposal distribution
While our proposal, Neighbourhood SHAP, weights the references based on their distance to the test instance, KernelSHAP uses uniform weights. We note that the proposed local neighbourhood sampling scheme has a convenient form which corresponds to the well-known Nadaraya-Watson estimator (Nadaraya, 1964; Watson, 1964; Ruppert and Wand, 1994) used for kernel regression. Kernel regression is a non-parametric technique to model the non-linear relationship between a dependent variable (here, the model outcome) and an independent variable (here, the features), by approximating the conditional expectation.
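The neighbourhood weighting and its Nadaraya-Watson reading can be sketched as follows, assuming an exponential kernel and Euclidean distance (both illustrative choices), with a synthetic reference set and a simple function standing in for black-box outputs:

```python
import numpy as np

def neighbourhood_weights(X_ref, x, sigma):
    """Self-normalised importance weights exp(-d(x_k, x)/sigma): references
    close to x dominate for small sigma; weights become uniform as sigma
    grows (recovering the global KernelSHAP behaviour)."""
    d = np.linalg.norm(X_ref - x, axis=1)
    w = np.exp(-d / sigma)
    return w / w.sum()

def nadaraya_watson(f_vals, X_ref, x, sigma):
    """Estimate the neighbourhood expectation of f around x as a kernel-
    weighted average of model evaluations at the references: the
    Nadaraya-Watson estimator."""
    w = neighbourhood_weights(X_ref, x, sigma)
    return float(w @ f_vals)

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(5000, 2))
f_vals = X_ref[:, 0]               # stand-in for black-box outputs f(x_k)
x = np.array([1.0, 0.0])
print(nadaraya_watson(f_vals, X_ref, x, sigma=0.1))    # near f(x) = 1
print(nadaraya_watson(f_vals, X_ref, x, sigma=100.0))  # near global mean, ~0
```

The same references serve every bandwidth: only the weights change with `sigma`, which is the property exploited in the complexity discussion below.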
While the form of the Nadaraya-Watson estimator has so far been justified from a kernel theory perspective (Supplement LABEL:sec:suppl_nwe), we show that it can be interpreted as an importance sampling estimator. The Nadaraya-Watson estimator, built from a kernel function, is a self-normalised importance sampling estimator of the neighbourhood expectation, with the data distribution as proposal and a desired distribution proportional to the kernel-weighted data distribution. As pointed out in Supplement LABEL:sec:suppl_axioms, all Shapley axioms (Lundberg and Lee, 2017; Sundararajan and Najmi, 2020) still hold for Neighbourhood SHAP. By linearity, we can quantify the difference between SHAP and Neighbourhood SHAP as ‘Anti-Neighbourhood SHAP’ (see Supplement LABEL:sec:suppl_antinbrh). Looking at this difference can help characterise the information loss incurred by contrasting an instance with the global population instead of a local neighbourhood. Finally, we also derive a variance estimator of Shapley values computed with the Shapley formula in Supplement LABEL:sec:suppl_var.
A major disadvantage of marginal Shapley values and LIME is that the concatenated data vectors for a sampled reference do not necessarily lie on the data manifold (Frye et al., 2020; Chen et al., 2020). This has two serious ramifications: 1) the model is evaluated in regions that lie off the data manifold, where it might behave unexpectedly and be unrepresentative of the data population; and 2) adversaries can use an out-of-distribution (OOD) classifier trained to distinguish real data from simulated concatenated data and, through this, construct a model whose Shapley values look fair even though the model is demonstrably unfair on the real data domain (Slack et al., 2020). To circumvent this problem, Frye et al. (2019); Aas et al. (2019); Covert et al. (2020) propose the use of conditional instead of marginal reference distributions. However, using conditional reference distributions changes the interpretation of Shapley values – e.g. unrelated features get a non-zero attribution – and thus their use is controversial (Janzing et al., 2020). A marginal Neighbourhood SHAP approach, in contrast, can achieve on-manifold explainability while keeping the properties of marginal Shapley values for small enough bandwidths, if the data manifold is to some extent coherent (see Figure 3).
Choice of Bandwidth.
In the limit of large bandwidths, Neighbourhood SHAP equals KernelSHAP, while its attributions converge to 0 as the bandwidth shrinks to zero. Small neighbourhoods thus induce regularisation in the attributions, which we also observe empirically in Section 5. While SHAP values add up to the difference between the model output at the test instance and the global average model output, Neighbourhood SHAP attributions add up to the difference between the model output at the test instance and its neighbourhood average. Hence, care needs to be taken when comparing SHAP and Neighbourhood SHAP, since the scales might differ. In this case, both sets of values (standard and neighbourhood) can be divided by either the sum of their absolute values or by their standard deviation, to represent relative attribution measures. As commonly observed with kernel regression approaches, there are some drawbacks, such as the additional hyperparameters (distance function, bandwidth) and increased variability for small bandwidths, especially in data-sparse regions. These problems can be tackled by adaptive bandwidth methods: for instance, the bandwidth could be chosen such that the 25% closest observations to the test instance are not assigned more than 75% of the weight mass. We propose to plot the Neighbourhood SHAP values of the normalised features over a range of bandwidths. This provides a powerful diagnostic and information tool.
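The adaptive-bandwidth rule of thumb above can be operationalised, for instance, as a grid search. The kernel, the bandwidth grid, and the 25%/75% thresholds below are illustrative assumptions:

```python
import numpy as np

def weight_concentration(d, sigma, q=0.25):
    """Fraction of the total exponential-kernel weight carried by the q
    closest references: a diagnostic for how concentrated the weights are."""
    w = np.exp(-np.sort(d) / sigma)   # sorted ascending, so weights descend
    k = max(1, int(q * len(d)))
    return w[:k].sum() / w.sum()

def adaptive_bandwidth(d, max_mass=0.75, q=0.25):
    """Smallest bandwidth on a log grid for which the q closest observations
    carry at most max_mass of the weight: one way to operationalise the
    rule of thumb in the text (grid and kernel are illustrative choices)."""
    for sigma in np.geomspace(1e-3, 1e3, 200):
        if weight_concentration(d, sigma, q) <= max_mass:
            return sigma
    return np.inf

# distances of 1000 synthetic references to a test instance
d = np.abs(np.random.default_rng(1).normal(size=1000))
sigma = adaptive_bandwidth(d)
print(sigma, weight_concentration(d, sigma))
```

As the bandwidth grows the weights become uniform, so the concentration tends to `q` and the search always terminates; as it shrinks the closest points absorb all the mass.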
The computational burden of changing the bandwidth is not as large as it might first appear. Our importance sampling approach has the desirable property that the expectation is estimated on the same set of references for every bandwidth, with only the importance weights varying. As a result, no additional model evaluations are required when Neighbourhood SHAP is computed for a different bandwidth. This stands in contrast to other neighbourhood schemes proposed in the XAI literature, such as KDEs (Botari et al., 2019), GANs (Saito et al., 2020) or Gaussian perturbations (Robnik-Šikonja and Bohanec, 2018), where the black box must be evaluated afresh for each new bandwidth, at a cost proportional to the number of sampled coalitions and references. Please refer to Supplement LABEL:sec:suppl_comp for a theoretical and empirical complexity analysis.
4 Smoothed SHAP
In the previous section, we discussed neighbourhood sampling as a useful tool to understand feature relevance through feature removal. We have also seen that the proposed neighbourhood sampling approach relates to kernel smoothers such as the Nadaraya-Watson estimator. This insight motivates a Smoothed SHAP that locally averages neighbouring SHAP values, where the averages are taken over samples from the reference distribution with weights given by a kernel function. Such smoothing procedures have been applied before in the explainability literature, e.g. for gradient-based methods (Smilkov et al., 2017; Yeh et al., 2019), and are of interest when the interpretability of SHAP values suffers from high instability of the black box (Alvarez-Melis and Jaakkola, 2018; Ghorbani et al., 2019; Hancox-Li, 2020). The smoothing this induces can be captured by a Lipschitz constant: for a bounded black box, the smoothed Shapley value estimator (2) is Lipschitz continuous with a constant whose upper bound decreases in the bandwidth and vanishes as the bandwidth grows. With the tools from before, we can derive that Smoothed SHAP is an importance sampling estimator of the SHAP values from the neighbourhood around the test instance,
where the new value function is a kernel-smoothed version of the original one. This smoothed value function relates to the explicit modelling of feature inclusion and gives an interesting perspective on the meaning of smoothing, namely that the observed instance is but one measurement of an underlying test instance variable. Exploring a smoothed summary of the SHAP values in the local neighbourhood of the test instance highlights how local variability drives changes in the SHAP feature attributions. This is interesting in its own right, but particularly so if features are susceptible to reporting error. As an illustration, consider a black box algorithm that predicts the fitness level of an adult based on multiple covariates, including weight. The reported weight may be subject to error if unreliable scales are used. In addition, as weight varies constantly throughout the day, the individual might not be interested in the attribution for one particular weight at a single point in time, but rather in the attribution that a range of weights per day receives. The test instance is thus more appropriately described by a test distribution centred around the observed values, with a random variable describing the volatility in the covariates of the test instance. If the test distribution is unknown, it can be estimated by setting it, as earlier, equal to a neighbourhood distribution, where the kernel encapsulates the prior belief on the variability of the covariates and the reference distribution captures the artefacts of the data distribution (i.e. skew, kurtosis, high density regions). The kernel can now be defined with a multivariate bandwidth. We observe empirically that such a choice can decrease the MSE of the estimation of Shapley values (Supplement LABEL:sec:suppl_exp). Building upon results from kernel regression, we can quantify the squared distance of Smoothed SHAP to the standard Shapley values (Supplement LABEL:sec:suppl_smooth). Finally, we also derive a variance estimator for Smoothed SHAP in Supplement LABEL:sec:suppl_var.
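As a sketch of this locally averaged estimator – assuming SHAP vectors have already been computed at the reference points, and using a Gaussian kernel as an illustrative choice – Smoothed SHAP can be written as:

```python
import numpy as np

def smoothed_shap(phi_ref, X_ref, x, h):
    """Smoothed SHAP sketch: a kernel-weighted average of the SHAP vectors
    phi_ref[k] precomputed at reference points X_ref[k], with Gaussian
    weights centred at x (the kernel choice here is illustrative)."""
    d2 = np.sum((X_ref - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    w /= w.sum()
    return w @ phi_ref  # one smoothed attribution per feature

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(200, 3))
phi_ref = rng.normal(size=(200, 3))  # stand-in for precomputed SHAP values
x = X_ref[0]
local = smoothed_shap(phi_ref, X_ref, x, h=0.01)
global_avg = smoothed_shap(phi_ref, X_ref, x, h=1e6)
# a tiny h recovers (nearly) the instance's own SHAP vector; a huge h
# recovers the global average attribution
print(local, phi_ref[0])
print(global_avg, phi_ref.mean(axis=0))
```

No new model evaluations are needed: once the SHAP vectors exist at the references, any bandwidth only reweights them, which is why sweeping the bandwidth is cheap.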
Choice of Smoothing Bandwidth.
Prior information on the variability of the covariates of the test instance can be included in the definition of the bandwidth matrix. Fixed covariates, like age or season, are not expected to change and thus receive a bandwidth of zero, while volatile features like weight, temperature or windspeed are assigned a positive bandwidth. As a feature's bandwidth tends to infinity, that feature is treated as inherently missing. If this holds for all features, Smoothed SHAP equals the average of the Shapley values over all references, which is often used as a global explanation measure (Frye et al., 2020; Covert et al., 2020; Bhargava et al., 2020). As Smoothed SHAP can be estimated efficiently once SHAP values have been computed for the reference population, we propose, again, computing it for several bandwidth choices, and using a plot with respect to the bandwidth as a visualisation technique to help inform the choice. The bandwidth induces a bias-variance trade-off, as derived in Supplement LABEL:sec:suppl_smooth: the larger the bandwidth, the smoother the results, but also the less Smoothed SHAP reflects the model behaviour at the test instance, especially if the black box is highly non-linear.
Connection to LIME.
Tabular LIME provides the same explanation for any two instances falling into the same quantile along each dimension (Garreau and von Luxburg, 2020). As such, it is also an aggregated attribution measure, similar to Smoothed SHAP. Key differences are the treatment of the different dimensions and the absence of proven Lipschitz continuity guarantees (see Supplement LABEL:sec:suppl_lime).
5 Experiments
We present comprehensive experiments on several standardised real-world tabular UCI data sets (Asuncion and Newman, 2007) of different sizes, predicted with ensemble classifiers or regressors, as well as an image classification task on the MNIST dataset. The experiments demonstrate some key attributes of Neighbourhood and Smoothed SHAP, including: Neighbourhood SHAP increases on-manifold explainability and robustness against adversarial attacks; Neighbourhood SHAP also leads to sparser attributions than standard Shapley values; and Smoothed SHAP tells us how the Shapley values of neighbouring observations differ from the attribution at the test instance.
Since Neighbourhood SHAP, Smoothed SHAP and SHAP operate on different scales, we divide all attributions by their standard deviation (over features) unless otherwise specified. We present a subset of our results in this section and refer the interested reader to Supplement LABEL:sec:suppl_exp for a thorough report of all experimental results (including simulated experiments), details and hyperparameter settings.
On-Manifold Explainability and Robustness against Adversarial Attacks.
For adversarial learning, we train a Random Forest and a LightGBM as OOD classifiers that distinguish true data from the concatenated vectors used for model evaluations. We find that for small bandwidths, the adversary is not able to distinguish between the test data and the concatenated test data (Figure 3(a)). Under the assumption that the classifiers are able to detect the true data manifold, we can thus claim that Neighbourhood SHAP relies more on observations from the data manifold than SHAP and LIME do. Further, we mimicked the experimental setup of Slack et al. (2020) on the COMPAS data set (Angwin et al., 2016): an adversarial black box predicts recidivism based only on race if the OOD classifier predicts the data to lie on the data manifold, and returns an unrelated column if it does not. As presented in Figure 3(b) for 10 randomly sampled individuals, the unrelated column has no effect on Neighbourhood SHAP, and race has a higher relative attribution than it does for KernelSHAP.
Increased Local Prediction Accuracy.
As SHAP learns a linear model on binary feature-inclusion variables, we can sample feature coalitions and reference values to perturb test data and predict the model outcome at the perturbed data. To check local accuracy, we weight the reference values with an exponential kernel whose bandwidth determines the size of the neighbourhood. Figure 5 presents the MSE for an XGBoost model applied to four different datasets. As expected, Neighbourhood SHAP with a smaller bandwidth predicts data within a small neighbourhood significantly better than Neighbourhood SHAP with a larger bandwidth. We noticed that the difference between the bandwidths is larger when there are fewer features in the data set (such as the iris dataset). We attribute the loss in performance to the difficulty of estimating meaningful distances in high dimensions.
Interpretation of Neighbourhood SHAP.
Neighbourhood SHAP computed with small kernel widths reflects feature attributions obtained by contrasting with model behaviour at similar observations, whereas Neighbourhood SHAP computed with large kernel widths contrasts model behaviour at the population scale. Figure 6 shows the evolution of Neighbourhood SHAP across bandwidths for randomly picked observations across different data sets. The test instance in the bike data set, where an XGBoost regressor predicts daily bike rentals, has a high normalised temperature of 0.82. As the median observation has a temperature of 0.50, the neighbourhood of our test instance is expected to look considerably different from the global population. For small kernel widths, Neighbourhood SHAP computes a negative attribution for temperature, whereas the marginal SHAP attribution is positive. This sign ‘flip’ is coherent with descriptive statistics: for a subpopulation with temperatures +/-0.05 around 0.82, temperature is negatively correlated with the outcome (correlation of -0.08), whereas overall, bike rentals tend to increase on warmer days (unconditional correlation of +0.47). Neighbourhood SHAP thus shows that a warmer temperature has in general a positive impact on the count of rental bikes, which reverses for very hot days. Standard Shapley values do not provide this type of fine-grained interpretation. Similarly, in the Boston data set (Figure 6, third column), our test instance is a dwelling with a high percentage of lower status population (LSTAT), equal to 18.76%. LSTAT gets positive Neighbourhood SHAP values for small kernel widths, whereas its marginal Shapley value is negative. This observation is consistent with the overall correlation of -0.76, whereas for a restricted population with LSTAT +/- 1% the correlation is +0.15. For similar dwellings, i.e. with a high pupil-teacher ratio and many rooms, lower status populations do not decrease the value of the home as much as they do in general, and can even increase it.
Interpretation of Smoothed SHAP.
In contrast, Smoothed SHAP summarises marginal Shapley values (which contrast against the entire population) within a neighbourhood, instead of at a single instance. For example, consider the adult data set (Figure 6, first column). We chose a test instance for which the model performs poorly: the predicted probability of high income for this individual, aged 42, is 0.09, when in actual fact the person has a high income. It is interesting to contrast the conventional Shapley value assigned to the person, obtained by Smoothed SHAP with a bandwidth of zero, with the average Shapley values for individuals like them. We observe that Smoothed SHAP quickly assigns a negative attribution to age and a positive attribution to education as the bandwidth grows, whilst the SHAP values for the individual were positive and negative, respectively. This highlights local instability in the Shapley values: for people similar to this person, the attributions are positive for education and negative for age. For the Boston data set, we note that Smoothed SHAP of the pupil-teacher ratio (PTRATIO) initially decreases for small bandwidths, as there are many dwellings with a high PTRATIO in the data neighbourhood of the test instance, while it then increases as the global attribution of this feature is in general higher.
We applied our Neighbourhood SHAP approach on KernelSHAP and also on DeepSHAP (Lundberg and Lee, 2017), which computes Shapley values for images based on gradients. After training a convolutional neural network on the MNIST data set, we explain digits with the predicted label '8' given a background data set of 100 images with labels '3' and '8'. As we see in Figure 7, Neighbourhood DeepSHAP assigns the attributions with the highest absolute values to pixels close to the strokes, while DeepSHAP assigns less sparse and more blurry attributions. This is expected: DeepSHAP compares each digit to a random digit in the population, while Neighbourhood DeepSHAP only looks at images in the neighbourhood, i.e. at images with a similar stroke. As we show in the Supplement, the change in log odds of predicting a '3' after modifying the images depicting an '8' with the attributions (setting blue pixels to 0) is (non-significantly) higher for Neighbourhood DeepSHAP than for DeepSHAP. In contrast, Smoothed DeepSHAP leads to a smaller change in log odds, which is expected since we lose information by smoothing. However, Smoothed DeepSHAP gives additional insights compared to DeepSHAP and Global DeepSHAP: in all images, the lower left corner of the 8s is highlighted in blue only for Smoothed DeepSHAP. Thus, we know that there is at least one observation in the neighbourhood of these 8s that has a strong negative attribution in that image area. This area however loses importance when aggregating over the whole data set. Note that LIME gives the sharpest results because we chose its hyperparameters such that the image is split into the largest number of superpixels. We nevertheless see that LIME gives counter-intuitive results (e.g. the lower right corner of the third 8 gets the lowest attribution, while the lower contour of the first 8 gets the highest attribution).
In this paper, we first highlighted the limitations of using SHAP when the local model behaviour is of interest. We then introduced Neighbourhood SHAP. While neighbourhood sampling has been applied in other areas of model explainability, such as image perturbations by adding noise (Fong and Vedaldi, 2017), local linear approximations (see Supplement LABEL:sec:suppl_linear), or rule-based models (Guidotti et al., 2018; Rasouli and Yu, 2020; Rajapaksha et al., 2020), it has not previously been introduced for model-agnostic additive feature attribution methods such as SHAP. Our contribution is important as it provides a theoretical understanding of explanations of local model behaviour, which is often lacking in the explainable AI literature (Garreau and von Luxburg, 2020). A secondary contribution of this work is the analysis of how smoothing Shapley values can identify unstable feature attributions. While it is difficult to evaluate model explanations numerically, we provide an exhaustive comparison along different metrics (adversarial robustness, prediction accuracy, and visual inspection). Neighbourhood SHAP and Smoothed SHAP both merit consideration, as they have considerable advantages compared to standard KernelSHAP. For comparability across experiments, we limited our analysis to the Euclidean distance as a distance metric. In high-dimensional spaces, this choice can be misleading (Domingos, 2012), and the use of more powerful distance metrics, such as one obtained by random forests, would be appropriate. We thus caution against exclusively relying on mathematical metrics for explaining models, and suggest comparing the un-weighted and weighted histograms before any judgement calls. While it can be difficult to choose an adequate bandwidth, we see that having control over the kernel width allows the user to gain a precise understanding of model predictions, both locally and at a larger scale.
LIME and KernelSHAP in their default implementations do not allow for such a detailed analysis. Plots of Neighbourhood SHAP and Smoothed SHAP against the bandwidth are thus powerful tools that give additional insight into otherwise opaque dynamics of the black box.
SG and LTM are students of the EPSRC CDT in Modern Statistics and Statistical Machine Learning (EP/S023151/1). SG receives funding from the Oxford Radcliffe Scholarship and Novartis. LTM receives funding from the EPSRC. KDO is funded by a Wellcome Trust/Royal Society Sir Henry Dale Fellowship 218554/Z/19/Z. CH is supported by The Alan Turing Institute, Health Data Research UK, the Medical Research Council UK, the EPSRC through the Bayes4Health programme Grant EP/R018561/1, and AI for Science and Government UK Research and Innovation (UKRI). We would like to thank Luke Merrick for his kind help.
- Aas et al. (2019) Aas, K., Jullum, M., and Løland, A. (2019). Explaining individual predictions when features are dependent: More accurate approximations to shapley values. arXiv preprint arXiv:1903.10464.
- Alvarez-Melis and Jaakkola (2018) Alvarez-Melis, D. and Jaakkola, T. S. (2018). On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049.
- Angwin et al. (2016) Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016). Machine bias. ProPublica, May, 23(2016):139–159.
- Asuncion and Newman (2007) Asuncion, A. and Newman, D. (2007). UCI machine learning repository.
- Bhargava et al. (2020) Bhargava, V., Couceiro, M., and Napoli, A. (2020). Limeout: An ensemble approach to improve process fairness. arXiv preprint arXiv:2006.10531.
- Bloniarz et al. (2016) Bloniarz, A., Talwalkar, A., Yu, B., and Wu, C. (2016). Supervised neighborhoods for distributed nonparametric regression. In Artificial Intelligence and Statistics, pages 1450–1459. PMLR.
- Botari et al. (2020) Botari, T., Hvilshøj, F., Izbicki, R., and de Carvalho, A. C. (2020). Melime: Meaningful local explanation for machine learning models. arXiv preprint arXiv:2009.05818.
- Botari et al. (2019) Botari, T., Izbicki, R., and de Carvalho, A. C. (2019). Local interpretation methods to machine learning using the domain of the feature space. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 241–252. Springer.
- Chen et al. (2020) Chen, H., Janizek, J. D., Lundberg, S., and Lee, S.-I. (2020). True to the model or true to the data? arXiv preprint arXiv:2006.16234.
- Covert et al. (2020) Covert, I., Lundberg, S., and Lee, S.-I. (2020). Understanding global feature contributions with additive importance measures. Advances in Neural Information Processing Systems, 33.
- Domingos (2012) Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10):78–87.
- Doucet et al. (2001) Doucet, A., De Freitas, N., and Gordon, N. (2001). An introduction to sequential monte carlo methods. In Sequential Monte Carlo methods in practice, pages 3–14. Springer.
- Fong and Vedaldi (2017) Fong, R. C. and Vedaldi, A. (2017). Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3429–3437.
- Frye et al. (2020) Frye, C., de Mijolla, D., Cowton, L., Stanley, M., and Feige, I. (2020). Shapley-based explainability on the data manifold. arXiv preprint arXiv:2006.01272.
- Frye et al. (2019) Frye, C., Feige, I., and Rowat, C. (2019). Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability. arXiv preprint arXiv:1910.06358.
- Gade et al. (2020) Gade, K., Geyik, S., Kenthapadi, K., Mithal, V., and Taly, A. (2020). Explainable ai in industry: Practical challenges and lessons learned. In Companion Proceedings of the Web Conference 2020, pages 303–304.
- Garreau and von Luxburg (2020) Garreau, D. and von Luxburg, U. (2020). Looking deeper into lime. arXiv preprint arXiv:2008.11092.
- Ghorbani et al. (2019) Ghorbani, A., Abid, A., and Zou, J. (2019). Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681–3688.
- Guidotti et al. (2018) Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., and Giannotti, F. (2018). Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820.
- Hancox-Li (2020) Hancox-Li, L. (2020). Robustness in machine learning explanations: does it matter? In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 640–647.
- Holzinger et al. (2017) Holzinger, A., Biemann, C., Pattichis, C. S., and Kell, D. B. (2017). What do we need to build explainable ai systems for the medical domain? arXiv preprint arXiv:1712.09923.
- Janzing et al. (2020) Janzing, D., Minorics, L., and Blöbaum, P. (2020). Feature relevance quantification in explainable ai: A causal problem. In International Conference on Artificial Intelligence and Statistics, pages 2907–2916. PMLR.
- Lundberg and Lee (2017) Lundberg, S. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874.
- Merrick and Taly (2020) Merrick, L. and Taly, A. (2020). The explanation game: Explaining machine learning models using shapley values. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, pages 17–38. Springer.
- Nadaraya (1964) Nadaraya, E. A. (1964). On estimating regression. Theory of Probability & Its Applications, 9(1):141–142.
- Rajapaksha et al. (2020) Rajapaksha, D., Bergmeir, C., and Buntine, W. (2020). Lormika: Local rule-based model interpretability with k-optimal associations. Information Sciences, 540:221–241.
- Rasouli and Yu (2019) Rasouli, P. and Yu, I. C. (2019). Meaningful data sampling for a faithful local explanation method. In International Conference on Intelligent Data Engineering and Automated Learning, pages 28–38. Springer.
- Rasouli and Yu (2020) Rasouli, P. and Yu, I. C. (2020). Explan: Explaining black-box classifiers using adaptive neighborhood generation. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–9. IEEE.
- Ribeiro et al. (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
- Robnik-Šikonja and Bohanec (2018) Robnik-Šikonja, M. and Bohanec, M. (2018). Perturbation-based explanations of prediction models. In Human and machine learning, pages 159–175. Springer.
- Roscher et al. (2020) Roscher, R., Bohn, B., Duarte, M. F., and Garcke, J. (2020). Explainable machine learning for scientific insights and discoveries. IEEE Access, 8:42200–42216.
- Ruppert and Wand (1994) Ruppert, D. and Wand, M. P. (1994). Multivariate locally weighted least squares regression. The annals of statistics, pages 1346–1370.
- Saito et al. (2020) Saito, S., Chua, E., Capel, N., and Hu, R. (2020). Improving lime robustness with smarter locality sampling. arXiv preprint arXiv:2006.12302.
- Slack et al. (2020) Slack, D., Hilgard, S., Jia, E., Singh, S., and Lakkaraju, H. (2020). Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 180–186.
- Smilkov et al. (2017) Smilkov, D., Thorat, N., Kim, B., Viégas, F., and Wattenberg, M. (2017). Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825.
- Sundararajan and Najmi (2020) Sundararajan, M. and Najmi, A. (2020). The many shapley values for model explanation. In International Conference on Machine Learning, pages 9269–9278. PMLR.
- Visani et al. (2020) Visani, G., Bagli, E., and Chesani, F. (2020). Optilime: Optimized lime explanations for diagnostic computer algorithms. arXiv preprint arXiv:2006.05714.
- Watson (1964) Watson, G. S. (1964). Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, pages 359–372.
- White and Garcez (2019) White, A. and Garcez, A. d. (2019). Measurable counterfactual local explanations for any classifier. arXiv preprint arXiv:1908.03020.
- Yeh et al. (2019) Yeh, C.-K., Hsieh, C.-Y., Suggala, A. S., Inouye, D. I., and Ravikumar, P. (2019). On the (in) fidelity and sensitivity for explanations. arXiv preprint arXiv:1901.09392.