1 Introduction
A very natural and effective form of communication is to first set the stage through high-level general concepts and only then dive into the specifics [1]. In addition, the transition from high-level concepts to more and more specific explanations should ideally be as logical and smooth as possible [2, 3]. For example, when you call a service provider, there is usually an automated message trying to categorize the problem at a high level, followed by more specific questions, until eventually, if the issue isn't resolved, you might be connected to a human representative who can delve further into details from that point on. Another example is presenting a topic: one usually starts at a high level, providing some background and motivation, followed by more specifics. A third example is visiting the doctor with an ailment: you first have to fill out forms that capture information at various (higher) levels of granularity, such as your family's medical history followed by your personal medical history, after which a nurse may take your vitals and ask questions pertaining to your current situation. A doctor may perform further tests to pinpoint the problem. In all these cases, information or explanations provided at multiple levels enable others to obtain insight that may otherwise be opaque. Recent work [4] has stressed the importance of having such multilevel explanations for successfully meeting the requirements of Europe's General Data Protection Regulation (GDPR) [5]. The authors argue that simply having local or global explanations may not be sufficient for providing satisfactory explanations in many cases. In fact, even in the widely subscribed FICO explainability challenge [6], participants were expected to provide not just local explanations but also insights at the intermediate class level.
Given the omnipresence of multilevel explanations across various real-world settings, we propose in this paper a novel model agnostic multilevel explanation (MAME) method that can take a local explainability technique such as LIME [7] along with a dataset and generate multiple explanations for each of the examples corresponding to different degrees of cohesion (i.e., parameter tying) between (explanations of) the examples, where each such degree determines a level in our multilevel explanation tree and is explicitly controllable. At the extremes, the leaves correspond to independent local explanations, as would be the case using standard local explainability techniques (viz. LIME), while the root of the tree corresponds to practically a single explanation given the high degree of cohesion between all the explanations at this level. An illustration is seen in Figure 1, where we show multilevel explanations generated by MAME for a real industrial pump failure dataset consisting of 2500 wells. We show three levels: the bottom-level (four) leaves, which correspond to example local explanations (amongst many); the top level, which corresponds to one global explanation; and an intermediate level, which corresponds to explanations for two groups highlighted by MAME. The dotted lines indicate that the (explanation) nodes are descendants of the node above, but not direct children. Based on expert feedback, these intermediate explanations, although they cover the same type of pump, correspond to different manufacturers, resulting in some noticeable differences in behavior. This is discussed in detail in Section 4.1. Also note that each level provides distinct enough information not subsumed by just local or global explanations, thus motivating the need for such multilevel explanations. Such explanations can thus be very insightful in identifying key characteristics that bind together different examples at various levels of granularity.
Moreover, they can also provide exemplar-based explanations by looking at the groupings at specific levels (viz. different pumps by the same manufacturer).
Our method can also take into account side information such as similarity in explanations based on class labels or user specified groupings based on domain knowledge for a subset of examples. Moreover, one can also use nonlinear additive models going beyond LIME to generate local explanations. Our method thus provides a lot of flexibility in building multilevel explanations that can be customized apropos a specific application.
We prove that our method actually forms a tree, in that examples merged at a particular level of the tree remain together at higher levels. The proposed fast approximate algorithm for obtaining multilevel explanations is proved to converge to the exact solution. We also validate the effectiveness of the proposed technique through two human studies – one with experts and the other with non-expert users – on real-world datasets, and show that we produce high-fidelity sparse explanations on several other public datasets.
2 Related Work
We now look at some important lines of work in explainable AI. The most traditional direction is to directly build interpretable models such as rule lists or decision sets [8, 9] on the original data itself, so that no post-hoc interpretability methods are required to uncover the logic behind the proposed actions. These methods, however, may not readily give the best performance in many applications compared with a complex black-box model. There are works which try to provide local [7, 10, 11, 12] as well as global explanations that are both feature and exemplar based [13]. Exemplar-based explanations [14, 15] essentially identify a few examples that are representative of a larger dataset. The work above, however, uses distinct models to provide the local and global explanations, where again consistency between the models could potentially be an issue. Pedreschi et al. [16] propose a method that clusters local rule-based explanations to learn a local-to-global tree structure. This, however, suffers from the fact that the local explanations, which are typically generated independently, may not be consistent for nearby examples, making the process of understanding and eventually trusting the underlying model challenging. TreeExplainer [17] is another local explanation method, specifically for trees, based on game-theoretic concepts. The authors present five new methods that combine many local explanations to provide global insight while retaining local faithfulness to the model. This work has limitations similar to those of [16]. Tsang et al. [18] propose a hierarchical method to study the change in the behavior of interactions for a local explanation from an instance level to across the whole dataset. One major difference is that they do not fuse explanations as we do when going higher up the tree. Moreover, their notion of hierarchy is based on learning higher-order interactions between the prediction and input features, which is different from ours. Our approach has relations to convex clustering [19, 20] and its generalizations to multitask learning [21, 22]. However, our goal is completely different (multilevel post-hoc explainability), and our methodology of computing and using local models that mimic black-box predictions is also different.
3 Method
Let $\mathcal{X} \times \mathcal{Y}$ denote the input–output space and $f: \mathcal{X} \rightarrow \mathcal{Y}$ a classification or regression function corresponding to a black-box classifier. For any positive integer $d$, let $\psi(\cdot)$ denote a function that acts coordinate-wise on any feature vector $x \in \mathbb{R}^d$. Thus, if $\psi$ is an identity map then we recover a linear model as used in LIME. However, $\psi$ could be nonlinear and we could apply different nonlinearities to different coordinates of $x$. If $\theta$ is a parameter vector of dimension $d$, then $\theta^{\top}\psi(x)$ can be thought of as a generalized additive model which could be learned based on the predictions of $f$ for examples near $x$, thus providing a local explanation for $f$ at $x$. Let $K(x_i, x_j)$ be the similarity between $x_i$ and $x_j$. This can be estimated for a distance function $D(\cdot,\cdot)$ as $K(x_i, x_j) = \exp\left(-D(x_i, x_j)^2/\sigma^2\right)$. Let $\{x_1, \dots, x_N\}$ denote a dataset of size $N$, where the output may or may not be known for each example. Let $N_i$ be the neighborhood of an example $x_i$, i.e., examples that are highly similar to $x_i$, formally defined as $N_i = \{x \mid K(x_i, x) \ge \gamma\}$ for a $\gamma$ close to 1. In practice, $N_i$ of size $m$ can be generated by randomly perturbing $x_i$ $m$ times, as done in previous works [7]. Given this we can define the following optimization problem:

$$\min_{\Theta} \sum_{i=1}^{N} \sum_{x \in N_i} K(x_i, x)\left(f(x) - \theta_i^{\top}\psi(x)\right)^2 + \mu \sum_{i=1}^{N} \|\theta_i\|_1 + \lambda \sum_{i < j} w_{ij} \|\theta_i - \theta_j\|_2 \qquad (1)$$

where $\lambda, \mu \ge 0$ are regularization parameters, $w_{ij} \ge 0$ are custom weights, and $\Theta$ is the set of $\theta_i$ for a given $\lambda$.
The first term in (1) tries to make the local model for each example as faithful as possible to the black-box model in the neighborhood of that example. The second term tries to keep each explanation sparse. The third term tries to group together explanations. This, in conjunction with the first term, has the effect of making the explanations of similar examples similar. Here we have the opportunity to inject domain knowledge by creating a prior knowledge graph with adjacency matrix $W = [w_{ij}]$. The edge weights can be set to high values for pairs of examples that we consider to have similar explanations, while setting zero weights for other pairs if we believe their explanations will be different.
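As an illustration, the three-term objective in (1) can be evaluated with a short numpy sketch; the function name `mame_objective` and the data layout below are our own assumptions, not the paper's code:

```python
import numpy as np

def mame_objective(theta, neighborhoods, bb_preds, kernel_wts, A, mu, lam):
    """Evaluate the three-term MAME-style objective for fixed local
    explanation coefficients theta (N x d).

    neighborhoods[i]: (m x d) perturbations of example i (after psi is applied)
    bb_preds[i]:      (m,) black-box predictions on those perturbations
    kernel_wts[i]:    (m,) similarity weights K(x_i, z)
    A:                (N x N) prior-knowledge adjacency with weights w_ij
    """
    N = theta.shape[0]
    fidelity = sum(                              # term 1: local faithfulness
        np.sum(kernel_wts[i] * (bb_preds[i] - neighborhoods[i] @ theta[i]) ** 2)
        for i in range(N)
    )
    sparsity = mu * np.abs(theta).sum()          # term 2: l1 sparsity
    fusion = lam * sum(                          # term 3: group-fused penalty
        A[i, j] * np.linalg.norm(theta[i] - theta[j])
        for i in range(N) for j in range(i + 1, N) if A[i, j] > 0
    )
    return fidelity + sparsity + fusion
```

Note that the fusion term vanishes whenever the tied coefficient vectors coincide, which is what drives the merging behavior described next.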
We solve the above objective for different values of $\lambda$, wherein $\lambda = 0$ corresponds to the leaves of the tree, with each leaf representing a training example and its local explanation. At $\lambda = 0$, (1) decouples into $N$ independent optimizations, corresponding to the $N$ LIME explanations. $\lambda$ can be adaptively increased from $0$, resulting in progressive grouping of explanations (and hence the corresponding examples) forming higher levels of the tree. The grouping happens because $\theta_i$ and $\theta_j$ with a nonzero $w_{ij}$ are encouraged to get closer as $\lambda$ increases in (1). The intermediate levels hence correspond to disjoint clusters of examples with their representative explanations. The root of the tree, obtained at a high $\lambda$ value, represents the global explanation for the entire dataset. Our formulation differs from Two Step (one of our baselines in Section 4), which performs convex clustering on LIME-based local explanations by minimizing $\sum_{i=1}^{N} \|\theta_i - \hat{\theta}_i\|_2^2 + \lambda \sum_{i<j} w_{ij}\|\theta_i - \theta_j\|_2$, where the $\hat{\theta}_i$ are LIME-based local explanations for the $N$ instances. Two Step also results in a multilevel tree, although it does not explicitly ensure fidelity to the black-box model predictions.
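A minimal sketch of how the tree levels can be read off a sequence of solutions as the fusion penalty increases (helper names are ours; in practice merges arise from the fused penalty itself, and here they are simply detected numerically):

```python
import numpy as np

def cluster_ids(theta, tol=1e-6):
    """Group examples whose explanation vectors have (numerically) merged."""
    ids, reps = [], []
    for row in theta:
        for k, r in enumerate(reps):
            if np.linalg.norm(row - r) <= tol:
                ids.append(k)
                break
        else:
            reps.append(row)
            ids.append(len(reps) - 1)
    return ids

def build_levels(theta_path, tol=1e-6):
    """theta_path: list of (N x d) coefficient matrices, one per lambda,
    from lambda = 0 (leaves) to a large lambda (root).
    Returns one cluster assignment per level of the tree."""
    return [cluster_ids(theta, tol) for theta in theta_path]
```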
3.1 Optimization Details
We solve the optimization in (1) using ADMM [23] by posing (1) as
$$\min_{\Theta, Z, V} \sum_{i=1}^{N}\sum_{x\in N_i} K(x_i,x)\left(f(x)-\theta_i^{\top}\psi(x)\right)^2 + \mu\sum_{i=1}^{N}\|z_i\|_1 + \lambda\sum_{l\in\mathcal{E}} w_l\|v_l\|_2 \quad \text{s.t.}\quad Z=\Theta,\; V=\Theta E \qquad (2)$$
The augmented Lagrangian with scaled dual variables is
$$\mathcal{L}_{\rho}(\Theta, Z, V, U_Z, U_V) = \sum_{i=1}^{N}\sum_{x\in N_i} K(x_i,x)\left(f(x)-\theta_i^{\top}\psi(x)\right)^2 + \mu\sum_{i=1}^{N}\|z_i\|_1 + \lambda\sum_{l\in\mathcal{E}} w_l\|v_l\|_2 + \frac{\rho}{2}\|\Theta - Z + U_Z\|_F^2 + \frac{\rho}{2}\|\Theta E - V + U_V\|_F^2 - \frac{\rho}{2}\|U_Z\|_F^2 - \frac{\rho}{2}\|U_V\|_F^2 \qquad (3)$$
Here, $Z$ and $V$ are the auxiliary variables, and $\mathcal{E}$ is the list of edges in the prior knowledge graph with nonzero weights. The columns of $Z$ and $V$ are denoted by $z_i$ and $v_l$ respectively. $z_i$ corresponds to the same column in $\Theta$ (i.e., $\theta_i$), and $E$ acts on $\Theta$ to encode differences in its columns. For example, the column of $E$ that encodes the edge $l = (i, j)$ will contain $1$ at row $i$ and $-1$ at row $j$. $U_Z$ and $U_V$ are the scaled dual variables. This reformulation is inspired by [20].
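The edge-encoding operator described above (one column per edge, with $+1$ and $-1$ entries) can be constructed as follows; `edge_incidence` is a hypothetical helper name:

```python
import numpy as np

def edge_incidence(n_examples, edges):
    """Build the matrix with one column per prior-knowledge edge (i, j):
    +1 at row i and -1 at row j. For Theta with explanations as columns
    (d x N), Theta @ E stacks the differences theta_i - theta_j column
    by column."""
    E = np.zeros((n_examples, len(edges)))
    for col, (i, j) in enumerate(edges):
        E[i, col] = 1.0
        E[j, col] = -1.0
    return E
```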
The ADMM iterations for a given value of $\lambda$ are:
$$\Theta^{k+1} = \arg\min_{\Theta} \sum_{i=1}^{N}\sum_{x\in N_i} K(x_i,x)\left(f(x)-\theta_i^{\top}\psi(x)\right)^2 + \frac{\rho}{2}\left\|\Theta - Z^{k} + U_Z^{k}\right\|_F^2 + \frac{\rho}{2}\left\|\Theta E - V^{k} + U_V^{k}\right\|_F^2 \qquad (4)$$

$$z_i^{k+1} = \mathcal{S}_{\mu/\rho}\left(\theta_i^{k+1} + u_{Z,i}^{k}\right) \quad \text{(element-wise soft-thresholding)} \qquad (5)$$

$$v_l^{k+1} = \left(1 - \frac{\lambda w_l}{\rho\,\|\theta_i^{k+1} - \theta_j^{k+1} + u_{V,l}^{k}\|_2}\right)_{+}\left(\theta_i^{k+1} - \theta_j^{k+1} + u_{V,l}^{k}\right) \quad \text{for each edge } l=(i,j) \qquad (6)$$

$$U_Z^{k+1} = U_Z^{k} + \Theta^{k+1} - Z^{k+1} \qquad (7)$$

$$U_V^{k+1} = U_V^{k} + \Theta^{k+1}E - V^{k+1} \qquad (8)$$
Since (4)–(8) should be solved for progressively increasing values of $\lambda$, we adopt the idea of Algorithmic Regularization (AR) [20] and run (4)–(8) only once for each value of $\lambda$, warm-starting the next set of ADMM iterations with the estimate for the previous value. The $\lambda$ values are obtained by initializing to a small $\lambda_0$ and multiplying it by a step size $t > 1$ for the next value. We will denote these approximate solutions as $\tilde{\Theta}^{(k)}$, where $k$ corresponds to the index of the set of $\lambda$ values. The detailed algorithm for obtaining the multilevel tree using MAME is described in Algorithm 1.
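The warm-started, one-sweep-per-$\lambda$ schedule can be sketched generically as follows; here `admm_step` stands in for a single pass of the ADMM updates, and all names and defaults are assumptions of this sketch:

```python
def regularization_path(admm_step, state0, lam0=1e-3, step=1.05, n_levels=50):
    """Algorithmic-Regularization-style path: run a single ADMM sweep per
    lambda value and warm-start the next level from the previous estimate.

    admm_step(state, lam) -> new state after ONE sweep of the updates.
    Returns the list of (lambda, state) pairs along the path.
    """
    state, lam = state0, lam0
    path = []
    for _ in range(n_levels):
        state = admm_step(state, lam)   # one iteration only, not to convergence
        path.append((lam, state))
        lam *= step                     # geometric lambda schedule
    return path
```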
The exact solutions, where (4)–(8) are run until convergence for each $\lambda$ value, will be denoted as $\hat{\Theta}^{(k)}$. We show that the approximate solution converges to the exact one in the following sense, which is proved in the appendix. Note that the Theorem holds true for the approximate solutions without the post-processing step (v) in Algorithm 1.
Theorem 3.1.
As $t \to 1$ and $\lambda_0 \to 0$, where $t$ is the multiplicative step-size update and $\lambda_0$ is the initial regularization level, the sequence of AR-based primal solutions $\{\tilde{\Theta}^{(k)}\}$ and the sequence of exact primal solutions $\{\hat{\Theta}^{(k)}\}$ converge in the following sense.
$$\max_{k}\,\|\tilde{\Theta}^{(k)} - \hat{\Theta}^{(k)}\|_F \to 0 \qquad (9)$$
The approximation quality and timing comparisons between the AR-based and exact methods are in the appendix.
We now show that our method actually forms a tree in that explanations of examples that are close together at lower levels will remain at least equally close at higher levels.
Lemma 3.2 (Nonexpansive map for exact solutions).
If $\lambda_{l} < \lambda_{l+1}$ are regularization parameters for the last term in (1) for consecutive levels in our multilevel explanations, where $l$ is the lowest level, with $\theta_i^{(l)}$ and $\theta_j^{(l)}$ denoting the (globally) optimal coefficient vectors (or explanations) for $x_i$ and $x_j$ respectively corresponding to level $l$, then for each such pair $(i, j)$ we have $\|\theta_i^{(l+1)} - \theta_j^{(l+1)}\|_2 \le \|\theta_i^{(l)} - \theta_j^{(l)}\|_2$.
The proof of this lemma is available in the appendix.
4 Experiments
We now evaluate our method in three different scenarios. The first is a case study involving human experts in the Oil & Gas industry. Insights provided by MAME were semantically meaningful to the experts, both when they did and when they did not provide side information. Second, we conducted a user study, based on a public loan approval dataset, with data scientists who were not experts in finance. We found that our method was significantly better at providing insight to these non-experts compared with Two Step – hierarchical convex clustering [19, 20] in which a median explanation is computed for each cluster – and Submodular Pick LIME (SP-LIME) [7]. Data scientists are an appropriate audience for our method, as a recent study [24] reports that explanations from AI-based systems in most organizations are first consumed by data scientists. Third, we show the quantitative benefit of our method in terms of two measures defined in Section 4.3. The first measure, generalized fidelity, quantifies how well we can predict the class for a test point using the explanation (or feature importances) of the closest training point, averaging this value over all test instances at different levels of aggregation. We find here too that our method is especially good when we have to explain the dataset with few explanations, which is in any case the most desirable setting for human consumption. The second measure, feature importance rank correlation, shows the correlation between the feature importance ranks of the black-box model and those of the explanation method. We observe that our proposed MAME method was superior to Two Step and SP-LIME in identifying the important features.
4.1 Oil & Gas Industry Dataset
We perform a case study with a real-world industrial pump failure dataset (a classification dataset) from the Oil & Gas industry. The dataset consists of sensor readings acquired over a period of four years from 2500 oil wells, each containing a pump to push out oil. These sensor readings consist of measurements such as speed, torque, casing pressure (CPRESS), production (ROCURRRATE) and efficiency (ROCEFF) for the well, along with the type of failure category diagnosed by the engineer. In this dataset, there are three major failure modes: Worn, Reservoir and Pump Fit. Worn implies the pump is worn out because of aging. Reservoir implies the pump has a physical defect. Pump Fit implies there is sand in the pump. From a semantics perspective, a well can contain one of seven different types of pumps, which can be manufactured by fourteen different vendors. We are primarily interested in modeling reservoir failures, as they are the more difficult class of failures to identify. The black-box classifier used was a 7-layer multilayer perceptron (MLP) with the parameter settings recommended by scikit-learn's MLPClassifier. The dataset had 5000 instances, 75% of which were used to train the model and the remaining 25% for testing.
We conducted two types of studies here. One where we obtained explanations without any side information from the experts and the other where we were told certain groups of pumps that should exhibit similar behavior and hence should have similar explanations for their outputs.
Study 1 Expert Evaluation: In this study, we built the MAME tree on the training instances. Guided by expert input, we picked a level which had 4 clusters, given that the dataset had 4 prominent pump manufacturer types. In Figure 1 in the introduction, we show the root, example leaves, and the level with 4 clusters (level 380), where two of these clusters that the expert felt were semantically meaningful are shown. The expert said that the two clusters had semantic meaning in that, although both predominantly contained the same type of Progressive Cavity pump (the main pump type of interest), the pumps were produced by different manufacturers and hence had somewhat different behaviors. Two of the manufacturers were known to have better-producing pumps in general, which corresponds to the explanation on the right, compared to that on the left, which had pumps from all the other, more mediocre, manufacturers; this was consistent with on-field observations. Hence, the result uncovered by MAME, without any additional semantic information, gave the expert more trust in the model that was built.
Study 2 Expert Evaluation with Side Information: In this study, the expert provided us with a grouping of pumps that should have similar explanations. Given the flexibility of MAME, we were able to incorporate this knowledge, via the weights $w_{ij}$ (see Eqn. (1)), into our optimization.
As explained above, based on expert input, we picked the level in the tree which had four clusters. Two of these clusters were small and had a mixed set of pumps. However, the two bigger, homogeneous clusters shown in Figure 2 were of interest to the expert. The expert provided the following insights: non-producing (Well-down category) pumps are more likely to be used in a run-to-failure scheme, where they are run more rigorously (i.e., at higher speed, production and casing pressure) than they would be run regularly, which explains why these factors impact reservoir-type failures here.
Producing pumps, on the other hand, keep producing oil adequately for longer periods while operating at optimal efficiency (slightly lower than maximum efficiency). Also, operating these producing pumps with lower torque (caused by the helical rotor in the pump) can elongate their operational lifetime and reduce the likelihood of imminent failure.
These insights are consistent with our explanations and validate the fact that our method is able to effectively model the provided side information.
4.2 User Study
To further evaluate MAME's ability to accurately capture high-level concepts for a given domain, we conducted another user study with 30 data scientists from three different technology companies. In contrast to the oil and gas case study, these data scientists were not domain experts in the target task of the study (approving loans). We hypothesized that, by showing the explanations for the high-level clusters found in a dataset, non-domain-experts can very quickly learn the critical relationships between predictors and outcome variables and start to make accurate predictions. Furthermore, we hypothesized that MAME produces better high-level explanations than the aforementioned Two Step method and the SP-LIME method [7], which incrementally picks representative as well as diverse explanations based on a submodular objective. To evaluate these hypotheses, we created three conditions, each with a different explanation method, and randomly assigned 10 participants to each condition. Example screenshots of the web-based trials are provided in the appendix.
In the study, we asked the participants to play the role of a loan officer who needs to decide whether to approve or reject a person’s loan request based on that person’s financial features. We used the HELOC dataset [6]
as the basis for this task. The dataset contained 23 predictors, all of which were continuous variables about a person's financial record, such as the number of loan-payment delinquencies, percentage of revolving balance, etc. One predictor, external credit score, was removed since it was essentially a summary of other variables and accounted for much of the variability in the outcome variable. The outcome variable was binary, with 0 indicating default on a loan and 1 indicating full repayment. 75% of the dataset was used for training a random forest model with 100 trees, while the remainder was used to randomly pick instances for our user study. Based on the training set, MAME, SP-LIME and Two Step were each made to produce four representative explanations that would characterize the whole dataset and consequently partition it into four clusters. We chose the number four because we did not want to overwhelm the participants with too many explanations/clusters, and because we saw that the data could be partitioned into four types of applicants with each type having enough representation.
A single trial of the task proceeded as follows. First, the highlevel explanation and the 22 financial features of a loan applicant were shown to the participant. Figure 3
shows the explanation generated using MAME. The top-left graph shows that the classifier groups loan applicants into four clusters (y-axis) based on several key features (x-axis). The color of the squares indicates the average contribution of a feature to the classifier's prediction for a cluster, with orange colors indicating increasing probability of full repayment and blue indicating decreasing probability. The numbers in the colored squares show the average feature value for that cluster. The top-right graph shows the average of the classifier's predicted probability of repayment for loan applicants in each cluster. The bottom graph shows the overall importance of the features used in the top-left graph; the longer the bar, the more important the feature is to the classifier. The same set of visual explanations was used throughout each condition, as it does not change from trial to trial. To maintain the same level of explanation complexity across conditions, in the SP-LIME and Two Step conditions we also used those algorithms to find four high-level clusters from the training data. However, the important features and the repayment probabilities of the clusters are substantially different from the MAME explanation.
After showing the necessary information, the participant was asked two questions for each trial. First, they were asked which of the four clusters displayed in the explanation graph is most similar to the loan applicant; second, they were asked to indicate their guess of the classifier's predicted repayment probability for the applicant on a scale of 0 to 100% in intervals of 5%.
Each participant completed 10 trials, which were randomly sampled from the test set. Their results are summarized in Figure 4. The left graph shows the mean squared error (MSE) between the classifier's probability prediction and the participant's guess of that prediction. MAME outperformed the other two methods, since participants in this condition had a much lower average MSE. The right graph shows the number of trials (out of 10) in which the participant chose the correct most-similar cluster. The ground truth for this question was determined using nearest neighbor on the feature contribution of the given applicant and that of each cluster. As can be seen, MAME again outperformed the other methods substantially. Participants in this condition were able to correctly find the most similar cluster in 7 out of 10 trials, while those in the other conditions found on average fewer than 3 correct clusters. Both sets of results suggest that MAME produced more accurate high-level clusters than the Two Step and SP-LIME methods. In addition, the low error suggests that non-experts can use MAME's high-level explanations to quickly understand how the model works and make reasonable predictions.
Table 1: Feature importance rank correlations with respect to the RF black-box models.

  Dataset     LIME     Two Step   MAME
  Auto MPG    0.4800   0.4667     0.5267
  Retention   0.5158   0.5368     0.5474
  HELOC       0.6623   0.6537     0.6883
  Waveform    0.1905   0.5333     0.2952
  ATIS        0.6976   0.7089     0.7547
4.3 Quantitative Evaluation
Quantitative Metrics: We wish to primarily answer two questions quantitatively: (i) How trustworthy are the learned local explanation models at different levels in the tree? (ii) How faithful are the explanation models to the black box that they explain?
a) Generalized Fidelity: Here, for every example in the test set, we find the closest example based on Euclidean distance (nearest neighbor) in the set on which the explanation models were built, and use that explanation as a proxy for the test example. We then compute the fidelity of the prediction from the linear explanation model with respect to the black-box model's prediction. If this is high, it implies that the local models at that particular level capture the behavior of the black-box model accurately and thus can be trusted. For our method and Two Step, we do this for each level in the tree and compare the levels that have the same number of groups/clusters, as the depths of the trees may vary. We also compute this measure for SP-LIME by varying the number of representative explanations from one to the total number of training examples. The closest training example for each test point in this case is chosen from the list of representatives picked by SP-LIME, and the representative linear explanation models are used to compute the fidelity. We remark that the nearest neighbor association is meaningful since LIME explanations (which also form the basis for Two Step and MAME) are optimized to work well in the neighborhood of each training data point.
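A sketch of this measure, assuming plain linear explanation models and an $R^2$-style agreement statistic (both assumptions of this sketch, not the paper's exact definition):

```python
import numpy as np

def generalized_fidelity(X_train, theta_level, X_test, bb_test_preds):
    """For each test point, borrow the explanation of its Euclidean nearest
    training neighbor at a given tree level, and score how well that linear
    model reproduces the black-box predictions across the test set."""
    proxy_preds = np.empty(len(X_test))
    for t, x in enumerate(X_test):
        nn = np.argmin(np.linalg.norm(X_train - x, axis=1))  # nearest neighbor
        proxy_preds[t] = x @ theta_level[nn]                 # proxy explanation
    ss_res = np.sum((bb_test_preds - proxy_preds) ** 2)
    ss_tot = np.sum((bb_test_preds - bb_test_preds.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```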
b) Feature importance rank correlation: We compute feature importances from the coefficients of the LIME, Two Step and MAME methods. For LIME, the importance score of a feature $j$ is defined as $I_j = \sqrt{\sum_{i=1}^{n} |\theta_{ij}|}$, where $n$ is the number of samples [7]. For Two Step and MAME, we define the importance score similarly by including all the levels, as $I_j = \sqrt{\sum_{l=1}^{L}\sum_{i=1}^{n} |\theta_{ij}^{(l)}|}$, where $L$ is the number of levels. We then rank the features in descending order of importance scores, and compute rank correlations after similarly ranking the black-box model feature importances. Note that this comparison can be performed only for black-box models that can output feature importance scores.
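A sketch of the importance scores and a numpy-only Spearman rank correlation (assuming no ties in the rankings; helper names are ours):

```python
import numpy as np

def lime_importance(coeffs):
    """SP-LIME-style global importance: I_j = sqrt(sum_i |theta_ij|),
    summing absolute coefficients over the rows (samples, and for
    MAME/Two Step, all tree levels stacked into the rows)."""
    return np.sqrt(np.abs(coeffs).sum(axis=0))

def rank_correlation(a, b):
    """Spearman rank correlation via the classic formula (no ties)."""
    ra = np.argsort(np.argsort(-a)).astype(float)  # rank 0 = largest score
    rb = np.argsort(np.argsort(-b)).astype(float)
    n = len(a)
    return 1.0 - 6.0 * np.sum((ra - rb) ** 2) / (n * (n ** 2 - 1))
```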
4.4 Evaluation on Public Datasets
We demonstrate the proposed methods using the quantitative metrics with several publicly available datasets: Auto MPG [25], Retention [26] (we generated the data using the code in https://github.com/IBM/AIX360/blob/master/aix360/data/ted_data/GenerateData.py), Home Equity Line of Credit (HELOC) [6], Waveform [25], and Airline Travel Information System (ATIS) datasets.
The Auto MPG dataset has a continuous outcome variable, whereas the rest have discrete outcomes. The (number of examples, number of features) in the datasets are: Auto MPG (392, 7), Retention (1200, 8), HELOC (1000, 22), Waveform (5000, 21), and ATIS (5871, 128). We used a randomly chosen 75% of the data for training the black-box model and the explanation methods, and the remaining 25% for testing. The black-box models trained are Random Forest (RF) Regressor/Classifier and Multi-Layer Perceptron (MLP) Regressor/Classifier. Only the Auto MPG dataset used a regression black box, while the rest used classification black-box models. The RF models used between 100 and 500 trees, whereas the MLP models have either 3 or 4 hidden layers with 20 to 200 units per layer. The labels for the explanation models are the predictions of the black-box models: for regression black boxes, these are directly the predictions, whereas for classification black boxes, these are the predicted probabilities for a specified class.
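The labeling scheme for classification black boxes can be sketched with scikit-learn; the synthetic dataset, model size, and class choice below are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# The explanation models are fit to the black box's predicted probabilities
# for a chosen class, not to the original dataset labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary outcome

bb = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
target_class = 1                          # class whose probability is explained
expl_labels = bb.predict_proba(X)[:, target_class]
```

The explanation methods then treat `expl_labels` as continuous regression targets, exactly as in the regression case.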
When running the LIME and MAME methods, the neighborhood size $m$ in (1) is fixed per dataset, the neighborhood weights are set using a Gaussian kernel on the distances with an automatically tuned width, and the $\mu$ values in (1) are set to provide explanations with a small number of nonzero coefficients (the exact settings are in the appendix).
For Two Step and MAME, the labels are sorted, and the weights $w_{ij}$ (see Section 3) are set to 1 whenever $x_i$ and $x_j$ are right next to each other in this sorted list, and to 0 otherwise. This enforces the simple prior knowledge that explanations for similar black-box model predictions must be similar. This prior provided good results with our datasets. For both Two Step and MAME, we used post-processed explanations (see Section 3). More details on the datasets, the black-box model parameters and performances, the classes chosen for explanation, and the hyperparameters for the explanation models are available in the appendix.
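This chain-structured prior can be sketched as follows (hypothetical helper name):

```python
import numpy as np

def chain_prior_weights(bb_preds):
    """Sort examples by black-box prediction and connect each example to its
    immediate neighbors in the sorted order with weight 1 (0 elsewhere),
    encoding the prior that similar predictions should have similar
    explanations."""
    n = len(bb_preds)
    order = np.argsort(bb_preds)
    W = np.zeros((n, n))
    for a, b in zip(order[:-1], order[1:]):
        W[a, b] = W[b, a] = 1.0  # symmetric chain edge
    return W
```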
The generalized fidelity measures on the datasets for the MLP black boxes are given in Figures 4(a)–4(e). The MAME method outperforms Two Step and SP-LIME for small numbers of clusters (which is more important from an explainability standpoint) on the classification datasets. For Auto MPG, Two Step and MAME perform similarly and outperform SP-LIME; however, this dataset is the simplest of the lot. For ATIS, SP-LIME took an inordinately long time, so we ran it only for 1 to 1000 representative explanations. Generalized fidelity results for the RF black boxes are provided in the appendix.
The feature importance rank correlations with respect to the RF black boxes are provided in Table 1 for the various datasets. Except for Waveform, MAME's feature importance ranks are closer to those of the black-box model. Since feature importances are not output by MLP models, we do not consider them for this experiment.
5 Conclusion
In this paper, we have provided a meta-explainability approach that can take a local explainability method such as LIME and produce a multilevel explanation tree by jointly learning the explanations with different degrees of cohesion, while at the same time being faithful to the black-box model. We have argued, based on recent works as well as through expert and non-expert user studies, that such explanations can bring additional insight not conveyed readily by just global or local explanations. We have also shown that when one desires few explanations to understand the model and the data, as would typically be the case, our method creates much higher-fidelity explanations compared with other methods. We have also made our algorithm scalable by proposing principled approximations to optimize the objective.
Our current method can build multilevel explanations based on linear or nonlinear local explainability methods that have a clear parametrization. In the future, it would be interesting to extend the method to other non-parametric local explanation methods, such as contrastive/counterfactual methods. A key challenge there would be to somehow ensure that the perturbations for individual instances are to some degree common across instances that would be grouped together. From an HCI perspective, it would be interesting to see if such a tree facilitates back-and-forth communication, where a user may adaptively want more and more granular explanations.
References
 [1] T. Pettinger, “10 Tips for Effective Conversation,” http://www.srichinmoybio.co.uk/blog/communication/10tipsforeffectiveconversation/, 2008.
 [2] K. Patterson, J. Grenny, R. McMillan, and A. Switzler, Crucial Conversations: Tools for Talking When Stakes Are High. McGraw-Hill Education, 2001.
 [3] P. Lipton, “What Good is an Explanation?” Studies in Epistemology, Logic, Methodology, and Philosophy of Science, vol. 302, 2001.
 [4] M. E. Kaminski and G. Malgieri, “Multilayered explanations from algorithmic impact assessments in the GDPR,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020.
 [5] P. N. Yannella and O. Kagan, “Analysis: Article 29 Working Party Guidelines on Automated Decision Making Under GDPR,” 2018, https://www.cyberadviserblog.com/2018/01/analysisarticle29workingpartyguidelinesonautomateddecisionmakingundergdpr/.

 [6] FICO, “Explainable Machine Learning Challenge,” https://community.fico.com/s/explainablemachinelearningchallenge?tabset3158a=2, 2018, accessed: 2018-10-25.
 [7] M. Ribeiro, S. Singh, and C. Guestrin, ““Why Should I Trust You?” Explaining the Predictions of Any Classifier,” in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
 [8] F. Wang and C. Rudin, “Falling rule lists,” in AISTATS, 2015.
 [9] G. Su, D. Wei, K. Varshney, and D. Malioutov, “Interpretable Two-level Boolean Rule Learning for Classification,” https://arxiv.org/abs/1606.05798, 2016.

 [10] G. Montavon, W. Samek, and K.-R. Müller, “Methods for interpreting and understanding deep neural networks,” Digital Signal Processing, 2017.
 [11] S. Lundberg and S.-I. Lee, “Unified framework for interpretable methods,” in Advances of Neural Inf. Proc. Systems, 2017.
 [12] A. Dhurandhar, P.Y. Chen, R. Luss, C.C. Tu, P. Ting, K. Shanmugam, and P. Das, “Explanations based on the missing: Towards contrastive explanations with pertinent negatives,” in Advances in Neural Information Processing Systems 31, 2018.
 [13] G. Plumb, D. Molitor, and A. S. Talwalkar, “Model agnostic supervised local explanations,” in Advances in Neural Information Processing Systems 31, 2018.
 [14] K. Gurumoorthy, A. Dhurandhar, G. Cecchi, and C. Aggarwal, “Efficient Data Representation by Selecting Prototypes with Importance Weights,” IEEE International Conference on Data Mining, 2019.
 [15] B. Kim, R. Khanna, and O. Koyejo, “Examples are not enough, learn to criticize! Criticism for interpretability,” in Advances in Neural Information Processing Systems, 2016.
 [16] D. Pedreschi, F. Giannotti, R. Guidotti, A. Monreale, S. Ruggieri, and F. Turini, “Meaningful explanations of Black Box AI decision systems,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 9780–9784.
 [17] S. Lundberg, G. Erion, H. Chen, A. DeGrave, J. M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, and S.-I. Lee, “From local explanations to global understanding with explainable AI for trees,” Nature Machine Intelligence, vol. 2, 2020, pp. 56–67.
 [18] M. Tsang, Y. Sun, D. Ren, and Y. Liu, “Can I trust you more? Model-agnostic hierarchical explanations,” arXiv preprint arXiv:1812.04801, 2018.
 [19] G. K. Chen, E. C. Chi, J. M. O. Ranola, and K. Lange, “Convex clustering: An attractive alternative to hierarchical clustering,” PLoS Computational Biology, vol. 11, no. 5, 2015.
 [20] M. Weylandt, J. Nagorski, and G. I. Allen, “Dynamic Visualization and Fast Computation for Convex Clustering via Algorithmic Regularization,” Journal of Computational and Graphical Statistics, pp. 1–18, 2019.
 [21] M. Yu, K. N. Ramamurthy, A. Thompson, and A. Lozano, “Simultaneous Parameter Learning and Bi-Clustering for Multi-Response Models,” Frontiers in Big Data, vol. 2, p. 27, 2019.
 [22] M. Yu, A. M. Thompson, K. N. Ramamurthy, E. Yang, and A. C. Lozano, “Multitask Learning using Task Clustering with Applications to Predictive Modeling and GWAS of Plant Varieties,” arXiv preprint arXiv:1710.01788, 2017.
 [23] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine learning, vol. 3, no. 1, pp. 1–122, 2011.
 [24] U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. M. F. Moura, and P. Eckersley, “Explainable Machine Learning in Deployment,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020.
 [25] D. Dheeru and E. Karra Taniskidou, “UCI machine learning repository,” 2017. [Online]. Available: http://archive.ics.uci.edu/ml
 [26] V. Arya, R. K. E. Bellamy, P.Y. Chen, A. Dhurandhar, M. Hind, S. C. Hoffman, S. Houde, Q. V. Liao, R. Luss, A. Mojsilović, S. Mourad, P. Pedemonte, R. Raghavendra, J. Richards, P. Sattigeri, K. Shanmugam, M. Singh, K. R. Varshney, D. Wei, and Y. Zhang, “One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques,” Sept. 2019. [Online]. Available: https://arxiv.org/abs/1909.03012
 [27] M. Hong and Z.-Q. Luo, “On the linear convergence of the alternating direction method of multipliers,” Mathematical Programming, vol. 162, no. 1–2, pp. 165–199, 2017.
 [28] R. T. Rockafellar, Convex Analysis. Princeton University Press, 1970, vol. 28.
Appendix A Additional Details on Public Datasets
Auto MPG:
This dataset is obtained from https://archive.ics.uci.edu/ml/datasets/Auto+MPG. The features correspond to the various attributes of a model of a car, and the outcome is miles per gallon. This is the only regression dataset used.
Retention:
This is a synthetic dataset that is generated using https://github.com/IBM/AIX360/blob/master/aix360/data/ted_data/GenerateData.py. The features correspond to job position, organization, performance, compensation, and tenure of an employee and the outcome is whether the employee will leave the organization (1) or not (0). We choose to explain class 1.
Heloc:
This dataset is obtained from https://community.fico.com/s/explainablemachinelearningchallenge?tabset3158a=2. The HELOC dataset had about 10000 instances, and some instances were basically empty and were excluded. From the rest, we used only the first 1000 instances in the user study (Sections 4.2 and 4.4), in order to speed up explanation generation. The dataset had two labels: 0 indicating default on the loan, and 1 indicating full repayment. We choose to explain class 1. The features correspond to the financial health of people who apply for a loan.
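This filtering can be sketched as follows. The sketch is illustrative only: the label column name RiskPerformance and the special value -9 for missing entries reflect the public FICO release and are assumptions here, not details taken from our pipeline, and the toy table stands in for the real CSV.

```python
import pandas as pd

# Tiny stand-in for the HELOC table; in the public FICO release the label
# column is "RiskPerformance" and -9 marks special/missing values
# (both are assumptions here).
df = pd.DataFrame({
    "ExternalRiskEstimate":  [55, -9, 72, -9],
    "MSinceOldestTradeOpen": [144, -9, 230, -9],
    "RiskPerformance":       ["Bad", "Bad", "Good", "Good"],
})

feature_cols = df.columns.drop("RiskPerformance")
# Drop "basically empty" rows, i.e. rows where every feature is the code -9.
empty = (df[feature_cols] == -9).all(axis=1)
df_clean = df.loc[~empty].head(1000)  # keep at most the first 1000 usable rows
```

On the real data the same two lines (the mask and the `head(1000)`) reproduce the 1000-instance subset described above.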
Waveform:
The dataset was obtained from https://archive.ics.uci.edu/ml/datasets/Waveform+Database+Generator+(Version+1). The outcome variable is 3 classes of waves (0, 1, 2) and the features are noisy combinations of 2 out of 3 base waves. We choose to explain class 0.
Atis:
This dataset contains short texts along with 26 intents (labels) associated with each text, in an air travel information system. The dataset was obtained from https://www.kaggle.com/siddhadev/mscntkatis and processed using the code provided in https://www.kaggle.com/siddhadev/atisdatasetfrommscntk. We used the slot filling IOB labels as input features after binarization: any value greater than 0 is coded as 1. We also removed the last feature (O). The dataset is used for intent classification over these 26 intents, and we choose to explain the class flight. For the purposes of evaluation, we merged the original train and test partitions, and then created train and test partitions using random sampling as described in the main paper.
Appendix B (Hyper)Parameter Settings in Experiments
Evaluation on Public Datasets and the User Study:
The neighborhood size was set to when running LIME to generate the leaf explanations. We found this size to be reasonable, since larger sizes make the code run slower. We also found that the default kernel width setting in LIME () worked well, so we used it. The number of nonzero coefficients in an explanation (a.k.a. the explanation complexity) was set to 5, since we found this to be a good number for users to digest across multiple explanations. The public datasets were evaluated with only one random split. The training partitions were used to train the black-box model and the explanation methods. Results were generated using the test partition.
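For intuition, a LIME-style leaf explanation can be sketched from scratch as below. This is a simplified stand-in for the actual LIME library, not its implementation: it uses Gaussian perturbations, an exponential proximity kernel, a weighted least-squares fit, and hard top-k truncation in place of LIME's sampling scheme and sparse regression; all parameter values are illustrative.

```python
import numpy as np

def local_explanation(predict, x, n_samples=1000, kernel_width=0.75, k=5, rng=None):
    """LIME-style local linear explanation (simplified sketch): perturb x,
    weight samples by proximity to x, fit a weighted linear model, and keep
    only the k largest-magnitude coefficients."""
    rng = np.random.default_rng(rng)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))   # neighborhood samples
    y = predict(Z)                                            # black-box outputs
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)                       # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])               # intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    beta = coef[:-1]
    keep = np.argsort(-np.abs(beta))[:k]                      # top-k features
    sparse = np.zeros_like(beta)
    sparse[keep] = beta[keep]
    return sparse
```

Each leaf of the MAME tree starts from a sparse local coefficient vector of this kind; the parameter tying across examples is then imposed on top of such vectors.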
When running MAME and Two Step, we set the multiplicative stepsize to , the initial regularization level for , . We used conjugate gradient iterations when solving for in (4).
Evaluation for the Expert Study:
The neighborhood size was set to here. Side information (see Section 4.1, Study 2 in the main paper) from the expert, connecting pumps with the same manufacturer type, consisted of 2155 edges. The explanation complexity was set to 3 (Figure 2). Figure 1 does not encode any side information, and there the complexity was set to 4. There were a total of 94 levels in the tree in Figure 1. The remaining parameters were set as mentioned above for the public datasets.
Appendix C Computing Infrastructure
We ran the code on an Ubuntu machine with 64 GB RAM and 32 cores. For the ATIS dataset alone, we used a machine with 250 GB RAM to avoid memory issues. Both MAME and Two Step were implemented without explicit parallelization; parallel operations only happened implicitly inside linear algebra libraries. The code was written in Julia 1.3 and also utilized some Python 3.7 libraries (e.g., for RF and MLP model building).
Appendix D Note on Two Step method Implementation
Appendix E Additional Evaluation on Public Datasets
The generalized fidelity measures for the datasets with RF black boxes are given in Figures 5(a)–5(e). Compared to the MLP black boxes, the first observation we make is that the generalized fidelity is much better for RF black boxes. It seems that MLP models have decision surfaces that are much harder to approximate with linear fits than RF models. With RF black boxes, on the ATIS dataset, which is fully categorical, MAME still performs better than Two Step and SP-LIME, at least for small numbers of clusters. Similar behavior holds for the Retention data, which has many categorical features. However, the behavior on the HELOC and Waveform datasets, which have only numerical features, is mixed. On Auto MPG the results are mixed as well, even though it contains categorical features, but this is a much simpler dataset. A plausible explanation is that when the decision surface of the black-box model is simpler (such as in RF models) and the features are fully numerical, just choosing diverse explanations using a method like SP-LIME is sufficient. However, when the features are categorical or when the decision surfaces are complex (such as in MLP models), a more sophisticated explanation method gives better fidelity. Note that for ATIS, SP-LIME took an inordinately long time, so we ran it only for 1 to 1000 representative explanations.
Appendix F Approximation Quality and Timing Comparisons between the Exact and AR-based Methods
To demonstrate the approximation quality of the AR-based solutions relative to the exact ones, we plot the approximation quality and timing of the two methods. Note that the exact method runs the ADMM iterations in (4)–(8) several times for each regularization value until convergence, whereas the AR-based method runs the iterations only once for each value.
We use the Auto MPG dataset (both train and test partitions together) to obtain an RF regressor black-box model and train MAME trees. We set and choose from . We therefore run the exact and AR-based methods 7 times each, once for each value of . Both the exact and AR-based methods are warm-started using the solution for the previous value of . The approximation quality is measured by a normalized version of the measure given in Theorem 3.1. The normalization factor that divides the measure is where . Note that the superscript indicates that the explanations belong to the leaf nodes.
From Figure 7, we see that the exact and approximate solutions get closer as , as predicted by the theory. We also see from Figure 8 that the AR-based solution is around 10 times faster to compute than the exact solution for all values. Both results demonstrate the utility of the AR-based approximate method.
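The contrast between the two schemes can be reproduced on a toy problem. The sketch below swaps in a lasso objective with ISTA updates in place of the MAME objective and its ADMM updates, following the algorithmic-regularization idea of [20]; the matrix sizes, noise level, and multiplicative stepsize of 1.05 are illustrative assumptions. The exact path solves to convergence at each regularization level, while the AR path performs a single warm-started iteration per level.

```python
import numpy as np

def soft(x, t):
    # Elementwise soft-thresholding (proximal operator of the l1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_path(A, y, lams, iters_per_level):
    """Warm-started ISTA over an increasing geometric grid of lambdas.
    iters_per_level=1 mimics the AR scheme (one pass per level), while a
    large value mimics the exact scheme (solve to convergence per level)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(A.shape[1])
    path = []
    for lam in lams:
        for _ in range(iters_per_level):
            b = soft(b - A.T @ (A @ b - y) / L, lam / L)
        path.append(b.copy())
    return np.array(path)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
y = A @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)
lams = 0.05 * 1.05 ** np.arange(50)        # multiplicative stepsize of 1.05
exact = ista_path(A, y, lams, iters_per_level=500)
ar = ista_path(A, y, lams, iters_per_level=1)
gap = np.abs(exact[-1] - ar[-1]).max()     # AR tracks the exact path closely
```

As the multiplicative stepsize approaches 1, the two paths coincide while the AR path does far less work per level, mirroring the behavior reported in Figures 7 and 8.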
Appendix G Proof Sketch of Lemma 3.1
Proof Sketch.
If denotes the objective in Equation 1 being optimized at level , then , where . We know that by design, and so we have an added penalty.
If, at the optimum of level , for some and , then that would imply that the other two terms in the objective reduce enough to compensate for the added penalty. However, this would imply that and were not optimal at level , as the current solution would be better given the lesser emphasis on the last term (i.e., smaller ) at that level. This contradicts our assumption. ∎
Appendix H Linear Convergence of ADMM for MAME
Strong convexity of the objective function ensures linear convergence of the ADMM method. However, in the absence of strong convexity, certain additional criteria need to be satisfied for linear convergence. Hong and Luo [27] established the global linear convergence of ADMM for minimizing the sum of any number of separable convex functions expressed in the form below.
(9)  
The major assumptions imposed by the authors to prove linear convergence for the function are the following:
1. Each can further be decomposed as = + , where and are both convex and continuous over their domains.
2. Each is strictly convex and continuously differentiable with a uniform Lipschitz continuous gradient.
3. Each satisfies one of the following conditions:
   - The epigraph of is a polyhedral set.
   - , where is a partition of with being the partition index.
   - is the sum of functions of the two forms above.
4. For any fixed and finite y and , is finite for all .
5. Each submatrix has full column rank.
6. The feasible sets are compact polyhedral sets.
In Equation 9, each is a convex function subject to linear equality constraints. Our original MAME problem in Equation 2 can be written in the form below, as in Equation 10, which satisfies all the necessary criteria 1–6 specified above. For example, it is known that the epigraph of the norm is a polyhedral set. The identity matrix and the difference matrix satisfy the full column rank condition as well.
Hong and Luo [27] additionally mention that each may consist only of the convex nonsmooth function , with the strongly convex part absent. This ensures that the two sparsity-inducing norm terms in our formulation, on and , do not violate any of the stated conditions. For the details of the convergence argument, we refer the reader to the proof provided in Hong and Luo [27]. This completes the proof of linear convergence of ADMM for MAME.
(10)  
Appendix I Proof of Theorem 3.1 (from main paper)
In this section we prove Theorem 3.1 on the expected difference between the exact and AR solutions. We begin with three technical lemmas based on the lemmas given in [20], which may be of independent interest: Lemma I.1 provides a convergence rate for the optimization step embedded within an iteration; Lemma I.2 establishes a form of Lipschitz continuity for convex clustering regularization paths; and Lemma I.3 provides a global bound on the approximation error induced at any iteration.
Lemma I.1 (Q-Linear Error Decrease).
At each iteration , the approximation error decreases by a factor not depending on or . That is,
for some strictly less than 1 where is the computed regularization parameter on the AR path ().
Proof.
In the notation of [27], the constraint matrix for the MAME problem from Equation 10 is given by , for appropriately sized identity matrices, which is clearly row-independent (one of the assumptions mentioned in Section H), yielding linear convergence of the primal and dual variables at a rate which may depend on . This follows from the proof sketch for the linear convergence of ADMM for MAME provided in Section H, where we show how our formulation satisfies all the assumptions stated in [27]. Taking , we observe that the MAME iterates are uniformly Q-linearly convergent at a rate . ∎
Lemma I.2 (Lipschitz Continuity of Solution Path).
is Lipschitz with respect to . That is,
for some .
Proof.
We first show that is Lipschitz. The vectorized version of MAME problem can be written as
where , , and are convex functions, and is a fixed matrix. The KKT conditions give
where and are the subdifferentials of and , respectively. Since both and are convex, they are differentiable almost everywhere [28, Theorem 25.5], so the following holds for almost all :
Differentiating with respect to , we obtain
Note that depends on , so the chain rule must be used here. From here, we note
For the MAME problem, we recall that and are convex norms and hence have bounded gradients; hence and are bounded so the gradient of the regularization path is bounded and exists almost everywhere. This implies that the regularization path is piecewise Lipschitz. Since the solution path is constant for and is continuous, the solution path is globally Lipschitz with a Lipschitz modulus equal to the maximum of the piecewise Lipschitz moduli. ∎
Lemma I.3 (Global Error Bound).
The following error bound holds for all :
Proof.
At , we note that
by Lemma I.1. We now use the triangle inequality to split the right hand side:
From above, we have . Using Lemma I.2, RHS2 can be bounded by
Putting these together, we get
Repeating this argument for , we see
We use this as a base case for our inductive proof and prove the general case:
With these results, we are now ready to prove Theorem 3.1. ∎
Proof.
We begin by fixing temporarily and bounding
The infimum over all is less than the distance at any particular , so it suffices to choose a value of which gives convergence to 0. Let be the value of which gives the closest value of to along the AR path; and let . That is,
Then
Using Lemma I.2, we can bound RHS2 as
RHS2  
Using Lemma I.3, we can bound RHS1 as
RHS1  
where is large but finite. Since and , we can replace the dependent quantities to get
Putting these together, we have
Similarly by fixing we know that
where
Using similar arguments as above based on Lemma I.3 we can show that
We can use these two bounds and plug them in here
One can observe that as , tend to (1, 0), both expectation terms reduce to 0 individually. This completes the proof. ∎
Appendix J User Study Material
Figure 9 is a screenshot of the user study in the LIME condition. The visual explanations show the feature contributions, the likelihood of repayment, and the feature importances calculated using the SP-LIME method. Below the explanation is a table showing the 22 financial records for a loan applicant. It is divided into two segments. The top segment shows the features used in the visual explanation, which are also the most important features as deemed by the explanation algorithm. The bottom segment shows the rest of the features. At the bottom of the page are two questions that the participant had to answer in every trial. The participant could access written instructions at any time during the experiment by clicking the "SEE INSTRUCTIONS" button at the top of the web page.
Appendix K Computational Complexity Analysis
We provide the computational complexity of running one set of the iterations given in (4)–(8). This is the same as obtaining the solution for one value if we use the AR-based method. Let us consider the five steps individually.
For (4), we use conjugate gradients to obtain the solution. If we assume the number of edges and the number of CG iterations to be , the dominant complexity of this step is . For (5), the update involves a soft-thresholding step, which incurs a complexity of . The update step (6) similarly incurs a complexity of , assuming . The updates for and in (7) and (8) respectively involve complexities of each. If we assume to be very small (we use in our experiments), the dominant complexity of one ADMM iteration is hence .
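To make the first step concrete, a minimal conjugate-gradient routine of the kind used for (4) is sketched below. This is a generic CG solver for a symmetric positive definite system, not our actual Julia implementation; each iteration is dominated by a single matrix-vector product, which is what drives the per-iteration cost quoted above.

```python
import numpy as np

def conjugate_gradient(matvec, rhs, iters=20, tol=1e-10):
    """Plain CG for a symmetric positive definite system matvec(x) = rhs.
    Each iteration costs one matrix-vector product plus O(n) vector work."""
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)          # initial residual
    p = r.copy()                 # initial search direction
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)    # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Passing a closure as `matvec` avoids forming the system matrix explicitly, which keeps the cost of the update linear in the number of nonzeros rather than quadratic in the dimension.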