
Human-in-the-Loop Interpretability Prior

We often desire our models to be interpretable as well as accurate. Prior work on optimizing models for interpretability has relied on easy-to-quantify proxies for interpretability, such as sparsity or the number of operations required. In this work, we optimize for interpretability by directly including humans in the optimization loop. We develop an algorithm that minimizes the number of user studies to find models that are both predictive and interpretable and demonstrate our approach on several data sets. Our human subjects results show trends towards different proxy notions of interpretability on different datasets, which suggests that different proxies are preferred on different tasks.



1 Introduction

Understanding machine learning models can help people discover confounders in their training data, dangerous associations, or new scientific insights learned by their models (Caruana et al., 2015; Freitas, 2014; Lipton, 2016). This means that we can encourage the models we learn to be safer and more useful to us by effectively incorporating interpretability into our training objectives. But interpretability depends on both the subjective experience of human users and the downstream application, which makes it difficult to incorporate into computational learning methods.

Human-interpretability can be achieved by learning models that are inherently easier to explain or by developing more sophisticated explanation methods; we focus on the first problem. This can be solved with one of two broad approaches. The first defines certain classes of models as inherently interpretable. Well-known examples include decision trees (Freitas, 2014), generalized additive models (Caruana et al., 2015), and decision sets (Lakkaraju et al., 2016). The second approach identifies some proxy that (presumably) makes a model interpretable and then optimizes that proxy. Examples of this second approach include optimizing linear models to be sparse (Tibshirani, 1996), optimizing functions to be monotone (Altendorf et al., 2005), or optimizing neural networks to be easily explained by decision trees (Wu et al., 2018).

In many cases, the optimization of a property can be viewed as placing a prior over models and solving for a MAP solution of the following form:

    M* = argmax_{M ∈ ℳ} p(X | M) p(M)    (1)

where ℳ is a family of models, X is the data, p(X | M) is the likelihood, and p(M) is a prior on the model that encourages it to share some aspect of our inductive biases. Two examples of such biases are the interpretation of the L1 penalty on logistic regression as a Laplace prior on the weights and the class of norms described in Bach (2010) that induce various kinds of structured sparsity. Generally, if we have a functional form for p(M), we can apply a variety of optimization techniques to find the MAP solution. Placing an interpretability bias on a class of models through p(M) allows us to search for interpretable models in more expressive function classes.
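As a concrete illustration of selecting a MAP model from a discrete candidate set, the sketch below combines a log-likelihood term with an interpretability log-prior; all scores are invented for illustration and do not come from the experiments in this paper.

```python
import numpy as np

# Hypothetical scores for four candidate models: log-likelihoods measured on
# data, and log-priors encoding an interpretability bias (values invented).
log_likelihood = np.array([-10.0, -10.5, -11.0, -25.0])
log_prior = np.array([-5.0, -1.5, -0.5, -0.1])

def map_model(log_likelihood, log_prior):
    """Index of the MAP model: argmax of log p(X|M) + log p(M)."""
    return int(np.argmax(log_likelihood + log_prior))

# Model 2 trades a little likelihood for a much better prior; model 3 is
# too inaccurate for its interpretability to save it.
best = map_model(log_likelihood, log_prior)
```

Here the best model is neither the most accurate nor the most interpretable candidate, but the one with the best trade-off under the prior.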

Optimizing for interpretability in this way relies heavily on the assumption that we can quantify the subjective notion of human interpretability with some functional form p(M). Specifying this functional form can be quite challenging. In this work, we directly estimate the interpretability prior p(M) from human-subject feedback. Optimizing this more direct measure of interpretability can yield models better suited to the task at hand than exactly optimizing an imperfect proxy.

Since measuring p(M) for each model has a high cost (it requires a user study), we develop a cost-effective approach that first identifies models with high likelihood p(X | M), then uses model-based optimization to identify an approximate MAP solution from that set with few queries to p(M). We find that different proxies for interpretability prefer different models, and that our approach can optimize all of these proxies. Our human subjects results suggest that we can optimize for human-interpretability preferences.

2 Related Work

Learning interpretable models with proxies

Many approaches to learning interpretable models optimize proxies that can be computed directly from the model. Examples include decision tree depth (Freitas, 2014), number of integer regression coefficients (Ustun and Rudin, 2017), amount of overlap between decision rules (Lakkaraju et al., 2016), and different kinds of sparsity penalties in neural networks (Hinton, 2010; Ross et al., 2017). In some cases, optimizing a proxy can be viewed as MAP estimation under an interpretability-encouraging prior (Tibshirani, 1996; Bach, 2010). These proxy-based approaches assume that it is possible to formulate a notion of interpretability that is a computational property of the model, and that we know a priori what that property is. Lavrac (1999) shows a case where doctors prefer longer decision trees over shorter ones, which suggests that these proxies do not fully capture what it means for a model to be interpretable in all contexts. Through our approach, we place an interpretability-encouraging prior on arbitrary classes of models that depends directly on human preferences.

Learning from human feedback

Since interpretability is difficult to quantify mathematically, Doshi-Velez and Kim (2017) argue that evaluating it well requires a user study. Many works in interpretable machine learning include user studies: some advance the science of interpretability by testing the effect of explanation factors on human performance on interpretability-related tasks (Poursabzi-Sangdeh et al., 2018; Narayanan et al., 2018), while others compare the interpretability of two classes of models through A/B tests (Lakkaraju et al., 2016; Kim et al., 2014). More broadly, there exist many studies of situations in which human preferences are hard to articulate as a computational property and must be learned directly from human data. Examples include kernel learning (Tamuz et al., 2011; Wilson et al., 2015), preference-based reinforcement learning (Wirth et al., 2017; Christiano et al., 2017), and human-based genetic algorithms (Kosorukoff, 2001). Our work resembles human computation algorithms (Little et al., 2010) applied to user studies for interpretability, as we use the user studies to optimize for interpretability instead of just comparing a model to a baseline.

Model-based optimization

Many techniques have been developed to efficiently characterize functions in few evaluations when each evaluation is expensive. The field of Bayesian experimental design (Chaloner and Verdinelli, 1995) optimizes which experiments to perform according to a notion of which information matters. In some cases, the intent is to characterize the entire function space completely (Zhu et al., 2003; Ma et al., 2012), and in other cases, the intent is to find an optimum (Srinivas et al., 2009; Snoek et al., 2012). We are interested in this second case. Snoek et al. (2012) optimize the hyperparameters of a neural network in a problem setup similar to ours. For them, evaluating the likelihood is expensive because it requires training a network, while in our case, evaluating the prior is expensive because it requires a user study. We use a similar set of techniques since, in both cases, evaluating the posterior is expensive.

3 Framework and Modeling Considerations

Figure 1: High-level overview of the pipeline

Our high-level goal is to find a model that maximizes p(X | M) p(M), where p(M) is a measure of human interpretability. We assume that computation is relatively inexpensive, and thus computing and optimizing with respect to the likelihood p(X | M) is significantly less expensive than evaluating the prior p(M), which requires a user study. Our strategy will be to first identify a large, diverse collection of models with large likelihood p(X | M), that is, models that explain the data well. This task can be completed without user studies. Next, we will search amongst these models to identify those that also have large prior p(M). Specifically, to limit the number of user studies required, we will use a model-based optimization approach (Srinivas et al., 2009) to identify which models to evaluate. Figure 1 depicts the steps in the pipeline. Below, we outline how we define the likelihood p(X | M) and the prior p(M); in Section 4 we define our process for approximate MAP inference.

3.1 Likelihood

In many domains, experts desire a model that achieves some performance threshold (and amongst those, may prefer one that is most interpretable). To model this notion of a performance threshold, we use the soft insensitive loss function (SILF)-based likelihood (Chu et al., 2004; Masood and Doshi-Velez, 2018). The likelihood takes the form

    p(X | M) ∝ exp( -C · SILF_{ε,β}(1 - accuracy(X, M)) )

where accuracy(X, M) is the accuracy of model M on data X, and SILF_{ε,β} is given by

    SILF_{ε,β}(y) = 0                          if 0 ≤ y ≤ (1 - β)ε
                    (y - (1 - β)ε)² / (4βε)    if (1 - β)ε < y ≤ (1 + β)ε
                    y - ε                      if y > (1 + β)ε

which effectively defines a model as having high likelihood if its accuracy is greater than 1 - (1 - β)ε.

In practice, we choose this threshold to be equal to an accuracy threshold placed on the validation performance of our classification tasks, and only consider models that perform above it. (Note that with this formulation, accuracy can be replaced with any domain-specific notion of a high-quality model without modifying our approach.)
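As a rough sketch of how a SILF-based likelihood can be computed, the Python below implements the piecewise loss of Chu et al. (2004) and an exponentiated-loss log-likelihood; the parameter values (eps, beta, and the scale C) are illustrative choices, not the ones used in our experiments.

```python
def silf(y, eps=0.1, beta=0.5):
    """Soft insensitive loss (Chu et al., 2004): zero inside the insensitive
    zone, quadratic in a transition band, linear beyond it."""
    y = abs(y)
    lo, hi = (1 - beta) * eps, (1 + beta) * eps
    if y <= lo:
        return 0.0
    if y <= hi:
        return (y - lo) ** 2 / (4 * beta * eps)
    return y - eps

def silf_log_likelihood(accuracy, C=10.0, eps=0.1, beta=0.5):
    """Unnormalized log p(X|M): penalize the shortfall 1 - accuracy(X, M)."""
    return -C * silf(1.0 - accuracy, eps=eps, beta=beta)
```

With these illustrative parameters, any model whose accuracy exceeds 0.95 receives the maximal log-likelihood of zero; lower accuracies are penalized smoothly, then linearly.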

3.2 A Prior for Interpretable Models

Some model classes are generally amenable to human inspection (e.g. decision trees, rule lists, and decision sets (Freitas, 2014; Lakkaraju et al., 2016), unlike neural networks), but within those model classes, there likely still exist some models that are easier for humans to utilize than others (e.g. shorter decision trees rather than longer ones (Rokach and Maimon, 2014), or decision sets with fewer overlaps (Lakkaraju et al., 2016)). We want our model prior to reflect this more nuanced view of interpretability.

We consider a prior of the form:

    p(M) ∝ exp( ∫ HIS(x, M) p(x) dx )    (2)

In our experiments, we will define HIS(x, M) (human-interpretability score) as:

    HIS(x, M) = max(0, max-RT - mean-RT(x, M))    (3)

where mean-RT(x, M) (mean response time) measures how long it takes users to predict the label assigned to a data point x by the model M, and max-RT is a cap on response time that is set to a value large enough to catch all legitimate responses and exclude outliers. The choice of measuring the time it takes to predict the model's label follows Doshi-Velez and Kim (2017), which suggests this simulation proxy as a measure of interpretability when no downstream task has been defined yet; but any domain-specific task and metric could be substituted into our pipeline, including error detection or cooperative decision-making.
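A minimal sketch of the HIS computation from recorded response times, assuming a hypothetical 60-second cap for max-RT (the actual cap is a design choice):

```python
import numpy as np

MAX_RT = 60.0  # seconds; illustrative cap chosen to exclude outlier trials

def his(response_times_s):
    """HIS(x, M) from user response times for one (x, M) pair: max-RT minus
    the mean response time, floored at zero for slower-than-cap responses."""
    mean_rt = float(np.mean(response_times_s))
    return max(0.0, MAX_RT - mean_rt)
```

Faster simulation of the model's label yields a higher score, and responses at or beyond the cap contribute nothing.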

3.3 A Prior for Arbitrary Models

In the interpretable model case, we can give a human subject a model and ask them questions about it; in the general case, models may be too complex for this approach to be feasible. In order to determine the interpretability of complex models like neural networks, we follow the approach in Ribeiro et al. (2016) and construct a simple local model for each point x by sampling perturbations of x and training a simple model to mimic the predictions of M in this local region. We denote this local model local(M, x).

We change the prior in Equation 2 to reflect that we evaluate the HIS with the local proxy rather than the entire model:

    p(M) ∝ exp( ∫ HIS(x, local(M, x)) p(x) dx )    (4)
We describe computational considerations for this more complex situation in Section 4.
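To illustrate the local-proxy construction, the sketch below replaces the small decision trees we actually fit with depth-1 stumps for brevity; the black-box model f, the perturbation scale, and all other parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Fit a depth-1 decision stump by exhaustive search over (feature,
    threshold); a stand-in for the small trees used as local proxy models."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            ll, rl = int(round(left.mean())), int(round(right.mean()))
            acc = (np.sum(left == ll) + np.sum(right == rl)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, j, t, ll, rl)
    return best  # (accuracy, feature, threshold, left label, right label)

def local_proxy(black_box, x, n=200, scale=0.5):
    """Sample perturbations around x, label them with the black-box model,
    and fit a stump that mimics the model in this local region."""
    Xp = x + scale * rng.normal(size=(n, len(x)))
    yp = np.array([black_box(z) for z in Xp])
    if yp.min() == yp.max():  # no decision boundary nearby: trivial proxy
        return None
    return fit_stump(Xp, yp)

# A hypothetical black box that, near this x, thresholds on the first feature.
f = lambda z: int(z[0] > 0.0)
proxy = local_proxy(f, np.array([0.1, 2.0]))
```

When the black box is locally constant around x, no nontrivial proxy exists and the function returns None; this case matters for the boundary decomposition in Section 4.2.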

4 Inference

Our goal is to find the MAP solution from Equation 1. Our overall approach will be to find a collection of models with high likelihood and then perform model-based optimization (Srinivas et al., 2009) to identify which priors to evaluate via user studies. Below, we describe each of the three main aspects of the inference: identifying models with large likelihoods p(X | M), evaluating p(M) via user studies, and using model-based optimization to determine which p(M) to evaluate. The model from our set with the largest p(X | M) p(M) is our approximation to the MAP solution.

4.1 Identifying models with high likelihood

In the model-finding phase, our goal is to create a diverse set of models with large likelihoods in the hope that some will also have large prior value and thus allow us to identify the approximate MAP solution. For simpler model classes, such as decision trees, we find these solutions by running multiple restarts with different hyperparameter settings and rejecting those that do not meet our accuracy threshold. For neural networks, we jointly optimize a collection of predictive neural networks with different input gradient patterns (as a proxy for creating a diverse collection) (Ross et al., 2018).
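The restart-and-reject phase for simple model classes can be sketched as follows; here depth-1 stumps stand in for decision trees, and the synthetic dataset, hyperparameters, and accuracy threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic task (sizes illustrative): the label depends mostly on feature 0,
# a little on feature 1, and not at all on features 2 and 3.
X = rng.normal(size=(500, 4))
y = ((X[:, 0] + 0.5 * X[:, 1]) > 0).astype(int)

def train_stump(X, y, feature):
    """One 'training run' whose hyperparameter is the feature to split on."""
    thresholds = np.quantile(X[:, feature], np.linspace(0.05, 0.95, 19))
    accs = [max(np.mean((X[:, feature] > t) == y),
                np.mean((X[:, feature] <= t) == y)) for t in thresholds]
    i = int(np.argmax(accs))
    return {"feature": feature, "threshold": float(thresholds[i]),
            "acc": float(accs[i])}

def model_finding(X, y, threshold=0.75):
    """Run training under each hyperparameter setting and reject runs below
    the accuracy threshold, keeping a pool of acceptable models."""
    pool = [train_stump(X, y, f) for f in range(X.shape[1])]
    return [m for m in pool if m["acc"] >= threshold]

pool = model_finding(X, y)
```

Only the runs whose hyperparameters let them explain the data well survive into the pool that the user studies later rank by interpretability.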

4.2 Computing the prior

Human-Interpretable Model Classes. For any model M and data point x, a user study is required for every evaluation of HIS(x, M). Since it is infeasible to perform a user study at every x for even a single model M, we approximate the integral in Equation 2 via a collection of samples:

    p(M) ∝ exp( (1/N) Σ_{n=1..N} HIS(x_n, M) ),  x_n ~ p(x)

In practice, we use the empirical distribution over the inputs X as the prior p(x).
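A sketch of this sample-based approximation, with a stand-in his_fn in place of an actual user study (sample size and inputs are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def approx_log_prior(X_inputs, his_fn, n_samples=20):
    """Approximate log p(M) (up to a constant) by averaging HIS over a small
    subsample drawn from the empirical distribution of the inputs; each
    his_fn(x) call stands in for one user study."""
    idx = rng.choice(len(X_inputs), size=n_samples, replace=False)
    return float(np.mean([his_fn(X_inputs[i]) for i in idx]))

# With a constant stand-in for HIS, the estimate recovers that constant.
X_inputs = np.linspace(0.0, 1.0, 100)
estimate = approx_log_prior(X_inputs, lambda x: 42.0)
```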

Arbitrary Model Classes. If the model M is not itself human-interpretable, we define p(M) to be the integral of HIS over the local approximations local(M, x), where local(M, x) locally approximates M around x (Equation 4). As before, evaluating HIS requires a user study; however, now we must also determine a procedure for generating the local approximations local(M, x).

We generate these local approximations via a procedure akin to Ribeiro et al. (2016): for any x, we sample a set of perturbations around x, compute the outputs of model M for each perturbation, and then fit a human-interpretable model (e.g. a decision tree) to those data.

We note that these local models will only be nontrivial if the data point x is in the vicinity of a decision boundary; if not, we will not succeed in fitting a local model. Let B denote the set of inputs that are near the decision boundary of M. Since we defined HIS(x, M) to equal max-RT when mean-RT(x, M) is zero, as it is when no local model can be fit (see Equation 3), we can compute the integral in Equation 4 more intelligently by only seeking user input for samples near the model's decision boundary:

    ∫ HIS(x, local(M, x)) p(x) dx = p(x ∈ B) · E_{x ∈ B}[ HIS(x, local(M, x)) ] + p(x ∉ B) · max-RT

The first term, p(x ∈ B) (the volume of X in B), and the third term, p(x ∉ B) (the volume of X not in B), can be approximated without any user studies by attempting to fit local models for each point in X (or a subsample of points); only the expectation over B requires user studies. We detail how we fit local explanations and define the boundary B in Appendix C.
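Our reading of this decomposition can be sketched in a few lines; the counts, HIS values, and cap below are invented for illustration.

```python
def boundary_decomposition(n_points, boundary_his, max_rt=60.0):
    """Estimate the average HIS over the inputs by splitting them: points
    near the boundary (those where a nontrivial local model was fit)
    contribute their measured HIS; the rest contribute max-RT each."""
    n_boundary = len(boundary_his)
    frac_b = n_boundary / n_points
    mean_his_b = sum(boundary_his) / n_boundary if boundary_his else 0.0
    return frac_b * mean_his_b + (1.0 - frac_b) * max_rt

# Suppose 2 of 10 sampled points sat near the boundary (HIS values invented).
estimate = boundary_decomposition(10, [30.0, 50.0])
```

Only the boundary points cost user studies; everything else is accounted for analytically via the cap.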

4.3 Model-based Optimization of the MAP Objective

The first stage of our optimization procedure gives us a collection of models with high likelihood p(X | M). Our goal is to identify the model in this set that is the approximate MAP solution, that is, the one that maximizes p(X | M) p(M), with as few evaluations of p(M) as possible.

Let L be the set of all labeled models, that is, the set of models for which we have evaluated p(M). We estimate the values (and uncertainties) of p(M) for the remaining unlabeled models, the set U, via a Gaussian Process (GP) (Rasmussen, 2006). (See Appendix A for details about our model-similarity kernel.) Following Srinivas et al. (2009), we use the GP upper confidence bound acquisition function to choose among unlabeled models that are likely to have large p(M) (this is equivalent to using the lower confidence bound to minimize response time):

    M* = argmax_{M ∈ U} μ(M) + sqrt(β) · σ(M)

where β is a hyperparameter that can be tuned, μ(M) is the GP mean function, and σ²(M) is the GP variance. (We find that a fixed value of β works well in practice.)
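A compact sketch of the GP-UCB selection step, using a standard RBF kernel on hypothetical model feature vectors in place of our model-similarity kernel (Appendix A); all inputs are invented.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel on (hypothetical) model feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_lab, y_lab, X_unl, noise=1e-6):
    """Standard GP regression posterior mean/variance at unlabeled models."""
    K = rbf_kernel(X_lab, X_lab) + noise * np.eye(len(X_lab))
    Ks = rbf_kernel(X_unl, X_lab)
    mu = Ks @ np.linalg.solve(K, y_lab)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 0.0)

def ucb_choice(X_lab, y_lab, X_unl, beta=2.0):
    """Index of the unlabeled model maximizing mu + sqrt(beta) * sigma."""
    mu, var = gp_posterior(X_lab, y_lab, X_unl)
    return int(np.argmax(mu + np.sqrt(beta) * np.sqrt(var)))

# Two labeled models with measured scores, plus two unlabeled candidates:
# one close to a well-scoring labeled model, one far away (and uncertain).
X_lab = np.array([[0.0], [1.0]])
X_unl = np.array([[1.01], [5.0]])
choice = ucb_choice(X_lab, np.array([1.0, 3.0]), X_unl)
```

The acquisition trades off exploiting models predicted to score well against exploring models the GP is uncertain about; which wins depends on the labeled scores.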

5 Experimental Setup

In this section, we provide details for applying our approach to four datasets. Our results are in Section 6.

Datasets and Training Details

We test our approach on a synthetic dataset as well as the mushroom, census income, and covertype datasets from the UCI database (Dheeru and Karra Taniskidou, 2017). All features are preprocessed by z-scoring continuous features and one-hot encoding categorical features. We also balance the classes of the first three datasets by subsampling the more common class. (The sizes reported are after class balancing. We do not include a test set because we do not report held-out accuracy.)

(a) For each pair of proxies (A, B) for interpretability, we first identify the best model if we only care about proxy A, then compute its rank if we instead care about proxy B. This simulates the setting where we optimize for proxy B, but A is the true HIS. The value for each pair of proxies is plotted as a point. The large ranking values indicate that proxies sometimes disagree on which models are good.
(b) Rank of the best model(s) by each proxy across multiple samples of data points (‘N.Z.’ denotes non-zero and ‘feats.’ denotes features). This simulates the setting where we compute HIS on a human-accessible number of data points. The lines dropping below the high values in Figure 1(a) indicate that computing the right proxy on a human-accessible number of points is better than computing the wrong proxy accurately. This benefit occurs across all datasets and models, though it takes more samples for neural networks on Covertype than for the others.
Figure 2: Determining interpretability on a few points is better than using the wrong proxy.

  • Synthetic (, , continuous). We build a data set with two noise dimensions, two dimensions that enable a lower-accuracy, interpretable explanation, and two dimensions that enable a higher-accuracy, less interpretable explanation. We use an 80%-20% train-validate split. (See Figure 5 in the Appendix.)

  • Mushroom (, categorical with distinct values). The goal is to predict if the mushroom is edible or poisonous. We use an 80%-20% train-validate split.

  • Census (, continuous, categorical with distinct values). The goal is to predict if people make more than $/year. We use their 60%-40% train-validate split.

  • Covertype (, continuous, categorical with distinct values). The goal is to predict tree cover type. We use a 75%-25% train-validate split.

Our experiments include two classes of models: decision trees and neural networks. We train decision trees for the simpler synthetic, mushroom and census datasets and neural networks for the more complex covertype dataset. Details of our model training procedure (that is, identifying models with high predictive accuracy) are in Appendix B. The covertype dataset, because it is modeled by a neural network, also needs a strategy for producing local explanations; we describe our parameter choices as well as provide a detailed sensitivity analysis to these choices in Appendix C.

Proxies for Interpretability

An important question is whether currently used proxies for interpretability, such as sparsity or the number of nodes in a path, correspond to some HIS. In the following, we use four different interpretability proxies to demonstrate the ability of our pipeline to identify models that are best under each of them, simulating the case where we have a ground-truth measure of HIS. We show (a) that different proxies favor different models and (b) how these proxies correspond to the results of our user studies.

The interpretability proxies we will use are: mean path length, mean number of distinct features in a path, number of nodes, and number of nonzero features. The first two are local to a specific input while the last two are global model properties (although these will be properties of local proxy models for neural networks). These proxies include notions of tree depth (Rokach and Maimon, 2014) and sparsity (Lipton, 2016; Poursabzi-Sangdeh et al., 2018). We compute the proxies based on a sample of points from the validation set (the same set of points is used across models).
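For concreteness, the sketch below computes these proxies on a toy tree in an illustrative nested-tuple representation; this is not our actual implementation, and the feature names are invented.

```python
# A toy decision tree as nested tuples: (feature, left, right) for internal
# nodes, an int class label for leaves (representation is illustrative).
TREE = ("color", ("size", 0, 1), ("odor", ("size", 1, 0), 1))

def count_nodes(tree):
    """Total number of nodes (internal + leaves): a global proxy."""
    if not isinstance(tree, tuple):
        return 1
    return 1 + count_nodes(tree[1]) + count_nodes(tree[2])

def paths(tree, prefix=()):
    """Yield the feature sequence along each root-to-leaf path."""
    if not isinstance(tree, tuple):
        yield prefix
        return
    feat, left, right = tree
    yield from paths(left, prefix + (feat,))
    yield from paths(right, prefix + (feat,))

def mean_path_length(tree):
    """Average depth of a leaf: a per-input proxy, averaged over paths."""
    ps = list(paths(tree))
    return sum(len(p) for p in ps) / len(ps)

def mean_distinct_features(tree):
    """Average number of distinct features tested along a path."""
    ps = list(paths(tree))
    return sum(len(set(p)) for p in ps) / len(ps)
```

On this toy tree, the node count is 9 and the mean path length is 2.4; averaging the per-path proxies over a fixed sample of validation points, as we do, simply weights paths by how often inputs fall into them.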

Human Experiments

In our human subjects experiments, we quantify HIS(x, M) for a data point x and a model M as a function of the time it takes a user to simulate the label for x with M. We extend this to the locally interpretable case by simulating the label according to the local model local(M, x). We refer to the model itself as the explanation in the globally interpretable case, and to the local model as the explanation in the locally interpretable case. Our experiments are closely based on those in Narayanan et al. (2018). We provide users with a list of feature values for the features used in the explanation and a graphical depiction of the explanation, and ask them to identify the correct prediction. Figure 6(a) in Appendix D depicts our interface. These experiments were reviewed and approved by our institution's IRB. Details of the experiments we conducted with machine learning researchers, and details and results of a pilot study [not used in this paper] conducted using Amazon Mechanical Turk, are in Appendix D.

6 Experimental Results

Optimizing different automatic proxies results in different models. For each dataset, we run simulations to test what happens when the optimized measure of interpretability does not match the true HIS. We do this by computing the best model under one proxy (our simulated HIS), then identifying what rank it would have had among the collection of models if one of the other proxies (our optimized interpretability measure) had been used. A rank of 1 indicates that the model identified as the best by one proxy is also the best model under the second proxy; more generally, a rank of r indicates that the best model by one proxy is the rth-best model under the second proxy. Figure 1(a) shows that choosing the wrong proxy can seriously mis-rank the true best model. This suggests that it is not a good idea to optimize an arbitrary proxy for interpretability in the hopes that the resulting model will be interpretable according to the truly relevant measure. Figure 1(a) also shows that the synthetic dataset has a very different distribution of proxy mis-rankings than any of the real datasets in our experiments. This suggests that it is hard to design synthetic datasets that capture the relevant notions of interpretability since, by assumption, we do not know what these notions are.

Computing the right proxy on a small sample of data points is better than computing the wrong proxy. For each dataset, we run simulations to test what happens when we optimize the true HIS computed on only a small sample of points (the size limitation reflects limited human cognitive capacity). As in the previous experiment, we compute the best model under one proxy (our simulated HIS). We then identify what rank it would have had among the collection of models if the same proxy had been computed on a small sample of data points. Figure 2 shows that computing the right proxy on a small sample of data points can do better than computing the wrong proxy. This holds across datasets and models. It suggests that it may be better to find interpretable models by asking people to examine the interpretability of a small number of examples, which yields noisy measurements of the true quantity of interest, than by accurately optimizing a proxy that does not capture that quantity.
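The rank computation underlying both simulations can be sketched in a few lines; the proxy scores below are invented for illustration.

```python
def rank_under(scores_b, best_a):
    """Rank (1 = best) of model best_a when models are ordered by proxy B;
    lower proxy scores are treated as more interpretable."""
    return 1 + sum(1 for v in scores_b if v < scores_b[best_a])

# Invented proxy scores for five models (lower = better).
proxy_a = [3, 1, 4, 2, 5]  # e.g. mean path length
proxy_b = [5, 4, 1, 2, 3]  # e.g. number of non-zero features

best_a = proxy_a.index(min(proxy_a))  # best model under proxy A
rank = rank_under(proxy_b, best_a)    # its rank under proxy B
```

A rank far from 1 under proxy B signals that optimizing B would have discarded the model that proxy A considers best.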

Figure 3: We ran random restarts of the pipeline with all datasets and proxies, denoted ‘opt’ (randomness from choice of start), and compared to randomly sampling the same number of models, denoted ‘rd’ (we account for models with the same score by computing the lowest rank of any model with that score). ‘NZ’ denotes non-zero and ‘feats’ denotes features. The fact that the solid lines stay below the corresponding dotted lines indicates that we do better than random guessing.

Our model-based optimization approach can learn human-interpretable models that correspond to a variety of different proxies on globally and locally interpretable models. We run our pipeline repeatedly with each proxy as the signal (the randomness comes from the choice of starting point), and compare to random draws of the same number of models. We account for multiple models with the same score by computing the lowest rank of any model with the same score as the model we sample. Figure 3 shows that across all three datasets, and across all four proxies, we do better than randomly sampling models to evaluate.

(a) We computed response times for each iteration of the pipeline on two datasets. Each data point is the mean response time for a single user. In both experiments, we see the mean response times decrease as we evaluate more models. We reach times comparable to those of the best proxy models. The last 2 models are our baselines (‘NZ feats’ denotes non-zero features).
(b) We computed the proxy scores for the model evaluated at each iteration of the pipeline. On the mushroom dataset, our approach converges to models with the fewest nodes and shortest paths, and on the census dataset, it converges to models with the fewest features. ‘Mush’ denotes the mushroom dataset and ‘Cens’ denotes the census dataset.
Figure 4: Human subjects pipeline results show a trend towards interpretability.

Our pipeline finds models with lower response times and lower scores across all four proxies when we run it with human feedback. We run our pipeline on the census and mushroom datasets with human response time as the signal. We recruited a group of machine learning researchers who took all quizzes in a single run of the pipeline, with models iteratively chosen by our model-based optimization. Figure 3(a) shows the distributions of mean response times decreasing as we evaluate more models. (In Figure 6(b) in Appendix D we demonstrate that increases in speed from repeatedly doing the task are small compared to the differences we see in Figure 3(a); these are real improvements in response time.)

On different datasets, our pipeline converges to different proxies. In the human subjects experiments above, we tracked the proxy scores of each model we evaluated. Figure 3(b) shows a decrease in proxy scores that corresponds to the decrease in response times in Figure 3(a) (our approach did not have access to these proxy scores). On the mushroom dataset, our approach converged to a model with the fewest nodes and the shortest paths, while on the census dataset, it converged to a model with the fewest features. This suggests that, for different datasets, different notions of interpretability are important to users.

7 Discussion and Conclusion

We presented an approach to efficiently optimize models for human-interpretability (alongside prediction) by directly including humans in the optimization loop. Our experiments showed that, across several datasets, several reasonable proxies for interpretability identify different models as the most interpretable; not all proxies lead to the same solution. Our pipeline was able to efficiently identify the model that humans found most expedient for forward simulation. The human-selected models often corresponded to some known proxy for interpretability, but which proxy varied across datasets, suggesting that proxies may be a good starting point but are not the full story when it comes to finding human-interpretable models.

That said, direct human-in-the-loop optimization has its challenges. In our initial pilot studies [not used in this paper] with Amazon Mechanical Turk (Appendix D), we found that the variance among subjects was simply too large to make the optimization cost-effective (especially with the between-subjects design that makes sense for Amazon Mechanical Turk). In contrast, our smaller but longer within-subjects studies had lower variance with a smaller number of subjects. This observation, together with the importance of downstream tasks for defining interpretability, suggests that interpretability studies should be conducted with the people who will use the models, whom we can expect to be more familiar with the task and more patient.

The many exciting directions for future work include exploring ways to efficiently allocate the human computation to minimize the variance of our estimates via intelligently choosing which inputs to evaluate and structuring these long, sequential experiments to be more engaging; and further refining our model kernels to capture more nuanced notions of human-interpretability, particularly across model classes. Optimizing models to be human-interpretable will always require user studies, but with intelligent optimization approaches, we can reduce the number of studies required and thus cost-effectively identify human-interpretable models.


IL acknowledges support from NIH 5T32LM012411-02. All authors acknowledge support from the Google Faculty Research Award and the Harvard Dean’s Competitive Fund. All authors thank Emily Chen and Jeffrey He for their support with the experimental interface, and Weiwei Pan and the Harvard DTaK group for many helpful discussions and insights.


References

  • Altendorf et al. [2005] Eric E. Altendorf, Angelo C. Restificar, and Thomas G. Dietterich. Learning from sparse data by exploiting monotonicity constraints. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, UAI’05, pages 18–26, Arlington, Virginia, United States, 2005. AUAI Press.
  • Bach [2010] Francis R. Bach. Structured sparsity-inducing norms through submodular functions. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 118–126. Curran Associates, Inc., 2010.
  • Caruana et al. [2015] Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1721–1730. ACM, 2015.
  • Chaloner and Verdinelli [1995] Kathryn Chaloner and Isabella Verdinelli. Bayesian experimental design: A review. Statist. Sci., 10(3):273–304, 08 1995.
  • Christiano et al. [2017] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4299–4307. Curran Associates, Inc., 2017.
  • Chu et al. [2004] Wei Chu, S. S. Keerthi, and Chong Jin Ong. Bayesian support vector regression using a unified loss function. IEEE Transactions on Neural Networks, 15(1):29–44, Jan 2004.
  • Dheeru and Karra Taniskidou [2017] Dua Dheeru and Efi Karra Taniskidou. UCI machine learning repository, 2017.
  • Doshi-Velez and Kim [2017] Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv, 2017.
  • Freitas [2014] Alex A. Freitas. Comprehensible classification models: A position paper. SIGKDD Explor. Newsl., 15(1):1–10, March 2014.
  • Hinton [2010] Geoffrey Hinton. A practical guide to training restricted Boltzmann machines. Momentum, 9(1):926, 2010.
  • Kim et al. [2014] Been Kim, Cynthia Rudin, and Julie A Shah. The bayesian case model: A generative approach for case-based reasoning and prototype classification. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1952–1960. Curran Associates, Inc., 2014.
  • Kosorukoff [2001] Alex Kosorukoff. Human based genetic algorithm. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, volume 5, 05 2001.
  • Lakkaraju et al. [2016] Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 1675–1684, New York, NY, USA, 2016. ACM.
  • Lavrac [1999] Nada Lavrac. Selected techniques for data mining in medicine. Artificial Intelligence in Medicine, 16(1):3 – 23, 1999. Data Mining Techniques and Applications in Medicine.
  • Lipton [2016] Zachary Chase Lipton. The mythos of model interpretability. CoRR, abs/1606.03490, 2016.
  • Little et al. [2010] Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller. TurKit: Human computation algorithms on Mechanical Turk. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, UIST ’10, pages 57–66, New York, NY, USA, 2010. ACM.
  • Ma et al. [2012] Yifei Ma, Roman Garnett, and Jeff G. Schneider. Submodularity in batch active learning and survey problems on gaussian random fields. CoRR, abs/1209.3694, 2012.
  • Masood and Doshi-Velez [2018] Muhammad A. Masood and Finale Doshi-Velez. A particle-based variational approach to bayesian non-negative matrix factorization. arXiv, 2018.
  • Narayanan et al. [2018] Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, and Finale Doshi-Velez. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. ArXiv e-prints, February 2018.
  • Pedregosa et al. [2011] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • Poursabzi-Sangdeh et al. [2018] Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna M. Wallach. Manipulating and measuring model interpretability. CoRR, abs/1802.07810, 2018.
  • Rasmussen [2006] Carl Edward Rasmussen. Gaussian processes for machine learning. MIT Press, 2006.
  • Ribeiro et al. [2016] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 1135–1144, New York, NY, USA, 2016. ACM.
  • Rokach and Maimon [2014] Lior Rokach and Oded Maimon. Introduction to Decision Trees, chapter 1, pages 1–16. World Scientific, 2nd edition, 2014.
  • Ross et al. [2017] Andrew Ross, Isaac Lage, and Finale Doshi-Velez. The neural lasso: Local linear sparsity for interpretable explanations. In Workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments, 31st Conference on Neural Information Processing Systems, 2017.
  • Ross et al. [2018] Andrew Ross, Weiwei Pan, and Finale Doshi-Velez. Learning qualitatively diverse and interpretable rules for classification. In 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), 2018.
  • Snoek et al. [2012] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2951–2959. Curran Associates, Inc., 2012.
  • Srinivas et al. [2009] Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. Gaussian Process Bandits without Regret: An Experimental Design Approach. Technical Report arXiv:0912.3995, Dec 2009. Comments: 17 pages, 5 figures.
  • Tamuz et al. [2011] Omer Tamuz, Ce Liu, Serge J. Belongie, Ohad Shamir, and Adam Tauman Kalai. Adaptively learning the crowd kernel. CoRR, abs/1105.1033, 2011.
  • Tibshirani [1996] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
  • Ustun and Rudin [2017] Berk Ustun and Cynthia Rudin. Optimized risk scores. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1125–1134, New York, NY, USA, 2017. ACM.
  • Wilson et al. [2015] Andrew G Wilson, Christoph Dann, Chris Lucas, and Eric P Xing. The human kernel. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2854–2862. Curran Associates, Inc., 2015.
  • Wirth et al. [2017] Christian Wirth, Riad Akrour, Gerhard Neumann, and Johannes Fürnkranz. A survey of preference-based reinforcement learning methods. Journal of Machine Learning Research, 18(136):1–46, 2017.
  • Wu et al. [2018] Mike Wu, Michael C. Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, and Finale Doshi-Velez. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Zhu et al. [2003] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. In ICML 2003 Workshop on The Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, pages 58–65, 2003.

Appendix A Similarity Kernel for Models and GP parameters

Model-based optimization requires as input a notion of similarity. We use an RBF kernel between feature importances for decision trees, and between a gradient-based notion of feature importance for neural networks (average magnitude of the normalized input gradients for each class logit).
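The gradient-based importance for neural networks can be sketched as follows. This is an illustrative implementation under our reading of the description above; `logit_grad_fn` is a hypothetical callable standing in for the network's input-gradient computation.

```python
import numpy as np

def gradient_feature_importance(logit_grad_fn, X):
    """Average magnitude of the normalized input gradients for each
    class logit, averaged over inputs. `logit_grad_fn(x)` is assumed
    to return d(logit_c)/dx with shape (n_classes, n_features)."""
    per_point = []
    for x in X:
        g = np.abs(logit_grad_fn(x))                    # (n_classes, n_features)
        g = g / (g.sum(axis=1, keepdims=True) + 1e-12)  # normalize per logit
        per_point.append(g.mean(axis=0))                # average over class logits
    return np.mean(per_point, axis=0)                   # average over inputs
```

For a linear model the input gradients are simply the weight rows, so the importance reduces to the normalized weight magnitudes.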

We use the scikit-learn implementation of Gaussian processes [Pedregosa et al., 2011]. We set it to normalize the targets automatically, restart the optimizer multiple times, and add a small constant to the diagonal of the kernel at fitting time to mitigate numerical issues. We used the default settings for all other hyperparameters, including the RBF kernel (on the model features above) for the covariance function.
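A minimal sketch of this setup, with synthetic feature-importance vectors and scores standing in for the real models and user-study measurements (the actual restart count and jitter value are not reproduced in the text):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical data: rows are models, described by their
# feature-importance vectors; targets are interpretability scores
# (e.g. mean user response times).
rng = np.random.RandomState(0)
importances = rng.rand(5, 4)
scores = rng.rand(5)

# RBF kernel on the model features; alpha adds jitter to the kernel
# diagonal at fitting to mitigate numerical issues, and normalize_y
# standardizes the targets automatically.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-6,
                              normalize_y=True,
                              n_restarts_optimizer=5)
gp.fit(importances, scores)
mean, std = gp.predict(importances, return_std=True)
```

The posterior mean and standard deviation returned here are what a model-based optimizer would feed into its acquisition function.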

Figure 5: We build a synthetic data set with two noise dimensions, two dimensions that enable a lower-accuracy, interpretable explanation, and two dimensions that enable a higher-accuracy, less interpretable explanation. The purple data points are positive and the yellow points are negative. Data points were generated for each set of two features independently, then points sharing the same label in all dimensions were randomly concatenated to form the final dataset.

Appendix B Experimental Details: Identifying a Collection of Predictive Models

We train decision trees for the synthetic, mushroom, and census datasets, each with its own test accuracy threshold. On the synthetic dataset, the threshold is slightly higher than the accuracy we can achieve on the interpretable dimensions; we make this choice to avoid learning the same simple model over and over again. On the mushroom and census datasets, we set the accuracy thresholds slightly below the best validation accuracy we can achieve with decision trees, to ensure that we can generate distinct models that meet the threshold. For each of these datasets, we train a pool of candidate models.

To produce a variety of high-performing decision trees, we randomly sample the following hyperparameters: max depth, minimum number of samples at a leaf, max features used in a split, and splitting strategy [best, random]. The first two hyperparameters are chosen to encourage simple solutions, while the last two are chosen to increase the diversity of discovered trees. We use the scikit-learn implementation of decision trees [Pedregosa et al., 2011] and perform a post-processing step that iteratively removes leaf nodes as long as doing so does not decrease accuracy on the validation set (as in Wu et al. [2018]).
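The sampling loop can be sketched as follows. The dataset, accuracy threshold, and hyperparameter ranges here are placeholders (the paper's actual values are not reproduced in the text), and the leaf-pruning post-processing step is omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=6,
                           n_informative=4, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

ACC_THRESHOLD = 0.8  # placeholder; the paper's thresholds are dataset-specific
models = []
for _ in range(200):
    tree = DecisionTreeClassifier(
        max_depth=rng.randint(2, 8),                  # encourages simple trees
        min_samples_leaf=int(rng.choice([1, 5, 10])), # encourages simple trees
        max_features=rng.randint(1, X.shape[1] + 1),  # increases diversity
        splitter=str(rng.choice(["best", "random"])), # increases diversity
        random_state=rng.randint(10**6),
    )
    tree.fit(X_tr, y_tr)
    # Keep only trees that meet the accuracy threshold on validation data.
    if tree.score(X_val, y_val) >= ACC_THRESHOLD:
        models.append(tree)
    if len(models) == 10:
        break
```

Only models that clear the threshold enter the pool over which interpretability is later optimized.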

We train neural networks for the covertype dataset with an accuracy threshold set slightly above the accuracy achievable with logistic regression, to justify the use of more complex neural networks. For the neural network models, we randomly sample the following hyperparameters: L1 weight penalty, L2 weight penalty, L1 gradient regularization, activation function [relu, tanh], and architecture (varying the number and width of hidden layers). These are then jointly trained according to the procedure in Ross et al. [2018] with Adam. (The number of models trained simultaneously is another randomly sampled hyperparameter.) For this dataset, we train a pool of candidate models.
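The configuration sampling can be sketched as below. All candidate values are illustrative placeholders, since the paper's actual grids are elided in this text:

```python
import numpy as np

rng = np.random.RandomState(0)

def sample_nn_config():
    """Randomly sample one neural-network training configuration.
    Every candidate value here is a placeholder, not the paper's grid."""
    architectures = [(64, 64, 64), (128, 128), (256,), (512,), (1024,)]
    return {
        "l1_weight": float(rng.choice([0.0, 1e-5, 1e-4, 1e-3])),
        "l2_weight": float(rng.choice([0.0, 1e-5, 1e-4, 1e-3])),
        "l1_input_gradient": float(rng.choice([0.0, 1e-4])),
        "activation": str(rng.choice(["relu", "tanh"])),
        "hidden_layers": architectures[rng.randint(len(architectures))],
    }
```

Each sampled configuration would then be handed to the joint training procedure of Ross et al. [2018].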

(a) We found the best model by each proxy for every setting of the region hyperparameters, and computed its rank by the same proxy for every other setting of the region hyperparameters. Each value corresponds to one of these pairs. The highest values all correspond to a single variance scaling factor. The other two settings of this hyperparameter tend to agree on how to rank neural networks.
(b) We found the best model(s) by each proxy and computed their rank by the same proxy computed on a sample of data points. The comparable values of the lines across all three plots indicate that we need a similar number of samples to robustly rank neural networks for the smallest, middle and largest region settings (we do not include cross pairs).
Figure 6: Neural network local explanation sensitivity analysis

Appendix C Experimental Details: Parameters and Sensitivity to Local Region Choices

We can ask humans to perform the simulation task directly with decision trees, but for the neural networks we must train simple, local models as explanations (we use local decision trees). This procedure requires first sampling a local dataset for each point we explain. We modify the procedure in Ribeiro et al. [2016] to sample points in a radius around the point, defined by its nearest neighbors by Euclidean distance. We then binarize the network's predictions to whether they match the prediction at the explained point, and subsample the more common class to balance the labels. We do not fit explanations for points where the original sampled points have too great a class imbalance; we consider these points not on the boundary. Finally, we return the simplest tree we train on this local dataset with accuracy above a threshold on a validation set. We randomly set aside a portion of the sampled points for validation and use the rest for training. (Note: if we were provided local regions by domain experts, we could use those.)
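The balancing step can be sketched as follows; the imbalance cutoff is an assumed placeholder for the elided value in the text.

```python
import numpy as np

def balance_local_labels(X, y, max_imbalance=0.9, seed=0):
    """Subsample the more common binary class so the local dataset is
    balanced. Return None when the imbalance exceeds max_imbalance,
    treating the point as not on the decision boundary. The cutoff
    value is a placeholder."""
    rng = np.random.RandomState(seed)
    counts = np.bincount(y, minlength=2)
    if counts.max() / counts.sum() > max_imbalance:
        return None
    minority = int(counts.argmin())
    majority_idx = np.where(y != minority)[0]
    # Keep all minority-class points plus an equal-size random
    # subsample of the majority class.
    keep = np.concatenate([
        np.where(y == minority)[0],
        rng.choice(majority_idx, size=counts.min(), replace=False),
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]
```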

Our procedure for sampling points around some input uses two hyperparameters: a scaling factor for the empirical variance, and a mixing weight for the uniform distribution over categorical features, which we use to adjust the empirical distribution of the point's nearest neighbors. Finally, when training the trees, we set a local fidelity accuracy threshold on a validation set and iteratively fit trees with larger maximum depth, up to a cap, until one achieves this threshold. (We assume data points with local models deeper than this will not be interpretable, so fitting deeper trees will not improve our search for the most interpretable model.) We also require a minimum number of samples at each leaf. We use the scikit-learn implementation [Pedregosa et al., 2011] to learn the trees and perform a post-processing step that iteratively removes leaf nodes as long as doing so does not decrease accuracy on the validation set (as in Wu et al. [2018]).
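The depth-increasing fitting loop can be sketched as below; the fidelity threshold, depth cap, validation fraction, and leaf size are placeholders for the elided values in the text.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_local_tree(X_local, y_match, fidelity_threshold=0.9,
                   max_depth=5, min_samples_leaf=2, seed=0):
    """Iteratively fit trees of increasing maximum depth and return the
    shallowest one whose validation fidelity meets the threshold, or
    None if even the deepest tree falls short. y_match is binary:
    whether the network's prediction matches that of the explained
    point. All numeric defaults are placeholders."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X_local))
    n_val = max(1, len(X_local) // 5)   # hold out a validation split
    val, train = idx[:n_val], idx[n_val:]
    for depth in range(1, max_depth + 1):
        tree = DecisionTreeClassifier(max_depth=depth,
                                      min_samples_leaf=min_samples_leaf,
                                      random_state=seed)
        tree.fit(X_local[train], y_match[train])
        if tree.score(X_local[val], y_match[val]) >= fidelity_threshold:
            return tree
    return None
```

Returning the shallowest qualifying tree keeps the explanation as simple as the fidelity constraint allows.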

How sensitive are the results to these choices? In Figure 5(a), we first identify which of our models would be preferred by each interpretability proxy if the local regions were determined by each combination of three variance parameters and three mixing weights (nine combinations). Next, for each of those models, we identify what rank it would have had among the models if one of the other variance or weight parameters had been used. Thus, a rank of 1 indicates that the model identified as the best by one parameter setting is also the best model under the second setting; more generally, a rank of k indicates that the best model by one parameter setting is the k-th-best model under the second setting. The generally low ranks in the figure indicate agreement among the different choices of local parameter settings. The highest mismatch values for the number-of-nodes proxy all correspond to a single variance scaling factor (which we do not use).

Do we need more points to estimate model rank correctly for any of these region settings? We find the best model(s) by each proxy, then re-rank models using a small sample of points to compute the same proxy. We do this for the smallest, middle, and largest settings of the local region parameters (we do not include cross-pairs of parameters in these results). Figure 5(b) shows that different hyperparameter settings require similar numbers of input samples to robustly approximate the integral in equation 4 for a variety of interpretability proxies substituted for HIS.

Appendix D Experimental Details: Human Subject Experiments

In our experiments, we needed to sample input points to approximate the prior in equations 2 and 4. For globally interpretable models, we ask users about the same data points across all models to reduce variance. In the locally interpretable case, we would only conduct user studies for points near the boundary and would thus sample points specific to each model's boundary. Each quiz contained a fixed number of questions per model (the count differed between the pipeline and Amazon Mechanical Turk experiments), with the order randomized across participants. There was also an initial set of practice questions. If the participant answered these correctly, we allowed them to move directly to the quiz; if they did not, we gave them an additional set of practice questions. We excluded people who answered fewer than 3 of each set of practice questions correctly from the Amazon Mechanical Turk experiments.

(a) An example of our interface with a tree trained on the census dataset with the fewest non-zero features. In our experiments, we show people a decision tree explanation and a data point including only the features that appear in the tree. We then ask them to simulate the prediction according to the explanation.
(b) We asked a single user to take the same quiz multiple times to measure the effect of repetition on response time. The difference in mean response time between the first and last quiz is on the order of seconds. The axis scale is the same as that in 3(a), so the magnitude of the learning effect can be directly compared to the magnitude of the differences between models in our experiment.
Figure 7: Interface and learning effect

Experiments with Machine Learning Graduate Students and Postdocs

For the full pipeline experiment, models were chosen sequentially based on the subjects’ responses. We collected responses from several subjects for each model in the experiment with the census dataset, and likewise for each model with the mushroom dataset. (Due to a technical error discovered after the experiment, we recorded extra responses for one iteration and fewer responses for another in the census experiment, and an extra response in one iteration of the mushroom experiment; we do not believe these affected our overall results, and the extra responses are from the same set of participants.) Accuracies were all above the threshold. We ran several iterations of the algorithm, each a quiz consisting of questions about one model, and two evaluations of the same format at the end. We used the mean response time across users to determine HIS. We did not exclude responses, and participants were compensated for their participation. Using the same set of subjects across all of these experiments substantially reduced response variance, although the smaller total number of subjects means we did not see statistically significant differences in our results.

Experiments with Amazon Mechanical Turk

We had initially hoped to use Amazon Mechanical Turk for our interpretability experiments. Here, we were forced to use a between-subjects design (unlike above), because it would be challenging to repeatedly contact previous participants to take additional quizzes as we chose models to evaluate based on the acquisition function.

In pilot studies, we collected responses for the two models selected by the pipeline (the first had a medium mean path length, and the second had a high mean path length), after excluding people who did not get one of the two sets of practice questions right, or who took an implausibly short or long time on any of the questions on the quiz. We asked participants the full set of questions with a break halfway through, and paid them for completing the quiz.

The first model, which had a medium mean path length, and the second model, which had a high mean path length, had clearly overlapping intervals for both mean and median response times (standard error and median standard error, respectively). We could gather more samples to reduce the variance, but cost grows quickly: running one experiment with the end-to-end pipeline at these sample sizes would have been expensive.