Interpreting Complex Regression Models

02/26/2018 ∙ by Noa Avigdor-Elgrabli, et al. ∙ Amazon ∙ Oath Inc.

Interpretation of machine learning induced models is critical for feature engineering, debugging, and, arguably, compliance. Yet, best-of-breed machine learning models tend to be very complex. This paper presents a method for model interpretation whose main benefit is that the simple interpretations it provides are always grounded in actual sets of learning examples. The method is validated on the task of interpreting a complex regression model in the context of both an academic problem -- predicting the year in which a song was recorded -- and an industrial one -- predicting mail user churn.


1. Introduction

Machine learning models are often divided into interpretable and non-interpretable (see (Rudin, 2014) for example). It is well accepted that simple models, such as k-nearest neighbors, linear classifiers and regressors, decision rules, and decision trees can be interpreted by a professional. It is also well agreed that neural networks, non-linear support-vector machines, and decision forests are hard to interpret. Informally, interpretability means that a professional is able to explain the prediction on given examples, as well as the relative importance of different features.

Proponents of using interpretable models often point to their importance as part of the broader data mining process: When feature engineering is a prolonged and iterative process (Anderson et al., 2013) the efficient prioritization of future efforts depends on understanding the predictions of the current model on key examples. Additionally, acknowledging that the data mining process is fallible requires debugging and trust building measures (Ribeiro et al., 2016; Angelino et al., 2017). The establishment of trust is often much easier if stakeholders can understand why the model predicts certain outcomes. Last, it was argued (Wallace, 2017) that modern privacy regulation such as the General Data Protection Regulation has introduced a "right to explanation" of algorithmic decisions, which is hard to satisfy without model interpretability.

Opponents of using interpretable models answer those arguments with one decisive argument: Whichever model works best in the given machine learning task is the one which should be used. The frequent superiority of complex, hard-to-interpret models in common machine learning tasks is the only reason those are used. All other considerations are secondary to performance at best, or mere myth (Lipton, 2016) at worst. The recent breakthrough performance of deep-learning techniques, as well as their fast development cycles, seems to have concluded the argument.

Even as complex models become the standard, the need to prioritize feature engineering and to gain trust in model predictions remains. Thus, the problem becomes one of interpreting a complex model; that is, gaining a measure of understanding of the main reasons a certain prediction has been given, and an understanding of simple relations between features and labels.

One way by which a prediction can be interpreted is sensitivity analysis (Saltelli et al., 2000), i.e., measuring the effect of small variations in feature values on the prediction. The main problem with sensitivity analysis is that it requires an understanding of the features. Specifically, the meaning and legitimacy of "small" changes differ between features. For instance, what would be a "small" change in a gender feature? One way around this problem, practiced by Robnik-Šikonja and Kononenko (Robnik-Šikonja and Kononenko, 2008), is to investigate the change in the output when the feature is missing altogether. In a recent paper, Tolomei et al. (Tolomei et al., 2017) address this problem by making use of naturally occurring variations in feature values among the learning examples when the underlying model is tree-based. Both these methods interpret a model in terms of individual examples and therefore ignore the distribution of variation in a population. Therefore, they run the risk of attributing excessive importance to rare variations in feature values just because those variations influence the model prediction.
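To make the difficulty concrete, the following minimal sketch (illustrative names, assuming a scikit-learn-style `model.predict` and a purely numeric feature vector) implements the naive perturbation form of sensitivity analysis; choosing a meaningful perturbation size is exactly the problem discussed above.

```python
import numpy as np

def sensitivity(model, x, eps=1e-3):
    """Naive sensitivity analysis for a single example.

    Perturbs each feature of `x` by `eps` and records the change in the
    prediction. `eps` is an arbitrary choice here; as noted in the text,
    there is no meaningful "small" change for categorical features such
    as gender.
    """
    base = model.predict(x.reshape(1, -1))[0]
    deltas = np.empty(len(x))
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] += eps
        deltas[j] = model.predict(x_pert.reshape(1, -1))[0] - base
    return deltas  # per-feature effect of a "small" change on the prediction
```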

Ribeiro et al. (Ribeiro et al., 2016) go beyond explaining a model using single features by developing an algorithm, LIME, which finds regions of the feature space in which the prediction of the model can be approximated by a linear model. Since a linear model is considered interpretable, the combination of a region and a suitable linear model is a good explanation. LIME still suffers the disadvantage of ignoring the probability, or even the possibility, of variation in the feature distribution. Biran and McKeown (Biran and McKeown, 2017) suggest considering both the effect of a feature and its importance, which is its average effect on the set of training examples. However, their work is limited to linear models.

We propose a model-agnostic method which interprets a model in terms of relations between events -- distinct sets of examples -- rather than single examples. Our method takes into account the actual probability of an event in the set of training examples. Hence, the explanations are not only aspects of the model but also of the population. For example, consider a model that predicts a user's conversion probability. Our interpretation enables us to find explanations such as "the set of users predicted to have a high conversion probability has a lower proportion of females than the complementary set". The interpretations are therefore always grounded in real sub-populations of examples.

We implement our approach on two real regression problems: One is the explanation of a regression model which predicts the publication year of a song from the public Million Song Dataset (Bertin-Mahieux et al., 2011) (Section 4.2). The other is the explanation of a user churn prediction model which is built from proprietary data (Section 4.1). In both instances, we address the machine learning problem with a gradient-boosted decision forest (Ye et al., 2009; Chen and Guestrin, 2016) with dozens of trees. We demonstrate important inferences about the relation of certain features to the predicted label even though the model we interpret is complex.

2. Problem Definition

Consider a supervised learning problem. Let $F$ be a set of possible features with values from domain $X$ and let $Y$ be the domain of the label. A learning example is a pair of a feature-value vector and a label, denoted by $(x, y) \in X \times Y$. Let $M$ be a model which maps every example to a distribution on $Y$. We assume the model is trained on a set of learning examples $S$ by an algorithm whose objective is to maximize the probability of a correct prediction on a test set $T$ of yet unseen samples.

We wish to describe the model $M$ in simple terms. As any function, $M$ can be described in terms of its partial derivatives at selected points. However, such derivatives are often meaningless if interpreted as variations in the features of a single example. Hence, we define derivatives in terms of the distribution of labels and the distribution of examples. E.g., while it is unnatural to speak of the derivative of the label with respect to a user's gender, it is natural to speak of the proportion of males in a population.

Let $S$ be an i.i.d. sample of examples. Let $C$ be a class of distributions on $Y$ and let $\bar{C}$ be its complement. Given a model $M$, let $S_C$ and $S_{\bar{C}}$ be the subsets of examples that $M$ projects into $C$ and $\bar{C}$, i.e., $S_C = \{(x, y) \in S : M(x) \in C\}$ and $S_{\bar{C}} = S \setminus S_C$, respectively.

Given a set of examples $A$, we let $P_A$ be the empirical distribution of their features. Hence $P_{S_C}$ is the empirical distribution of the features of $S_C$ and $P_{S_{\bar{C}}}$ is the respective empirical distribution defined by $S_{\bar{C}}$. We seek to describe $M$ in terms of the differences between $P_{S_C}$ and $P_{S_{\bar{C}}}$.

Measurement of the difference between empirical distributions is an extremely well-studied problem. Examples of such measures are the Jensen-Shannon divergence, the Bhattacharyya distance, the Student's t statistic and its related t-test, and more. In our implementation we focus on measuring the difference between marginal distributions of single features, mainly to further simplify the description and to reduce measurement noise in what can otherwise be sparse data. We chose Student's t mainly because it can be easily computed on large data and because it is directional.

Recall that $P_{S_C}$ is the empirical distribution of the features of $S_C$. For a feature subset indexed by $I \subseteq F$, the marginal distribution of these features under $P_{S_C}$ is denoted by $P_{S_C}^{I}$. In particular, for $I = \{i\}$ we denote by $P_{S_C}^{i}$ the marginal distribution on the $i$-th feature.

In this paper, instead of $S_C$ we often consider a subset of examples $A \subseteq S$ and its complement $\bar{A} = S \setminus A$. With some abuse of notation, a projection of these examples on a feature subset (marginal) $I$ defines the marginal distributions $P_{A}^{I}$ and $P_{\bar{A}}^{I}$, respectively.

Now, we are going to formally define the model interpretation problem. Then, Example 2.2 will provide an example of the concepts defined in this section.

Definition 2.1 (Model interpretation problem).

Given a model $M$, a set of examples $S$, and a measure $D$ for distribution dissimilarity, the model interpretation problem is to select a class of distributions $C$ out of a family of such classes $\mathcal{C}$, and a marginal $I$ out of a family of marginals $\mathcal{I}$, such that $D\left(P_{S_C}^{I}, P_{S_{\bar{C}}}^{I}\right)$ is maximized.

Now let us present a simple example demonstrating the concepts we defined earlier.

Example 2.2 (Distribution dissimilarity measure).

Assume a set of features and a set of examples where every example is associated with a tuple of feature values. Let the label domain be given, and assume the model maps each example to its label with probability 1.

Then, considering one feature, we have a marginal distribution defined by the sample. With some abuse of notation, we fix a subset of the label domain and its complement; notice that this choice is only one out of multiple options to split the label domain. The split induces the corresponding subsets of examples and their complements.

Then, considering a projection of the examples on that feature, we can apply the measure $D$ to the two resulting marginal distributions.

To solve the model interpretation problem (Definition 2.1), i.e., to maximize the measure, one needs to consider the projections on every feature and all possible options to split the label domain into a class and its complement.

Specifically, this paper is concerned with a problem in which the label domain is $Y = \mathbb{R}$ and the family of distribution classes corresponds to ranges of predicted label values. The domain of all features is numeric, except for missing values. Interpretations are single-feature marginal distributions, and the dissimilarity measure is Student's t-test.
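As a concrete (and deliberately naive) illustration of this instantiation of Definition 2.1, the sketch below brute-forces the search: it enumerates ranges of predicted label values over a quantile grid and, for every single feature, scores the in-range vs. out-of-range marginals with a two-sample t statistic. All names are hypothetical, the feature matrix `X` is assumed to be a NumPy array with `NaN` marking missing values, and `model` is assumed to expose a scikit-learn-style `predict`. Algorithm 1 below replaces this quadratic scan with a far cheaper binning and change-point procedure.

```python
import numpy as np
from scipy import stats

def naive_interpretation(model, X, feature_names, n_grid=20, min_count=30):
    """Return the (t statistic, predicted-label range, feature) maximizing |t|."""
    y_hat = model.predict(X)                                    # predicted labels
    grid = np.quantile(y_hat, np.linspace(0.0, 1.0, n_grid + 1))
    best = None
    for lo in range(n_grid):
        for hi in range(lo + 1, n_grid + 1):
            in_seg = (y_hat >= grid[lo]) & (y_hat <= grid[hi])  # the class C
            if in_seg.sum() < min_count or (~in_seg).sum() < min_count:
                continue                                        # too few examples
            for j, name in enumerate(feature_names):
                a = X[in_seg, j];  a = a[~np.isnan(a)]          # marginal of C
                b = X[~in_seg, j]; b = b[~np.isnan(b)]          # marginal of its complement
                if len(a) < 2 or len(b) < 2:
                    continue
                t, _ = stats.ttest_ind(a, b, equal_var=False)   # two-sample t
                if np.isnan(t):
                    continue
                if best is None or abs(t) > abs(best[0]):
                    best = (t, (grid[lo], grid[hi]), name)
    return best
```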

To ease notation, for the rest of the paper we refer to each label distribution by its mean value, and to each class of label distributions by its mean-value range (when referring to a set of examples). We denote by $[y_{\min}, y_{\max}]$ the overall range obtained by applying $M$ to all examples in $S$. As $M$ and $S$ are fixed, we drop them from the notation when they are clear from the context.

3. Algorithm

The number of potential interpretations of a model is linear in the number of features and quadratic in the number of unique predicted label values. In a complex regression model, it is common that the number of unique predicted label values is proportional to the number of examples. Furthermore, since many of the features can be sparse, many examples are often needed to accurately measure the dissimilarity of in-segment vs. out-of-segment feature values. Searching for the most informative set of descriptions is therefore taxing in terms of runtime performance.

This section describes a three-step algorithm which efficiently selects a set of interpretations for a model. The first step of the algorithm is to measure feature dissimilarity in and out of small bins of the label space. The second step considers the dissimilarity measurement as a random variable and searches, using an efficient linear algorithm, for larger segments in which the random variable is stationary. This greatly reduces the complexity of the problem. In a third step, those larger segments are scored, ranked, and clustered to produce the final outcome. The pseudo-code of the algorithm is given in Algorithm 1 and a graphic representation of the process can be found in Figure 1. A further illustration of the segment selection process can be seen in Figure 2, showing the analysis phases for one feature (see Section 4.2 for more details).

Figure 1. Graphic representation of the algorithm. (a) Illustration of the input label distribution. (b) The examples are partitioned into small segments, each with enough examples. (c) The CUSUM output given the sequence of evaluations of each segment. (d) Illustration of all possible larger segments. (e) The top two segments in terms of the dissimilarity measurement. Phase (b) is done regardless of the feature values; phases (c)-(e) are executed for each feature separately.

Given an example set over a feature set with labels in the range , and given parameters .

  1. Initial binning

    1. Partition the label range into subranges as follows:

      1. Pick sample s.t. , and .

      2. , , else ( is in descending order)

    2. , :
      set

  2. Identify potential segments

    1. apply standard normalization
      (, )

    2. CUSUM()

    3. Order in descending order of the value

    4. Set as follows: Add segments from that do not intersect previously picked segments.

  3. Output top segments

    1. Order in descending order of the value

    2. Set to be the first segments from .
      If required, segments can be selected only over a predefined subset of features.

    3. Output

  4. Cluster segments

    1. Cluster to

    2. Set to be the set of segments, where the segment with the highest value is taken from each cluster.

    3. Order in descending order of the value

    4. Set to be the first segments from .

    5. Output

Algorithm 1 Model interpretation algorithm

3.1. Step 1: Initial binning

The first step of the algorithm is to divide the label space into small segments (bins) which are just large enough to compute the dissimilarity of feature values. We denote the bins as above and set their boundaries by evenly dividing a large sample of predicted labels. Taking into account that some features may be sparse, we aim to have, for each feature and every bin, at least one sample whose value is not missing. The number of segments and the number of samples in an average segment are determined accordingly. See Figure 1(b) for a graphic representation.
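The exact binning parameters are not spelled out above, so the following is only a rough sketch of the idea (illustrative names, `NaN` marking missing feature values): examples are sorted by predicted label, split into equal-count bins, and neighboring bins are merged until every feature has at least one non-missing value in every bin.

```python
import numpy as np

def initial_bins(y_hat, X, n_bins=1000, min_non_missing=1):
    """Return a list of index arrays, one per bin of the predicted-label axis."""
    order = np.argsort(y_hat)                     # sort examples by predicted label
    raw_bins = np.array_split(order, n_bins)      # equal-count initial split
    bins, current = [], []
    for b in raw_bins:
        current.extend(b.tolist())
        # count non-missing values per feature inside the candidate bin
        counts = (~np.isnan(X[current])).sum(axis=0)
        if counts.min() >= min_non_missing:       # every feature is represented
            bins.append(np.array(current))
            current = []
    if current:                                   # leftovers join the last bin
        if bins:
            bins[-1] = np.concatenate([bins[-1], np.array(current)])
        else:
            bins.append(np.array(current))
    return bins
```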

For each feature and every segment we calculate their distributional dissimilarity. We henceforth denote by the dissimilarity value the value of the dissimilarity function for a feature on a segment. The dissimilarity measure we used in our implementation is Student's two-sample t. (We experimented with other measures, such as the Kullback-Leibler divergence, but saw no systematic difference.)

Definition 3.1 (Two-sample t statistic).

Given two samples drawn from distributions $P_1$ and $P_2$, for $i \in \{1, 2\}$ let $\bar{x}_i$, $s_i$, and $n_i$ denote the sample average, standard deviation, and number of elements, respectively. The (Student) two-sample t-test statistic is

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}. $$
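A direct transcription of this statistic (in the unequal-variance form written above; a pooled-variance variant would differ only in the denominator) might look as follows, with `a` holding a feature's values inside a label segment and `b` the values outside it.

```python
import numpy as np

def two_sample_t(a, b):
    """Two-sample t statistic of Definition 3.1 (unequal-variance form)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    m1, m2 = a.mean(), b.mean()                   # sample averages
    v1, v2 = a.var(ddof=1), b.var(ddof=1)         # sample variances
    n1, n2 = len(a), len(b)                       # sample sizes
    return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)
```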

3.2. Step 2: Identify Potential Segments

Given the initial binning, the second step of Algorithm 1 is to compute potential segments which may maximize the dissimilarity. For that, we use the CUSUM change-point detection algorithm (Page, 1954; Hinkley, 1971). CUSUM detects, with some probability of error, segments of bins in which the differences between the dissimilarity values are due to random variation rather than an actual change. The benefits of using CUSUM are that it is linear in the number of observations (bins) and yet asymptotically optimal in terms of the number of errors.

CUSUM has one parameter, which is the size of the changes that should be ignored. To be able to use the same parameter value for features whose scales can vary, we first normalize the values of every feature using a z-score (i.e., subtract their mean and then divide by their standard deviation). The output of CUSUM on the normalized sequence is a set of bins in which the dissimilarity value seems to have undergone a systematic change. This set (see Figure 1(c)) necessarily holds many fewer bins than the initial binning.

The output of CUSUM is a set of change points. A dissimilarity measure such as Student's t increases with the sample size as long as there is no change in the distribution. However, the dissimilarity may still increase even if there has been a change (specifically if the change is minor compared to other segments). Therefore, a potential segment maximizing the measure may start and end at any of the change points detected by CUSUM. Hence, in Step 2 of Algorithm 1 all potential segments are generated. Namely, given the set of all change points, we consider the start and end locations of every potential segment (see Figure 1(d)). After that we pick the segments having the highest values.
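The sketch below shows a textbook two-sided CUSUM over the z-scored per-bin statistics, followed by the enumeration of all candidate segments whose endpoints are change points. The drift and threshold values are illustrative defaults, not the settings used in the paper.

```python
import numpy as np

def cusum_change_points(z, drift=0.5, threshold=4.0):
    """Two-sided CUSUM on a z-scored sequence; returns indices of detected changes."""
    pos = neg = 0.0
    changes = [0]
    for i, x in enumerate(z):
        pos = max(0.0, pos + x - drift)           # accumulate upward deviations
        neg = max(0.0, neg - x - drift)           # accumulate downward deviations
        if pos > threshold or neg > threshold:
            changes.append(i)                     # declare a change point and reset
            pos = neg = 0.0
    changes.append(len(z))
    return sorted(set(changes))

def candidate_segments(change_points):
    """Every potential segment starts and ends at a detected change point."""
    return [(s, e) for i, s in enumerate(change_points)
                   for e in change_points[i + 1:]]
```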

3.3. Steps 3 and 4: Clustering and selection

Here we explain the last two steps of Algorithm 1.

Step 3 - Top Segments selection

The outcome of the maximal-segment selection step is a small set of segments, for every feature, which maximize the in-segment vs. out-of-segment dissimilarity of that feature. Yet the set of features is usually very large, and features, as well as their dissimilar segments, tend to correlate. This step of the algorithm therefore selects a set of interesting and representative segments which should be presented to the user.

Unlike the previous steps, whose outcome is defined in terms of optimizing a statistic, this stage of Algorithm 1 optimizes the user experience. The simplest criterion by which segments are selected for presentation is their dissimilarity. We take advantage of the fact that the Student's t statistic scores all features on a joint scale, and use it as a scoring function. Hence, one view of the output is the top ten (or any other constant number of) most dissimilar segments.

Often, not all features are equally important. For instance, we find that some users are only interested in a subset of the features which they understand. Other users are only interested in aspects which they can control. A second option is therefore to rank only the segments of a subset of the features.

Step 4 - Clustering

Last, features are often strongly dependent. Thus, we find it useful to cluster the segments according to their important characteristics: beginning, end, sign of the Student's t statistic, and the textual similarity of the feature name. We use k-means clustering (Lloyd, 1982; Forgy, 1965) with k-means++ initial seeding (Arthur and Vassilvitskii, 2007). Inspired by (Bischof et al., 1999), we use a minimum description length (MDL) method to select the optimal value of k. The MDL cost of every clustering solution is taken to be the sum of the log of the size of the centroid vectors plus the sum of the log of the distance of every segment from its nearest centroid, and we choose the k which minimizes this cost.
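The following sketch shows one possible reading of this step, using scikit-learn's k-means with k-means++ seeding and a simple MDL-style cost (bits for the centroid vectors plus bits for each segment's distance to its nearest centroid); the exact cost formula used in the paper may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_segments(descriptors, k_max=10):
    """Cluster segment descriptors (start, end, sign of t, name similarity, ...)
    and pick the number of clusters k by an MDL-style cost."""
    n, d = descriptors.shape
    best = (None, np.inf, None)                            # (k, cost, model)
    for k in range(1, min(k_max, n) + 1):
        km = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(descriptors)
        dists = np.linalg.norm(descriptors - km.cluster_centers_[km.labels_], axis=1)
        cost = k * d * np.log2(n) + np.sum(np.log2(1.0 + dists))
        if cost < best[1]:
            best = (k, cost, km)
    return best[0], best[2]                                # chosen k and fitted model
```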

Figure 2. 'Pop punk' feature: an example of the segmentation process. (a) average value per initial segment; (b) t-test per initial segment; (c) selected segments.

4. Experiments

We applied the model interpretation algorithm to two regression models. The first is a model which was developed for an actual industrial use case of predicting changes in the activity of users of one of the largest e-mail service providers. The other is a model which was developed for the Million Song Dataset and which predicts a song's release year. The second data set is freely accessible.

In both instances, a gradient-boosted decision forest (GBDT) (Ye et al., 2009; Chen and Guestrin, 2016) was trained to predict the numeric label. GBDT typically induces hundreds or thousands of trees, each of which has a numeric prediction in every leaf. The prediction of the forest is the weighted sum of the predictions of the trees. It is fair to say that GBDT is very hard to interpret directly. The most that existing packages provide is a feature importance metric, which is the count of the tree splits that use each feature. However, under this statistic a feature which is used by 10% of the splits across one hundred trees would have a higher count than a feature which is used at the root of every one of those trees.
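For reference, this is roughly how such a split-count report is obtained from XGBoost (`X_train` and `y_train` are assumed to exist); note that features never chosen for a split are simply absent from the report, which is exactly the weakness discussed in Section 4.1.

```python
import xgboost as xgb

# Train a GBDT regressor on the numeric label (illustrative hyperparameters).
model = xgb.XGBRegressor(n_estimators=100, max_depth=6)
model.fit(X_train, y_train)

# 'weight' counts how many splits use each feature, regardless of where in
# the tree the split occurs or how much it changes the prediction.
split_counts = model.get_booster().get_score(importance_type="weight")
for name, count in sorted(split_counts.items(), key=lambda kv: -kv[1])[:10]:
    print(name, count)
```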

4.1. Interpreting a user activity change model

A large mail provider tracks the activity of its users using multiple metrics and trains a predictive model which predicts changes in that activity. Beyond the value of anticipating a change before it occurs, the provider's main purpose is to distill the systematic aspects of users' changes in activity. However, since the predictive model is opaque, the provider requires an interpretation of the model.

The model we interpret is trained on data collected from one million users over a period of eight weeks. The features include user features (gender, age, zip code, country, state, browser-cookie age, device type, and roaming) and user activity (user seen, counters of actions like read, send, and search, and organize actions like archive, move, mark as unread, mark as spam, and delete). On top of the user activity we calculate raw counts, basic statistics like the mean value and standard deviation (stdDev), and more complex evaluations like the parameters of a fitted Holt-Winters (FC) time-series model or the parameters of a CUSUM change-detection evaluation on the user's activity vector. The label is the difference of activity in a key metric between the ninth and the tenth weeks. The predicted label is perceived by the provider as that user's risk. Figure 3 depicts the success of the model in predicting a change in user behavior vs. a random benchmark. It can be seen that the model is able to capture about 20% of the users, who are responsible for nearly 60% of the risk. This is despite a low Spearman correlation of 0.11 between the predicted label and the actual label.

Figure 3. Average change of activity as a function of the percent of the user population when ranked according to predicted risk vs. when ranked at random. The sharp increase of the average change at lower percentages indicates that the model captures those users who do change their activity.
Figure 4. Top segments over all features. (a) segment's t-test per bin; (b) segment vs. complement population.
Figure 5. Interpretation in terms of advanced mail and user features. (a) advanced mail features; (b) user features.
Figure 6. Clustering of segments. (a) one representative from each cluster; (b) 'ReadNum_meanVal' cluster; (c) 'UserSeen' cluster; (d) 'Bucket_totalValue' cluster.
Feature Importance
ReadNum_stdDevVal 154
ReadNum_meanVal 151
ReadNum_meanVal_CUSUM_ARL 150
ReadNum_meanVal_FC_ERROR 111
ReadNum_meanVal_FC_PREDICTION 88
UserSeen_FC_ALPHA 85
UserFeature_bcookieAge_max 84
UserSeen_FC_ERROR 80
UserAge 79
ReadNum_meanVal_FC_ALPHA 77
(a) GBDT feature importance
Feature Segment Student's t Importance
ReadNum_meanVal_CUSUM_ARL [0,102] -3030.06 150
ReadNum_meanVal_CUSUM_ARL [103,999] 3030.06 150
UserSeen [213,958] -2404.71
MailActionsCount [213,958] -2404.71 73
MailActionsCount [0,190] 2276.71 73
UserSeen [0,190] 2276.71
ReadNum_meanVal [532,941] -2170.19 151
CityId_totalValue [191,962] -2099.33 23
CityId_totalValue [0,171] 1989.84 23
ReadNum_totalVal [514,919] -1958.32 72
(b) Strongest interpretations
Table 1. Feature importance comparison - User Model

We executed the interpretation algorithm with initial bins where the lower (left) bins correspond to the sharpest decrease in activity and the higher (right) bins to the sharpest increase. Because of business secrecy we cannot share actual numbers for this use case and instead report percentiles. Figure 4(a) depicts the ten most dissimilar segments the algorithm identified, in terms of their label segment and their Student's t. Figure 4(b) depicts the relative change of the average feature value in the segment vs. out of the segment. For instance, the feature "ReadNum_meanVal_CUSUM_ARL", which measures how unpredictable a user's reading behavior is, turned out to be a very strong indicator of high risk: for the roughly 10% of users whose risk is highest, that feature is three times lower than average, that is, those users are less predictable. For the other 90%, the feature is on average four times higher than average. Other features, which are perhaps more surprising, are a higher number of cities visited by the user, which relates to high risk, and a lower mail readership, which relates to low risk.

This ability to uncover hidden relations is further stressed when the user requests to interpret the model in terms of a specific set of features. Figure 5(a) depicts the most dissimilar segments of features related to advanced mail features such as searching or organizing the mailbox. Such a view is important when developing new features for the product. Our analysis revealed that higher-risk users make more use of some of those advanced features, but with a higher standard deviation, which might mean that they try a feature but do not become attached to it. On the other hand, Figure 5(b) shows that lower-risk users modify their 'StateId' and use 'roaming' more often, which probably implies that they travel more.

Simply sorting the explanations according to their Student's t is unsatisfying because many of the shown features are related to one another. For instance, the "ReadNum_totalCount" feature counts the number of read mails, and the "MailActionsCount" feature also counts other actions but is dominated by reads. Clustering greatly increases the expressiveness of the output. Figure 6 depicts a summary of just one representative from each cluster (Fig. 6(a)) and then a breakout of all of the segments which were clustered together, for three specific clusters. This hybrid view better describes the way different features interact with the label and with each other.

Table 1 compares the interpretations provided by our algorithm to the GBDT feature importance report. One drawback of the GBDT feature importance is that it only takes into account the features used by the GBDT model. Therefore, if there are several correlated features, GBDT might choose some of them and ignore the others, while in our analysis all of the informative features will arise. Indeed, the most striking difference between Table 1(a) and Table 1(b) is that "UserSeen" (the number of active days the user had in eight weeks) has one of the most significant segments but is not even listed in the GBDT feature importance report. Apparently, when making split decisions, GBDT always found other features more informative at the local level, and thus the global importance of the "UserSeen" feature is overlooked in the feature importance report. Moreover, the order of the features in the GBDT table is determined by the number of appearances in the GBDT forest and does not necessarily represent the true importance of a feature. For example, "MailActionsCount" is listed as number 17 in the GBDT feature importance report (not shown in the table), well below the "ReadNum_meanVal" feature. The strongest-interpretations report reveals that in two specific segments of the label space "MailActionsCount" is in fact much more related to the label than "ReadNum_meanVal".

Lastly, the most obvious difference is that, unlike our method, GBDT may list its most important features, but its report does not contain the label ranges in which those features exhibit outstanding behavior.

4.2. Interpreting a song release year model

To allow our results to be reproduced on an academic dataset, we built and then interpreted a model for the publicly available Million Song Dataset (MSD) (Bertin-Mahieux et al., 2011). That dataset consists of nearly one million songs and a large number of features for every song. As a toy prediction task, we chose to train a model which predicts the release year of a song. We scanned the meta-parameters of GBDT by training one thousand models, each with 25K examples, and testing each model on 10K disjoint examples. We then chose the model which showed the best Spearman correlation between label and predicted label, 0.204. We stress that the predictive accuracy of the model is not central to the interpretation task, although a model which produces random predictions will have no interpretation.
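A hypothetical reconstruction of this meta-parameter scan is sketched below (the parameter grid and the `X_train`/`y_train`/`X_test`/`y_test` splits of 25K and 10K examples are assumptions for illustration): sample parameter settings, train a GBDT on each, and keep the model with the best Spearman correlation on the held-out examples.

```python
import xgboost as xgb
from scipy.stats import spearmanr
from sklearn.model_selection import ParameterSampler

param_space = {"max_depth": [3, 5, 7],
               "learning_rate": [0.03, 0.1, 0.3],
               "n_estimators": [50, 100, 200]}

best_rho, best_model = -1.0, None
for params in ParameterSampler(param_space, n_iter=20, random_state=0):
    model = xgb.XGBRegressor(**params).fit(X_train, y_train)   # 25K training examples
    rho, _ = spearmanr(y_test, model.predict(X_test))          # 10K disjoint examples
    if rho > best_rho:
        best_rho, best_model = rho, model
```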

Figure 7. Top segments over all features. (a) segments' range and Student's t; (b) segment values vs. complement population values.
Figure 8. Most descriptive segments of different feature groups. (a) target audience age; (b) gender; (c) location; (d) musical decade.
Figure 9. Clustering of segments. (a) one representative from each cluster; (b) 'indie' cluster; (c) 'soul - early years' cluster; (d) 'soul - late years' cluster.

Having trained and selected the model, we used the full set of one million songs to interpret it. The MSD dataset has 7,400 features relating to musical, vocal, and geographic properties of the song, as well as genre and textual features.

An example of one such feature is the "pop_punk" genre indicator. Figure 2(a) exemplifies the propensity of that feature for songs which were released in different years. As can be seen, the genre gained popularity starting in the early 90s and became prominent after the turn of the millennium. Figure 2(b) depicts the dissimilarity of each of one hundred bins to the rest of the period. Figure 2(c) shows that the maximal dissimilarity is found when the years are segmented into those before and those after roughly 1996.

Figure 7 depicts the most dissimilar feature segments in the release-year prediction model. Some of those features are to be expected: more recent genres such as "hip_hop" and "indie_rock" describe songs which are predicted to be more recent. It is also not surprising that lower familiarity of the artist is indicative of an older release date. Still, it is educational to learn from our interpretation that loudness is the hallmark of songs which were released at and after the turn of the millennium.

Finally, the ability to uncover hidden relations is further stressed when the user requests to interpret the model in terms of a specific set of features. As an example, Figure 8 depicts the most significant features while concentrating on age, gender, and location-related features and on musical-decade features.

As can be seen in Figure 9, the interpretation algorithm provides a diverse description of the release date of a song. If the artist is familiar, then the year is recent, and if the song is loud then it was probably released after the 90s. Soul music is representative of pre-90s music, and if we look at the cluster of related features (Figure 9(c)) we find other early genres such as classic rock, funk, and r&b.

5. Conclusions

Over the past decade there has been a clear trend within the machine learning community towards ever more complex models. This trend is driven by breakthroughs in machine learning, such as the advent of deep learning, and by the exponential growth of both data and processing power. It seems that the age of simple, intelligible models is over. However, that progress does not invalidate the need to understand the resulting model; a need which is rooted in the persisting functions of humans as designers and auditors of the learning process. The interpretation of complex machine learning models can be seen as the human-computer interaction (HCI) aspect of machine learning.

This work presents a concept and an algorithm for interpreting complex models. We expand the language of explanation to relations between populations of examples and families of label distributions. We show that such explanations can be efficiently found in large data. We exemplify the meaningfulness of such explanations with two test cases: the generally available and well-understood Million Song Dataset and an industrial problem from the mail-serving domain.

In future work, we plan to develop explanations that are multivariate rather than univariate. We also plan to explore the value of different partitionings of the label space. For example, we can partition the label space by looking for features which have dissimilar distributions in a segment, below the segment, and above the segment. Alternatively, the label space can be partitioned using a distributional distance metric; e.g., the greater the proportion of males in the population, the less likely the distribution of labels is to resemble a specific distribution.

References

  • Anderson et al. (2013) Michael R Anderson, Dolan Antenucci, Victor Bittorf, Matthew Burgess, Michael J Cafarella, Arun Kumar, Feng Niu, Yongjoo Park, Christopher Ré, and Ce Zhang. 2013. Brainwash: A Data System for Feature Engineering.. In CIDR.
  • Angelino et al. (2017) Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, and Cynthia Rudin. 2017. Learning Certifiably Optimal Rule Lists. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 35–44.
  • Arthur and Vassilvitskii (2007) David Arthur and Sergei Vassilvitskii. 2007. K-means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA ’07). Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1027–1035. http://dl.acm.org/citation.cfm?id=1283383.1283494
  • Bertin-Mahieux et al. (2011) Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere. 2011. The Million Song Dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011).
  • Biran and McKeown (2017) Or Biran and Kathleen McKeown. 2017. Human-Centric Justification of Machine Learning Predictions. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI ’17).
  • Bischof et al. (1999) Horst Bischof, Ales Leonardis, and Alexander Selb. 1999. MDL Principle for Robust Vector Quantisation. Pattern Anal. Appl. 2, 1 (1999), 59–72. https://doi.org/10.1007/s100440050015
  • Chen and Guestrin (2016) Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16). ACM, New York, NY, USA, 785–794. https://doi.org/10.1145/2939672.2939785
  • Forgy (1965) E. Forgy. 1965. Cluster Analysis of Multivariate Data: Efficiency versus Interpretability of Classification. Biometrics 21, 3 (1965), 768–769.
  • Hinkley (1971) D.V. Hinkley. 1971. Inference about the Change-Point from Cumulative Sum Tests. Biometrika 58 (1971), 509–523. http://www.jstor.org/stable/2334386
  • Lipton (2016) Zachary C. Lipton. 2016. The Mythos of Model Interpretability. In ICML Workshop on Human Interpretability of Machine Learning.
  • Lloyd (1982) Stuart P. Lloyd. 1982. Least squares quantization in PCM. IEEE Trans. Information Theory 28, 2 (1982), 129–136. https://doi.org/10.1109/TIT.1982.1056489
  • Page (1954) E. S. Page. 1954. Continuous inspection schemes. Biometrika 41, 1–2 (1954), 100–115. https://doi.org/10.1093/biomet/41.1-2.100
  • Ribeiro et al. (2016) Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016. 1135–1144. https://doi.org/10.1145/2939672.2939778
  • Robnik-S̆ikonja and Kononenko (2008) Marko Robnik-S̆ikonja and Igor Kononenko. 2008. Explaining Classification for Individual Instances. IEEE Transactions on Knowledge and Data Engineering (TKDE) 20 (2008), 589–600.
  • Rudin (2014) Cynthia Rudin. 2014. Algorithms for Interpretable Machine Learning. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’14). ACM, New York, NY, USA, 1519–1519. https://doi.org/10.1145/2623330.2630823
  • Saltelli et al. (2000) Andrea Saltelli, Karen Chan, E Marian Scott, et al. 2000. Sensitivity analysis. Vol. 1. Wiley New York.
  • Tolomei et al. (2017) Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking. In KDD.
  • Wallace (2017) Nick Wallace. 2017. EU’s Right to Explanation: A Harmful Restriction on Artificial Intelligence. (2017).
  • Ye et al. (2009) Jerry Ye, Jyh-Herng Chow, Jiang Chen, and Zhaohui Zheng. 2009. Stochastic Gradient Boosted Distributed Decision Trees. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM ’09). ACM, New York, NY, USA, 2061–2064. https://doi.org/10.1145/1645953.1646301