Latent Factor Interpretations for Collaborative Filtering

11/29/2017
by   Anupam Datta, et al.

Many machine learning systems utilize latent factors as internal representations for making predictions. However, since these latent factors are largely uninterpreted, predictions made using them are opaque. Collaborative filtering via matrix factorization is a prime example of such an algorithm that uses uninterpreted latent features, and yet has seen widespread adoption for many recommendation tasks. We present Latent Factor Interpretation (LFI), a method for interpreting models by leveraging interpretations of latent factors in terms of human-understandable features. The interpretation of latent factors can then replace the uninterpreted latent factors, resulting in a new model that expresses predictions in terms of interpretable features. This new model can then be interpreted using recently developed model explanation techniques. In this paper, we develop LFI for collaborative filtering based recommender systems, which are particularly challenging from an interpretation perspective. We illustrate the use of LFI interpretations on the MovieLens dataset, demonstrating that latent factors can be predicted accurately enough to replicate the predictions of the true model. Further, we demonstrate the accuracy of interpretations by applying the methodology to a collaborative recommender system built from DBTropes and IMDB data together with synthetic user preferences.




1 Introduction

Many machine learning systems utilize latent factors as internal representations for making predictions. Since these latent factors are largely uninterpreted, however, predictions made using them are opaque. Recommender systems that perform collaborative filtering via matrix factorization are prime examples of such machine learning systems. These systems are state-of-the-art in important application domains, including movie and social recommendations [Koren et al.2009, Kabiljo and Ilic2015]. However, these models are difficult to interpret because they express user preferences and item characteristics along a set of uninterpreted latent factors trained from a sparse set of user ratings.

We present Latent Factor Interpretation (LFI), a method for interpreting models by leveraging expressions of latent factors in terms of human-understandable features, and develop it in the particular setting of collaborative filtering. In order to interpret models that use uninterpreted latent factors, we address three challenges.

The first challenge is that learnt latent factors are numeric constants that are uninterpretable to humans; any explanation in terms of these factors would be unintelligible. To address this problem, we learn a mapping from interpretable features to these latent factors. We then compose the mapping with the rest of the model. In our setting, we compose the interpretation of item latent factors with user latent factors to make recommendations (see Fig. 1). We call the composed model a shadow model.

Our second challenge is that this composed shadow model still remains too complex for direct interpretation. However, since the shadow model expresses ratings in terms of interpretable features, we can leverage existing model explanation techniques [Datta et al.2016, Ribeiro et al.2016]. In particular, in this paper, we determine influential features using an existing technique [Datta et al.2016] (see Fig. 3 for an example). Note that the purpose of the shadow model is not to supplant the recommender system, but to interpret its predictions.

The third challenge is maintaining correspondence between interpretations and the models they explain. Re-expressing a system via a shadow model does not guarantee that interpretations constructed from the shadow represent the functioning of the original. In our approach, we substitute predicted item latent factors but keep the remaining structure of the recommender system intact. Therefore, the effects of the item factors on recommendations in the shadow model are identical to those in the original. By demonstrating a level of accuracy in predicting both the latent factors and the resulting recommendations, we can claim that our interpretations are meaningful because the shadow model makes similar recommendations for similar reasons.

Figure 1: Shadow models constructed by training predictors for latent features from interpretable meta-data, and composing these predictors with the rest of the system.

Results of user studies [Tintarev and Masthoff2007] indicate that the features most important to movie recommendations are largely interpretable ones, such as average rating and keywords, which we find can be derived from auxiliary sources. In our example LFI interpretation of a movie recommender system, we predict the latent factors from such important interpretable features and others derived from auxiliary data sources including IMDB and DBTropes [IMDB2016, Kiesel2018]. An interpretation for a given recommendation thus indicates important, human-understandable features behind it, e.g., a high recommendation for Star Trek arose for a particular user because its genre is sci-fi and it has keywords indicating action in space. Since the recommendations of the shadow model are close to those of the latent model, this interpretation also serves as an interpretation of the recommendations of the latent model.

As a proof of concept, we apply our techniques to a movie recommendation system based on matrix factorization over the popular MovieLens dataset with data integrated from several other movie databases, producing interpretable explanations for recommendations.

This technique of training an approximate, but interpretable shadow model for LFI is similar in spirit to approaches to explaining other machine learning systems [Craven and Shavlik1995, Ribeiro et al.2016, Sanchez et al.2015]. An important difference is that prior work has explored this idea using the features present in the task itself, or using pre-defined mappings to an interpretable space. Instead, we use externally available interpretable features and learn the mapping to an interpretable space. We also differ from existing approaches that attribute meaning to latent factors, e.g. with topic models[Rossetti et al.2013], in that the constructed shadow model is itself a recommendation model, albeit with interpretable inputs, and is therefore amenable to existing explanation techniques for machine learning models. We demonstrate this point by applying a recently developed input influence measure[Datta et al.2016] to build interpretable explanations for recommendations. We focus on this approach because it does not make assumptions about the complexity of the models involved and allows us to tailor explanations to individual users (or individual recommendations). User studies have identified the latter as the most important aspect of effective explanations[Tintarev and Masthoff2007].

This paper makes the following contributions:

  • We present Latent Factor Interpretation (LFI), a method for interpreting models by leveraging expressions of latent factors in terms of human-understandable features, and develop it in the particular setting of collaborative filtering.

  • We demonstrate how the approach applies to a real world use-case of a movie recommendation system trained from the MovieLens dataset and integrating auxiliary data from IMDB and DBTropes.

  • We demonstrate the accuracy of the approach for matrix-factorized models by constructing movie recommendation explanations for synthetic individuals with known preferences.

This paper is structured as follows. We begin with a brief background (§2) on matrix factorization for recommender systems and on quantitative input influence, which serve as the building blocks of our approach and its evaluation later in the paper. We then describe the construction of the shadow model and the computation of influence as means for interpreting recommendations (§3). We demonstrate the utility of our approach using synthetic and real use-cases derived from the MovieLens [GroupLens2017] movie database augmented with various information sources (§4). We discuss related work (§5) and conclude with a summary of contributions and directions for future work (§6).

2 Background

In a general sense, recommender systems discover items, such as movies, that users have not previously encountered but are likely to enjoy. Numerous recommender systems have been proposed in the literature, making use of varying forms of data and providing a variety of types of recommendations [Adomavicius and Tuzhilin2005].

In this section, we discuss a particular type of collaborative recommender system based on matrix factorization (§2.1). We conclude the section with an overview of quantitative input influence, the main tool we will employ to construct explanations for recommendations (§2.2).

2.1 Matrix factorization for Recommendations

Recommendation systems, as the name implies, are models that give recommendations to users regarding items they would enjoy or prefer. Formally, we are given a set of users $U$, a set of items $I$, and a sparse $|U| \times |I|$ matrix $R$ of ground-truth ratings, and we need to fill in the missing elements of the matrix, that is, predict ratings.

A state-of-the-art method for constructing recommendation models is matrix factorization [Koren et al.2009]. The technique associates with each user a set of preferences over some number $k$ of latent features, and with each movie a measure of expression of those features. Formally, the model is composed of a $|U| \times k$ matrix $P$ and a $k \times |I|$ matrix $Q$, and the predicted ratings for all user-movie pairs are described by the $|U| \times |I|$ matrix $\hat{R} = PQ$. Thus each prediction $\hat{r}_{ui}$ for the rating of item $i$ by user $u$ is the dot product of the $k$-length vector $p_u$, expressing that user's preferences over the latent factors, and the $k$-length vector $q_i$, expressing the extent to which item $i$ exhibits those latent factors. The model factors the ground-truth matrix $R$ into the matrix product of $P$ and $Q$. The choice of $k$, or rank, varies.

Several algorithms exist for this task, though the choice is not important in this paper. Our experimental results are based on the implementation of alternating least squares in Apache Spark. Our implementation and experiments are available online [artifact].
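As a concrete illustration, a minimal PySpark sketch of this training step is shown below; the file name, column names, rank, and regularization value are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Minimal ALS sketch (assumed MovieLens-style columns: userId, movieId, rating).
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("mf-recommender").getOrCreate()
ratings = spark.read.csv("ratings.csv", header=True, inferSchema=True)

# Factorize the sparse rating matrix R into user factors P and item factors Q.
als = ALS(rank=12, regParam=0.1, userCol="userId", itemCol="movieId",
          ratingCol="rating", coldStartStrategy="drop")
model = als.fit(ratings)

# model.userFactors / model.itemFactors hold the learned latent factor vectors;
# model.transform fills in predicted ratings for (user, item) pairs.
predictions = model.transform(ratings)
```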

2.2 Quantitative input influence

We now briefly review a family of measures, called Quantitative Input Influence (QII), presented by Datta et al. [Datta et al.2016], which measure the influence of a feature on the outcomes of a model. QII can be tailored to a particular quantity of interest about the system, such as the outcomes of the model over a population, the outcome for a particular instance, or other statistics of the system. We use this influence measure to identify influential metadata in shadow models. In particular, we will use QII to measure the influence of metadata on the predicted rating for a specific user and movie pair.

At its core, QII measures the influence of features by breaking their potential correlations with other input features. This focuses the measurements on the explicit use of a particular feature and not on its use via other correlated features.

Formally, given a model $f$ that operates on a feature vector $x$, the influence of a feature $j$ on the outcome for $x$ is given by the expected difference in outcomes when feature $j$ is randomly perturbed:

$$\iota_j(x) = \mathbb{E}_{x_j'}\big[\, f(x) - f(x_{-j}, x_j') \,\big],$$

where $(x_{-j}, x_j')$ denotes $x$ with its $j$-th feature replaced by $x_j'$. The expectation is over samples $x_j'$ of feature $j$, drawn independently from its marginal distribution.
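A minimal sketch of this unary influence computation is given below; the function and argument names are our own, and the prediction function `f` is treated as an arbitrary black box.

```python
import numpy as np

def qii_influence(f, x, j, population, n_samples=100, seed=0):
    """Expected change in f(x) when feature j is resampled from its marginal.

    f          : callable mapping a 1-D feature vector to a scalar prediction
    x          : instance being explained (1-D numpy array)
    j          : index of the feature whose influence is measured
    population : 2-D array of observed instances; column j supplies the marginal
    """
    rng = np.random.default_rng(seed)
    baseline = f(x)
    # Draw fresh values of feature j independently from its marginal distribution.
    fresh = rng.choice(population[:, j], size=n_samples)
    perturbed = np.tile(x, (n_samples, 1))
    perturbed[:, j] = fresh
    # Average difference between the original and perturbed predictions.
    return float(np.mean([baseline - f(row) for row in perturbed]))
```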

3 Methods

Our approach to interpreting recommender systems based on matrix factorization comprises two steps. First, we use publicly available interpretable features (i.e., metadata) about items to predict the latent factors of those items, and we compose these latent factor predictors with the rest of the system into a model that predicts outcomes for particular users. Second, this shadow model, composed of predictors for the latent factors, is used to generate human-understandable explanations of outcomes by identifying the most influential interpretable features.

3.1 Metadata sources

In the case of movies, we use several sources of publicly available metadata attributes, such as genres and directors, which are one-hot encoded to obtain numerical features.
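For illustration, a small pandas sketch of this encoding follows; the column names and values are hypothetical, with the real attributes drawn from IMDB and DBTropes.

```python
import pandas as pd

# Hypothetical metadata rows; the real attributes come from IMDB and DBTropes.
meta = pd.DataFrame({
    "movieId": [1, 2, 3],
    "genre": ["sci-fi", "comedy", "sci-fi"],
    "director": ["director_a", "director_b", "director_a"],
})
# One-hot encode nominal attributes into numerical indicator features.
features = pd.get_dummies(meta, columns=["genre", "director"])
```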

3.2 Shadow Model

We assume that we are given a matrix of interpretable attributes $A$, with one row $a_i$ for each item $i$. For each item latent factor $f$, we train a predictor $\hat{q}_f$ such that $\hat{q}_f(a_i) \approx q_{if}$. Composing these predictors with the user factors, the final predicted recommendation for a user $u$ and item $i$ can be approximated as follows:

$$\hat{r}_{ui} \approx \sum_{f=1}^{k} p_{uf}\, \hat{q}_f(a_i).$$

Consequently, we use the composed model as a model that predicts the outcomes of the system for a user $u$ and a movie with interpretable attributes $a_i$. This shadow model is more interpretable insofar as it maps interpretable attributes to ratings. However, it is still fairly complex. Therefore, to interpret the behavior of the shadow model on a point $(u, a_i)$, we examine the influences of the interpretable attributes using QII.
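A minimal sketch of this construction follows, assuming the item factor matrix Q (one row per item), the user factor vector p_u, and the one-hot metadata matrix A are available as numpy arrays; decision trees are used here as the per-factor predictors, matching one of the model types evaluated later.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_factor_predictors(A, Q, max_depth=5):
    """Train one regressor per latent factor, mapping metadata row a_i to q_{i,f}."""
    return [DecisionTreeRegressor(max_depth=max_depth).fit(A, Q[:, f])
            for f in range(Q.shape[1])]

def shadow_rating(p_u, a_i, predictors):
    """Shadow-model rating: dot product of user factors with predicted item factors."""
    q_hat = np.array([g.predict(a_i.reshape(1, -1))[0] for g in predictors])
    return float(p_u @ q_hat)
```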

3.3 Interpreting the Shadow Model

We interpret the shadow model by measuring the quantitative input influence of all metadata features on its output. This can be measured either on the output for a particular user-item pair, in which case the question being answered is “why were you given this recommendation?”, or on the entirety of the model’s predictions for this user over all items, in which case the measure answers “what has the model inferred about your preferences in general?”. In its raw form, an interpretation is a list of feature-influence pairs, but it can be naturally visualized as in Figure 3.

3.4 Measuring latent factor accuracy

We measure the quality of the shadow model by computing the mean absolute error of its predictions relative to those of the original model, which we call the baseline.

Another metric of the quality of the shadow model is how closely it agrees with the original model on the latent factors themselves. For each factor, we compute the mean absolute error (MAE) of the latent factor prediction. Averaging over all factors, we get a measure of the overall latent factor agreement.
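Both agreement metrics reduce to mean absolute errors; a small sketch, assuming the shadow and original ratings and factor matrices are available as numpy arrays, is:

```python
import numpy as np

def rating_mae(shadow_ratings, original_ratings):
    """Observational agreement: MAE between shadow and original predicted ratings."""
    return float(np.mean(np.abs(shadow_ratings - original_ratings)))

def latent_factor_mae(Q_hat, Q):
    """Latent factor agreement: per-factor MAE, averaged over all factors."""
    per_factor = np.mean(np.abs(Q_hat - Q), axis=0)
    return float(per_factor.mean())
```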

4 Results

We evaluated our methods on movie recommendations after integrating ratings data with several additional sources of movie metadata. We note some relevant details about the datasets we used in §4.1 and briefly describe our implementation in §4.2.

In §4.3 we describe our experiments over several combinations of parameters of both the recommender system itself and the shadow model. We find that the overall performance improves with higher ranks and that decision tree models perform best as shadow models, although there can be a trade-off between rating and latent factor agreement.

We present the recommendation interpretations we can derive using these predictions in §4.4, finding that usually only a relatively small number of metadata features is influential in the final decision. Finally, in §4.5 we describe some experiments on synthetic data, with which we verify that this approach can derive the true causes of recommendations.

4.1 Datasets

The source of our data was MovieLens 20M Dataset[GroupLens2017], which contains approximately 20 million ratings of 27,000 movies by 138,000 distinct users. Ratings are on a 1-5 integer range.

Additionally we included various movie features from the Internet Movie DataBase (IMDB)[IMDB2016] and DBTropes data[Kiesel2018], a machine-readable snapshot of TV Tropes.

Overall, the three sources of data contain a wealth of movie information. Of the most relevant factors for recommendations as noted in user studies[Tintarev and Masthoff2007], a substantial portion can be determined to some extent from the metadata we have collected.

Pre-processing

We used all movie ratings from the MovieLens 20M dataset to construct the recommender system. For the subsequent steps, however, we performed several pre-processing steps.

We encoded nominal features via one-hot encoding and, in a feature selection step, dropped those not meeting a minimal entropy threshold. For training and evaluating explanations, we also pruned away movies with missing or negligible metadata. We justify this step because a deployed recommendation explanation system could itself recognize its lack of metadata and notify users of this fact instead of providing a potentially inaccurate explanation for its recommendation.
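The entropy-based pruning can be sketched as below; the threshold value is an illustrative assumption, not the one used in our experiments, and identifier columns are assumed to have been excluded beforehand.

```python
import numpy as np
import pandas as pd

def entropy_bits(column):
    """Shannon entropy (in bits) of a binary one-hot (0/1) column."""
    p = float(np.mean(column))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def select_features(features: pd.DataFrame, threshold=0.05):
    """Keep one-hot columns whose entropy exceeds a minimal threshold."""
    keep = [c for c in features.columns
            if entropy_bits(features[c].to_numpy()) > threshold]
    return features[keep]
```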

4.2 Implementation

Our implementation is based on a set of Python programs that make use of the Apache Spark library for model training and evaluation.

4.3 Learning recommendations and latent features

The MovieLens ratings constituted the sparse user-item input matrix for training the recommender system. This data is also the ground truth for evaluation purposes later in this section. The ground truth was processed with the alternating least squares (ALS) matrix factorization algorithm, as implemented in Apache Spark MLlib, which outputs two matrices, user features and movie features, that encode user preferences and movie properties along a low-dimensional space of latent features.

Figure 2: Mean absolute error of shadow models compared to real models with different parameters. Two metrics are shown: error in predicting latent factors of the real model, and error in predicting the ratings the real model gives

We trained recommender systems, constructed shadow models, and measured the prediction error of both individual latent factors (which can then be averaged across all of them) and the overall predicted ratings, iterating over several possible parameters: rank and regularization parameter for the recommender, type of the shadow model (linear or decision tree), and the number of bins and maximal depth for tree models. The results of these experiments are summarized in Figure 2.

It can be seen that linear models consistently perform worse on both metrics than decision trees. However, the difference in performance is much larger on observational agreement than on latent factor agreement. We hypothesize that this could be due to the linear regression models having a consistent bias that accumulates during the matrix multiplication, which is consistent with our observations.

The observational agreement for linear models also gets worse with higher ranks, whereas the latent factor agreement gets better, which is also consistent with the bias hypothesis.

In our experiments, one model (rank 50, tree, lambda 0.1, depth 5, 32 bins) performed best on both metrics, but in general there can be a trade-off between them. Namely, if we exclude the best-performing model, we see that different models are second-best for latent factor agreement (rank 20, tree, lambda 0.1, 8 bins, depth 3) and for observational agreement (rank 12, tree, lambda 0.1, 8 bins, depth 5), although their performance is reasonably close.

4.4 Interpreting recommendations

(a) User 7’s recommendation about Lake Placid (1999)
(b) User 21’s recommendation about Lake Placid (1999)
(c) User 7’s recommendation about Inspector Gadget (1999)
(d) User 21’s recommendation about Inspector Gadget (1999)
Figure 3: A sampling of QII-based recommendation interpretations based on shadow models over the MovieLens 20M dataset for User 7 (left) and User 21 (right).

To construct interpretations of the recommendations produced by the shadow model, we measure the influence of each metadata feature on the rating the shadow model produces. For a particular recommendation (a user-movie pair), the influence of a metadata feature (see §2.2) in this setting measures the expected change in the output of the recommender (the rating) if we replace only that metadata feature with a fresh value sampled independently from its marginal distribution, while all other metadata features are kept fixed.

Several examples of the resulting influence measures can be seen in Figure 3. For two users, we show the top 10 most influential features in their recommendations for two different movies. We order the influential metadata features on the y-axis and chart their influence, measured in units of the predicted rating, on the x-axis.

4.5 Validation against Known Preferences

In the absence of user studies, we simulated users generating movie ratings based on known rules in order to verify the hypothesis that our system can detect the true user preferences. For this approach we generated a synthetic dataset based on a simple user preference and rating simulation.

In the simulation, a set of user preferences was generated to be expressible exactly in terms of randomly selected movie metadata features. When randomly assigned to a user, these preferences either increased or decreased that user's simulated movie ratings. Ratings were generated for a randomly selected set of movies for each user.

We trained a matrix factorization model on the synthetic data set and performed our analysis based on the shadow model construction and QII measurement. We then calculated a score from 0 to 1 indicating the closeness of the measured QII values to the known metadata features that were actually used to simulate ratings. This score was then compared to its value calculated with respect to sets of independently randomized or partially randomized user preferences (not the ones used to generate the synthetic dataset). The expectation is that the evaluation score will be statistically significantly higher for the true synthetic personalities than for the only partially related semi-random personalities, and in turn higher for those than for fully random personalities. As demonstrated in Table 1, this hypothesis holds given sufficiently large sample sizes.

Additionally, we manually constructed a matrix factorization model that directly encodes the simulated user preferences, and found that such a system captures the synthetic personalities perfectly, rather than merely better than the controls. This suggests that some of the errors in determining user preferences could be due to the recommender system trainer’s inability to learn the right predictive model (even if one exists) rather than due to inaccuracies in our system of shadow models.

Parameters | t. mean | s.r. mean | r. mean | t. > s.r.: p | t. > s.r.: e.s. | s.r. > r.: p | s.r. > r.: e.s.
N=20, 3 pr, rn 3, 15 h.e.f. | 0.75 | 0.51 | 0 | 6e-11 | 3.3 | 1e-20 | 14.2
N=20, 8 pr, rn 3, 40 h.e.f. | 0.26 | 0.2 | 0 | 0.03 | 0.8 | 8e-11 | 4.2
N=20, 8 pr, rn 8, 40 h.e.f. | 0.4 | 0.3 | 2e-4 | 0.02 | 0.5 | 1e-23 | 4.5
N=20, 10 pr, rn 15, any 250 | 0.22 | 0.19 | 0 | 0.1 | 0.5 | 7e-11 | 4.2
N=49, 10 pr, rn 15, any 250 | 0.22 | 0.19 | 0 | 0.02 | 0.5 | 1.5e-27 | 4.6
Table 1: Synthetic data set hypotheses testing. The parameters of the experiments include: sample size, number of user preference profiles, rank of the matrix factorization model, and the strategy of selecting features for generating profiles. “h.e.f.” stands for highest-entropy features, “any 250” stands for any features with more than 250 non-zero values, “s.r.” stands for semi-random, “r.” stands for random, “t.” stands for true, “e.s.” stands for effect size, “pr” stands for profiles, “rn” stands for rank.

5 Related Work

Existing approaches to address the interpretability of latent factors either attempt to associate them with some item content, or to present them via the relationships they encode in the items and users of a system. We summarize these approaches in this section. We also discuss the relationship of our methods to other approaches for making machine learning interpretable via shadow models and interpretability constraints.

Associations

Rossetti et al. [Rossetti et al.2013] use topic modeling to extract topics from movie descriptions and then associate topics with latent factors in a matrix-factorized model. Their topics take the form of bags of words and are not as directly interpretable as the movie features we consider in our work. Further, they develop this association so that the recommendation model becomes portable: new users specify their preferences over topics, and the technique can then provide them recommendations by injecting the topic-latent matrix into the usual matrix factor model.

Presentation

Koren et al. [Koren et al.2009] show that movies and users can sometimes be understood in terms of their proximity to other movies and users. Plotting users and movies according to their latent features or certain projections can result in recognizable clusters. These clusters can then be suggestive of user personalities and of movie characteristics that may not have been part of their extrinsic characteristics. For example, they show how groups of movies form clusters that roughly correspond to movies with strong female leads and to fraternity humor.

In a related line of work, Hernando et al. [Hernando et al.2013] present the design of a tool in which recommendation explanations take the form of a graph with users and movies as nodes, arranged to designate proximity in the latent feature space.

Shadow Models

Our approach of training an interpretable shadow model that mimics the behavior of the true uninterpretable model is similar to a general approach for explaining machine learning algorithms [Thrun1994, Craven and Shavlik1995, Lehmann et al.2010, Ribeiro et al.2016], and has also been applied to matrix factorization techniques [Sanchez et al.2015]. These approaches either use features present in the input space or map to an interpretable space using handcrafted mappings. LFI uses externally provided interpretable features and learns a mapping to the latent space. Similar to us, Gantner et al. [Gantner et al.2010] use externally provided interpretable features to train a shadow model. They do this to alleviate the cold-start problem, as their shadow models allow them to recommend items without ratings. Our focus, however, is explanations for recommendations. Whereas they can recommend rating-less items, we can provide an explanation for such recommendations. We theorize that explanations can further alleviate the cold-start problem, as explanations for recommendations of new items can encourage users to rate them.

Interpretability Constraints

An orthogonal approach to adding interpretability to machine learning is to constrain the choice of models to those that are interpretable by design. This can proceed either through regularization techniques such as the Lasso [Tibshirani2011], which attempt to pick a small subset of the most important features, or by using models that structurally match human reasoning, such as Bayesian Rule Lists [Letham et al.2015], Supersparse Linear Integer Models [Ustun et al.2013], or Probabilistic Scaling [Rüping2006]. For recommender systems, one approach in this family is non-negative matrix factorization (NMF) [Lee and Seung1999], which enforces a level of interpretability by constraining latent features to be non-negative. Even for NMF, the mapping to interpretable features could be useful for discovering the concepts encoded in these latent factors.

Cold-start in collaborative filtering

Collaborative recommender systems like those based on matrix factorization suffer from the cold-start problem: recommendations cannot be provided for new users or new items without an existing set of ratings by those users or for those items, respectively.

Several works address this problem by establishing connections between latent factors and content features, as we do in our approach for constructing explanations [Gantner et al.2010]. However, our evaluation metrics are optimized for a different goal.

6 Conclusions and Future Work

We describe a method for interpreting the recommendations of latent factor models for collaborative filtering. We construct a shadow model that agrees with the latent factor model both in its predictions and in the latent factors themselves, which are predicted from interpretable features available from auxiliary data sources. The metadata-expressed latent factors are then used to make recommendations as in the original model. In contrast to prior work, the shadow model is not interpretable by design. In fact, it is more complex than the original model. However, since its input features are interpretable, its recommendations can be explained using input influence measures from prior work.

We apply this method to a movie recommendation system based on matrix factorization over the popular MovieLens dataset with auxiliary data from IMDB and TV Tropes, producing interpretable explanations for recommendations. We find that the influence measures that quantify the impact of interpretable features on recommendation ratings in the shadow model are a reasonable and concise way of interpreting the functioning of the latent factor recommender system.

There are several avenues for future work. One interesting direction is the design of explanations for hybrid content/collaborative recommender systems, which use some interpretable features along with user ratings, making them amenable to influence measures, though only partially via their interpretable inputs. Other open questions include formally characterizing the conditions under which this interpretation method effectively reveals user preferences, as well as the limits that arise from a lack of informativeness in auxiliary data sources. A related direction involves validating these explanations with real users through user studies.

Acknowledgments

This work was developed with the support of NSF grant CNS-1704845 as well as by DARPA and the Air Force Research Laboratory under agreement number FA8750-15-2-0277. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of DARPA, the Air Force Research Laboratory, the National Science Foundation, or the U.S. Government.

References

  • [Adomavicius and Tuzhilin2005] Gediminas Adomavicius and Alexander Tuzhilin. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE transactions on knowledge and data engineering, 17(6):734–749, 2005.
  • [Anonymous2018] Anon Anonymous. Supplementary code and experiments for submission. https://sites.google.com/site/explainfactors/, 2018.
  • [Craven and Shavlik1995] Mark W. Craven and Jude W. Shavlik. Extracting tree-structured representations of trained networks. In Proceedings of the 8th International Conference on Neural Information Processing Systems, NIPS’95, pages 24–30, Cambridge, MA, USA, 1995. MIT Press.
  • [Datta et al.2016] Anupam Datta, Shayak Sen, and Yair Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 598–617. IEEE, 2016.
  • [Gantner et al.2010] Zeno Gantner, Lucas Drumond, Christoph Freudenthaler, Steffen Rendle, and Lars Schmidt-Thieme. Learning attribute-to-feature mappings for cold-start recommendations. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 176–185. IEEE, 2010.
  • [GroupLens2017] GroupLens. Movielens 20m dataset. https://grouplens.org/datasets/movielens/20m/, 2017. Accessed: 2017-03-24.
  • [Hernando et al.2013] Antonio Hernando, JesúS Bobadilla, Fernando Ortega, and Abraham GutiéRrez. Trees for explaining recommendations made through collaborative filtering. Information Sciences, 239:1–17, 2013.
  • [IMDB2016] IMDB. Imdb alternative interfaces. http://www.imdb.com/interfaces, 2016. Accessed: 2016-11-17.
  • [Kabiljo and Ilic2015] Maja Kabiljo and Aleksander Ilic. Recommending items to more than a billion people. https://code.facebook.com/posts/861999383875667/recommending-items-to-more-than-a-billion-people/, 2015. Accessed: 2016-11-17.
  • [Kiesel2018] Malte Kiesel. Dbtropes. http://skipforward.opendfki.de/wiki/DBTropes, 2018. Accessed: 2018-1-31.
  • [Koren et al.2009] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
  • [Lee and Seung1999] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788–791, 1999.
  • [Lehmann et al.2010] Jens Lehmann, Sebastian Bader, and Pascal Hitzler. Extracting reduced logic programs from artificial neural networks. Applied Intelligence, 32(3):249–266, June 2010.
  • [Letham et al.2015] Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, and David Madigan. Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model. Ann. Appl. Stat., 9(3):1350–1371, 09 2015.
  • [Ribeiro et al.2016] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?": Explaining the predictions of any classifier. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 1135–1144, New York, NY, USA, 2016. ACM.
  • [Rossetti et al.2013] Marco Rossetti, Fabio Stella, and Markus Zanker. Towards explaining latent factors with topic models in collaborative recommender systems. In Database and Expert Systems Applications (DEXA), 2013 24th International Workshop on, pages 162–167. IEEE, 2013.
  • [Rüping2006] Stefan Rüping. Learning interpretable models. PhD thesis, Dortmund University of Technology, 2006. http://d-nb.info/997491736.
  • [Sanchez et al.2015] Ivan Sanchez, Tim Rocktaschel, Sebastian Riedel, and Sameer Singh. Towards extracting faithful and descriptive representations of latent variable models. AAAI Spring Symposium on Knowledge Representation and Reasoning (KRR): Integrating Symbolic and Neural Approaches, 2015.
  • [Thrun1994] Sebastian Thrun. Extracting rules from artificial neural networks with distributed representations. In Proceedings of the 7th International Conference on Neural Information Processing Systems, NIPS’94, pages 505–512, Cambridge, MA, USA, 1994. MIT Press.
  • [Tibshirani2011] Robert Tibshirani. Regression shrinkage and selection via the lasso: a retrospective. Journal of the Royal Statistical Society Series B, 73(3):273–282, 2011.
  • [Tintarev and Masthoff2007] Nava Tintarev and Judith Masthoff. Effective explanations of recommendations: user-centered design. In Proceedings of the 2007 ACM conference on Recommender systems, pages 153–156. ACM, 2007.
  • [Ustun et al.2013] Berk Ustun, Stefano Tracà, and Cynthia Rudin. Supersparse linear integer models for interpretable classification. ArXiv e-prints, 2013.