Embedding models for recommendation under contextual constraints

06/21/2019 · Syrine Krichene, et al. · Criteo

Embedding models, which learn latent representations of users and items based on user-item interaction patterns, are a key component of recommendation systems. In many applications, contextual constraints need to be applied to refine recommendations, e.g. when a user specifies a price range or product category filter. The conventional approach, for both context-aware and standard models, is to retrieve items and apply the constraints as independent operations. The order in which these two steps are executed can induce significant problems. For example, applying constraints a posteriori can result in incomplete recommendations or low-quality results for the tail of the distribution (i.e., less popular items). As a result, the additional information that the constraint brings about user intent may not be accurately captured. In this paper we propose integrating the information provided by the contextual constraint into the similarity computation, by merging constraint application and retrieval into one operation in the embedding space. This technique allows us to generate high-quality recommendations for the specified constraint. Our approach learns constraint representations jointly with the user and item embeddings. We incorporate our methods into a matrix factorization model, and perform an experimental evaluation on one internal and two real-world datasets. Our results show significant improvements in predictive performance compared to context-aware and standard models.


1 Introduction

Embedding models, such as those based on matrix factorization, have become a core component of recommendation models. These models involve learning latent representations of users and items based on user-item interaction patterns, such as explicit feedback in the form of users’ ratings on items, or implicit feedback in the form of clicks. Many applications use contextual constraints to refine recommendations; for instance, a user may specify a price range or product category of interest. The contextual constraints can be seen as filters: the displayed recommendations have to fit the chosen filters. The standard approach is to implement the retrieval of recommended items and the application of constraints as independent operations, which can lead to significant problems. Applying constraints either before or after retrieval can result in incomplete recommendations, or low-quality results for the tail of the distribution (less popular items).

The constraints can be incorporated into the model as context features in a context-aware model. Such models assume that the context is correlated with the user and provides static information about them; in other words, a user can be described by the set of contexts chosen in the past. In this paper we assume instead that the setting may change: a user can choose any context independently of their history, while the constraint distribution remains constant. This choice of setting is motivated by the sparsity of the observed data (the contextual constraints) and by the fact that the constraints can be continuous. A newly chosen context representation can be orthogonal to the user’s historical context representation, and this can lead to spurious recommendations.

Instead of treating retrieval and constraint application as separate, independent operations, we propose merging filtering and retrieval into one operation in the embedding space by integrating the information provided by the constraint into the similarity computation. We implement this by using the constraint to modify the user’s embedding vector, since the constraint specifies contextual information that denotes a similarity between the user and a class of items (those that satisfy the constraint). We present several different parametrizations of constraints, suitable for category-based filters. For constraints based on a continuous range of values (e.g., price, duration, etc.) we discretize and extract categories. User-item interactions for some types of contexts or constraints may be quite sparse. In this paper we focus on contextual constraints described by a set of features rather than by words; we therefore exclude search and, more generally, natural language processing tasks.

Our method consists of representing the contextual constraints as transformations of the embedding space. We still learn a single vector representation for each user and each item.

The main contributions of this work are the following:

  • We introduce a new framework that describes recommendation under contextual constraints using sparse data, and discuss the limitations of baseline models.

  • We propose a new approach, adapted to this new setting, for learning contextual constraint representations. The contextual constraint information is incorporated into a matrix factorization model as a transformation of the user embedding space.

  • We introduce linear and non-linear transformations of the user embedding space and adapt these methods to neural matrix factorization models.

  • We evaluate different configurations of our approach in different settings. These settings are motivated by real-life use cases. We compare our models to context-aware and non-context-aware models on one private and two public datasets.

We present the contextual setting framework, a review of related work, and background on constraints for recommendation systems in Section 2. In Section 3 we introduce our modeling approaches for contextual constraints adapted to matrix factorization and neural matrix factorization models. Section 4 shows that our approaches can lead to significant improvements in predictive performance compared to baseline approaches.

2 Framework and Related Work

2.1 Framework for Recommendation under contextual constraints

A common approach to recommendation is to retrieve items for a user $u$ depending on the items’ relevance to the user. The relevance of an item $i$ to a user $u$ is only observed a posteriori, once the user has purchased or interacted with the item and provided feedback $r_{u,i}$; we denote the matrix containing the feedback for each pair $(u, i)$ as $R$. The vast majority of the entries in $R$ are unobserved, since users provide only limited feedback.

Hence, in order to recommend new items to the user, one needs to model the conditional distribution of $r_{u,i}$ given $(u, i)$ to infer the unobserved entries of the feedback matrix $R$. While $R$ may contain different types of feedback (such as ratings, or the number of interactions), up to a normalization, we assume that $r_{u,i} \in [0, 1]$. Practically, a simple approach is to model the conditional expectation of $r_{u,i}$, which can be seen as a similarity between $u$ and $i$:

$s(u, i) := \mathbb{E}[r_{u,i} \mid u, i]$

Then, we can fit a parametric modelling $s_\theta$ of $s$ by minimizing the empirical squared loss on the data available at training time (we consider the squared loss for simplicity, but it could be replaced by any other Bregman divergence, e.g. the log-loss, as they are all minimized in expectation by the conditional expectation):

$\mathcal{L}(\theta) = \frac{1}{n} \sum_{(u, i, r) \in \mathcal{D}} \left(r - s_\theta(u, i)\right)^2 \qquad (1)$

where $n$ is the number of training instances and $\mathcal{D}$ is the set of observed feedback triplets. We refer the reader to Sec. 2.2 for common modellings of $s_\theta$.

Constrained Retrieval

In this work, we want to take into account how the feedback depends on some constraints, set by the user $u$, on the items she is willing to rate; these constraints should be treated as a context as soon as they are informative of the feedback distribution. We want to take into account constraints of the form: "items of brand X", "restaurants opened at 7pm", "a movie in either the action or comedy genre", or disjunctions of the latter. To formalize this, we assume that there exists a feature map $\phi$ that associates one or several values in a finite set $\mathcal{V}$ to each item $i$, e.g. the genres of a movie. A set of values is represented by a binary vector in $\{0,1\}^{|\mathcal{V}|}$; for simplicity of notation, we will also denote this binary representation by $\phi(i)$. Finally, as a constraint can be a disjunction of values, it can be formalized as an element of the power set of $\mathcal{V}$, represented by a binary vector $c \in \{0,1\}^{|\mathcal{V}|}$. An item $i$ is said to satisfy the constraint $c$ if $\phi(i)^\top c \geq 1$, i.e. if the item has at least one value of the feature in common with the constraint (e.g. one matching movie genre).
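To make the encoding concrete, here is a minimal sketch (illustrative Python; the genre list and helper names are our own, not from the paper) of an item feature vector $\phi(i)$, a disjunctive constraint $c$, and the satisfaction test $\phi(i)^\top c \geq 1$:

```python
# Illustrative sketch (not the authors' code): items carry a binary feature
# vector phi(i) over a finite value set V (e.g. movie genres), and a constraint
# c is a disjunction of values, also encoded as a binary vector.
import numpy as np

GENRES = ["action", "comedy", "drama", "horror"]           # the finite set V

def encode(values):
    """Encode a set of values as a binary vector over V."""
    return np.array([1.0 if g in values else 0.0 for g in GENRES])

phi_i = encode({"action", "drama"})                        # feature map phi(i)
c = encode({"action", "comedy"})                           # constraint "action OR comedy"

# An item satisfies the constraint iff it shares at least one value with it.
satisfies = phi_i @ c >= 1
print(satisfies)  # True: the item is an action movie, which matches the filter
```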

Learning with constraints:

Now that we have formalized the set of constraints that can be input by a user to restrict the set of items she aims for, we consider our modeling of the reward process. We need to model the conditional distribution of $r$ given $(u, i, c)$, and we assume that we have a 3-D tensor $R$ of feedback, indexed by users, items, and constraints. Ultimately, we are interested in estimating the conditional expectation:

$s(u, i, c) := \mathbb{E}[r \mid u, i, c] \qquad (2)$

It is clear that due to the combinatorial nature of the constraint space $\{0,1\}^{|\mathcal{V}|}$, we need to carefully consider our modeling choice for $s_\theta(u, i, c)$, in order to keep the inference complexity manageable and avoid substantial estimation problems. Given a finite set of available data $\mathcal{D} = \{(u_k, i_k, c_k, r_k)\}_{k=1}^{n}$, we will fit $s_\theta$ by minimizing:

$\mathcal{L}(\theta) = \frac{1}{n} \sum_{(u, i, c, r) \in \mathcal{D}} \left(r - s_\theta(u, i, c)\right)^2 \qquad (3)$

Note here that we implicitly assume the joint distribution of $(u, i, c)$ to be the same at training and test time, i.e., there is no distribution shift. A more robust (and statistically harder) task would be to learn to predict well in the worst case over the possible constraints that the user may input at test time. Formally, this could be handled by replacing the marginalization over $c$ in (3) by an infinite norm. However, this norm would need to be relaxed, since the infinite norm is not straightforward to optimize; we therefore omit further discussion of the infinite norm.
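As a quick illustration of the objective in (3), the sketch below (assumed data layout and scorer signature, not the paper's code) evaluates the empirical squared loss of a generic scorer over observed (user, item, constraint, reward) tuples:

```python
# Minimal sketch of the empirical objective in (3), assuming a generic scorer
# s_theta(u, i, c); names and data layout are illustrative, not from the paper.
import numpy as np

def empirical_loss(score_fn, data):
    """Mean squared error over observed (user, item, constraint, reward) tuples."""
    losses = [(r - score_fn(u, i, c)) ** 2 for (u, i, c, r) in data]
    return np.mean(losses)

# Toy example with a constant scorer on two observations.
toy_data = [(0, 1, np.array([1.0, 0.0]), 1.0),
            (2, 3, np.array([0.0, 1.0]), 0.0)]
print(empirical_loss(lambda u, i, c: 0.5, toy_data))  # 0.25
```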

2.1.1 Causal Graphical model

The causal graphical model of a dataset is the causal representation of the variables of the generative process of the data. In our setting we have the item, the user, the rating, and the constraint. There are several different possible representations for a given dataset. Usually we try to simplify the graph, and use the simplest possible model. Figure 1 shows the different possible causal graphical models for our setting (left to right):

  • In the first graph, we suppose that the user doesn’t directly cause the final rating, and it is only through the constraint that he impacts the rating for an item. Therefore, conditioning on both the item and the constraint allows us to fully explain the rating. The best model to use in this setting could be a simple linear model, with enough features to fully describe the constraints. This setting requires a complete description of the constraints.

  • In the second setting, we suppose that the constraint doesn’t have a direct impact on the rating. However, the constraint is correlated with the rating: the constraint indirectly informs us about the unknown user intent, and thus conditioning on it still allows us to improve the rating prediction.

  • The third setting is simply the combination of both previous scenarios.

While in theory the first and third settings are perfectly valid, the reality lies more often in the second setting: observing the constraint informs us about the user’s hidden intent, allowing to better predict the rating.

Figure 1: Causal graphical models for the different settings when observing a contextual constraint $c$ for a user $u$, who interacted with an item $i$ and gave feedback $r$.

2.1.2 Contextual constraints provide information about instantaneous user utility

In cases where we observe the second graph in Figure 1, we know that the contextual constraint cannot impact the reward, as there is no causal link between the two nodes. However, knowing the contextual constraints set by the user may help us to refine the model. The constraints give us additional information about the intent of the user, and the observed reward varies according to the user intent, which is proportional to the user’s instantaneous utility. [1] describes the importance of time on the instantaneous utility of an item for a user: the instantaneous utility is usually described as a smooth convex function, which gives the reward for an item as a function of time. The utility’s time argument corresponds to the elapsed time from the moment when the user first observed the item. The utility estimates the extent of the user’s interest in purchasing an item, while the reward is the feedback from the user. The reward can be observed after the purchase, in the case of item ratings, or immediately after the recommendation, such as for the click-based recommendation task. In the case of ratings, it is possible to observe a maximum utility followed by a low reward: if the user is not satisfied with the purchased item, then we expect a very low reward (since the user had significant interest in purchasing the item, and is then disappointed with the item after purchase). For both cases, the reward is at its optimum when the instantaneous utility’s maximum is reached. Therefore the observed reward may depend on the time at which it is observed.

The instantaneous utility may introduce a bias in the learned model. For example, suppose we learn a model that predicts the probability of click under a normal setting, with no observed constraints. The model may recommend a good item for a particular user, but if the instantaneous utility is low, then the user may not click. In this case our evaluation metrics will indicate that the behaviour of the model is incorrect, and this will increase the variance of the evaluation metric. Another use case we consider occurs when the observations used for training contain non-clicked items observed with a very low user utility. In this case, the model will learn that such items are very bad to recommend for any user, which introduces error in the prediction.

In general recommendation settings, the dataset often doesn’t provide any information about instantaneous utility for users. In the contextual constraint-based setting (where all the observations contain contextual constraints), by setting a constraint the user gives us information about her intent, indicating that she is looking for an item with specific characteristics. Given this information we suppose that the expectation of the instantaneous utility computed over all users for the constraint setting is higher than the expectation computed for the general setting without constraints. We also expect lower variance for the contextual constraint setting. These phenomena reduce the prediction and learning error. In other words, the second and third graphs shown in Figure 1 can benefit from the contextual constraint setting when learning a contextual or non-contextual model.

2.2 Background and related work

Matrix factorization (MF) models have been one of the most popular and successful recommendation approaches of the past decade [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] for modelling $s(u, i)$. These models learn latent representations of users and items based on patterns in observed interactions between users and items, using loss functions that are similar to (1). Many variants of MF models exist, including probabilistic models [3, 4, 7, 8, 9, 10], models for implicit feedback [7, 9, 12, 15, 16, 18], models that support metadata [10, 11], and models that incorporate social network information [5, 6, 8], among others.

2.2.1 Matrix Factorization

The first family of approaches that we consider are MF models that do not involve context. MF models are based on dimensionality reduction. Let $P$ and $Q$ represent the embedding matrices for users and items, respectively, and $d$ represent the number of embedding dimensions. We therefore have $P \in \mathbb{R}^{|\mathcal{U}| \times d}$ and $Q \in \mathbb{R}^{|\mathcal{I}| \times d}$, and the similarity function between $u$ and $i$ is given by:

$s_\theta(u, i) = p_u^\top q_i + b_u \qquad (4)$

where $p_u$ and $q_i$ are the embedding vectors (rows of $P$ and $Q$) and $b_u$ is the bias term for the user. We fit $\theta = (P, Q, b)$ by minimizing a penalized version of (1):

$\mathcal{L}(\theta) = \frac{1}{n} \sum_{(u, i, r) \in \mathcal{D}} \left(r - s_\theta(u, i)\right)^2 + \lambda \left(\lVert P \rVert_F^2 + \lVert Q \rVert_F^2\right) \qquad (5)$

where $\lambda$ controls the penalty.

This model supposes that we ignore context, or gain no additional information from context, and that the similarity between items and users is completely captured by examining the ratings. At recommendation time, cosine similarity is often used to compute the best products to recommend to a specified user. Cosine similarity is computed over the vector representations of the user and the item. If the user and item vectors are orthogonal, then the user is far from the item, and therefore the item will not be recommended to this user. If the user and item vectors have the same orientation, then the cosine similarity is at its highest value of 1, and the item will be recommended for this user. The best products to recommend are those that maximize the cosine similarity score.
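The following sketch illustrates the MF score of (4) and the cosine-similarity retrieval step described above, assuming already-learned embedding matrices; sizes and variable names are placeholders, not the paper's implementation:

```python
# Sketch of MF scoring (4) and cosine-similarity retrieval, assuming learned
# embedding matrices P (users x d) and Q (items x d); purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 5, 10, 4
P = rng.normal(size=(n_users, d))      # user embeddings
Q = rng.normal(size=(n_items, d))      # item embeddings
b = rng.normal(size=n_users)           # per-user bias terms

def mf_score(u, i):
    """Similarity of equation (4): dot product plus user bias."""
    return P[u] @ Q[i] + b[u]

def top_k(u, k=3):
    """Retrieve the k items with highest cosine similarity to user u."""
    sims = Q @ P[u] / (np.linalg.norm(Q, axis=1) * np.linalg.norm(P[u]) + 1e-12)
    return np.argsort(-sims)[:k]

print(mf_score(0, 1), top_k(0))
```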

2.2.2 Context-aware Matrix Factorization

Context-aware recommender systems (CARS) have been proposed in order to address a limitation of conventional recommendation models and incorporate contextual features into these models. In CARS, contextual features can be related to users or items. We consider the contextual constraints as item features, as context may change the utility observed for a particular item. This allows us to use Context-Aware Matrix Factorization (CAMF-CI), presented in [20], as a baseline model. If we consider contextual constraints as contextual features, then our loss function is:

$\mathcal{L}(\theta) = \frac{1}{n} \sum_{(u, i, c, r) \in \mathcal{D}} \Big(r - p_u^\top q_i - b_u - \sum_{v \in \mathcal{V}} c_v\, b_{i,v}\Big)^2 + \lambda \left(\lVert P \rVert_F^2 + \lVert Q \rVert_F^2\right) \qquad (6)$

where $B = (b_{i,v})$ is the contextual feature matrix, and we have one variable to learn per feature per item. In practice we learn only the variables $b_{i,v}$ for which the feature $v$ is compatible with the item $i$. Due to the sparsity of the contextual constraint data, the probability of observing orthogonal context vectors is high. If at recommendation time a user specifies a context with a vector that is orthogonal to his history, then the recommendation will be almost random, since the model will recommend items from the tail of the item popularity distribution.

2.2.3 Context-aware collaborative filtering MF: NNMF

Context-aware collaborative filtering systems are a class of recommendation systems based on the past experiences of the user. Some of these models use the user’s timeline of past item interactions to predict future preferences. In our setting the timeline is not given and we assume that we cannot recover it. Context-aware collaborative filtering models use multiple features that describe users and items, which is not the case in our setting, as we only have access to the user id, item id, and contextual features. For all collaborative filtering models, features related to the user’s past perfectly describe the user’s feedback. In this paper we suppose that for a particular user, the set of contexts is not fully explored and may change at recommendation time. Therefore, it is problematic to recommend items to a user under a context vector that is orthogonal to the context vectors seen in his past. For example, the model introduced in [21] learns one representation of a user for each context. If the user has never been seen in a particular context, then the contextual representation of that user will be random. We use as a baseline model an adaptation of a neural matrix factorization model [22], where the embedding vector of a user or product is learned as the output of a neural net (NNMF):

$s_\theta(u, i, c) = p_u^\top g_\omega(i, c) \qquad (7)$

where $g_\omega(i, c)$ is the output vector of the neural net, which takes as input the contextual feature $c$ and the item $i$. Unfortunately, the high capacity of this model is more of a handicap when the context space is combinatorial: it is very prone to over-fitting.
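For concreteness, here is a hedged sketch of an NNMF-style scorer in the spirit of (7): the item-side embedding is the output of a small network fed with the item and the constraint vector. The one-hidden-layer architecture and all sizes are our own illustrative assumptions:

```python
# Hedged sketch of the NNMF baseline (7): the item-side embedding is produced
# by a small neural network taking the item one-hot and constraint vector as
# input. Architecture and sizes are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, V, d, hidden = 5, 10, 6, 4, 16
P = rng.normal(size=(n_users, d))                     # user embeddings
W1 = rng.normal(size=(n_items + V, hidden)) * 0.1     # first layer of g_omega
W2 = rng.normal(size=(hidden, d)) * 0.1               # second layer of g_omega

def g_omega(i, c):
    """Neural item/context embedding: one hidden layer with ReLU."""
    x = np.concatenate([np.eye(n_items)[i], c])
    return np.maximum(W1.T @ x, 0.0) @ W2

def nnmf_score(u, i, c):
    return P[u] @ g_omega(i, c)

print(nnmf_score(0, 3, np.array([1., 0., 0., 1., 0., 0.])))
```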

3 Adapted Embedding Models for Contextual Constraints

On one hand, we aim for more expressivity than CARS models, as the context encodes constraints which represent information regarding the user-item interaction, and not simply an additional item-centric or user-centric component to the score. On the other hand, we can’t afford to have as much flexibility as NNMF due to the combinatorial nature of the constraint space. To resolve this dilemma, we propose using the context to parameterize the similarity between the user and item in the MF embedding space. Our approach represents the constraints as a transformation of the embedding space. Given two embedding matrices $P$ and $Q$, and a transformation $T_c \in \mathbb{R}^{d \times d}$ associated with the constraint $c$, we score a triplet with $(T_c\, p_u)^\top q_i + b_u$ and fit the parameters by minimizing:

$\min_{P, Q, b, T} \; \frac{1}{n} \sum_{(u, i, c, r) \in \mathcal{D}} \left(r - (T_c\, p_u)^\top q_i - b_u\right)^2 + \lambda \left(\lVert P \rVert_F^2 + \lVert Q \rVert_F^2\right) \qquad (8)$

This decomposition can be interpreted in two ways: $T_c\, p_u$ represents the user in the context of the constraint provided as input, and $T_c^\top q_i$ represents an item in the context of a given filter. This formulation also has several useful scaling properties. First, the user embeddings and item embeddings are independent from the context, which allows us to avoid the need to parameterize the embedding computation as in NNMF, and helps prevent overfitting. Second, contrary to CARS, this decomposition encodes the effect of the context on the interaction between users and items. Lastly, with a careful choice of $T_c$, the minimization shown in (8) can be done in an alternating fashion, which has several advantages compared to stochastic gradient descent of the joint loss in (8), such as straightforward parallelization and better convergence properties.

For our experiments we use several different formulations of $T_c$. If we look at (8), while updating the user vector representation, the model will learn only one representation for the user that satisfies different contexts. This task is more difficult than the standard MF task. However, due to the sparsity of the data, a user rarely sets a very high number of contexts, and the task then becomes easier to solve.
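The sketch below illustrates the constrained similarity used in (8) with a generic dense transformation $T_c$ (Sections 3.1 and 3.2 specialize its structure); it also checks the two equivalent readings of the decomposition:

```python
# Sketch of the constrained similarity in (8): the user embedding is transformed
# by a constraint-dependent matrix T_c before the dot product with the item.
# T_c here is a generic dense matrix; Sections 3.1 and 3.2 specialize it.
import numpy as np

rng = np.random.default_rng(2)
d = 4
p_u, q_i = rng.normal(size=d), rng.normal(size=d)
T_c = rng.normal(size=(d, d))          # transformation associated with constraint c
b_u = 0.1

score = (T_c @ p_u) @ q_i + b_u        # "user in the context of the constraint"
score_alt = p_u @ (T_c.T @ q_i) + b_u  # equivalently, "item in the context of the filter"
assert np.isclose(score, score_alt)
print(score)
```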

3.1 $T_c$ as a linear transformation

Our first approach for formulating $T_c$ is to define it as a linear operator over a set of base transformations. Denoting by $W$ a tensor containing a $d$-by-$d$ matrix $W_v$ for each value $v$ in $\mathcal{V}$, we define

$T_c = \sum_{v \in \mathcal{V}} c_v\, W_v$

Depending on the size of $\mathcal{V}$, we can use relatively simple variants of $W$. For example, we could use a low-rank factorization of $W$, or simply a slice-wise diagonal structure. For the sake of scalability, in our experiments we choose to constrain $W$ to be diagonal for each slice, i.e., for any $v$, the $d$-by-$d$ matrix $W_v$ is diagonal. We denote the resulting algorithm, using this formulation of $T_c$ and optimizing (8), as diagonal constraints for matrix factorization, or DC-MF. In Section 4, we also use a more constrained version as a baseline, which corresponds to setting $W$ such that for any $v$, $W_v = w_v I_d$, where $I_d$ denotes the identity matrix of size $d$. We denote this version of the model as weighted constraints for matrix factorization (WC-MF). We expect this model to provide good performance for settings with a high number of overlapping constraints, since it extracts correlations between features.

Regarding DC-MF, one may find the choice of the diagonal structure to be too restrictive. However, when $|\mathcal{V}|$, the number of different values spanned by the feature map $\phi$, is lower than the embedding size $d$, this formulation usually proves to be sufficient. For cases where $|\mathcal{V}|$ is very large compared to the embedding size $d$, there are different options. If no additional information is available, then we could use the full tensor $W$ instead of a diagonally constrained tensor. However, when $|\mathcal{V}|$ is very large, it is generally the case that utilizing additional information to describe the constraints would be very useful. In the next section, we propose a way to incorporate such additional information describing the constraint $c$ into $T_c$, using a neural network.
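A small sketch of the two linear parameterizations, under the assumed form $T_c = \sum_v c_v W_v$: DC-MF keeps one learned diagonal per feature value, while WC-MF collapses each slice to a scalar weight times the identity. The sizes and values are illustrative only:

```python
# Sketch of the linear parameterizations of Section 3.1, under the assumed
# definition T_c = sum_v c_v W_v. DC-MF keeps one diagonal per feature value;
# WC-MF collapses each slice to a scalar weight times the identity.
import numpy as np

rng = np.random.default_rng(3)
d, V = 4, 6
W_diag = rng.normal(size=(V, d))       # DC-MF: the diagonal of W_v for each value v
w = rng.normal(size=V)                 # WC-MF: one scalar weight per value v

def T_dcmf(c):
    """Diagonal transformation: elementwise scaling of the user embedding."""
    return np.diag(c @ W_diag)

def T_wcmf(c):
    """Weighted transformation: a scalar multiple of the identity."""
    return (c @ w) * np.eye(d)

c = np.array([1., 0., 0., 1., 0., 0.])  # constraint selecting two feature values
p_u = rng.normal(size=d)
print(T_dcmf(c) @ p_u, T_wcmf(c) @ p_u)
```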

3.2 $T_c$ as a nonlinear transformation

When the space of constraints is very large, using additional information to describe the constraint allows us to keep the complexity manageable. For example, in the case of movie recommendation, there may be rules such as "do not recommend horror movies to a child under 10 years old". This type of constraint can be described in a richer way than is possible with a binary vector encoding "not horror". Similarly, when the user can specify constraints based on several features (for example, color, size, brand), it is more efficient to use a richer representation, rather than simply the Cartesian product of the features.

We formalize this by a constraint feature map $\psi$, which associates some additional information $\psi(c)$ with the description of a constraint $c$. We then define $T_c$ in a parametric way, for instance using a neural network $h_\omega$ (with parameters $\omega$). $T_c$ is therefore defined as:

$T_c = h_\omega(\psi(c))$

and $\omega$ is fit in the joint optimization of (8). We refer to the resulting model as neural constraints for matrix factorization (NC-MF). It is particularly interesting to compare this model to the baseline NNMF, to determine if we manage to retain enough expressivity while sufficiently reducing the complexity of the network to avoid over-fitting. We perform this experimental comparison on the MovieLens dataset in Section 4.3. When comparing to the NNMF baseline it is preferable to use NC-NNMF, instead of NC-MF, to ensure a fairer comparison. The input features are the same for NNMF and NC-NNMF, although the features are used differently: in NC-NNMF the context features are used only for the transformation, while for NNMF they are used either as user input or item input features.
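As an illustration of NC-MF, the sketch below maps a constraint description $\psi(c)$ to the diagonal of $T_c$ with a tiny two-layer network; the architecture, the diagonal output, and all names are assumptions made for the example, not the paper's exact network:

```python
# Hedged sketch of NC-MF: a small network h_omega maps a constraint description
# psi(c) to the (diagonal of the) transformation T_c. The two-layer architecture
# and the diagonal output are illustrative assumptions, not the paper's exact net.
import numpy as np

rng = np.random.default_rng(4)
d, feat_dim, hidden = 4, 8, 16
A1 = rng.normal(size=(feat_dim, hidden)) * 0.1
A2 = rng.normal(size=(hidden, d)) * 0.1

def h_omega(psi_c):
    """Map constraint features to a d-dimensional diagonal for T_c."""
    return np.tanh(psi_c @ A1) @ A2

def nc_mf_score(p_u, q_i, psi_c, b_u=0.0):
    t = h_omega(psi_c)                 # diagonal of T_c
    return (t * p_u) @ q_i + b_u       # (T_c p_u)^T q_i with diagonal T_c

psi_c = rng.normal(size=feat_dim)      # richer description of the constraint
p_u, q_i = rng.normal(size=d), rng.normal(size=d)
print(nc_mf_score(p_u, q_i, psi_c))
```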

3.3 Extensions and Limitations

3.3.1 Constraints based on Multiple Features:

If we need to integrate several feature maps (for example, brand, color, and size for clothes), each taking values in its own finite set, we can define the feature map $\phi$ to take values in the Cartesian product of the definition spaces of the different features (which is still a finite set). In this case, constraints can be conjunctions (over the different features) of disjunctions (over the values of each feature), which can still be defined as a point in the power set of the resulting value set. This directly fits our setting, and w.l.o.g. we can restrict ourselves to the case of one feature only taking a finite number of values, for the sake of simplicity. However, in this case the practitioner should instead choose the parameterization of $T_c$ with a neural network (NC-MF) to keep the complexity manageable.

3.3.2 Constraints based on Continuous Features:

Sometimes the constraints are defined from real-valued feature maps. In our experiments we choose to handle such continuous features by discretizing them, as treating these features as continuous often requires a model that is specific to the nature of the information represented by the feature (see Section 4 for further details).
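As an example of this discretization, the sketch below (bucket edges chosen arbitrarily for illustration) turns a continuous price-range filter into a binary constraint vector over price buckets:

```python
# Illustrative sketch of the discretization we apply to continuous feature maps,
# e.g. a price filter: bucket edges are an assumption chosen for the example.
import numpy as np

bucket_edges = np.array([0, 10, 25, 50, 100, np.inf])   # price buckets in dollars

def price_to_constraint(low, high):
    """Encode a continuous price range as a binary vector over the buckets."""
    lo_bucket = np.searchsorted(bucket_edges, low, side="right") - 1
    hi_bucket = np.searchsorted(bucket_edges, high, side="right") - 1
    c = np.zeros(len(bucket_edges) - 1)
    c[lo_bucket:hi_bucket + 1] = 1.0
    return c

print(price_to_constraint(15, 60))  # [0. 1. 1. 1. 0.]: covers buckets 10-25, 25-50, 50-100
```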

4 Experiments

To validate the proposed approaches, we conduct experiments on one private dataset and two public datasets. We choose these three datasets as they illustrate different possible settings that fit within our framework. The settings present three different tasks to solve. For each task we will compare our methods (DC-MF and NC-MF) to baseline models (MF, CAMF-CI, NN-MF), and discuss the limitations of each model. Due to the small size of the data we use area under the curve (AUC) as a metric to evaluate the quality of the prediction, since per-user ranking metrics are not appropriate in this small-data setting.

4.1 High number of overlapping constraints

In this setting we assume that one item can be observed with different contextual constraints: given the observed item $i$, we observe multiple constraints $c \neq c'$ such that $\phi(i)^\top c \geq 1$ and $\phi(i)^\top c' \geq 1$. Intuitively, some transfer learning between the constraints can be used to refine the recommendation. For popular constraints, the MF model is expected to achieve good results, since popular constraints correspond to average behavior. Given two popular contexts $c$ and $c'$, we expect to observe, with high probability, a non-empty intersection in the item space, i.e., $\{i : \phi(i)^\top c \geq 1\} \cap \{i : \phi(i)^\top c' \geq 1\} \neq \emptyset$. For MF, at recommendation time there will be a low probability that the best items selected, which are compatible with $c$ or $c'$, will reside in the tail of the predictive distribution.

To better model the behavior of the constraint distribution in the tail, an alternative model to use is contextual matrix factorization, where the context is presented as a translation term (CAMF-CI). Using the context will refine the recommendation, and we therefore expect better results than with MF. As the CAMF-CI model is still crude for combinatorial constraints (since it cannot perform transfer learning from one constraint to another), it is still hard to learn on non-popular contexts. At recommendation time, when two features are rarely set together, the prediction using the learned weights will be almost random, since the learned model didn’t observe this particular combination of features.

In contrast, our models allow some transfer learning from one constraint $c$ to another $c'$ when the two constraints share feature values. This allows our models to generalize better in the tail of the constraint distribution. We show that our model has more expressivity than CAMF-CI, and more effectively handles the combinatorial nature of the constraint space. These new methods are more adapted to the sparsity of the data in our framework. In this experiment, we do not consider neural-network-based methods, since the dataset used (from Foursquare) is too small. Instead, we compare neural network methods in Sec. 4.3.

To better learn the transformation $T_c$ in the DC-MF model, we use a warm start heuristic to initialize the $W$ tensor. Because of its diagonal structure, we represent $W$ with a $d$-by-$|\mathcal{V}|$ matrix whose columns are the diagonals of the slices $W_v$. We initialize $W$ such that the columns $W_{\cdot,v}$ and $W_{\cdot,v'}$ corresponding to values of the constraints that frequently co-occur have nearly the same value. Therefore, we initialize $W$ by minimizing:

$\sum_{v, v' \in \mathcal{V}} n_{v,v'} \, \lVert W_{\cdot,v} - W_{\cdot,v'} \rVert_2^2 \qquad (9)$

where $n_{v,v'}$ denotes the number of observed constraints in which the values $v$ and $v'$ appear together.

We use this warm start heuristic to facilitate transfer learning when training the model.
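The sketch below illustrates this warm start, assuming the co-occurrence objective reconstructed in (9): starting from a random $W$, a few gradient-style updates pull together the columns of feature values that frequently appear in the same constraint. The data and step size are toy choices:

```python
# Minimal sketch of the warm start heuristic, assuming the co-occurrence
# objective reconstructed in (9): columns of W for feature values that often
# appear in the same constraint are pulled toward each other before training.
import numpy as np

rng = np.random.default_rng(5)
d, V = 4, 6
W = rng.normal(size=(d, V))                    # diagonal slices of W, one column per value
counts = np.zeros((V, V))                      # co-occurrence counts n_{v,v'}

observed_constraints = [np.array([1., 1., 0., 0., 0., 0.]),
                        np.array([1., 1., 0., 0., 0., 1.])]
for c in observed_constraints:
    counts += np.outer(c, c)
np.fill_diagonal(counts, 0.0)

# Gradient-style updates that decrease sum_{v,v'} n_{v,v'} ||W_v - W_v'||^2.
lr = 0.05
for _ in range(100):
    grad = np.zeros_like(W)
    for v in range(V):
        for vp in range(V):
            grad[:, v] += 2 * counts[v, vp] * (W[:, v] - W[:, vp])
    W -= lr * grad
print(np.round(W, 2))  # columns 0, 1 and 5 end up nearly identical
```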

Foursquare dataset

The Foursquare NYC check-in dataset contains check-ins in New York City. Each check-in is associated with a time stamp, GPS coordinates, and a category for the venue. We leverage this dataset to study recommendations under time-based contextual constraints. We use only information about the user id, restaurant id, and check-in time in our experiments. We assume the following:

  • A user only checks in to restaurants that he likes. Therefore, we observe only positives for this dataset.

  • A user cannot visit a restaurant that is not open.

  • A user visits different restaurants according to the type of meal that she is looking for (breakfast, lunch, or dinner).

Both the contextual constraints of opening hours and serving a particular meal type are time dependent. We can therefore assume that the check-in time represents a contextual constraint. We discretize time in order to represent it with binary features. As the exact constraints are approximated by the check-in time, we relax the constraint for each restaurant to cover one hour (corresponding to five values in the discretization) around the observed check-in time. By construction, each restaurant visited by the user at check-in time then satisfies the associated constraint. We report results computed on a subset of the data, restricted to a fixed number of users, restaurants, and check-in time buckets. For this setting we study the performance of linear models. The results are averaged over several model instances, where each instance is initialized using a different random seed. We use the same embedding size for all models. An alternating optimization is performed when training all models. Each training iteration represented in the figures corresponds to a number of optimization steps for the item variables, followed by steps for the user variables (with the possibility that the optimizer may perform early stopping). Iteration steps are also performed for the context variables in the context-aware models, including our models. We perform a warm start for the contextual-constraint-based methods for the initialization of the context vectors; the cost of this operation is similar to that of one learning iteration. We report the AUC results computed over the full test set in Fig. 2. To compute AUC, we need negative examples. We create negatives that depend on the check-in time: if a restaurant check-in is never observed at dinner time, then we assume that this restaurant shouldn’t be recommended at dinner time. We then select random restaurants, which we associate with random users that have never visited during the same check-in time bucket.
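For clarity, here is an illustrative sketch (assumed data layout, not the actual pipeline) of the time-dependent negative creation used for AUC: a candidate (user, restaurant, time bucket) triple is kept as a negative only if neither the restaurant nor the user was ever observed in that bucket:

```python
# Illustrative sketch of time-based negative creation for AUC evaluation;
# the data layout and helper names are assumptions, not the paper's pipeline.
import numpy as np
rng = np.random.default_rng(6)

# Check-ins as (user, restaurant, time_bucket) triples.
checkins = [(0, 10, 3), (1, 10, 3), (0, 11, 7), (2, 12, 7)]
buckets_per_restaurant = {}
buckets_per_user = {}
for u, r, t in checkins:
    buckets_per_restaurant.setdefault(r, set()).add(t)
    buckets_per_user.setdefault(u, set()).add(t)

def sample_negative(users, restaurants, n_buckets):
    """Draw a (user, restaurant, bucket) triple never observed for either side."""
    while True:
        u, r = rng.choice(users), rng.choice(restaurants)
        t = rng.integers(n_buckets)
        if t not in buckets_per_restaurant.get(r, set()) and \
           t not in buckets_per_user.get(u, set()):
            return (u, r, t)

print(sample_negative([0, 1, 2], [10, 11, 12], n_buckets=20))
```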

Figure 2: Comparing results for linear models evaluated on the Foursquare dataset. Results show AUC on the global test set.

The variance of both the weighted and diagonal models is small due to the warm start initialization. As expected, the contextual methods outperform the conventional MF model. We observe that the weighted model (WC-MF) and CAMF-CI give similar results. This is not surprising, as both models have similar expressivity; CAMF-CI considers the feature correlation as a translation, while WC-MF considers the feature as a multiplicative weight. Data enlargement is commonly used with MF models to achieve better performance; therefore, we perform negative sampling based on the observed data, where negatives are sampled according to the check-in times (similar to the AUC negative sampling logic we use). As we are using a negative sampling strategy for training that is similar to the negative sampling strategy for our evaluation task, we expect very good results for this model configuration. The data enlargement model results in predictive performance that is similar to the context-based models. Our new diagonal transformation method outperforms all of the linear baseline models. We seek to better understand the behaviour of our new model: is this performance improvement explained by an improvement on the similarities of popular contexts, or by better recommendations for rare contexts? We attempt to answer this question by reporting AUC results computed for three different contexts, as shown in Figures 3, 4, and 5. We set three different check-in time contexts: between 8am and 9am, between 12pm and 1pm, and between 10pm and 11pm. AUC is computed using the subset of the training set that contains restaurants observed under the specified constraint. The first two contexts are rarer than the third (evening) context.

Figure 3: Reported AUC for Foursquare data for a rare context; the check-in time is between 8am and 9am.
Figure 4: Reported AUC for Foursquare data for a rare context; the check-in time is between 12pm and 1pm.
Figure 5: Reported AUC for Foursquare data for a popular context; the check-in time is between 10pm and 11pm.

The results for the two rare contexts confirm that the diagonal model performs better for non-popular contexts, while its performance is similar to the baselines in the case of the popular context.

4.2 Low number of overlapping constraints

We are now interested in observing whether our methods still work when the number of overlapping constraints is low, i.e., when two constraints $c$ and $c'$ sampled from the underlying distribution have, with high probability, no feature value in common. In this setting, we cannot perform the same type of warm start as done previously. In the previous setting, warm start allows the global model to more effectively perform transfer learning using overlapping features. In the setting described here, the similarities between features are more difficult to recover, since the features almost never overlap, and thus a different warm start scheme is required. In this setting, we leverage the fact that users have historically chosen multiple filters over time, and we assume that the similarity between two contexts is proportional to the number of users that specify both contexts. Similar to the context distribution, the similarity distribution of observed contexts is stable for the train and test sets. In this setting, our new models will tend to learn orthogonal transformations, and won’t capture similarity based on user behaviour. We therefore add a regularization term on $W$ (again representing $W$ with a $d$-by-$|\mathcal{V}|$ matrix) given by

$\Omega(W) = \sum_{v, v' \in \mathcal{V}} m_{v,v'} \, \lVert W_{\cdot,v} - W_{\cdot,v'} \rVert_2^2 \qquad (10)$

where $m_{v,v'}$ is proportional to the number of users who have specified both values $v$ and $v'$ in their history. For this model we perform a warm start by initializing $W$ to the minimum of the regularizer (10).
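The sketch below shows how the user-based similarity counts entering the reconstructed regularizer (10) could be built: for each user we record which feature values she has ever set, and accumulate pairwise co-selection counts. Data layout and names are illustrative assumptions:

```python
# Sketch of the user-based similarity counts assumed in (10): m_{v,v'} counts
# how many users have specified both feature values v and v' in their history.
import numpy as np

V = 4
user_histories = {              # user id -> list of constraint vectors chosen over time
    0: [np.array([1., 0., 0., 0.]), np.array([0., 1., 0., 0.])],
    1: [np.array([1., 1., 0., 0.])],
    2: [np.array([0., 0., 1., 0.])],
}

M = np.zeros((V, V))
for constraints in user_histories.values():
    seen = (np.sum(constraints, axis=0) > 0).astype(float)   # values ever set by this user
    M += np.outer(seen, seen)
np.fill_diagonal(M, 0.0)
print(M)   # values 0 and 1 were both chosen by two users; value 2 overlaps with nobody
```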

Private Criteo dataset: product recommendation under brand constraints

This private dataset has been extracted from a retailer dataset. The recommendation is constrained by filters set by the user. These filters are strict; nothing is displayed if a filter is not satisfied. For this dataset we are confident about the constraints, which represent brand filters. One user can set multiple filters. We observe that our dataset is very sparse, since filters are rarely set together. The reward is more explicit, as we collect information about clicks and non-clicks. We run experiments on a subset of the observed data, restricted to a fixed number of users, items, and constraint features that correspond to brands. We use the same embedding size for all models. The same positive and negative samples are used to train all models. Due to the imbalance of the observed clicked and non-clicked data, we re-weight the positives and negatives in order to provide reasonable performance for the MF model. We perform one warm start iteration for our new model. We train the models with several different random seeds, and report the results in Fig. 6.

Figure 6: AUC results computed over the global test dataset for the private dataset.

Overall, the three models give close to average performance results. DC-MF still gives slightly better results, as expected. With the regularization term, the diagonal constrained model finds a better trade-off between the use of feature-based similarities and user behaviour similarities. Figures 7 and 8 show this trade-off. In Fig. 8, DC-MF provides performance similar to the context-aware model, while MF fails to learn non-contextual similarities. In Fig. 7, DC-MF maintains good performance, while the context-aware model fails. DC-MF is thus better to use for recommendation under the brand constraints in this setting, as we ensure lower-variance recommendations across the brands. These two figures also show a limitation of the diagonal model: when the overlap between the constraints is low, the diagonal model only outperforms the CAMF-CI model to a small extent.

Figure 7: Private dataset: Limits of context-aware models: AUC reported for contextual constraints that specify multiple brands. The context-aware model can easily overfit on this small dataset when trying to extract contextual similarities. The regularized diagonal-constraint MF model manages to extract similarities from the data, without overfitting.
Figure 8: Private dataset: Limits of non-context-aware models: AUC reported for a contextual constraint that sets one brand filter, which is usually set with other brands in other contexts. Compared to the MF model, the context-aware and DC-MF models benefit from refining recommendations based on feature similarities.

4.3 Constraints with multiple features and avoiding the folding effect

In this setting we observe users that interact only with items in a restricted subset of contexts, where these subsets are disjoint: if we denote by $C_1$ and $C_2$ the two disjoint subsets of contexts, then a user observed with constraints in $C_1$ is never observed with constraints in $C_2$, and the sets of items satisfying $C_1$ and $C_2$ do not overlap. This typically corresponds to the situation where a sub-population of users does not see a large part of the set of items because of constraint restrictions, e.g., horror movies cannot be recommended to kids, and therefore kids won’t watch or rate these movies. This can result in bad behaviour, where the recommender system may accidentally learn a high similarity between kids and horror movies, due to lack of information. This folding effect was introduced in [23], where the model can learn similar vector representations for users and items that belong to disjoint sets of constraints. In this scenario, the model learns similarities that don’t exist in the data. The true similarity between the two disjoint context sets, $C_1$ and $C_2$, is actually zero, since we never observe users setting both contexts. The spurious recommendation phenomenon induced by folding effects is observed when filtering is done after item scoring. Therefore, the probability of seeing the best products satisfying a particular context in the tail of the predictive distribution is high.

In scenarios which can result in folding, it is crucial to have additional metadata descriptors for users, items, and/or the constraints, in order to generalize well for cases that are not supported by the feedback distribution, as described above. Therefore, we use neural network based approaches for this experiment, since neural networks allow us to easily accommodate metadata in the model. The baseline is the NN-MF model described in (7), where the network takes as input the user and user features on the user side, and the item and item features on the item side. We also introduce a slightly modified version of NC-MF: instead of using MF embeddings for $p_u$ and $q_i$ (see (8)), we use the neural network to learn user and item embeddings that take into account metadata descriptors, in order to provide a fair comparison with NN-MF. We refer to this modified version of NC-MF as NC-NN-MF.

MovieLens dataset

The MovieLens 100K dataset consists of 943 users and 1,682 movies. Each user has rated at least 20 movies. This dataset also contains information about the user, such as age, and movie categories. We join the logs in the dataset in order to link user features, item features, and ratings. In this dataset we expect that kids cannot watch horror or "thriller" movies, and therefore cannot rate those movies. We expect the learned models to be capable of detecting that recommending thriller or horror movies to kids is a bad recommendation. To better study folding effects, we select a training set where the observed ratings on horror movies are disjoint from the other movie categories. We then select two test sets that are subject to folding effects:

  • Horror movies: users and items in this category are disjoint from the other categories.

  • Thriller movies: users and items in this category are disjoint from the other categories for kids only.

We compute the AUC over both test sets by adding explicit negatives for kids to the test set only. We would like the model to predict a negative value for kids and a positive value for adults, based on an age threshold. The models are run using several random seeds.

We compare the results of the NC-MF model to a normal MF model. As the new models learn orthogonal representations for the two different disjoint contexts, folding effects are reduced. In Fig. 9, we observe a lower risk of a low AUC value for horror movies. Both MF and NC-MF perform well on the thriller dataset, shown in Fig. 10, as it is easy to detect that the movies rated by kids are different from those rated by adults.

Figure 9: MovieLens: AUC on the horror movie test set. The models are learned based on user ids and item ids. For our new models we add the three context features: a binary indicator for a thriller movie, a binary indicator for a horror movie, and the age of the user.
Figure 10: MovieLens: AUC on the thriller movie test set. The models are learned based on user ids and item ids. For our new models we add the three context features: a binary indicator for a thriller movie, a binary indicator for a horror movie, and the age of the user.

Although we expect NN-MF to provide lower performance due to the size and sparsity of the dataset, we report results comparing NC-NN-MF to NN-MF. As described above, NC-NN-MF is one of our new constraint models, where we apply a neural constraint transformation on top of a neural net matrix factorization. Since in NC-NN-MF we explicitly define the hidden structure to learn, the model easily captures the relationship between the user age and the item category. In Fig. 11, we see that NN-MF is capable of extracting the correct hidden structure for some random seeds, but fails for other seeds, leading to high variance in the predictions. By limiting the combination of features in NC-NN-MF, so that we explicitly provide the features that define the neural constraint, we restrict the expressivity of the model. Thus, for non-disjoint categories of items, such as thriller movies, NN-MF outperforms NC-NN-MF for some seeds, as shown in Fig. 12.

Figure 11: MovieLens: AUC on the horror movie test set for the NN-MF models. The neural net takes as input user and item features, along with the user and item ids. For the context-based model (NC-NN-MF), we use the three contextual features, and we remove user age from the neural net input, so as to prevent using this feature twice.
Figure 12: MovieLens: AUC on the thriller movie test set for the NN-MF models. The neural net takes as input user and item features, along with the user and item ids. For the context-based model (NC-NN-MF), we use the three contextual features, and we remove user age from the neural net input, so as to prevent using this feature twice.

5 Conclusion and Future Work

Real-world recommender systems are often constrained: only a subset of user-item pairs is observable or recommendable. Any additional information about the user’s interest may help to refine the recommendation and improve generalization on unobserved user-item pairs. For some recommender system settings, the user can interact with the system to refine recommendations by using filters. This information is rarely incorporated when training the recommendation model. Due to the sparsity of the observed data and the combinatorial nature of the set of constraints, incorporating contextual constraint information can be a difficult task, and for some settings non-contextual models produce better recommendations than context-aware models. We propose a new framework that describes recommendation under contextual constraints, where the observations are very sparse. We then present new methods that incorporate the new information provided by these constraints in different ways. Our new models learn a constraint representation as a transformation within the similarity measure between user and item embeddings. This representation allows us to increase the expressivity of the model without substantially increasing the number of model parameters. We perform experiments on three datasets that involve several challenging constraint-based settings. We describe an adaptation of the model to solve each of these tasks by using a warm start and/or regularization scheme appropriate to each setting. Our new model achieves better performance than the baseline models we consider for each setting. A promising direction for future work would be to incorporate recently proposed deep models for learning on sets [24], which would allow us to have a more elaborate representation of the constraints.

References

  • [1] George Loewenstein, Daniel Read, and Roy F Baumeister. Time and decision: Economic and psychological perspectives of intertemporal choice. Russell Sage Foundation, 2003.
  • [2] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, (8):30–37, 2009.
  • [3] Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In NIPS, pages 1257–1264, 2008.
  • [4] Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In ICML, pages 880–887. ACM, 2008.
  • [5] Mohsen Jamali and Martin Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In RecSys, pages 135–142. ACM, 2010.
  • [6] Hao Ma, Haixuan Yang, Michael R Lyu, and Irwin King. Sorec: social recommendation using probabilistic matrix factorization. In CIKM, pages 931–940. ACM, 2008.
  • [7] Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with hierarchical poisson factorization. In UAI, pages 326–335, 2015.
  • [8] Allison JB Chaney, David M Blei, and Tina Eliassi-Rad. A probabilistic model for using social networks in personalized item recommendation. In RecSys, pages 43–50. ACM, 2015.
  • [9] Ulrich Paquet and Noam Koenigstein. One-class collaborative filtering with random graphs. In WWW, pages 999–1008. ACM, 2013.
  • [10] David H Stern, Ralf Herbrich, and Thore Graepel. Matchbox: large scale online bayesian recommendations. In WWW, pages 111–120. ACM, 2009.
  • [11] Maciej Kula. Metadata embeddings for user and item cold-start recommendations. arXiv preprint arXiv:1507.08439, 2015.
  • [12] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, pages 263–272. Ieee, 2008.
  • [13] Jasson DM Rennie and Nathan Srebro. Fast maximum margin matrix factorization for collaborative prediction. In ICML, pages 713–719. ACM, 2005.
  • [14] Nathan Srebro, Jason Rennie, and Tommi S Jaakkola. Maximum-margin matrix factorization. In Advances in neural information processing systems, pages 1329–1336, 2005.
  • [15] Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. One-class collaborative filtering. In ICDM, pages 502–511. IEEE, 2008.
  • [16] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. In UAI, pages 452–461. AUAI Press, 2009.
  • [17] Zeno Gantner, Lucas Drumond, Christoph Freudenthaler, and Lars Schmidt-Thieme. Personalized ranking for non-uniformly sampled items. In Proceedings of KDD Cup 2011, pages 231–247, 2012.
  • [18] Rong Pan and Martin Scholz. Mind the gaps: weighting the unknown in large-scale one-class collaborative filtering. In KDD, pages 667–676. ACM, 2009.
  • [19] Xiaoyuan Su and Taghi M Khoshgoftaar. A survey of collaborative filtering techniques. Advances in Artificial Intelligence, 2009.
  • [20] Linas Baltrunas, Bernd Ludwig, and Francesco Ricci. Matrix factorization techniques for context aware recommendation. In Proceedings of the fifth ACM conference on Recommender systems, pages 301–304. ACM, 2011.
  • [21] Alexandros Karatzoglou, Xavier Amatriain, Linas Baltrunas, and Nuria Oliver. Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the fourth ACM conference on Recommender systems, pages 79–86. ACM, 2010.
  • [22] Maksims Volkovs, Guangwei Yu, and Tomi Poutanen. Dropoutnet: Addressing cold start in recommender systems. In Advances in Neural Information Processing Systems, pages 4957–4966, 2017.
  • [23] Doris Xin, Nicolas Mayoraz, Hubert Pham, Karthik Lakshmanan, and John R Anderson. Folding: Why good models sometimes make spurious recommendations. In Proceedings of the Eleventh ACM Conference on Recommender Systems, pages 201–209. ACM, 2017.
  • [24] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep Sets. In I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, and R Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3391–3401. Curran Associates, Inc., 2017.