Feature-based factorized Bilinear Similarity Model for Cold-Start Top-n Item Recommendation

04/22/2019, by Mohit Sharma, et al.

Recommending new items to existing users has remained a challenging problem due to the absence of users' past preferences for these items. User-personalized non-collaborative methods based on item features can be used to address this item cold-start problem. These methods rely on similarities between the target item and the user's previously preferred items. While computing similarities based on item features, these methods overlook the interactions among the features of the items and consider them independently. Modeling interactions among features can be helpful, as some features, when considered together, provide a stronger signal on the relevance of an item than when they are considered independently. To address this important issue, in this work we introduce the Feature-based factorized Bilinear Similarity Model (FBSM), which learns a factorized bilinear similarity model for Top-n recommendation of new items, given the information about items preferred by users in the past as well as the features of these items. We carry out extensive empirical evaluations on benchmark datasets, and we find that the proposed FBSM approach improves upon traditional non-collaborative methods in terms of recommendation performance. Moreover, the proposed approach also learns insightful interactions among item features from data, leading to a deeper understanding of how these interactions contribute to personalized recommendation.


1 Introduction

Top-n recommender systems are used to identify, from a large pool of items, those items that are the most relevant to a user, and have become an essential personalization and information filtering technology. They rely on the historical preferences that were either explicitly or implicitly provided for the items and typically employ various machine learning methods to build content-agnostic predictive models from these preferences. However, when new items are introduced into the system, these approaches cannot be used to compute personalized recommendations, because there are no prior preferences associated with those items. As a result, the methods used to recommend new items, referred to as (item) cold-start recommender systems, in addition to the historical information, take into account the characteristics of the items being recommended; that is, they are content aware. The items' characteristics are typically captured by a set of domain-specific features. For example, a movie may have features like genre, actors, and plot keywords; a book typically has features like content description and author information. These item features are intrinsic to the item and as such do not depend on historical preferences.

Over the years, a number of approaches have been developed towards solving the item cold-start problem [7, 1, 17] that exploit the features of the new items and the features of the items on which a user has previously expressed interest. A recently introduced approach, which was shown to outperform other approaches, is the User-specific Feature-based Similarity Models (UFSM) [6]. In this approach, a linear similarity function is estimated for each user that depends entirely on the features of the items previously liked by the user, and is then used to compute a score indicating how relevant a new item will be for that user. In order to leverage information across users (i.e., the transfer learning component that is key to collaborative filtering), each user-specific similarity function is computed as a linear combination of a small number of global linear similarity functions that are shared across users. Moreover, due to the way that it computes the preference scores, UFSM can achieve a high degree of personalization while using only a very small number of global linear similarity functions.

In this work we extend UFSM in order to account for interactions between the different item features. We believe that such interactions are important and quite common. For example, in an e-commerce website, the items that users tend to buy are often designed to go well with previously purchased items (e.g., a pair of shoes that goes well with a dress). The sets of features describing items of different types will be different (e.g., shoe material and fabric color), and as such a linear model cannot learn from the data that, for example, a user prefers to wear leather shoes with black clothes. Being able to model such dependencies can lead to item cold-start recommendation algorithms that achieve better performance.

Towards this goal, we present a method called Feature-based factorized Bilinear Similarity Model (FBSM) that uses a bilinear model to capture pairwise dependencies between the features. Like UFSM, FBSM learns a similarity function for estimating the similarity between items based on their features. However, unlike UFSM's linear global similarity function, FBSM's similarity function is bilinear. A challenge associated with such bilinear models is that the number of parameters that needs to be estimated grows quadratically with the dimensionality of the item feature space, which is problematic given the very sparse training data. FBSM overcomes this challenge by assuming that the pairwise relations can be modeled as a combination of a linear component and a low-rank component. The linear component captures the direct relations between the features, whereas the low-rank component captures the pairwise relations. The parameters of these models are estimated using stochastic gradient descent and a ranking loss function based on Bayesian Personalized Ranking (BPR) that optimizes the area under the receiver operating characteristic curve.

We performed extensive empirical studies to evaluate the performance of the proposed FBSM on a variety of benchmark datasets and compared it against state-of-the-art models for cold-start recommendation, including latent factor methods and non-collaborative user-personalized models. In our results, FBSM optimized using the BPR loss function outperformed the other methods in terms of recommendation quality.

2 Notations and Definitions

Throughout the paper, all vectors are column vectors and are represented by bold lowercase letters (e.g., f). Matrices are represented by uppercase letters (e.g., R).

The historical preference information is represented by a preference matrix R. Each row in R corresponds to a user and each column corresponds to an item. The entries of R are binary, reflecting user preferences on items. The preference given by user u for item i is represented by the entry r_ui in R. The symbol \tilde{r}_ui represents the score predicted by the model for the actual preference r_ui.

Sets are represented with calligraphic letters. The set of users U has size n_U, and the set of items I has size n_I. R_u^+ represents the set of items that user u liked (i.e., r_ui = 1). R_u^- represents the set of items that user u did not like or did not provide feedback for (i.e., r_ui = 0).

Each item i has a feature vector f_i that represents the intrinsic characteristics of that item. The feature vectors of all items are represented as the matrix F whose columns correspond to the item feature vectors. The total number of item features is n_F.

The objective of the Top-n recommendation problem is to identify, among the items that the user has not previously seen, the n items that he/she will like.

3 Related Work

Prior work on cold-start item recommendation can be divided into non-collaborative user-personalized models and collaborative models. Non-collaborative models generate recommendations using only the user's past interaction history, whereas collaborative models combine information from the preferences of different users.

Billsus and Pazzani [3] developed one of the first user-modeling approaches to identify relevant new items. In this approach they used the users' past preferences to build user-specific models to classify new items as either "relevant" or "irrelevant". The user models were built using item features, e.g., lexical word features for articles. Personalized user models [12] were also used to classify news feeds: short-term user needs were modeled using text-based features of items that the user had recently viewed, and long-term needs were modeled using news topics/categories. Banos et al. [2] used topic taxonomies and synonyms to build high-accuracy content-based user models.

Recently, collaborative filtering techniques using latent factor models have been used to address cold-start item recommendation problems. These techniques incorporate item features in their factorizations. Regression-based latent factor models (RLFM) [1] is a general technique that can also work in item cold-start scenarios. RLFM learns a latent factor representation of the preference matrix in which item features are transformed into a low-dimensional space using regression. This mapping can be used to obtain a low-dimensional representation of cold-start items. A user's preference on a new item is estimated by the dot product of the corresponding low-dimensional representations. The RLFM model was further improved by applying more flexible regression models [17]. AFM [7] learns an item-attributes-to-latent-features mapping by factorizing the preference matrix into user and item latent factors P and Q. A mapping function is then learned to transform item attributes to a latent feature representation, i.e., Q = AF, where F represents the items' attributes and A transforms the items' attributes to their latent feature representation.

User-specific Feature-based Similarity Models (UFSM) [6] learn a personalized user model by using historical preferences from all users across the dataset. In this model, an item similarity function is learned for each user as a linear combination of user-independent similarity functions known as global similarity functions; along with these global similarity functions, a personalized linear combination of them is learned for each user. UFSM was shown to outperform both the RLFM and AFM methods in cold-start Top-n item recommendation.

Predictive bilinear regression models [5] belong to the feature-based machine learning approaches that handle the cold-start scenario for both users and items. Bilinear models can be derived from the Tucker family [15]. They have been applied to separate "style" and "content" in images [14], to match search queries and documents [16], and to perform semi-infinite stream analysis [13]. Bilinear regression models try to exploit the correlation between user and item features by capturing the effect of pairwise associations between them. Let x_u denote the features of user u and f_i denote the features of item i; a parametric bilinear indicator of the interaction between them is given by x_u^T W f_i, where W denotes the matrix that describes a linear projection from the user feature space onto the item feature space. The method was developed for recommending cold-start items in a real-time scenario, where the item space is small but dynamic with temporal characteristics. In another work [9], the authors proposed to use a pairwise loss function in the regression framework to learn the matrix W, which can be applied to the scenario where the item space is static but large and a ranked list of items is needed.

4 Feature-based Similarity Model

In this section we first introduce the feature-based linear model and analyze its drawbacks, and then elaborate the technical details of our bilinear similarity model.

4.1 Linear Similarity Models.

In UFSM [6] the preference score of user u for a new item i is given by

\tilde{r}_ui = \sum_{j \in R_u^+} sim_u(i, j),

where sim_u(i, j) is the user-specific similarity function given by

sim_u(i, j) = \sum_{d=1}^{l} m_{u,d} g_d(i, j),

where g_d is the d-th global similarity function, l is the number of global similarity functions, and m_{u,d} is a scalar that determines how much the global similarity function g_d contributes to u's similarity function.

The similarity between two items i and j under the global similarity function g_d is estimated as

g_d(i, j) = w_d^T (f_i ⊙ f_j),

where ⊙ is the element-wise Hadamard product operator, f_i and f_j are the feature vectors of items i and j, respectively, and w_d is a vector of length n_F with each entry holding the weight of the corresponding feature under the global similarity function g_d. This weight reflects the contribution of the feature to the item-item similarity estimated under g_d. Note that g_d is a linear model on the feature vector resulting from the Hadamard product.
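As a minimal sketch of this linear similarity (the function names `global_similarity` and `user_similarity` are illustrative, not from the paper):

```python
import numpy as np

def global_similarity(w, f_i, f_j):
    """One global similarity function: g(i, j) = w^T (f_i ⊙ f_j)."""
    return w @ (f_i * f_j)

def user_similarity(m_u, W, f_i, f_j):
    """User-specific similarity as a linear combination of l global
    functions. W stacks the l weight vectors as rows (l x n_F);
    m_u holds the user's l combination coefficients."""
    return m_u @ (W @ (f_i * f_j))
```

Because each g_d is linear in the Hadamard product f_i ⊙ f_j, the whole user-specific similarity is also a linear model over that product vector.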

In the authors' results [6], for datasets with a large number of features, only one global similarity function was sufficient to outperform the AFM and RLFM methods for Top-n item cold-start recommendation. With only one global similarity function, the user-specific similarity function reduces to the single global similarity function. The estimated preference score of user u for a new item i is then given by

\tilde{r}_ui = \sum_{j \in R_u^+} w^T (f_i ⊙ f_j),

where w is the parameter vector, which can be estimated from training data using different loss functions.

4.2 Factorized Bilinear Similarity Models.

An advantage that the linear similarity method (UFSM) has over state-of-the-art methods such as RLFM and AFM is that it uses information from all users across the dataset to estimate the parameter vector w. As in the principle of collaborative filtering, there exist users who have similar/dissimilar tastes, and being able to use information from other users can thus improve recommendations for a user. However, a major drawback of this model is that it fails to capture affinities between item features. Capturing these correlations among features can sometimes lead to significant improvements in estimating preference scores.

We thus propose FBSM to overcome this drawback: it uses a bilinear formulation to capture correlations among item features. Similar to UFSM, it considers information from all the users in the dataset to learn these bilinear weights. In FBSM, the preference score of user u for a new item i is given by

\tilde{r}_ui = \sum_{j \in R_u^+} sim(i, j), (1)

where sim(i, j) is the similarity function given by

sim(i, j) = f_i^T W f_j,

where W is the weight matrix that captures the correlations among item features. The diagonal of W determines how well a feature of item i, say the k-th feature f_{ik}, interacts with the corresponding feature f_{jk} of item j, while the off-diagonal elements of W give the contribution to the similarity from the interaction of an item feature with the other features, i.e., the contribution of the interaction between f_{ik} and f_{jk'} where k ≠ k'. Cosine similarity reduces to our formulation where W is a diagonal matrix with all diagonal elements equal to one (given unit-normalized feature vectors).
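A small sketch of the bilinear form and the cosine-similarity special case (illustrative names; feature vectors are assumed unit-normalized for the cosine reduction):

```python
import numpy as np

def bilinear_sim(W, f_i, f_j):
    # sim(i, j) = f_i^T W f_j
    return f_i @ W @ f_j

# With W = I and unit-normalized feature vectors, the bilinear
# similarity reduces to cosine similarity.
f_i = np.array([3.0, 4.0]); f_i /= np.linalg.norm(f_i)
f_j = np.array([1.0, 0.0])
cosine = f_i @ f_j / (np.linalg.norm(f_i) * np.linalg.norm(f_j))
assert np.isclose(bilinear_sim(np.eye(2), f_i, f_j), cosine)
```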

A key challenge in estimating the bilinear model matrix W is that the number of parameters that needs to be estimated is quadratic in the number of features used to describe the items. For low-dimensional datasets, this is not a major limitation; however, for high-dimensional datasets, sufficient training data is often not available for reliable estimation. This can become computationally infeasible and, moreover, lead to poor generalization performance by overfitting the training data. In order to overcome this drawback, we need to limit the degrees of freedom of the solution W, and we propose to represent W as the sum of diagonal weights and a low-rank approximation of the off-diagonal weights:

W = D + V^T V, (2)

where D is a diagonal matrix of dimension equal to the number of features, whose diagonal is denoted by the vector d, and V is a matrix of rank h. The columns of V represent the latent space of the features, i.e., column v_k represents the latent factor of feature k. Using the low-rank approximation, the similarity function is thus given by:

sim(i, j) = f_i^T D f_j + f_i^T V^T V f_j. (3)

The second part of Equation 3 captures the effect of the off-diagonal elements of W through inner products of the feature latent factors. Since we now estimate only the diagonal weights and a low-rank approximation of the off-diagonal weights, the computation is reduced significantly compared to estimating the complete matrix W. This also gives us a flexible model in which we can regularize the diagonal weights and the feature latent factors separately.
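The factorized similarity of Equation 2 can be sketched as follows (dimensions are illustrative); note the parameter count drops from n_F^2 to n_F + h·n_F:

```python
import numpy as np

rng = np.random.default_rng(0)
n_F, h = 1000, 5                 # number of features, latent rank (illustrative)
d = rng.normal(size=n_F)         # diagonal weights
V = rng.normal(size=(h, n_F))    # column k of V is the latent factor of feature k

def fbsm_sim(d, V, f_i, f_j):
    # sim(i, j) = f_i^T (D + V^T V) f_j
    #           = (f_i * d) @ f_j + (V f_i) . (V f_j)
    return (f_i * d) @ f_j + (V @ f_i) @ (V @ f_j)

# Parameter count: n_F + h * n_F instead of n_F^2.
assert n_F + h * n_F < n_F ** 2

# Sanity check: agrees with forming W = D + V^T V explicitly.
f_i, f_j = rng.normal(size=n_F), rng.normal(size=n_F)
W = np.diag(d) + V.T @ V
assert np.isclose(fbsm_sim(d, V, f_i, f_j), f_i @ W @ f_j)
```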

The bilinear model may look similar to the formulation described in [5]; however, the two are very different in nature: in [5] the bilinear model is used to capture correlations among user and item features, whereas FBSM finds correlations within the item features themselves. Modeling interactions among item features is especially attractive when no explicit user features are available. Note that it is not hard to encode user features in the proposed bilinear model such that the similarity function is parameterized by user features, and we leave a detailed study to an extension of this paper.

4.3 Parameter Estimation of FBSM.

FBSM is parameterized by Θ = [d, V], where d and V are the parameters of the similarity function. The inputs to the learning process are: (i) the preference matrix R, (ii) the item-feature matrix F, and (iii) the dimension h of the feature latent factors. There are many loss functions we could choose to estimate Θ, among which the Bayesian Personalized Ranking (BPR) loss function [11] is designed especially for ranking problems. In Top-n recommender systems, the predicted preference scores are used to rank the items in order to select the highest-scoring items; thus the BPR loss function models the problem better than loss functions such as least-squares loss and in general leads to better empirical performance [7, 11]. As such, in this paper we propose to use the BPR loss function, and in this section we show how it can be used to estimate the parameters Θ. Note that other loss functions such as least-squares loss can be applied similarly.

We denote the problem of solving FBSM using BPR as FBSM_bpr, and the loss function is given by

L(Θ) = − \sum_{u} \sum_{i \in R_u^+, j \in R_u^-} ln σ(\tilde{r}_ui − \tilde{r}_uj), (4)

where \tilde{r}_ui is the predicted value of user u's preference for item i and σ is the sigmoid function. The BPR loss function tries to learn item preference scores such that the items that a user likes have higher preference scores than the ones he/she does not like, regardless of the actual preference values. The prediction \tilde{r}_ui is given by:

\tilde{r}_ui = \sum_{j \in R_u^+ \setminus \{i\}} sim(i, j), (5)

which is identical to Equation 1 except that item i is excluded from the summation. This is done to ensure that the variable being estimated (the dependent variable) is not used during the estimation as an independent variable as well. We refer to this as the Estimation Constraint [8].
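The estimation-constrained prediction of Equation 5 can be sketched as follows (a toy implementation; `predict`, `u_items`, and `f` are illustrative names, and the similarity is the factorized form with diagonal weights `d` and latent factors `V`):

```python
import numpy as np

def predict(u_items, f, d, V, i):
    """Predicted score of item i for a user: sum of sim(i, j) over the
    user's preferred items j, excluding i itself (the estimation
    constraint)."""
    score = 0.0
    for j in u_items:
        if j == i:
            continue  # estimation constraint: skip the target item
        score += (f[i] * d) @ f[j] + (V @ f[i]) @ (V @ f[j])
    return score
```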

To this end, the model parameters are estimated via an optimization process of the form:

min_Θ L(Θ) + λ_D ||d||^2 + λ_V ||V||_F^2, (6)

where we penalize the Frobenius norm of the model parameters in order to control the model complexity and improve generalizability.
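A minimal sketch of the regularized BPR objective over sampled (preferred, non-preferred) score pairs (the function name and argument layout are illustrative, not the authors' code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_objective(score_pairs, d, V, lam_d, lam_v):
    """Negative BPR log-likelihood plus squared-norm regularization.
    score_pairs: iterable of (r_ui, r_uj) predicted scores, where i is a
    preferred and j a non-preferred item for the same user."""
    loss = -sum(np.log(sigmoid(r_ui - r_uj)) for r_ui, r_uj in score_pairs)
    loss += lam_d * np.sum(d ** 2) + lam_v * np.sum(V ** 2)
    return loss
```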

To optimize Eq. (6) we propose to use stochastic gradient descent (SGD) [4], in order to handle large-scale datasets. The update steps for d and V are based on triplets (u, i, j) sampled from the training data, with i ∈ R_u^+ and j ∈ R_u^-. For each triplet, we need to compute the corresponding estimated relative rank \tilde{r}_uij = \tilde{r}_ui − \tilde{r}_uj; the updates are then given by:

d ← d + α_D ( σ(−\tilde{r}_uij) ∂\tilde{r}_uij/∂d − λ_D d ), (7)
V ← V + α_V ( σ(−\tilde{r}_uij) ∂\tilde{r}_uij/∂V − λ_V V ), (8)

where α_D and α_V are the learning rates.

4.4 Performance optimizations

In our approach, the direct computation of the gradients is time-consuming and is prohibitive when we have high-dimensional item features. For example, the relative rank, given by

\tilde{r}_uij = \sum_{j' \in R_u^+ \setminus \{i\}} f_i^T W f_{j'} − \sum_{j' \in R_u^+} f_j^T W f_{j'}, (9)

has complexity O(|R_u^+| n_F h), where h is the dimensionality of the latent factors and n_F is the number of features.

To compute this efficiently, let

μ_u = \sum_{j' \in R_u^+} f_{j'},

which can be precomputed once for all users. Then, since sim(i, j) is linear in its second argument, Equation 9 becomes

\tilde{r}_uij = f_i^T W (μ_u − f_i) − f_j^T W μ_u,

where W = D + V^T V. The complexity of computing the relative rank then becomes O(n_F h), which is lower than the complexity of Equation 9.
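To make the speedup concrete, here is a small numerical check (toy data; `sim`, `Ru_plus`, and `mu` are illustrative names) that the precomputed-sum form agrees with the naive pairwise sum:

```python
import numpy as np

rng = np.random.default_rng(1)
n_F, h = 50, 4
d = rng.normal(size=n_F); V = rng.normal(size=(h, n_F))
f = rng.normal(size=(10, n_F))     # item feature vectors
Ru_plus = [0, 1, 2, 3]             # items the user liked
i, j = 2, 7                        # i preferred, j not preferred

def sim(a, b):
    return (a * d) @ b + (V @ a) @ (V @ b)

# Naive relative rank: one similarity per preferred item.
r_ui = sum(sim(f[i], f[jj]) for jj in Ru_plus if jj != i)
r_uj = sum(sim(f[j], f[jj]) for jj in Ru_plus)
naive = r_ui - r_uj

# Fast: precompute mu once; sim is linear in its second argument.
mu = f[Ru_plus].sum(axis=0)
fast = (sim(f[i], mu) - sim(f[i], f[i])) - sim(f[j], mu)

assert np.isclose(naive, fast)
```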

The gradient of the diagonal component is given by

∂\tilde{r}_uij/∂d = f_i ⊙ (μ_u − f_i) − f_j ⊙ μ_u, (10)

where ⊙ represents the element-wise product. The complexity of Equation 10 is O(n_F).

The gradient of the low-rank component is given by

∂\tilde{r}_uij/∂V = V ( f_i (μ_u − f_i)^T + (μ_u − f_i) f_i^T − f_j μ_u^T − μ_u f_j^T ), (11)

whose complexity is O(n_F h).

Hence, both the estimated relative rank and all the gradients can be obtained in O(n_F h), which is linear with respect to the feature dimensionality as well as the size of the latent factors. This allows FBSM to process large-scale datasets.

1:procedure FBSM_Learn
2:      λ_V ← V regularization weight
3:      λ_D ← D regularization weight
4:      α_D, α_V ← D and V learning rates
5:     Initialize d, V randomly
6:
7:     while not converged do
8:         for each user u do
9:              sample a pair (i, j) s.t. i ∈ R_u^+, j ∈ R_u^-
10:              compute \tilde{r}_uij
11:              compute ∂\tilde{r}_uij/∂d
12:              compute ∂\tilde{r}_uij/∂V
13:              update d using (7)
14:              update V using (8)
15:         end for
16:     end while
17:
18:     return d, V
19:end procedure
Algorithm 1 FBSM-Learn
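A hedged end-to-end sketch of this training loop on toy data (the data, learning rate, and regularization values are illustrative; the gradients follow Equations (10) and (11), with the factorized similarity throughout — this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, n_F, h = 5, 20, 30, 3
f = rng.normal(size=(n_items, n_F))                       # item features
liked = {u: set(rng.choice(n_items, size=4, replace=False).tolist())
         for u in range(n_users)}                         # R_u^+
d = rng.normal(scale=0.1, size=n_F)                       # diagonal weights
V = rng.normal(scale=0.1, size=(h, n_F))                  # feature latent factors
alpha, lam_d, lam_v = 0.01, 0.001, 0.001                  # illustrative values

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(u, i):
    # Estimation-constrained prediction (Eq. 5) with factorized sim.
    return sum((f[i] * d) @ f[j] + (V @ f[i]) @ (V @ f[j])
               for j in liked[u] if j != i)

for epoch in range(3):                                    # "while not converged"
    for u in range(n_users):
        i = int(rng.choice(sorted(liked[u])))             # i in R_u^+
        j = int(rng.choice(sorted(set(range(n_items)) - liked[u])))  # j in R_u^-
        x = score(u, i) - score(u, j)                     # relative rank
        g = sigmoid(-x)                                   # BPR gradient scale
        mu = f[sorted(liked[u])].sum(axis=0)              # precomputed sum
        dx_dd = f[i] * (mu - f[i]) - f[j] * mu            # Eq. (10)
        dx_dV = (np.outer(V @ f[i], mu - f[i]) + np.outer(V @ (mu - f[i]), f[i])
                 - np.outer(V @ f[j], mu) - np.outer(V @ mu, f[j]))  # Eq. (11)
        d += alpha * (g * dx_dd - lam_d * d)              # update (7)
        V += alpha * (g * dx_dV - lam_v * V)              # update (8)
```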

We note that the proposed FBSM method is closely related to the factorization machine (FM) [10], in that both explore interactions among features. However, there is one key difference between the two: while FM depends heavily on the quality of the user features, the proposed method does not depend on such user features.

5 Experimental Evaluation

In this section we perform experiments to demonstrate the effectiveness of the proposed algorithm.

5.1 Datasets

We used four datasets (Amazon Books, MovieLens-IMDB, CiteULike, Book Crossing) to evaluate the performance of FBSM.

Amazon Books (AMAZON) is a dataset collected from Amazon about best-selling books and their ratings. The ratings were binarized by treating all ratings at or above a threshold as 1 and all ratings below it as 0. Each book is accompanied by a description, which was used as the item's content.

CiteULike (CUL) (http://citeulike.org/) aids researchers by allowing them to add scientific articles to their libraries. For users of CUL, the articles present in their library are considered preferred articles, i.e., a 1 in the preference matrix, while the rest are considered implicit 0 preferences.

MovieLens-IMDB (ML-IMDB) is a dataset extracted from the IMDB and MovieLens-1M datasets (http://www.movielens.org, http://www.imdb.com) by mapping the MovieLens and IMDB movie IDs and collecting the movies that have plots and keywords. The ratings were binarized similarly to AMAZON by treating all ratings above a threshold as 1 and all ratings at or below it as 0. The movies' plots and keywords were used as the item's content. Book Crossing (BX) is extracted from the Book Crossing dataset [18] such that each user has given at least four ratings and each book has received at least four ratings. Descriptions of these books were collected from Amazon using ISBNs and were used as item features.

For the AMAZON, CUL, BX, and ML-IMDB datasets, the words that appear in the item descriptions were collected, stop words were removed, and the remaining words were stemmed to generate the terms used as item features. All words that appeared in fewer than 20 items and all words that appeared in more than 20% of the items were omitted. The remaining words were represented by TF-IDF scores.
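A minimal pure-Python sketch of this feature pipeline (toy, already-stemmed documents; the frequency cutoffs here are illustrative stand-ins for the 20-item/20% thresholds):

```python
import math
from collections import Counter

docs = [["tragic", "drama", "loss"],
        ["blockbuster", "action", "famous"],
        ["tragic", "famous", "love"]]          # toy item descriptions

n = len(docs)
df = Counter(t for d in docs for t in set(d))  # document frequency per term
# Keep terms inside a frequency band (illustrative cutoffs).
vocab = sorted(t for t, c in df.items() if 1 <= c <= n)

def tfidf(doc):
    """TF-IDF weights of one document over the pruned vocabulary."""
    tf = Counter(doc)
    return [tf[t] / len(doc) * math.log(n / df[t]) for t in vocab]

F = [tfidf(d) for d in docs]                   # rows of the item-feature matrix
```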

Various statistics about these datasets are shown in Table 1. Most of these datasets contain items that have high-dimensional feature spaces. Comparing the densities of the datasets, we can see that the ML-IMDB dataset has a significantly higher density than the other datasets.

Dataset # users # items # features # preferences # prefs/user # prefs/item density
  CUL 3,272 21,508 6,359 180,622 55.2 8.4 0.13%
  BX 17,219 36,546 8,946 574,127 33.3 15.7 0.09%
  AMAZON 13,097 11,077 5,766 175,612 13.4 15.9 0.12%
  ML-IMDB 2,113 8,645 8,744 739,973 350.2 85.6 4.05%
Table 1: Statistics for the datasets used for testing

5.2 Comparison methods

We compared FBSM against non-collaborative personalized user modeling methods and collaborative methods.

  1. Non-Collaborative Personalized User Modeling Methods. The following method is quite similar to the method described in [3]:

    • Cosine-Similarity (CoSim): This is a personalized user-modeling method. The preference score of user u on target item i is estimated using Equation 5 with cosine similarity between item features.

  2. Collaborative Methods

    • User-specific Feature-based Similarity Models (UFSM): As mentioned before, this method [6] learns a personalized user model by using all past preferences from users across the dataset. It outperformed other state-of-the-art collaborative latent-factor-based methods, e.g., RLFM [1] and AFM [7], by a significant margin.

    • RLFMI: We used the Regression-based Latent Factor Modeling (RLFM) technique implemented in the factorization machine library LibFM [10], which accounts for inter-feature interactions. We used LibFM with SGD learning to obtain the RLFMI results.


Method
CUL BX
Params Rec@10 DCG@10 Params Rec@10 DCG@10
  CoSim - 0.1791 0.0684 - 0.0681 0.0119
  RLFMI h=75 0.0874 0.0424 h=75 0.0111 0.003
  UFSM l=1, λ=0.25 0.2017 0.0791 l=1, λ=0.1 0.0774 0.0148
  FBSM λ_D=0.25, λ_V=10, h=5 0.2026 0.0792 λ_D=1, λ_V=100, h=1 0.0776 0.0148



Method
ML-IMDB AMAZON
Params Rec@10 DCG@10 Params Rec@10 DCG@10
  CoSim - 0.0525 0.1282 - 0.1205 0.0228
  RLFMI h=15 0.0155 0.0455 h=30 0.0394 0.0076
  UFSM l=1, λ=0.005 0.0937 0.216 l=1, λ=0.25 0.1376 0.0282
  FBSM λ_D=0.01, λ_V=0.1, h=5 0.0964 0.227 λ_D=0.1, λ_V=1, h=1 0.1392 0.0284

  • The "Params" column shows the main parameters for each method. For UFSM, l is the number of similarity functions and λ is the regularization parameter. For FBSM, λ_D and λ_V are the regularization parameters and h is the dimension of the feature latent factors. The "Rec@10" and "DCG@10" columns show the values obtained for these evaluation metrics. The entries that are underlined represent the best performance obtained for each dataset.

Table 2: Performance of FBSM and Other Techniques on the Different Datasets

5.3 Evaluation Methodology and Metrics

We evaluated the performance of the methods using the following procedure. For each dataset we split the corresponding user-item preference matrix R into three matrices: R_train, R_val, and R_test. R_train contains a randomly selected fraction of the columns (items) of R, and the remaining columns were divided equally among R_val and R_test. Since the items in R_val and R_test are not present in R_train, this allows us to evaluate the methods on the item cold-start problem, as users in R_train do not have any preferences for items in R_val or R_test. The models are learned using R_train, and the best model is selected based on its performance on the validation set R_val. The selected model is then used to estimate the preferences over all items in R_test. For each user the items are sorted in decreasing order of predicted score and the first n items are returned as the Top-n recommendations. The evaluation metrics, described later, are computed using these Top-n recommendations.

After creating the train, validation, and test splits, some users may not have any items in the validation or test split. In that case we evaluate performance only for those users who have at least one item in the corresponding split. This split-train-evaluate procedure is repeated three times for each dataset, and the evaluation metric scores are averaged over the three runs before being reported.

We used two metrics to assess the performance of the various methods: Recall at n (Rec@n) and Discounted Cumulative Gain at n (DCG@n). Given the list of Top-n recommended items for user u, Rec@n measures how many of the items liked by u appeared in that list, whereas DCG@n measures how high the relevant items were placed in the list. Rec@n is defined as

Rec@n = |{items liked by u} ∩ {Top-n items}| / |{items liked by u}|.

DCG@n is defined as

DCG@n = imp_1 + \sum_{p=2}^{n} imp_p / log_2(p),

where the importance score imp_p of the item with rank p in the Top-n list is 1/n if the item is liked by u in the test set and 0 otherwise. The main difference between Rec@n and DCG@n is that DCG@n is sensitive to the rank of the items in the Top-n list. Both Rec@n and DCG@n are computed for each user and then averaged over all the users.
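The two metrics can be sketched as follows (one common FISM-style instantiation, with importance 1/n for a relevant item; the paper's exact constants are not fully recoverable from the text):

```python
import math

def rec_at_n(topn, relevant):
    """Fraction of the user's relevant (test) items that appear in Top-n."""
    return len(set(topn) & set(relevant)) / len(relevant)

def dcg_at_n(topn, relevant):
    """Rank-sensitive gain: a hit at rank p contributes imp/log2(p) for
    p >= 2, and imp undiscounted at rank 1, with imp = 1/n."""
    n = len(topn)
    dcg = 0.0
    for p, item in enumerate(topn, start=1):
        if item in relevant:
            imp = 1.0 / n
            dcg += imp if p == 1 else imp / math.log2(p)
    return dcg
```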

5.4 Model Training

FBSM's model parameters are estimated using the training set R_train and the validation set R_val. After each major SGD iteration of Algorithm 1 we compute Rec@n on the validation set and save the current model if its Rec@n is better than those of previous iterations. The learning process ends when the optimization objective converges or when no further improvement in validation recall is observed for a number of major SGD iterations. At the end of the learning process we return the model that achieved the best Rec@n on the validation set.

To estimate the model parameters of FBSM, in each major SGD iteration we draw a number of samples equal to the number of preferences in R_train. Each sample consists of a user, an item preferred by that user, and an item not preferred by that user. If a dataset does not contain items explicitly not preferred by the user, then we sample from the items for which the user's preference is unknown.

For estimating the RLFMI model parameters, LibFM was given the training and validation sets, and the model that performed best on the validation set was used for evaluation on the test sets. For RLFMI, the training set must contain both 0's and 1's. Since the CUL dataset contains only 1's, we sampled 0's equal to the number of 1's in R_train from the unknown values.

6 Results and Discussion

6.1 Comparison with previous methods

We compared the performance of FBSM with the other methods described in Section 5.2. Results for the different datasets are shown in Table 2. We tried different values for the various parameters, e.g., the number of latent factors and the regularization parameters associated with the methods, and report the best results found across the datasets.

These results illustrate that modeling the cross-feature interactions among item features can improve upon the UFSM method [6], which had been shown to outperform existing state-of-the-art methods like RLFM [1] and AFM [7]. Like UFSM, FBSM outperformed the latent-factor-based RLFMI method. An example of cross-feature interactions found by FBSM is the interaction among the terms tragic, blockbuster, and famous.

6.2 Performance investigation at user level

We further looked at two of our datasets (ML-IMDB and AMAZON) and divided the users based on the performance achieved by FBSM in comparison with UFSM, i.e., users for whom FBSM performed better than, similar to, or worse than UFSM. These findings are presented in Table 3. For the ML-IMDB dataset there is a substantial increase in the number of users for whom recommendations are better when using the FBSM method, while for the AMAZON dataset the number of users that benefited from FBSM is not significant. Comparing the two datasets in Table 3, ML-IMDB has many more preferences per item, i.e., existing items have been rated by more users, compared to AMAZON. Hence our proposed method FBSM takes advantage of the availability of more data, while UFSM fails to do so.

Dataset FBSM against UFSM users items average user preferences average item preferences
ML-IMDB BETTER 887 4770 224 42
SAME 802 4371 119 22
WORSE 424 3928 137 15
AMAZON BETTER 325 4170 23 2
SAME 12458 6638 7 13
WORSE 314 4294 24 2
Table 3: User level investigation for ML-IMDB and AMAZON

7 Conclusion

We presented FBSM for Top-n recommendation in the item cold-start scenario. It learns a similarity function between items, represented by their features, using all the information available across users, and captures interactions between features using a bilinear model. The computational complexity of estimating the bilinear model is significantly reduced by modeling the similarity as the sum of a diagonal component and an off-diagonal component, with the off-diagonal component further estimated via dot products of feature latent factors.

In the future, we want to investigate the effect of non-negativity constraints on the model parameters and the effectiveness of the method on actual rating prediction instead of Top-n recommendation.

References

  • [1] Deepak Agarwal and Bee-Chung Chen. Regression-based latent factor models. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’09, pages 19–28. ACM, 2009.
  • [2] E. Banos, I. Katakis, N. Bassiliades, G. Tsoumakas, and I. Vlahavas. Personews: a personalized news reader enhanced by machine learning and semantic filtering. In Proceedings of the 2006 Confederated international conference on On the Move to Meaningful Internet Systems: CoopIS, DOA, GADA, and ODBASE - Volume Part I, ODBASE’06/OTM’06, pages 975–982, Berlin, Heidelberg, 2006. Springer-Verlag.
  • [3] Daniel Billsus and Michael J. Pazzani. A hybrid user model for news story classification. In Proceedings of the seventh international conference on User modeling, pages 99–108, 1999.
  • [4] Léon Bottou. Online algorithms and stochastic approximations. In David Saad, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK, 1998.
  • [5] Wei Chu and Seung-Taek Park. Personalized recommendation on dynamic content using predictive bilinear models. In Proceedings of the 18th International Conference on World Wide Web, WWW ’09, pages 691–700, New York, NY, USA, 2009. ACM.
  • [6] Asmaa Elbadrawy and George Karypis. Feature-based similarity models for top-n recommendation of new items. Technical Report 14-016, Department of Computer Science, University of Minnesota, Minneapolis, Minnesota, June 2013.
  • [7] Zeno Gantner, Lucas Drumond, Christoph Freudenthaler, Steffen Rendle, and Lars Schmidt-Thieme. Learning attribute-to-feature mappings for cold-start recommendations. In Proceedings of the 2010 IEEE International Conference on Data Mining, ICDM ’10, pages 176–185, Washington, DC, USA, 2010. IEEE Computer Society.
  • [8] Santosh Kabbur, Xia Ning, and George Karypis. Fism: Factored item similarity models for top-n recommender systems. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, pages 659–667. ACM, 2013.
  • [9] Seung-Taek Park and Wei Chu. Pairwise preference regression for cold-start recommendation. In Proceedings of the Third ACM Conference on Recommender Systems, RecSys ’09, pages 21–28, New York, NY, USA, 2009. ACM.
  • [10] Steffen Rendle. Factorization machines with libFM. ACM Trans. Intell. Syst. Technol., 3(3):57:1–57:22, May 2012.
  • [11] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI ’09, pages 452–461, Arlington, Virginia, United States, 2009. AUAI Press.
  • [12] Manuel de Buenaga Rodríguez, Manuel J. Maña López, Alberto Díaz Esteban, and Pablo Gervás Gómez-Navarro. A user model based on content analysis for the intelligent personalization of a news service. In Proceedings of the 8th International Conference on User Modeling 2001, UM ’01, pages 216–218, London, UK, UK, 2001. Springer-Verlag.
  • [13] Jimeng Sun, Dacheng Tao, and Christos Faloutsos. Beyond streams and graphs: Dynamic tensor analysis. In KDD, pages 374–383, 2006.
  • [14] Joshua B. Tenenbaum and William T. Freeman. Separating style and content with bilinear models. Neural Comput., 12(6):1247–1283, June 2000.
  • [15] Ledyard R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
  • [16] Wei Wu, Zhengdong Lu, and Hang Li. Learning bilinear model for matching queries and documents. J. Mach. Learn. Res., 14(1):2519–2548, January 2013.
  • [17] Liang Zhang, Deepak Agarwal, and Bee-Chung Chen. Generalizing matrix factorization through flexible regression priors. In Proceedings of the fifth ACM conference on Recommender systems, RecSys ’11, pages 13–20, New York, NY, USA, 2011. ACM.
  • [18] Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, and Georg Lausen. Improving recommendation lists through topic diversification. In Proceedings of the 14th International Conference on World Wide Web, WWW ’05, pages 22–32, New York, NY, USA, 2005. ACM.