LCMR: Local and Centralized Memories for Collaborative Filtering with Unstructured Text

04/17/2018, by Herbert Hu, et al.

Collaborative filtering (CF) is the key technique for recommender systems. Pure CF approaches exploit only the user-item interaction data (e.g., clicks, likes, and views) and suffer from the sparsity issue. Items are usually associated with content information such as unstructured text (e.g., abstracts of articles and reviews of products), and CF can be extended to leverage this text. In this paper, we develop a unified neural framework to exploit interaction data and content information seamlessly. The proposed framework, called LCMR, is based on memory networks and consists of local and centralized memories for exploiting content information and interaction data, respectively. By modeling content information as local memories, LCMR attentively learns what to exploit with the guidance of user-item interactions. On real-world datasets, LCMR shows better performance compared with various baselines in terms of the hit ratio and NDCG metrics. We further conduct analyses to understand how local and centralized memories work in the proposed framework.

1 Introduction

Recommender systems are widely used in various domains and e-commerce platforms, such as helping consumers buy products on Amazon, watch videos on YouTube, and read articles on Google News. Collaborative filtering (CF) is among the most effective approaches, based on the simple intuition that if users rated items similarly in the past then they are likely to rate items similarly in the future [Sarwar et al.2001]. Matrix factorization (MF) techniques, which learn latent factors for users and items, are its main cornerstone [Mnih and Salakhutdinov2008, Koren et al.2009]. Recently, neural networks like the multilayer perceptron (MLP) have been used to learn the interaction function from data [Dziugaite and Roy2015, He et al.2017]. Both MF and neural CF suffer from the data sparsity and cold-start issues.

Items are usually associated with content information such as unstructured text, like news articles and product reviews. These additional sources of information can alleviate the sparsity issue and are essential for recommendation beyond user-item interaction data. In application domains like recommending research papers and news articles, the unstructured text associated with an item is its text content [Wang and Blei2011, Bansal et al.2016]. In other domains like recommending products, the unstructured text associated with an item is its user reviews, which justify the rating behavior [McAuley and Leskovec2013, Zheng et al.2017]. Topic modeling and neural networks have been proposed to exploit item content, leading to performance improvements.

These two research threads, i.e., pure CF approaches which exploit user-item interaction data and extended CF methods which integrate item content, represent different perspectives on interaction and content information. On the one hand, a recent pure CF approach, the latent relational metric learning model [Tay et al.2018], showed that user-item specific latent relations can be generated by a centralized memory module. The centralized (or global) memories are parameterized by a memory matrix shared by all user-item interaction data. On the other hand, memory networks are widely used in question answering and reading comprehension [Weston et al.2016, Sukhbaatar et al.2015, Miller et al.2016]. Their memories can naturally model additional sources like item content; these memories are local (or dynamic) since they are specific to the input query. Local and centralized memories thus offer a way to unify these two research threads and exploit interaction data and content information seamlessly.

In this paper, we propose a novel neural framework to exploit interaction data and content information seamlessly from the centralized and local perspectives. The proposed framework, called LCMR, consists of local and centralized memory modules for exploiting content information and interaction data, respectively. By modeling content information as local memories, LCMR attentively learns what to exploit with the guidance of user-item interactions. The local and centralized memories are jointly trained in an end-to-end neural framework. Moreover, LCMR is a unified framework embracing both pure CF and extended CF approaches.

2 The LCMR Framework

We first introduce notations used throughout the paper. For collaborative filtering with implicit feedback [Hu et al.2008, Pan et al.2008], there is a binary matrix $R \in \{0,1\}^{m \times n}$ describing user-item interactions, where each entry $r_{ui}$ is 1 (an observed entry) if user $u$ has an interaction with item $i$ and 0 (unobserved) otherwise:

$r_{ui} = \begin{cases} 1, & \text{if user } u \text{ interacts with item } i, \\ 0, & \text{otherwise.} \end{cases}$

Denote the set of $m$ users by $\mathcal{U}$ and the set of $n$ items by $\mathcal{I}$. Usually the interaction matrix is very sparse since a user typically consumes only a very small subset of all items. Similarly, for the task of item recommendation, each user is only interested in identifying the top-$N$ items. The items are ranked by their predicted scores:

$\hat{r}_{ui} = f(u, i \mid \Theta), \qquad (1)$

where $f$ is the interaction function and $\Theta$ denotes the model parameters.
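To make Eq.(1) concrete, the following minimal Python/numpy sketch ranks a user's candidate items by predicted score and returns the top-N list; the scoring function here is a random stand-in for the learned interaction function $f(u, i \mid \Theta)$, and all names and sizes are illustrative assumptions rather than the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    num_items = 1000

    def score(user_id, item_ids):
        # Stand-in for the learned interaction function f(u, i | Theta).
        return rng.random(len(item_ids))

    def recommend_top_n(user_id, candidate_items, n=10):
        scores = score(user_id, candidate_items)
        order = np.argsort(-scores)              # descending by predicted score
        return [candidate_items[j] for j in order[:n]]

    print(recommend_top_n(user_id=42, candidate_items=list(range(num_items))))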

For MF-based CF approaches, the interaction function $f$ is fixed to the dot product between the user and item latent vectors [Mnih and Salakhutdinov2008, Koren et al.2009]. For neural CF, neural networks are used to parameterize the function $f$ and learn it from interaction data [Dziugaite and Roy2015, He et al.2017]:

$\hat{r}_{ui} = \phi_{out}\big(\phi_L(\cdots \phi_1(\mathbf{x}_{ui}) \cdots)\big), \qquad (2)$

where the input $\mathbf{x}_{ui}$ is the vertical concatenation of the user and item embeddings, which are projections of their one-hot encodings $\mathbf{x}_u$ and $\mathbf{x}_i$ by embedding matrices $P$ and $Q$, respectively ($\mathbf{x}_{ui} = [P^\top \mathbf{x}_u; \, Q^\top \mathbf{x}_i]$). The output and hidden layers are computed by $\phi_{out}$ and $\{\phi_\ell\}$ in a multilayer neural network.
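As a rough illustration of Eq.(2), the following numpy sketch embeds a user and an item, concatenates the embeddings, and passes them through a single hidden layer with a logistic output; the layer sizes, the ReLU activation, and the single hidden layer are illustrative assumptions, not the architecture used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, d = 100, 200, 16                     # users, items, embedding size
    P = rng.normal(0, 0.01, (m, d))            # user embedding matrix
    Q = rng.normal(0, 0.01, (n, d))            # item embedding matrix
    W1 = rng.normal(0, 0.01, (2 * d, 32))      # hidden layer weights
    h = rng.normal(0, 0.01, 32)                # output layer weights

    def predict(u, i):
        x_ui = np.concatenate([P[u], Q[i]])    # joint user-item embedding
        hidden = np.maximum(0.0, x_ui @ W1)    # ReLU hidden layer
        logit = hidden @ h
        return 1.0 / (1.0 + np.exp(-logit))    # probability of an interaction

    print(predict(3, 7))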

Items are associated with unstructured text like abstracts of articles and reviews of products. For item $i$ (such as an article), denote the words in it as $d_i = \{w_1, \dots, w_{l_i}\}$, where the words come from a $|V|$-sized vocabulary (usually $l_i \ll |V|$). Neural CF can be extended to leverage text, and the interaction function then has the form $f(u, i, d_i \mid \Theta)$.

Neural architectures give us the flexibility to exploit interaction data and content information seamlessly via the centralized and local memories introduced in the following subsections (Sections 2.2 and 2.3).

2.1 Architecture

Figure 1: The Proposed Architecture of the Local and Centralized Memories Recommender (LCMR) Model. In this illustration, there is one block for each of the local and centralized memory modules and the size of the memories is four.

Our contributions are summarized as follows:

  • Introducing an alternative approach to integrate item content via local memories;

  • Proposing a novel recommender system LCMR which exploits interaction data and content information seamlessly in an end-to-end neural framework;

  • Evaluating LCMR extensively on real-world datasets and conducting analyses to help understand the impact of local and centralized memories for LCMR.

The architecture of the proposed LCMR model is illustrated in Figure 1. In general, besides the input, embedding, and output layers, LCMR is built by stacking multiple building blocks to learn (highly nonlinear) interaction relationships between users and items. The building blocks consist of local and centralized memories. The information flow in LCMR goes from the input to the output through the following layers:

  • Input layer: This module encodes user-item interaction indices. We adopt the one-hot encoding. It takes user $u$ and item $i$ and maps them into one-hot encodings $\mathbf{x}_u$ and $\mathbf{x}_i$, where only the element corresponding to that index is 1 and all others are 0.

  • Embedding lookup: This module embeds the one-hot encodings into continuous representations and then concatenates them as

    $\mathbf{x}_{ui} = \big[P^\top \mathbf{x}_u; \, Q^\top \mathbf{x}_i\big], \qquad (3)$

    which is the input of the following building blocks.

  • Multi-hop centralized blocks: This module is a pure CF component that exploits the user-item interaction data. It takes the continuous representation from the embedding module and transforms it into a final latent representation:

    $\mathbf{z}_c = f_{central}(\mathbf{x}_{ui}), \qquad (4)$

    where $f_{central}$ denotes the computing function of the module.

  • Multi-hop local blocks: This module is an extended CF component that integrates the item content with the guidance of the interaction data. The item content is modelled by memories. It maps the representation from the embedding module, together with the text associated with the item, to a final latent representation:

    $\mathbf{z}_l = f_{local}(\mathbf{x}_{ui}, d_i), \qquad (5)$

    where $f_{local}$ denotes the computing function of the module.

  • Output layer: This module predicts the score for the given user-item pair based on the concatenated representation

    $\mathbf{z} = [\mathbf{z}_c; \, \mathbf{z}_l] \qquad (6)$

    from the multi-hop blocks. The output is the probability that the input pair is a positive interaction, which can be produced by a softmax/logistic layer:

    $\hat{r}_{ui} = \sigma(\mathbf{h}^\top \mathbf{z}), \qquad (7)$

    where $\mathbf{h}$ is the parameter of the output layer and $\sigma(\cdot)$ is the logistic function.

The centralized memory module mainly learns (highly) nonlinear representations from user-item interactions through multiple nonlinear transformations. The local memory module mainly learns text semantics from item content with the guidance of user-item interactions. Each module consists of multi-hop blocks. As a unified model, LCMR learns both interaction representations and content semantics by jointly training the centralized and local memory modules.
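The following numpy sketch traces the data flow of Eqs.(3)-(7): embedding lookup, the two memory modules, concatenation, and the logistic output. The two module functions are placeholders here (their attention internals are sketched in Sections 2.2 and 2.3), and all sizes are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, d = 100, 200, 16
    P = rng.normal(0, 0.01, (m, d))                  # user embeddings
    Q = rng.normal(0, 0.01, (n, d))                  # item embeddings
    h = rng.normal(0, 0.01, 4 * d)                   # output layer weights

    def centralized_module(x_ui):
        # Placeholder for the multi-hop centralized blocks, Eq.(4).
        return x_ui

    def local_module(x_ui, item_words):
        # Placeholder for the multi-hop local blocks, Eq.(5).
        return x_ui

    def forward(u, i, item_words):
        x_ui = np.concatenate([P[u], Q[i]])          # Eq.(3): embedding lookup
        z_c = centralized_module(x_ui)               # Eq.(4)
        z_l = local_module(x_ui, item_words)         # Eq.(5)
        z = np.concatenate([z_c, z_l])               # Eq.(6): concatenation
        return 1.0 / (1.0 + np.exp(-(h @ z)))        # Eq.(7): logistic output

    print(forward(3, 7, item_words=[12, 45, 3]))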

2.2 Centralized Memory Module: Exploiting Interactions

The basic idea of neural CF approaches is to learn the highly complex matching relationships between users and items from their interaction data through multiple nonlinear transformations, usually implemented by neural networks [Dziugaite and Roy2015, He et al.2017]. Inspired by the latent relational metric learning (LRML) model [Tay et al.2018], which showed that user-item specific latent relations can be generated by a centralized memory module, we similarly design a centralized memory module (the input to our centralized memory module is different from that in the LRML model, and our module contains multiple memory blocks while LRML has only one) to learn a latent representation from the joint user-item embedding $\mathbf{x}_{ui}$.

A centralized (or global) memory block (illustrated by a dotted rectangle marked ‘Centralized Block’ in Figure 1) is parameterized by a memory matrix $M \in \mathbb{R}^{P \times D}$ and a key matrix $K \in \mathbb{R}^{P \times D}$, where $P$ is the size of the memories and $D$ is the dimensionality of the joint embedding; both matrices are shared by all user-item interaction data. The first block takes the concatenated embeddings of the user and item as its input and produces a transformed representation as its output, which in turn is the input to the next block if there are multiple hops. For simplicity, we show the computation path of a module with only one block in the following.

The computation path goes through $\mathbf{x}_{ui} \rightarrow \mathbf{w} \rightarrow \boldsymbol{\alpha} \rightarrow \mathbf{o}$. First, we perform a content-based addressing mechanism between $\mathbf{x}_{ui}$ and $\mathbf{k}_p$ (the $p$-th row vector of $K$) using a similarity measure (the dot product) to produce unnormalized weights:

$w_p = \mathbf{x}_{ui}^\top \mathbf{k}_p. \qquad (8)$

The unnormalized attentive weights are normalized into a simplex vector, e.g., by a softmax function:

$\alpha_p = \frac{\exp(w_p / \tau)}{\sum_{p'} \exp(w_{p'} / \tau)}, \qquad (9)$

where the temperature parameter $\tau$ serves two purposes. It stabilizes the numerical computation when the arguments of the softmax exponentials are very large, e.g., when the dimensionality is high [Vaswani et al.2017], and it can amplify or attenuate the precision of the attention [Graves et al.2014]. We set $\tau = \sqrt{D}$, scaling along with the dimensionality.

Second, we use the attentive weights to compute a weighted sum over the memories as the output:

$\mathbf{o} = \sum_{p=1}^{P} \alpha_p \mathbf{m}_p, \qquad (10)$

where $\mathbf{m}_p$ is the $p$-th row vector of $M$.

Usually there are multiple nonlinear transformations in a neural architecture; each block has its own memory matrices $M$ and $K$. The parameters of the centralized module are the per-block matrices $\{M, K\}$.
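A minimal numpy sketch of one centralized block, under the notation reconstructed above: dot-product addressing against the key matrix (Eq.(8)), a temperature-scaled softmax (Eq.(9)), and a weighted sum over the memory matrix (Eq.(10)). The memory size and dimensionality are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    D, P_mem = 32, 8                          # joint embedding dim, memory size
    K = rng.normal(0, 0.01, (P_mem, D))       # key matrix, shared by all pairs
    M = rng.normal(0, 0.01, (P_mem, D))       # memory matrix, shared by all pairs

    def centralized_block(x_ui, K, M, tau=None):
        tau = np.sqrt(x_ui.shape[0]) if tau is None else tau
        w = K @ x_ui / tau                    # Eq.(8) with temperature scaling
        alpha = np.exp(w - w.max())
        alpha = alpha / alpha.sum()           # Eq.(9): softmax attention weights
        return alpha @ M                      # Eq.(10): weighted sum over memories

    x_ui = rng.normal(size=D)
    print(centralized_block(x_ui, K, M))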

2.3 Local Memory Module: Integrating Text

Items are usually associated with content information such as unstructured text (e.g., abstracts of articles and reviews of products). CF approaches can be extended to exploit such content information [Wang and Blei2011, Bansal et al.2016] and user reviews [McAuley and Leskovec2013, Zheng et al.2017]. Memory networks can reason with an external memory [Weston et al.2016, Sukhbaatar et al.2015]. Since memories can naturally learn word embeddings that address the problems of word sparseness and the semantic gap, we design a local memory module to model item content via memories. These memories are dynamic (or local) since they are specific to the input query, and they attentively learn what to exploit with the guidance of user-item interactions.

A local (or dynamic) memory block is illustrated by a dotted rectangle marked ‘Local Block’ in Figure 1, where the affiliated irregular rectangle denotes the external memories containing the words of item $i$. For simplicity, in the following we only show one block in the computation path.

Suppose we are given the words $w_1, \dots, w_{l_i}$ (coming from a $|V|$-sized vocabulary) of item $i$ as the input to be stored in the memory. The entire set of words is converted into key vectors $\{\mathbf{a}_j\}$ of dimension $D$, computed by embedding each $w_j$ in a continuous space using an embedding matrix $A$ (of size $|V| \times D$). The query is the concatenated user-item embedding $\mathbf{x}_{ui}$. In the embedding space, we compute the match between the query and each memory by taking the inner product followed by a softmax function, similar to Eq.(8) and Eq.(9), respectively. Each $w_j$ also has a corresponding memory vector $\mathbf{c}_j$ given by another embedding matrix $C$ (of size $|V| \times D$). The resulting representation is then a sum over the memory vectors weighted by the attentive probabilities, similar to Eq.(10).

Usually there are multiple nonlinear transformations in a neural architecture. The embedding matrices $A$ and $C$ are shared across blocks. The parameters of the local memory module are $\{A, C\}$.
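A minimal numpy sketch of one local block, again under the reconstructed notation: the item's words are embedded into key vectors (matrix A) and memory vectors (matrix C), the joint user-item embedding acts as the query, and the output is an attention-weighted sum over the word memories. Vocabulary size, dimensionality, and word ids are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    V, D = 8000, 32                              # vocabulary size, joint embedding dim
    A = rng.normal(0, 0.01, (V, D))              # key embedding matrix
    C = rng.normal(0, 0.01, (V, D))              # memory embedding matrix

    def local_block(x_ui, word_ids, A, C):
        keys = A[word_ids]                       # key vectors of the item's words
        mems = C[word_ids]                       # memory vectors of the same words
        w = keys @ x_ui                          # inner-product match, as in Eq.(8)
        alpha = np.exp(w - w.max())
        alpha = alpha / alpha.sum()              # softmax attention, as in Eq.(9)
        return alpha @ mems                      # weighted sum, as in Eq.(10)

    x_ui = rng.normal(size=D)
    print(local_block(x_ui, word_ids=[10, 57, 311, 42], A=A, C=C))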

2.4 Optimization and Learning

Due to the nature of the implicit feedback and the task of item recommendation, the squared loss may not be suitable since it is usually used for rating prediction. Instead, we adopt the binary cross-entropy loss:

$\mathcal{L} = -\sum_{(u,i)\in \mathcal{S}} \big[\, r_{ui} \log \hat{r}_{ui} + (1 - r_{ui}) \log(1 - \hat{r}_{ui}) \,\big], \qquad (11)$

where $\mathcal{S}$ is the union of the observed interactions and randomly sampled negative pairs. This objective function has a probabilistic interpretation and is the negative log-likelihood of the following likelihood function:

$p(\mathcal{S} \mid \Theta) = \prod_{(u,i)\in \mathcal{S}^{+}} \hat{r}_{ui} \prod_{(u,j)\in \mathcal{S}^{-}} (1 - \hat{r}_{uj}), \qquad (12)$

where the model parameters $\Theta$ comprise the user and item embedding matrices $P$ and $Q$, the output layer weights $\mathbf{h}$, the centralized memory matrices $\{M, K\}$, and the local memory matrices $\{A, C\}$.

This objective function can be optimized by the stochastic gradient descent (SGD) algorithm and its variants.
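For illustration, the following sketch accumulates the binary cross-entropy of Eq.(11) over observed pairs plus uniformly sampled negatives (one negative per positive, matching the sampling ratio used in our experiments); the predict function is an abstract stand-in for the LCMR output of Eq.(7), not the actual implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    num_items = 200

    def predict(u, i):
        # Stand-in for the LCMR forward pass, Eq.(7).
        return rng.uniform(0.05, 0.95)

    def bce_loss(observed_pairs, num_negatives=1, eps=1e-8):
        loss = 0.0
        for u, i in observed_pairs:
            loss -= np.log(predict(u, i) + eps)            # positive term
            for _ in range(num_negatives):
                j = rng.integers(num_items)                # sampled negative item
                loss -= np.log(1.0 - predict(u, j) + eps)  # negative term
        return loss

    print(bce_loss([(0, 3), (0, 17), (5, 42)]))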

Note that the local and centralized modules in LCMR are jointly trained, which is distinct from ensemble learning. In an ensemble, the predictions of individual models are combined only at inference time, not during training. In contrast, joint training optimizes all parameters simultaneously by taking both the local and centralized modules into account during training. Furthermore, the entire model is more compact since the user and item embeddings are shared.

3 Experiments

In this section, we conduct an empirical study to answer the following questions: 1) how does the proposed LCMR model perform compared with state-of-the-art recommender systems; and 2) how do local and centralized memories contribute to the proposed framework. We first introduce the evaluation protocols and experimental settings, and then compare the performance of different recommender systems. We further analyze the LCMR model to understand the impact of the two memory modules, followed by the optimization curves.

3.1 Experimental Settings

Datasets We conduct experiments on two datasets. The first dataset, CiteULike (http://www.cs.cmu.edu/~chongw/data/citeulike/), is widely used to evaluate performance on scientific article recommendation [Wang and Blei2011]. The second dataset, Company Mobile, provided by a company, covers news reading in the New York City region over one month (January 2017). Other information, such as dwell time, publisher, and demographic data, is not used in this paper. For CiteULike, we use the version released in [Wang and Blei2011]; the vocabulary size is 8,000 and there are about 1.6M words. For the Company Mobile dataset, we preprocess it following [Wang and Blei2011]. We removed users with fewer than 10 feedback entries. For each item, we use only the news title. We filter stop words and use tf-idf to choose the top 8,000 distinct words as the vocabulary, which yields a corpus of 0.6M words. The statistics of the datasets are summarized in Table 1. Both datasets are sparser than 99%. Note that CiteULike contains long text (paper abstracts, 93.5 words per item on average), while Company Mobile contains short text (news titles, 6.7 words per item on average).

Dataset CiteULike Company Mobile
#Users 5,551 18,387
#Items 16,980 92,008
#Feedback 204,986 569,749
#Words 1,587,000 612,839
Rating Density (%) 0.218 0.034
Avg. Words per Item 93.5 6.7
Table 1: Datasets and Statistics.

Evaluation protocols For the item recommendation task, leave-one-out (LOO) evaluation is widely used, and we follow the protocol of neural collaborative filtering [He et al.2017]. That is, we reserve one interaction (usually the latest one, or a randomly picked one if no temporal information is available) as the test item for each user. We follow the common strategy of randomly sampling 99 (negative) items with which the user has not interacted and then evaluating how well the recommender ranks the test item against these negatives. The performance is measured by Hit Ratio and Normalized Discounted Cumulative Gain (NDCG), where the ranked list is cut off at 10. The former metric measures whether the test item is present in the top-10 list, and the latter also accounts for the hit position by rewarding top ranks more highly. Results are averaged over all test users. The higher the values, the better the performance.
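The following sketch shows how HR@10 and NDCG@10 can be computed under this protocol: the held-out test item is ranked against 99 sampled negatives and both metrics are derived from its rank. The scores are random stand-ins for a recommender's predictions; only the metric computation is the point here.

    import numpy as np

    rng = np.random.default_rng(0)

    def hr_ndcg_at_k(test_score, negative_scores, k=10):
        rank = 1 + int(np.sum(np.asarray(negative_scores) > test_score))
        hit = 1.0 if rank <= k else 0.0
        ndcg = 1.0 / np.log2(rank + 1) if rank <= k else 0.0
        return hit, ndcg

    hits, ndcgs = [], []
    for _ in range(1000):                       # one held-out interaction per user
        test_score = rng.random()
        negative_scores = rng.random(99)        # 99 sampled non-interacted items
        hit, ndcg = hr_ndcg_at_k(test_score, negative_scores)
        hits.append(hit)
        ndcgs.append(ndcg)

    print(np.mean(hits), np.mean(ndcgs))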

Baselines We compare with various baselines. The first class of methods is non-personalized: ItemPOP ranks items by their popularity, that is, the number of interacting users. The second class is pure CF: BPRMF, Bayesian personalized ranking [Rendle et al.2009], optimizes matrix factorization with a pairwise ranking loss rather than the pointwise loss we use and is tailored to learning from implicit feedback; it is a state-of-the-art traditional CF technique. MLP, the multilayer perceptron [He et al.2017], learns the nonlinear interaction function using feedforward neural networks. The last class is extended CF: CTR, collaborative topic regression [Wang and Blei2011], combines MF and topic modeling; it is a state-of-the-art model which also exploits auxiliary text sources, and the CiteULike dataset was introduced in the CTR paper.

Dataset Metric ItemPOP BPRMF MLP CTR LCMR Improvement
CiteULike Hit Ratio 27.35 74.43 78.02 83.05* 84.60 1.87%
NDCG 18.32 49.44 51.23 57.79* 61.07 5.68%
Company Mobile Hit Ratio 67.91 68.39 73.20* 68.23 75.74 3.47%
NDCG 45.55 50.74 51.80* 46.34 54.62 5.44%
Table 2: Results of Hit Ratio and NDCG (%) at cut-off 10. The last column is the relative improvement of LCMR over the best baseline (marked by a star *). The best scores are boldfaced.

Settings For BPRMF, we use the implementation of LightFM (https://github.com/lyst/lightfm), a widely used CF library in various competitions. For neural CF methods, we use the implementation released by the authors (https://github.com/hexiangnan/neural_collaborative_filtering). For CTR, we use the implementation released by the authors (http://www.cs.cmu.edu/~chongw/citeulike/). Our method is implemented using TensorFlow (https://www.tensorflow.org) and runs on an Nvidia GTX TITAN X GPU. As a general setting, parameters are randomly initialized from a Gaussian distribution. The optimizer is adaptive moment estimation (Adam) [Kingma and Ba2015] with an initial learning rate of 0.001. The mini-batch size is 128. The negative sampling ratio is 1. We tune the hyper-parameters (number of hops, size of memories, and dimensionality) on the validation set. Best results are reported on the test set over 50 epochs, with hyper-parameters fixed to the values giving the best validation performance.

3.2 Comparisons of Different Recommender Systems

The comparison results are shown in Table 2 and we have the following observations.

Firstly, LCMR outperforms the traditional CF method BPRMF on both datasets in terms of both Hit Ratio and NDCG. On CiteULike, LCMR obtains large relative improvements of 13.66% in Hit Ratio and 23.52% in NDCG. On Company Mobile, LCMR obtains large relative improvements of 10.75% in Hit Ratio and 7.65% in NDCG. Compared with traditional matrix factorization based models, where the dot product is used to match users and items, the results show the benefit of learning a nonlinear interaction function through multiple nonlinear transformations.

Secondly, LCMR also outperforms the neural CF method MLP on both datasets in terms of both Hit Ratio and NDCG. On CiteULike, LCMR obtains large relative improvements of 8.43% in Hit Ratio and 19.21% in NDCG. On Company Mobile, LCMR still obtains reasonably significant improvements of 3.47% in Hit Ratio and 5.44% in NDCG. Compared with pure neural CF methods, which exploit the interaction data only, the results show the benefit of integrating text information through the local memory module of LCMR.

Lastly, LCMR outperforms the extended CF method CTR by a large margin on the Company Mobile dataset, with relative improvements of 11.01% in Hit Ratio and 17.87% in NDCG, and still obtains reasonably significant improvements on CiteULike, with 1.87% in Hit Ratio and 5.68% in NDCG. Note that the Company Mobile dataset consists of short news titles, which makes it difficult for the CTR model to learn topic distributions. Furthermore, news articles have timeliness and popularity characteristics, which may explain why the non-personalized ItemPOP is a competitive baseline on this dataset.

In summary, the empirical results demonstrate the superiority of the local and centralized memory modules in exploiting interaction and text information. In the following subsection, we investigate the contributions of the components of LCMR.

3.3 The Impact of the Local and Centralized Memory Modules

Figure 2: Impact of the Local and Centralized Memory Modules of the LCMR Model on CiteULike

We have shown the effectiveness of the local and centralized memory modules in our proposed LCMR framework. We now investigate the contribution of each memory module to LCMR by eliminating the impact of the local and centralized modules in turn (if we eliminate both the centralized and local memory modules, LCMR reduces to scoring a user-item pair directly from the joint embedding through the logistic output layer, with model parameters $\{P, Q, \mathbf{h}\}$, which is similar to matrix factorization methods like BPRMF but with an extra nonlinear sigmoid transformation):

  • LCMR-local: Eliminating the impact of the local memory module by setting $\mathbf{z} = \mathbf{z}_c$ in Eq.(6); that is, removing the local multi-hop blocks. The model parameters are those of LCMR without the local memory module.

  • LCMR-central: Eliminating the impact of the centralized memory module by setting $\mathbf{z} = \mathbf{z}_l$ in Eq.(6); that is, removing the centralized multi-hop blocks. The model parameters are those of LCMR without the centralized memory module.

The comparison of LCMR with its two reduced variants on CiteULike is shown in Figure 2. The performance degrades when either the local or the centralized memory module is eliminated. In detail, LCMR-local and LCMR-central reduce the relative NDCG performance by 5.85% and 3.57% respectively, suggesting that both local and centralized memories contain essential information for the recommender. Naturally, removing the local memory module degrades performance more than removing the centralized memory module, due to the loss of the text information source.

3.4 Sensitivity to Embedding Dimensionality

The dimensionality of the joint embedding, i.e., $\mathbf{x}_{ui}$ in Eq.(3), controls the model complexity. Figure 3 shows the sensitivity of our model to it on CiteULike. Note that the x-axis in Figure 3 equals half the dimensionality of the joint embedding; in other words, it is the dimensionality of the user (item) embeddings. The figure clearly indicates that the embedding should not be too small, due to the possibility of information loss and the limits of expressiveness. Good results are obtained when the joint dimensionality is around 150.

Figure 3: Performance with Dimensionality of Embedding on CiteULike

3.5 Optimization and Running Time

We show optimization curves of performance and loss (averaged over all examples) against iterations on CiteULike in Figure 4. The model learns quickly in the first 20 iterations, improves slowly until iteration 30, and stabilizes around iteration 50, though the loss continues to decrease. The average time per epoch is 64.9s; as a reference, it is 34.5s for MLP.

Figure 4: Optimization Curves of Performance and Loss with Iterations on CiteULike

4 Related Works

Recently, neural networks have been proposed to parameterize the interaction function between users and items. The MF Autoencoder [van Baalen2016] and the NNMF model [Dziugaite and Roy2015] parameterize the interaction function with a multilayer feedforward neural network (FFNN). The MLP [He et al.2017] and Deep MF [Xue et al.2017] also use FFNNs. The basic MLP architecture has been extended to regularize the factors of users and items with social and geographical information [Yang et al.2017]. Other neural approaches learn from explicit feedback for the rating prediction task [Sedhain et al.2015, Zheng et al.2017, Catherine and Cohen2017, Wu et al.2017]. We learn from implicit feedback for top-N recommendation [Cremonesi et al.2010, Wu et al.2016].

Additional sources of information have been integrated into CF to alleviate the data sparsity issue. Neural networks have been used to extract features from auxiliary sources such as audio [Van den Oord et al.2013], text [Wang et al.2015, Kim et al.2016, Huang and Lin2016, Bansal et al.2016], images [He and McAuley2016, Chen et al.2017], and knowledge bases [Zhang et al.2016]. For the interaction data, these works rely on matrix factorization to model the user-item interactions. In contrast, we learn the interaction function and exploit auxiliary sources jointly under a generic neural architecture through the centralized and local memory modules.

5 Conclusion

We proposed a novel neural architecture, LCMR, to jointly model user-item interactions and integrate unstructured text for collaborative filtering with implicit feedback. By modeling text content as local memories, LCMR can attentively learn what to exploit from the unstructured text with the guidance of user-item interactions. LCMR is a unified framework as it embraces both pure CF approaches and CF with auxiliary information. It shows better performance than traditional, neural, and extended approaches on two datasets under the Hit Ratio and NDCG metrics. Furthermore, we conducted ablation analyses to understand the contributions of the two memory components and showed the optimization curves of performance and loss.

The datasets contain other information, such as tags of items and profiles of users, which can be exploited to alleviate cold-start issues in future work. Besides memory networks, other kinds of neural networks, such as recurrent and convolutional networks, may also be used.

References

  • [Bansal et al.2016] Trapit Bansal, David Belanger, and Andrew McCallum. Ask the GRU: Multi-task learning for deep text recommendations. In RecSys, pages 107–114, 2016.
  • [Catherine and Cohen2017] Rose Catherine and William Cohen. TransNets: Learning to transform for recommendation. arXiv preprint arXiv:1704.02298, 2017.
  • [Chen et al.2017] Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention. In SIGIR, 2017.
  • [Cremonesi et al.2010] Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. Performance of recommender algorithms on top-n recommendation tasks. In RecSys, pages 39–46, 2010.
  • [Dziugaite and Roy2015] Gintare Karolina Dziugaite and Daniel M Roy. Neural network matrix factorization. arXiv preprint arXiv:1511.06443, 2015.
  • [Graves et al.2014] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
  • [He and McAuley2016] Ruining He and Julian McAuley. VBPR: Visual Bayesian personalized ranking from implicit feedback. In AAAI, pages 144–150, 2016.
  • [He et al.2017] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In WWW, pages 173–182, 2017.
  • [Hu et al.2008] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In ICDM, pages 263–272, 2008.
  • [Huang and Lin2016] Yu-Yang Huang and Shou-De Lin. Transferring user interests across websites with unstructured text for cold-start recommendation. In EMNLP, pages 805–814, 2016.
  • [Kim et al.2016] Donghyun Kim, Chanyoung Park, Jinoh Oh, Sungyoung Lee, and Hwanjo Yu. Convolutional matrix factorization for document context-aware recommendation. In RecSys, pages 233–240, 2016.
  • [Kingma and Ba2015] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
  • [Koren et al.2009] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. IEEE Computer, 42(8), 2009.
  • [McAuley and Leskovec2013] Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In RecSys, pages 165–172, 2013.
  • [Miller et al.2016] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In EMNLP, pages 1400–1409, 2016.
  • [Mnih and Salakhutdinov2008] Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In NIPS, pages 1257–1264, 2008.
  • [Pan et al.2008] Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. One-class collaborative filtering. In ICDM, 2008.
  • [Rendle et al.2009] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In UAI, pages 452–461, 2009.
  • [Sarwar et al.2001] Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. Item-based collaborative filtering recommendation algorithms. In WWW, 2001.
  • [Sedhain et al.2015] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. AutoRec: Autoencoders meet collaborative filtering. In WWW, pages 111–112, 2015.
  • [Sukhbaatar et al.2015] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In NIPS, pages 2440–2448, 2015.
  • [Tay et al.2018] Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. Latent relational metric learning via memory-based attention for collaborative ranking. In WWW, 2018.
  • [van Baalen2016] M. van Baalen. Chapter 3: Autoencoding variational matrix factorization. In Deep Matrix Factorization for Recommendation (Master Thesis), 2016.
  • [Van den Oord et al.2013] Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In NIPS, pages 2643–2651, 2013.
  • [Vaswani et al.2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, pages 6000–6010, 2017.
  • [Wang and Blei2011] Chong Wang and David M Blei. Collaborative topic modeling for recommending scientific articles. In SIGKDD, pages 448–456, 2011.
  • [Wang et al.2015] Hao Wang, Naiyan Wang, and Dit-Yan Yeung. Collaborative deep learning for recommender systems. In SIGKDD, pages 1235–1244, 2015.
  • [Weston et al.2016] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. ICLR, 2016.
  • [Wu et al.2016] Yao Wu, Christopher DuBois, Alice X Zheng, and Martin Ester. Collaborative denoising auto-encoders for top-n recommender systems. In WSDM, pages 153–162, 2016.
  • [Wu et al.2017] Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J Smola, and How Jing. Recurrent recommender networks. In WSDM, pages 495–503, 2017.
  • [Xue et al.2017] Hong-Jian Xue, Xin-Yu Dai, Jianbing Zhang, Shujian Huang, and Jiajun Chen. Deep matrix factorization models for recommender systems. In IJCAI, pages 3203–3209, 2017.
  • [Yang et al.2017] Carl Yang, Lanxiao Bai, Chao Zhang, Quan Yuan, and Jiawei Han. Bridging collaborative filtering and semi-supervised learning: A neural approach for POI recommendation. In SIGKDD, pages 1245–1254, 2017.
  • [Zhang et al.2016] Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. Collaborative knowledge base embedding for recommender systems. In SIGKDD, pages 353–362, 2016.
  • [Zheng et al.2017] Lei Zheng, Vahid Noroozi, and Philip S Yu. Joint deep modeling of users and items using reviews for recommendation. In WSDM, pages 425–434, 2017.