To facilitate the information seeking process for users in the age of data deluge, various information retrieval (IR) technologies have been widely deployed [Garcia-Molina et al.2011]. As a typical paradigm of information push, recommender systems have become a core service and a major monetization method for many customer-oriented systems [Wang et al.2018b]. Collaborative filtering (CF) is a key technique to build a personalized recommender system, which infers a user’s preference not only from her behavior data but also the behavior data of other users. Among the various CF methods, model-based CF, more specifically, matrix factorization based methods [Rendle et al.2009, He et al.2016b, Zhang et al.2016] are known to provide superior performance over others and have become the mainstream of recommendation research.
The key to designing a CF model lies in 1) how to represent a user and an item, and 2) how to model their interaction based on the representations. As a dominant model in CF, matrix factorization (MF) represents a user (or an item) as a vector of latent factors (also termed embedding), and models an interaction as the inner product between the user embedding and item embedding. Many extensions have been developed for MF from both the modeling perspective [Wang et al.2015, Yu et al.2018, Wang et al.2018a] and the learning perspective [Rendle et al.2009, Bayer et al.2017, He et al.2018]. For example, DeepMF [Xue et al.2017] extends MF by learning embeddings with deep neural networks, BPR [Rendle et al.2009] learns MF from implicit feedback with a pair-wise ranking objective, and the recently proposed adversarial personalized ranking (APR) [He et al.2018] employs an adversarial training procedure to learn MF.
Despite its effectiveness and many subsequent developments, we point out that MF has an inherent limitation in its model design. Specifically, it uses a fixed and data-independent function, i.e., the inner product, as the interaction function [He et al.2017]. As a result, it essentially assumes that the embedding dimensions (i.e., the dimensions of the embedding space) are independent of each other and contribute equally to the prediction of all data points. This assumption is impractical, since the embedding dimensions can be interpreted as certain properties of items [Zhang et al.2014], which are not necessarily independent. Moreover, this assumption has been shown to be sub-optimal for learning from real-world feedback data with rich yet complicated patterns: several recent efforts on neural recommender models [Tay et al.2018, Bai et al.2017] have demonstrated that better recommendation performance can be obtained by learning the interaction function from data.
Among the neural network models for CF, neural matrix factorization (NeuMF) [He et al.2017] provides state-of-the-art performance by complementing the inner product with an adaptable multi-layer perceptron (MLP) in learning the interaction function. Since then, using multiple nonlinear layers above the embedding layer has become a prevalent choice for learning the interaction function. Specifically, two common designs place an MLP above either the concatenation [He et al.2017, Bai et al.2017] or the element-wise product [Zhang et al.2017, Wang et al.2017] of user embedding and item embedding. We argue that a potential limitation of these two designs is that few correlations between embedding dimensions are modeled. Although the subsequent MLP is theoretically capable of learning any continuous function according to the universal approximation theorem [Hornik1991], there is no practical guarantee that the dimension correlations can be effectively captured with current optimization techniques.
In this work, we propose a new architecture for neural collaborative filtering (NCF) by integrating the correlations between embedding dimensions into modeling. Specifically, we propose to use an outer product operation above the embedding layer, explicitly capturing the pairwise correlations between embedding dimensions. We term the correlation matrix obtained by the outer product the interaction map, which is a K × K matrix, where K denotes the embedding size. The interaction map is well suited to the CF task, since it not only subsumes the interaction signal used in MF (its diagonal elements correspond to the intermediate results of the inner product), but also includes all other pairwise correlations. Such rich semantics in the interaction map facilitate the subsequent non-linear layers in learning possible high-order dimension correlations. Moreover, the matrix form of the interaction map makes it feasible to learn the interaction function with a convolutional neural network (CNN), which is known to generalize better and to be easier to make deep than the fully connected MLP.
The contributions of this paper are as follows.
We propose a new neural network framework ONCF, which supercharges NCF modeling with an outer product operation to model pairwise correlations between embedding dimensions.
We propose a novel model named ConvNCF under the ONCF framework, which leverages CNN to learn high-order correlations among embedding dimensions from locally to globally in a hierarchical way.
We conduct extensive experiments on two public implicit feedback datasets, which demonstrate the effectiveness and rationality of ONCF methods.
This is the first work that uses CNN to learn the interaction function between user embedding and item embedding. It opens up a new avenue of exploring the advanced and fast-evolving CNN methods for recommendation research.
2 Proposed Methods
We first present the Outer product based Neural Collaborative Filtering (ONCF) framework. We then elaborate our proposed Convolutional NCF (ConvNCF) model, an instantiation of ONCF that uses CNN to learn the interaction function based on the interaction map. Before delving into the technical details, we first introduce some basic notations.
Throughout the paper, we use a bold uppercase letter (e.g., P) to denote a matrix, a bold lowercase letter (e.g., p) to denote a vector, and a calligraphic uppercase letter (e.g., T) to denote a tensor. Moreover, the scalar p_{jk} denotes the (j, k)-th element of matrix P, and the vector p_j denotes the j-th row vector of P. Let T be a 3D tensor; then the scalar t_{jkl} denotes the (j, k, l)-th element of T, and the vector t_{jk} denotes the slice of T at element (j, k).
2.1 ONCF Framework
Figure 1 illustrates the ONCF framework. The target of modeling is to estimate the matching score ŷ_ui between user u and item i; we can then generate a personalized recommendation list of items for a user based on the scores.
Input and Embedding Layer.
Given a user u and an item i with their features (e.g., ID, user gender, item category, etc.), we first employ one-hot encoding on the features. Let v_u and v_i be the feature vectors of user u and item i, respectively; we can then obtain their embeddings p_u and q_i via p_u = P^T v_u and q_i = Q^T v_i, where P (of size M × K) and Q (of size N × K) are the embedding matrices for user features and item features, respectively; K, M, and N denote the embedding size, the number of user features, and the number of item features, respectively. Note that in the pure CF case, only the ID feature is used to describe a user and an item [He et al.2017], and thus M and N are the number of users and the number of items, respectively.
Above the embedding layer, we propose to apply an outer product operation on p_u and q_i to obtain the interaction map: E = p_u ⊗ q_i, where E is a K × K matrix in which each element is evaluated as e_{k1,k2} = p_{u,k1} · q_{i,k2}.
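As a concrete illustration, the interaction map and its relation to MF's inner product can be sketched in a few lines of NumPy (toy embeddings; the paper uses K = 64):

```python
import numpy as np

K = 4  # embedding size (toy value for illustration)
p_u = np.array([0.5, -1.0, 2.0, 0.3])   # user embedding
q_i = np.array([1.0, 0.5, -0.2, 4.0])   # item embedding

# Interaction map: K x K matrix with E[k1, k2] = p_u[k1] * q_i[k2]
E = np.outer(p_u, q_i)
assert E.shape == (K, K)

# The diagonal holds the element-wise products that MF's inner product sums:
assert np.isclose(np.trace(E), np.dot(p_u, q_i))
```

The trace check makes the "subsumes MF" claim concrete: summing the diagonal of E recovers exactly the MF prediction.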
Compared with the commonly used inner product and concatenation, we argue that using the outer product is advantageous in three respects: 1) it subsumes matrix factorization (MF), the dominant method for CF, which considers only the diagonal elements of our interaction map; 2) it encodes more signal than MF by accounting for the correlations between different embedding dimensions; and 3) it is more meaningful than the simple concatenation operation, which only retains the original information in the embeddings without modeling any correlation. Moreover, it has recently been shown that explicitly modeling the interaction of feature embeddings is particularly useful for a deep learning model to generalize well on sparse data, whereas using concatenation is sub-optimal [He and Chua2017, Beutel et al.2018].
Lastly, another potential benefit of the interaction map lies in its 2D matrix format, which is the same as that of an image. In this respect, the pairwise correlations encoded in the interaction map can be seen as the local features of an "image". Deep learning methods have achieved their greatest success in the computer vision domain, and many powerful deep models, especially CNN-based ones (e.g., ResNet [He et al.2016a] and DenseNet [Huang et al.2017]), have been developed for learning from 2D image data. Building a 2D interaction map allows these powerful CNN models to also be applied to learn the interaction function for the recommendation task.
Hidden Layers. Above the interaction map is a stack of hidden layers that aims to extract useful signal from the interaction map. This component is open to design and can be abstracted as g = f_Θ(E), where f_Θ denotes the model of the hidden layers with parameters Θ, and g is the output vector used for the final prediction. Technically speaking, f_Θ can be designed as any function that takes a matrix as input and outputs a vector. In Section 2.2, we elaborate how a CNN can be employed to extract signal from the interaction map.
Prediction Layer. The prediction layer takes in the vector g and outputs the prediction score as ŷ_ui = w^T g, where the vector w re-weights the interaction signal in g. To summarize, the model parameters of our ONCF framework are Δ = {P, Q, Θ, w}.
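Putting the layers together, the forward pass can be sketched as follows; this is a minimal toy illustration, not the paper's implementation: the hidden layers are stubbed with a flattening function standing in for the CNN of Section 2.2, and all matrices are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # embedding size (toy; the paper uses 64)

# Random stand-ins for the learned embedding matrices P and Q
P = rng.normal(size=(100, K))   # user embeddings, one row per user
Q = rng.normal(size=(200, K))   # item embeddings, one row per item

def hidden_layers(E):
    """Placeholder for f_Theta: any function mapping a K x K map to a vector.
    Here we simply flatten, standing in for the CNN of Section 2.2."""
    return E.reshape(-1)

w = rng.normal(size=(K * K,))   # prediction-layer weight vector

def predict(u, i):
    E = np.outer(P[u], Q[i])    # interaction map
    g = hidden_layers(E)        # signal extracted by the hidden layers
    return float(w @ g)         # y_hat_ui = w^T g

score = predict(3, 42)
```

Any choice of `hidden_layers` that maps a matrix to a vector slots into this pipeline unchanged, which is exactly the modularity the framework claims.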
2.1.1 Learning ONCF for Personalized Ranking
Recommendation is a personalized ranking task; to this end, we consider learning the parameters of ONCF with a ranking-aware objective. In the NCF paper [He et al.2017], the authors advocate a pointwise classification loss for learning models from implicit feedback. However, a more reasonable assumption is that observed interactions should be ranked higher than unobserved ones. To implement this idea, [Rendle et al.2009] proposed the Bayesian Personalized Ranking (BPR) objective function: L(Δ) = Σ_{(u,i,j)∈D} −ln σ(ŷ_ui − ŷ_uj) + λ_Δ ||Δ||², where σ is the sigmoid function, λ_Δ are parameter-specific regularization hyperparameters to prevent overfitting, and D denotes the set of training instances: D := {(u, i, j) | i ∈ Y_u⁺ ∧ j ∉ Y_u⁺}, where Y_u⁺ denotes the set of items that have been consumed by user u. By minimizing the BPR loss, we tailor the ONCF framework to correctly predict the relative order between interactions, rather than their absolute scores as optimized by a pointwise loss [He et al.2017, He et al.2016b]. This is more beneficial for the personalized ranking task.
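The per-triplet BPR term can be sketched as follows (regularization omitted; a toy illustration of the loss shape, not the paper's implementation):

```python
import numpy as np

def bpr_loss(y_ui, y_uj):
    """BPR loss for one (u, i, j) triplet: -ln sigma(y_ui - y_uj)."""
    return -np.log(1.0 / (1.0 + np.exp(-(y_ui - y_uj))))

# Scoring the observed item above the unobserved one gives a small loss...
assert bpr_loss(2.0, -1.0) < bpr_loss(-1.0, 2.0)
# ...and equal scores give -ln(0.5) = ln 2.
assert np.isclose(bpr_loss(0.7, 0.7), np.log(2.0))
```

The loss depends only on the score difference, which is why it optimizes relative order rather than absolute scores.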
It is worth pointing out that in our ONCF framework, the weight vector w controls the magnitude of ŷ_ui for all predictions. As a result, scaling up w increases the margin of all training instances and thus decreases the training loss. To avoid this trivial solution when optimizing ONCF, it is crucial to enforce L2 regularization or a max-norm constraint on w. Moreover, we are aware that other pairwise objectives have also been widely used for personalized ranking, such as the L2 square loss [Wang et al.2017]. We leave this exploration for ONCF as future work, as our initial experiments show that optimizing ONCF with the BPR objective leads to good top-k recommendation performance.
2.2 Convolutional NCF
Motivation: Drawback of MLP.
In ONCF, the choice of hidden layers has a large impact on performance. A straightforward solution is to use the MLP network as proposed in NCF [He et al.2017]; note that to apply an MLP to the 2D interaction map E, we can flatten E into a vector of size K². Although MLP is theoretically guaranteed to have strong representation ability [Hornik1991], its main drawback of having a large number of parameters cannot be ignored. As an example, assume we set the embedding size of an ONCF model to 64 (i.e., K = 64) and follow the common practice of the half-size tower structure. In this case, even a 1-layer MLP has about 8.4 million (i.e., 4096 × 2048) parameters, not to mention the use of more layers. We argue that such a large number of parameters makes MLP prohibitive for ONCF for three reasons: 1) it requires powerful machines with large memory to store the model; 2) it needs a large amount of training data to be learned well; and 3) the regularization of each layer needs to be carefully tuned to ensure good generalization (in fact, further empirical evidence is that most papers use MLPs with at most 3 hidden layers, and the performance improves only slightly, or even degrades, with more layers [He et al.2017, Covington et al.2016, He and Chua2017]).
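The parameter count quoted above follows directly from the half-size tower: flattening the 64 × 64 interaction map gives a 4096-dimensional input, and the first hidden layer halves it to 2048 units:

```python
K = 64                          # embedding size
input_size = K * K              # flattened interaction map: 4096
hidden_size = input_size // 2   # half-size tower: 2048

# Weight matrix of a single fully connected layer (biases ignored):
n_params = input_size * hidden_size
assert n_params == 8_388_608    # about 8.4 million parameters for one layer
```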
The ConvNCF Model.
To address the drawback of MLP, we propose to employ a CNN above the interaction map to extract signal. As CNN stacks layers in a locally connected manner, it uses far fewer parameters than MLP. This allows us to build deeper models than MLP easily, and benefits the learning of high-order correlations among embedding dimensions. Figure 2 shows an illustrative example of our ConvNCF model. Note that due to the many concepts involved in CNN (e.g., stride, padding, etc.), we do not attempt a fully general formulation of ConvNCF here. Instead, without loss of generality, we explain ConvNCF under this specific setting, since it has empirically shown good performance in our experiments; technically speaking, any CNN structure and parameter setting can be employed in ConvNCF. First, in Figure 2, the size of the input interaction map is 64 × 64, and the model has 6 hidden layers, where each hidden layer has 32 feature maps. A feature map c in hidden layer l is represented as a 2D matrix E^l_c; since we set the stride to 2, the size of E^l_c is half that of the previous layer l − 1, e.g., 32 × 32 in Layer 1 and 1 × 1 in Layer 6. All feature maps of Layer l can be represented as a 3D tensor E^l.
Given the input interaction map E, we can obtain the feature maps of Layer 1 as E^1 = ReLU(E ∗ T^1 + b^1), where b^1 denotes the bias term for Layer 1, T^1 is a 3D tensor denoting the convolution filters for generating the feature maps of Layer 1, and ∗ denotes the convolution operation. We use the rectifier unit (ReLU) as the activation function, a common choice in CNN for building deep models. Following a similar convolution operation, we can obtain the feature maps of the following layers; the only difference is that from Layer 1 on, the input to the next layer l becomes a 3D tensor E^{l−1}: E^l = ReLU(E^{l−1} ∗ T^l + b^l), where b^l denotes the bias term for Layer l and T^l denotes the 4D convolution filter for Layer l. The output of the last layer is a tensor of dimension 1 × 1 × 32, which can be seen as a vector and is projected to the final prediction score with the weight vector w.
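Under the stride-2 setting above, the spatial sizes of the six hidden layers can be checked with a few lines:

```python
# Stride-2 convolutions halve the spatial size at every layer:
size, sizes = 64, []
for layer in range(1, 7):
    size //= 2
    sizes.append(size)

assert sizes == [32, 16, 8, 4, 2, 1]   # Layer 6 output is 1 x 1 (x 32 feature maps)
```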
Note that a convolution filter can be seen as the "locally connected weight matrix" of a layer, since it is shared in generating all entries of the feature maps of that layer. This significantly reduces the number of parameters of a convolutional layer compared to a fully connected layer. Specifically, in contrast to the 1-layer MLP with over 8 million parameters, the above 6-layer CNN has only about 20 thousand parameters, several orders of magnitude fewer. This makes our ConvNCF more stable and generalizable than MLP.
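The rough count can be reproduced under the assumption of 2 × 2 filters (consistent with the stride-2 halving above) and 32 feature maps per layer:

```python
n_maps = 32    # feature maps per hidden layer
f = 2          # assumed filter size: 2 x 2

params = f * f * 1 * n_maps + n_maps                  # Layer 1: 1 input channel, plus biases
params += 5 * (f * f * n_maps * n_maps + n_maps)      # Layers 2-6: 32 input channels each
assert params == 20_800    # about 20 thousand, vs. ~8.4 million for the 1-layer MLP
```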
Rationality of ConvNCF.
Here we give some intuitions on how ConvNCF can capture high-order correlations among embedding dimensions. In the interaction map E, each entry e_{k1,k2} encodes the second-order correlation between dimensions k1 and k2. Next, each hidden layer captures the correlations in a local area (the size of the local area is determined by the filter size, which is subject to change under different settings) of its previous layer. As an example, an entry in Layer 1 depends on four elements of E, which means that it captures 4-order correlations among four embedding dimensions. Following the same reasoning, each entry in hidden layer l can be seen as capturing the correlations in a local area of size 2^l in the interaction map E. As such, an entry in the last hidden layer encodes the correlations among all dimensions. By stacking multiple convolutional layers in this way, ConvNCF learns high-order correlations among embedding dimensions from locally to globally, based on the 2D interaction map.
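A toy NumPy check of this receptive-field argument, assuming a stride-2, 2 × 2 all-ones filter (a hypothetical stand-in for the learned filters):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=(64, 64))   # toy interaction map

def conv_entry(E, x, y):
    """Entry (x, y) of a stride-2, 2 x 2 sum-filter layer."""
    return E[2*x:2*x+2, 2*y:2*y+2].sum()

# Entry (x, y) of Layer 1 depends only on the 2 x 2 block at (2x, 2y),
# i.e., on 4-order correlations among four embedding dimensions:
assert np.isclose(conv_entry(E, 3, 5),
                  E[6, 10] + E[6, 11] + E[7, 10] + E[7, 11])

# Stacking l such layers makes each entry cover a 2^l x 2^l local area:
assert 2 ** 6 == 64   # an entry in Layer 6 sees the entire 64 x 64 map
```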
2.2.1 Training Details
We optimize ConvNCF with the BPR objective using mini-batch Adagrad [Duchi et al.2011]. Specifically, in each epoch we first shuffle all observed interactions and then draw mini-batches sequentially; given a mini-batch of observed interactions, we generate negative examples on the fly to form the training triplets. The negative examples are randomly sampled from a uniform distribution; while recent work shows that a better negative sampler can further improve performance [Ding et al.2018], we leave this exploration as future work. We pre-train the embedding layer with MF. After pre-training, considering that the other parameters of ConvNCF are randomly initialized and the overall model is in an underfitting state, we first train ConvNCF without any regularization. For the following epochs, we enforce L2 regularization on ConvNCF, on the embedding layer, convolution layers, and output layer, respectively. Note that the regularization coefficients (especially for the output layer) have a very large impact on model performance.
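The on-the-fly uniform negative sampling can be sketched as follows (toy data; the hypothetical `consumed` dict plays the role of Y_u⁺):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 1000
# Toy Y_u^+: the sets of items each user has interacted with (hypothetical data)
consumed = {0: {3, 8}, 1: {5}}

def sample_triplet(u):
    """Form a training triplet (u, i, j): i is an observed item of u,
    j is drawn uniformly from the items u has not interacted with."""
    i = int(rng.choice(sorted(consumed[u])))
    j = int(rng.integers(n_items))
    while j in consumed[u]:          # reject observed items
        j = int(rng.integers(n_items))
    return u, i, j

u, i, j = sample_triplet(0)
assert i in consumed[0] and j not in consumed[0]
```

Rejection sampling is cheap here because each user's consumed set is tiny relative to the item catalog.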
3 Experiments
To comprehensively evaluate our proposed method, we conduct experiments to answer the following research questions:
Can our proposed ConvNCF outperform the state-of-the-art recommendation methods?
Are the proposed outer product operation and the CNN layer helpful for learning from user-item interaction data and improving the recommendation performance?
How does the key hyperparameter of the CNN (i.e., the number of feature maps) affect ConvNCF's performance?
3.1 Experimental Settings
Yelp. This is the Yelp Challenge data of user ratings on businesses. We filter the dataset following [He et al.2016b]. Moreover, we merge repetitive ratings at different timestamps into the earliest one, so as to study the performance of recommending novel items to a user. The final dataset contains 25,815 users, 25,677 items, and 730,791 ratings.
Gowalla. This is the check-in dataset from Gowalla, a location-based social network, constructed by [Liang et al.2016] for item recommendation. To ensure the quality of the dataset, we perform a modest filtering on the data, retaining users with at least two interactions and items with at least ten interactions. The final dataset contains 54,156 users, 52,400 items, and 1,249,703 interactions.
For each user in the dataset, we hold out her latest interaction as the positive test sample and pair it with items that the user did not rate before as negative samples. Each method then generates predictions for these user-item interactions. To evaluate the results, we adopt two metrics, Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG), same as [He et al.2017]. HR@k is a recall-based metric, measuring whether the test item appears in the top-k positions (1 for yes and 0 otherwise). NDCG@k assigns higher scores to hits at higher positions of the ranking list. To eliminate the effect of random oscillation, we report the average scores of the last ten epochs after convergence.
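With a single held-out positive item per user, both metrics reduce to simple functions of that item's position in the ranked list; a sketch:

```python
import numpy as np

def hr_at_k(rank_list, test_item, k):
    """1 if the held-out item appears in the top-k list, else 0."""
    return int(test_item in rank_list[:k])

def ndcg_at_k(rank_list, test_item, k):
    """With a single relevant item, NDCG@k = 1 / log2(position + 2) on a hit."""
    if test_item in rank_list[:k]:
        pos = rank_list[:k].index(test_item)   # 0-based position
        return 1.0 / np.log2(pos + 2)
    return 0.0

ranking = [7, 3, 9, 1, 5]
assert hr_at_k(ranking, 9, k=5) == 1
assert np.isclose(ndcg_at_k(ranking, 7, k=5), 1.0)              # hit at the top: full gain
assert ndcg_at_k(ranking, 9, k=5) < ndcg_at_k(ranking, 3, k=5)  # lower rank, lower gain
```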
To justify the effectiveness of our proposed ConvNCF, we study the performance of the following methods:
1. ItemPop ranks items by their popularity, calculated as the number of interactions. It is a widely used benchmark for recommender algorithms.
2. MF-BPR [Rendle et al.2009] optimizes the standard MF model with the pairwise BPR ranking loss.
3. MLP [He et al.2017] is an NCF method that feeds the concatenation of user embedding and item embedding into a standard MLP to learn the interaction function.
4. JRL [Zhang et al.2017] is an NCF method that places an MLP above the element-wise product of user embedding and item embedding. Its difference from GMF [He et al.2017] is that JRL uses multiple hidden layers above the element-wise product, while GMF directly outputs the prediction score.
5. NeuMF [He et al.2017] is the state-of-the-art method for item recommendation, which combines the hidden layers of GMF and MLP to learn the user-item interaction function.
We implement our methods with TensorFlow; the code is available at: https://github.com/duxy-me/ConvNCF. We randomly hold out one training interaction per user as the validation set to tune hyperparameters. We evaluate ConvNCF with the specific setting illustrated in Figure 2. The regularization coefficients are tuned separately for the embedding layer, convolution layers, and output layer. For a fair comparison, we set the embedding size to 64 for all models and optimize them with the same BPR loss using mini-batch Adagrad (with a learning rate of 0.05). For MLP, JRL, and NeuMF, which have multiple fully connected layers, we tuned the number of layers from 1 to 3 following the tower structure of [He et al.2017]. For all models besides MF-BPR, we pre-train the embedding layer using MF-BPR, and the regularization of each method has been fairly tuned.
3.2 Performance Comparison (RQ1)
Table 1 shows the top-k recommendation performance on both datasets, where k is set to 5, 10, and 20. We have the following key observations:
ConvNCF achieves the best performance in general and obtains substantial improvements over the state-of-the-art methods. This justifies the utility of the ONCF framework, which uses the outer product to obtain the 2D interaction map, and the efficacy of CNN in learning high-order correlations among embedding dimensions.
JRL consistently outperforms MLP by a large margin on both datasets. This indicates that explicitly modeling the correlations of embedding dimensions is rather helpful for the learning of the subsequent hidden layers, even for the simple same-dimension correlations that assume dimensions are independent of each other. Meanwhile, it reveals the practical difficulty of training MLP well, despite its strong representation ability in principle [Hornik1991].
3.3 Efficacy of Outer Product and CNN (RQ2)
Due to space limitations, for the two studies below we only show the results of NDCG; the results of HR show the same trend and are thus omitted.
Efficacy of Outer Product.
To show the effect of the outer product, we replace it with the two common choices in existing solutions: concatenation (i.e., MLP) and element-wise product (i.e., GMF and JRL). We compare their performance with ConvNCF in each epoch in Figure 3. We observe that ConvNCF outperforms the other methods by a large margin on both datasets, verifying the positive effect of using the outer product above the embedding layer. Specifically, the improvements over GMF and JRL demonstrate that explicitly modeling the correlations between different embedding dimensions is useful. Lastly, the rather weak and unstable performance of MLP implies the difficulty of training MLP well, especially when the low-level input carries few semantics about the feature interactions. This is consistent with the recent finding of [He and Chua2017] on using MLP for sparse data prediction.
Efficacy of CNN.
To make a fair comparison between CNN and MLP under our ONCF framework, we use MLP to learn from the same interaction map generated by the outer product. Specifically, we first flatten the interaction map into a K²-dimensional (i.e., 4096-dimensional) vector and then place a 3-layer MLP above it. We term this method ONCF-mlp. Figure 4 compares its performance with ConvNCF in each epoch. We can see that ONCF-mlp performs much worse than ConvNCF, despite using about three orders of magnitude more parameters. Another drawback of using so many parameters in ONCF-mlp is that it makes the model rather unstable, which is evidenced by its large variance across epochs. In contrast, our ConvNCF achieves much better and more stable performance by using the locally connected CNN. This empirical evidence supports our motivation for designing ConvNCF and our discussion of MLP's drawbacks in Section 2.2.
3.4 Hyperparameter Study (RQ3)
Impact of Feature Map Number.
The number of feature maps in each CNN layer affects the representation ability of our ConvNCF. Figure 5 shows the performance of ConvNCF with respect to different numbers of feature maps. We can see that all the curves increase steadily and finally achieve similar performance, though there are some slight differences in the convergence curves. This reflects the strong expressiveness and generalization ability of using CNN under the ONCF framework, since dramatically increasing the number of parameters of a neural network does not lead to overfitting. Consequently, our model is very suitable for practical use.
4 Conclusion
We presented a new neural network framework for collaborative filtering, named ONCF. The special design of ONCF is the use of an outer product operation above the embedding layer, which results in a semantics-rich interaction map that encodes the pairwise correlations between embedding dimensions. This facilitates the subsequent deep layers in learning high-order correlations among embedding dimensions. To demonstrate this utility, we proposed a new model under the ONCF framework, named ConvNCF, which uses multiple convolution layers above the interaction map. Extensive experiments on two real-world datasets show that ConvNCF outperforms state-of-the-art methods in top-k recommendation. In the future, we will explore more advanced CNN models such as ResNet [He et al.2016a] and DenseNet [Huang et al.2017] to further exploit the potential of our ONCF framework. Moreover, we will extend ONCF to content-based recommendation scenarios [Chen et al.2017, Yu et al.2018], where the item features have richer semantics than just an ID. In particular, we are interested in building recommender systems for multimedia items like images and videos, and textual items like news.
Acknowledgments
This work is supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@SG Funding Initiative, by the 973 Program of China under Project No.: 2014CB347600, by the Natural Science Foundation of China under Grant No.: 61732007, 61702300, 61501063, 61502094, and 61501064, by the Scientific Research Foundation of Science and Technology Department of Sichuan Province under Grant No. 2016JY0240, and by the Natural Science Foundation of Heilongjiang Province of China (No. F2016002). Jinhui Tang is the corresponding author.
- [Bai et al.2017] Ting Bai, Ji-Rong Wen, Jun Zhang, and Wayne Xin Zhao. A neural collaborative filtering model with interaction-based neighborhood. In CIKM, pages 1979–1982, 2017.
- [Bayer et al.2017] Immanuel Bayer, Xiangnan He, Bhargav Kanagal, and Steffen Rendle. A generic coordinate descent framework for learning from implicit feedback. In WWW, pages 1341–1350, 2017.
- [Beutel et al.2018] Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H. Chi. Latent cross: Making use of context in recurrent recommender systems. In WSDM, pages 46–54, 2018.
- [Chen et al.2017] Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. Attentive collaborative filtering: Multimedia recommendation with item- and component-level attention. In SIGIR, pages 335–344, 2017.
- [Covington et al.2016] Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In RecSys, pages 191–198, 2016.
- [Ding et al.2018] Jingtao Ding, Fuli Feng, Xiangnan He, Guanghui Yu, Yong Li, and Depeng Jin. An improved sampler for bayesian personalized ranking by leveraging view data. In WWW, pages 13–14, 2018.
- [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
- [Garcia-Molina et al.2011] Hector Garcia-Molina, Georgia Koutrika, and Aditya Parameswaran. Information seeking: convergence of search, recommendations, and advertising. Communications of the ACM, 54(11):121–130, 2011.
- [He and Chua2017] Xiangnan He and Tat-Seng Chua. Neural factorization machines for sparse predictive analytics. In SIGIR, pages 355–364, 2017.
- [He et al.2016a] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
- [He et al.2016b] Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua. Fast matrix factorization for online recommendation with implicit feedback. In SIGIR, pages 549–558, 2016.
- [He et al.2017] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In WWW, pages 173–182, 2017.
- [He et al.2018] Xiangnan He, Zhankui He, Xiaoyu Du, and Tat-Seng Chua. Adversarial personalized ranking for item recommendation. In SIGIR, 2018.
- [Hornik1991] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
- [Huang et al.2017] Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In CVPR, pages 4700–4708, 2017.
- [Liang et al.2016] Dawen Liang, Laurent Charlin, James McInerney, and David M Blei. Modeling user exposure in recommendation. In WWW, pages 951–961, 2016.
- [Rendle et al.2009] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. In UAI, pages 452–461, 2009.
- [Tay et al.2018] Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. Latent relational metric learning via memory-based attention for collaborative ranking. In WWW, pages 729–739, 2018.
- [Wang et al.2015] Suhang Wang, Jiliang Tang, Yilin Wang, and Huan Liu. Exploring implicit hierarchical structures for recommender systems. In IJCAI, pages 1813–1819, 2015.
- [Wang et al.2017] Xiang Wang, Xiangnan He, Liqiang Nie, and Tat-Seng Chua. Item silk road: Recommending items from information domains to social users. In SIGIR, pages 185–194, 2017.
- [Wang et al.2018a] Xiang Wang, Xiangnan He, Fuli Feng, Liqiang Nie, and Tat-Seng Chua. Tem: Tree-enhanced embedding model for explainable recommendation. In WWW, pages 1543–1552, 2018.
- [Wang et al.2018b] Zihan Wang, Ziheng Jiang, Zhaochun Ren, Jiliang Tang, and Dawei Yin. A path-constrained framework for discriminating substitutable and complementary products in e-commerce. In WSDM, pages 619–627, 2018.
- [Xue et al.2017] Hong-Jian Xue, Xinyu Dai, Jianbing Zhang, Shujian Huang, and Jiajun Chen. Deep matrix factorization models for recommender systems. In IJCAI, pages 3203–3209, 2017.
- [Yu et al.2018] Wenhui Yu, Huidi Zhang, Xiangnan He, Xu Chen, Li Xiong, and Zheng Qin. Aesthetic-based clothing recommendation. In WWW, pages 649–658, 2018.
- [Zhang et al.2014] Yongfeng Zhang, Guokun Lai, Min Zhang, Yi Zhang, Yiqun Liu, and Shaoping Ma. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis. In SIGIR, pages 83–92, 2014.
- [Zhang et al.2016] Hanwang Zhang, Fumin Shen, Wei Liu, Xiangnan He, Huanbo Luan, and Tat-Seng Chua. Discrete collaborative filtering. In SIGIR, pages 325–334, 2016.
- [Zhang et al.2017] Yongfeng Zhang, Qingyao Ai, Xu Chen, and W Bruce Croft. Joint representation learning for top-n recommendation with heterogeneous information sources. In CIKM, pages 1449–1458, 2017.