I. Introduction
Collaborative filtering (CF) is one of the most widely used approaches in recommender systems. CF systems recommend similar items to users who share similar traits or tastes [1]. Learning to Rank (LTR) methods, which directly learn to rank items accurately based on the users' ratings, rankings, or implicit feedback over a set of items, are widely used to learn rankings for top-N recommendation scenarios [2, 3].
I-A. Related Work
Learning to Rank (LTR) methods can be categorized into pointwise, pairwise, and listwise methods. Pointwise methods learn ranking models from the scores assigned by users to individual items (see, e.g., [4]). Pairwise methods (e.g., BPR [5]) learn binary classifiers that compare ordered pairs of items to decide whether the first item is preferred to the second. The applicability of such methods is limited by the high computational cost of the pairwise comparisons of user-rated items needed to generate the training samples for the binary classifiers [6]. Listwise methods leverage the entire list of items consumed by the users to optimize a listwise ranking loss function or the probability of permutations that map items to ranks
[7, 8]. Typically, such methods optimize a smooth approximation of a loss function that measures the distance between the reference lists of ranked items in the training data and the ranked list of items produced by the ranking model. For example, CLiMF [3], which optimizes a smooth lower bound of mean reciprocal rank (MRR), aims at ranking a small set of most-preferred items at the top of the list; TFMAP [9] optimizes the mean average precision (MAP) of top-ranked items for each user in a given context. Other methods that optimize discounted cumulative gain (DCG) or normalized DCG (NDCG) can be found in [10, 11]. Examples of listwise methods that optimize the probability of permutations that map items to ranks include: ListPMF, which represents each user as a probability distribution over permutations of the rated items based on the Plackett-Luce model [12]; ListRank [13], which aims to identify a ranking permutation that minimizes the cross-entropy between the distribution of the observed ranking of items based on user ratings and the predicted rankings with respect to the top-ranked item; methods that optimize the log posterior of the predicted preference order given the observed preference orders [12]; and methods that leverage deep neural nets (e.g., [4]) to learn the nonlinear interactions between user-item pairs (see [14] for a survey of such methods).

Existing LTR approaches suffer from several limitations. Although, in practical applications, only the top (say N) items in the ranked list are of interest, and the lower-ranked items in the list are less reliable, most existing LTR methods are optimized over the ranks of the entire lists, which could reduce the ranking quality of the top-ranked items. Furthermore, the computational complexity of straightforward approaches to optimizing ranking measures (e.g., DCG [15], MRR [3], AUC [11], or MAP [9]) scales quadratically with the average number of observed items across all users, which renders such methods impractical in large-scale real-world settings.
I-B. Overview and Contributions
To address the limitations of existing LTR systems, we propose TopNRank, a novel latent-factor-based listwise ranking model for the top-N recommendation problem, which directly optimizes a novel weighted, "top-heavy", truncated variant of the DCG ranking measure, namely wDCG@N. Since in many situations users only look at the top-ranked items in the list, the higher positions should have more impact on the ranking score than the lower ones. Our proposed measure, wDCG@N, differs from the conventional DCG in two important respects: (i) it considers only the top N items in the ranked lists, thereby eliminating the impact of low-ranked items; and (ii) it incorporates weights that allow the model to learn from multiple kinds of implicit feedback.
Because wDCG@N is nonsmooth, we introduce the rectified linear unit (ReLU) as a smoothing function, which is better suited to top-N ranking problems than the traditional sigmoid function. ReLU not only eliminates the contribution of the low-ranked items to our loss function, but also allows us to obtain a significantly faster variant of the wDCG@N-based LTR approach (TopNRank.ReLU), yielding a substantial reduction in computational complexity from $O(ds\bar{n}^2)$ to $O(ds\bar{n})$, where $d$ denotes the dimension of the latent factors, $s$ denotes the batch size of users for the stochastic gradient descent algorithm, and $\bar{n}$ is the average number of (observed) items, making TopNRank.ReLU scalable to large-scale real-world settings.

The main contributions of this paper can be summarized as follows:

We have introduced a novel listwise ranking model for top-N recommendation, which directly optimizes a weighted, top-heavy, truncated ranking objective function, wDCG@N. Our model improves the quality of the top-N item lists by mitigating the impact of the lower-ranked items, and is capable of handling multiple types of implicit feedback (if available).

We have introduced the rectified linear unit (ReLU) to smooth our objective function. We have demonstrated that ReLU can (1) eliminate the impact of the lower-ranked items and (2) substantially speed up the computation through careful algorithm design.

We have proposed a fast learning algorithm (TopNRank) for generic smoothing functions, and a substantially more efficient variant (TopNRank.ReLU) for the ReLU smoothing function, which reduces the computational complexity from $O(ds\bar{n}^2)$ to $O(ds\bar{n})$.
We compared the performance of TopNRank and TopNRank.ReLU with several state-of-the-art listwise LTR methods [10, 3, 16, 13, 12] using the MovieLens (20M) data set [17] and the Amazon video games data set [18]. All experiments were performed on an Apache Spark cluster [19], with the raw data stored on the Hadoop Distributed File System (HDFS).
II. Preliminaries
Let $\mathcal{U}$ be the set of $m$ users, $\mathcal{I}$ the set of $n$ items, and $\mathcal{F}$ the set of types of implicit feedback. The interactions of users with items and the associated implicit feedback are represented by a matrix $Y$, where the entry $y_{ij}$ denotes the interaction of user $i$ with item $j$, and $f_{ij} \in \mathcal{F}$ denotes the associated implicit feedback. We further denote by $\mathcal{I}_i \subseteq \mathcal{I}$ the subset of items actually observed by or presented to user $i$. For each $j \in \mathcal{I}_i$, we denote the rating of item $j$ by user $i$ by $y_{ij}$ and the position of $j$ based on $i$'s rank ordering of the items by $r_{ij}$. We reserve the indexing letter $i$ to indicate an arbitrary user in $\mathcal{U}$ and $j$ to represent an arbitrary item in $\mathcal{I}$.
II-A. Latent Factor Model
Latent factor models (LFMs) are state-of-the-art in terms of both the quality of recommendations and scalability [20]. LFMs represent both users and items using low-dimensional vectors of latent factors. Let $\Theta = \{U, V\}$ be the set of latent factors, where $U$ is an $m \times d$ matrix whose $i$th row $U_i$ denotes the latent factors of user $i$, and $V$ is an $n \times d$ matrix whose $j$th row $V_j$ denotes the latent factors of item $j$. The rank $d$ of the latent factor matrices is much smaller than $m$ or $n$. The rating of user $i$ for item $j$ is predicted by the dot product $\hat{x}_{ij} = U_i V_j^\top$.

II-B. Discounted Cumulative Gain
The discounted cumulative gain (DCG) [21] is a widely used measure of the quality of recommendations. It measures the degree to which higher ranked items are placed ahead of lower ranked ones, with the contribution of lower ranked items discounted by a logarithmic factor. Let $y_{ij}$ be a binary indicator of whether item $j$ is relevant to user $i$; then the DCG for user $i$ is computed by:

$$\mathrm{DCG}_i = \sum_{j \in \mathcal{I}_i} \frac{y_{ij}}{\log_2(r_{ij} + 2)} \qquad (1)$$
Notice that the ranked position (starting from zero) of item $j$ can be computed by:

$$r_{ij} = \sum_{k \in \mathcal{I}_i,\; k \neq j} \mathbb{1}[\hat{x}_{ik} > \hat{x}_{ij}] \qquad (2)$$
where $\mathbb{1}[p]$ is an indicator function with $\mathbb{1}[p] = 1$ if $p$ is true and $\mathbb{1}[p] = 0$ otherwise. Given our emphasis on getting the top rated items ranked correctly in the list of recommended items, DCG appears to be a good criterion to optimize. However, as evident from (1), DCG suffers from two important limitations: (i) Although DCG deemphasizes the contribution of the lower ranked items, it does not eliminate the collective effect of a large number of lower ranked items, even though the rankings of such items are less reliable. If the goal is to optimize the ranking of the N top rated items, it makes sense to tailor the objective function to focus explicitly on the ranking of the N top-rated items and ignore the rest. (ii) Because DCG assigns equal weights to all implicit user feedback, it fails to account for differences in their trustworthiness.
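As an illustration, the DCG of (1) with the zero-based ranks of (2) can be computed directly from a user's relevance labels and predicted scores; the function names below are our own hypothetical choices, not the paper's implementation:

```python
import math

def ranks(scores):
    """Zero-based rank of each item: the number of items scored strictly higher."""
    return [sum(1 for s in scores if s > x) for x in scores]

def dcg(relevance, scores):
    """DCG with zero-based ranks, so the discount for rank r is log2(r + 2)."""
    return sum(y / math.log2(r + 2) for y, r in zip(relevance, ranks(scores)))
```

With scores `[3.0, 1.0, 2.0]`, the highest-scored item receives rank 0 and contributes with no discount, while lower-ranked relevant items are discounted logarithmically.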
III. TopNRank
We proceed to introduce wDCG@N, a variant of DCG that overcomes these drawbacks. We then describe two smoothing functions, the sigmoid and the rectified linear unit (ReLU), that convert wDCG@N into a smooth function amenable to standard optimization techniques. Finally, we show how to use the ReLU approximation of wDCG@N to obtain a scalable LTR algorithm.
III-A. TopNRank Training Objective
To address the limitations of DCG, we introduce wDCG@N, which is defined as follows:

$$\mathrm{wDCG@N} = \sum_{i \in \mathcal{U}} \sum_{j \in \mathcal{I}_i} \mathbb{1}[r_{ij} < N]\; \alpha_{f_{ij}}\; \frac{y_{ij}}{\log_2(r_{ij} + 2)} \qquad (3)$$
The first term in (3), $\mathbb{1}[r_{ij} < N]$, is an indicator function that selects only the N top-rated items and ignores the rest. The coefficient $\alpha_{f_{ij}}$ in the second term denotes the weight of the implicit feedback $f_{ij}$, which can model the reliability or importance of the feedback. The choice of $\alpha$ is application- and data-dependent. For example, one can set $\alpha$ to the number of items rated by (or presented to) the user [22], or to the conversion rate (the proportion of buyers among the users who produced the implicit feedback). The resulting ranking objective can be formulated as:
$$\max_{U, V}\; \mathrm{wDCG@N} - \frac{\lambda}{2}\left(\|U\|_F^2 + \|V\|_F^2\right) \qquad (4)$$

where $\|\cdot\|_F$ denotes the Frobenius norm and $\lambda$ is the regularization coefficient that controls overfitting.
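A direct (unsmoothed) evaluation of wDCG@N for one user can be sketched as follows; the per-item weight list stands in for the feedback-type weights $\alpha$ and is an illustrative assumption:

```python
import math

def wdcg_at_n(relevance, scores, weights, N):
    """wDCG@N for one user: only items with zero-based rank below N contribute,
    each scaled by the weight of its (hypothetical) feedback type."""
    total = 0.0
    for y, x, a in zip(relevance, scores, weights):
        r = sum(1 for s in scores if s > x)  # zero-based rank, as in (2)
        if r < N:                            # top-N truncation, as in (3)
            total += a * y / math.log2(r + 2)
    return total
```

Raising N from 2 to 3 adds back the discounted contribution of the third-ranked item, which is exactly the effect the truncation is meant to control.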
III-B. Smooth Approximations of the TopNRank Training Objective
A nonsmooth training objective such as the one in (4) is challenging to optimize. Hence, we replace it with a smooth approximation. Specifically, we approximate the indicator function $\mathbb{1}[\hat{x}_{ik} > \hat{x}_{ij}]$ in (2) by a smooth function $g(\hat{x}_{ik} - \hat{x}_{ij})$, yielding the approximate rank $\hat{r}_{ij} = \sum_{k \in \mathcal{I}_i,\, k \neq j} g(\hat{x}_{ik} - \hat{x}_{ij})$. In what follows, we consider two different smooth functions that accomplish this goal.
Sigmoid function. The sigmoid function is widely used in existing listwise LTR-based recommendation models (e.g., [9, 3]) for its appealing performance in practice. Instead of adopting the sigmoid function directly, we introduce a scaling constant $c$ to provide a more accurate estimation, so that the indicator function is approximated by $g(x) = 1/(1 + e^{-cx})$, where $x = \hat{x}_{ik} - \hat{x}_{ij}$.

Rectifier function. The rectified linear unit (ReLU) [23], $g(x) = \max(0, x)$, is a nonlinear function with several properties that make it attractive in our setting. First, the one-sided nature of ReLU ($g(x) = 0$ for $x \le 0$) eliminates the contribution of the lower-rated items to the objective function. Second, ReLU is computationally simpler: only comparison and addition operations are required. Third, the form of ReLU permits an efficient algorithm (see Algorithm 2) with computational complexity that is linear in the average number of (observed) items across all users (see Section III-C2). When ReLU is used, we have $\hat{r}_{ij} = \sum_{k \in \mathcal{I}_i,\, k \neq j} \max(0, \hat{x}_{ik} - \hat{x}_{ij})$.
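The two smoothing choices can be written in a few lines; the default scaling constant `c = 10.0` below is an arbitrary illustrative value, not the paper's setting:

```python
import math

def sigmoid_indicator(diff, c=10.0):
    """Scaled sigmoid approximation of the 0/1 comparison indicator;
    c (an assumed value here) sharpens the step as it grows."""
    return 1.0 / (1.0 + math.exp(-c * diff))

def relu(diff):
    """ReLU smoothing: max(0, diff). Negative differences (lower-scored
    competitors) contribute exactly zero; only comparison and addition are used."""
    return max(0.0, diff)
```

Note the qualitative difference: the sigmoid assigns a nonzero value of 0.5 even to ties, whereas ReLU returns exactly zero for any item that does not outscore the reference item.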
III-B1. Parameterization of the Smooth Functions
Recall that the "top-N term", $\mathbb{1}[r_{ij} < N]$, was introduced to indicate whether item $j$ ranks among the top N items. However, a poor choice of the hyperparameters of the smooth function could lead to gross underestimation or overestimation of the smoothed ranks and thus negate the utility of the "top-N term".
Here we examine how to choose the parameters of the sigmoid and the ReLU functions so that they behave as intended. In the case of the sigmoid function, the choice of the scaling constant $c$ matters, with sufficiently large values yielding the desired behavior. In the case of ReLU, we can ensure the desired behavior by controlling the initial distribution of the latent factors $\Theta$. Suppose that the entries of $U$ and $V$ are drawn from a uniform distribution of width $a$. Then, according to the Central Limit Theorem, for an arbitrary user-item pair, the predicted score $\hat{x}_{ij} = U_i V_j^\top$ approximately follows a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$, with $\mu$ and $\sigma$ determined by $d$ and $a$. Making use of the fact that $P(|X - \mu| < 3\sigma) \approx 0.997$, we can solve for the width $a$ that keeps the smoothed ranks in the intended range, which provides the basic setting for all of the TopNRank models that use the ReLU as the smoothing function.

III-C. Fast LTR Algorithms
III-C1. Fast LTR Algorithm for a Generic Smooth Function
To optimize the objective function in (4), we need to compute the predicted score of each item and then perform pairwise comparisons to determine their positions in the rank-ordered list. Because in most cases the number of items $n$ far exceeds the dimension $d$ of the latent factors, the complexity of a single pass is quadratic in $n$. One common practice is to exploit the sparsity of $Y$ by considering only the predicted scores of the observed items, yielding a smooth objective function such as:
$$L(U, V) = \sum_{i \in \mathcal{U}} \sum_{j \in \mathcal{I}_i} g(N - \hat{r}_{ij})\; \alpha_{f_{ij}}\; \frac{y_{ij}}{\log_2(\hat{r}_{ij} + 2)} - \frac{\lambda}{2}\left(\|U\|_F^2 + \|V\|_F^2\right) \qquad (5)$$
The gradient of $L$ with respect to $V_j$ is obtained by applying the chain rule to (5) through the smoothed ranks $\hat{r}_{ij}$; the gradients with respect to $U_i$ follow analogously. Here $g'$ denotes the derivative of the smoothing function presented in Section III-B. The pseudocode for TopNRank (using stochastic gradient descent) is given in Algorithm 1.
Similar to [3], the computational complexity of TopNRank for one iteration is $O(ds\bar{n}^2)$, where $\bar{n}$ denotes the average number of observed items across all users.
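To make the smoothed objective concrete, the sketch below evaluates a sigmoid-smoothed version of (5) on toy data. The data layout (a dict of per-user `item -> (relevance, weight)` pairs), the function name, and `c = 10.0` are our own illustrative choices, not the paper's implementation:

```python
import math

def smoothed_wdcg(U, V, data, N, lam, c=10.0):
    """Evaluate a sigmoid-smoothed wDCG@N with L2 regularization.
    data maps user -> {item: (relevance y, feedback weight alpha)}."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-c * x))
    score = lambda i, j: sum(u * v for u, v in zip(U[i], V[j]))
    total = 0.0
    for i, items in data.items():
        for j, (y, alpha) in items.items():
            # smoothed rank: soft count of co-rated items scored higher
            r = sum(sig(score(i, k) - score(i, j)) for k in items if k != j)
            # smoothed top-N indicator times the discounted, weighted gain
            total += sig(N - r) * alpha * y / math.log2(r + 2)
    reg = sum(x * x for m in (U, V) for row in m.values() for x in row)
    return total - 0.5 * lam * reg
```

The quadratic cost is visible in the nested loop: every observed item's smoothed rank requires a comparison against every other observed item of the same user.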
III-C2. Enhanced LTR Algorithm for Large-Scale Top-N Recommendation
Algorithm 1 can be intractable in large-scale systems with a massive number of items. The use of ReLU permits a more efficient version of TopNRank (denoted TopNRank.ReLU) that further reduces the complexity to $O(ds\bar{n})$. The pseudocode for TopNRank.ReLU is given in Algorithm 2.
For a single user, steps 5 and 12 can be computed in time linear in the number of observed items. Note that the partial sums required in steps 7 and 14, and in steps 8 and 15, can be calculated through step-by-step accumulation, so the complexity of steps 6–11 and steps 13–17 is $O(d(\bar{n} + N))$. Therefore, the overall computational complexity of TopNRank.ReLU for one iteration is $O(ds(\bar{n} + N))$. In practice, $N$ is usually very small (less than 20) even in large-scale systems; thus we can expect $\bar{n} + N$ to be of the same scale as $\bar{n}$, and the complexity simplifies to $O(ds\bar{n})$, making TopNRank.ReLU suitable for large-scale settings with massive item sets.
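The speed-up exploited here rests on a simple identity: the ReLU-smoothed rank is a sum of one-sided differences, $\hat{r}_{ij} = \sum_{k} \max(0, \hat{x}_{ik} - \hat{x}_{ij})$, which equals (sum of the scores larger than $\hat{x}_{ij}$) minus (their count) times $\hat{x}_{ij}$. Sorting once and accumulating prefix sums therefore replaces the all-pairs comparison. The sketch below is our own reconstruction of this idea, not the paper's Algorithm 2:

```python
def relu_ranks_naive(scores):
    """O(n^2) ReLU-smoothed ranks: r_j = sum_k max(0, x_k - x_j)."""
    return [sum(max(0.0, x - xj) for x in scores) for xj in scores]

def relu_ranks_fast(scores):
    """O(n log n) equivalent: sort once, then for each item
    r_j = (sum of strictly larger scores) - (count of larger scores) * x_j."""
    order = sorted(range(len(scores)), key=lambda k: -scores[k])
    ranks = [0.0] * len(scores)
    prefix, count = 0.0, 0
    i = 0
    while i < len(order):
        # items with equal scores see the same set of strictly larger scores
        j = i
        while j < len(order) and scores[order[j]] == scores[order[i]]:
            j += 1
        for t in range(i, j):
            k = order[t]
            ranks[k] = prefix - count * scores[k]
        for t in range(i, j):
            prefix += scores[order[t]]
            count += 1
        i = j
    return ranks
```

Both functions agree on every input, but the fast version touches each score only a constant number of times after the sort, which is what makes a per-iteration cost linear in $\bar{n}$ plausible.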
IV. Experiments and Results
We report the results of two sets of experiments. The first set compares the performance of TopNRank models using either the sigmoid or the ReLU function for smoothing, with or without the "top-N truncation". Our results show that TopNRank.ReLU (using "top-N truncation" and the ReLU function, i.e., Algorithm 2) outperforms the other variants on both benchmark data sets. The second set compares the performance of TopNRank.ReLU with several state-of-the-art listwise LTR CF approaches. Our results show that the TopNRank algorithms outperform these methods on both benchmark data sets.
All of our experiments were performed on an Apache Spark cluster [19] with four compute nodes (Intel Xeon 2.1 GHz CPUs with 20 GB RAM per node), with the raw data stored on the Hadoop Distributed File System (HDFS). The model parameters were tuned to optimize performance on the training data. We describe below the details of the experiments and the results.
IV-A. Experimental Setup
IV-A1. Data Sets
We used two benchmark data sets in our experiments: (i) the Amazon video games data set [18], which contains a subset of video game product reviews (ratings, text, etc.) from Amazon, with 7,077 users, 25,744 items, and more than 1 million ratings; and (ii) the MovieLens (20M) data set [17], which contains 138,493 users, 27,278 items, and more than 20 million ratings. The ratings in both data sets are on a scale of 1–5 stars, with more stars corresponding to higher ratings. We use only the user rating data in our experiments.
IV-A2. Evaluation Procedure
We first remove users who rated fewer than 10 items. For the remaining users, we convert the ratings to implicit feedback based on the item ratings provided by each user; that is, for each observed item, we assign $y_{ij} = 1$ when the rating exceeds a fixed threshold and $y_{ij} = 0$ otherwise [10]. We randomly select half of the ratings provided by each user for training and use the rest for evaluation. On each test run, we average the performance over all of the users. We repeat this process 5 times and report the performance averaged across the 5 independent experiments.
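The preprocessing can be sketched as follows; the rating threshold of 4 stars is an assumed value for illustration, since the exact threshold is not restated here:

```python
import random

def to_implicit(ratings, threshold=4):
    """Binarize ratings: y = 1 if the rating meets the (assumed) threshold,
    else 0. ratings maps item -> star rating."""
    return {item: 1 if r >= threshold else 0 for item, r in ratings.items()}

def split_half(items, seed=0):
    """Randomly assign half of a user's rated items to training, half to test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    half = len(items) // 2
    return items[:half], items[half:]
```

Seeding the shuffle makes each of the 5 repetitions reproducible while still giving independent random splits across different seeds.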
TABLE I

Data sets            Algorithms       NDCG@1   NDCG@3   NDCG@5   NDCG@10  NDCG@20
Amazon Video Games   TopNRank.ReLU    0.8186   0.8009   0.8079   0.8334   0.8455
                     non-TopN.ReLU    0.8033   0.7866   0.7976   0.8242   0.8350
                     TopNRank.sgm     0.7956   0.7747   0.7861   0.8167   0.8269
                     non-TopN.sgm     0.7871   0.7657   0.7780   0.8109   0.8196
MovieLens            TopNRank.ReLU    0.7811   0.7648   0.7532   0.7469   0.7521
                     non-TopN.ReLU    0.7775   0.7593   0.7466   0.7389   0.7430
                     TopNRank.sgm     0.7784   0.7564   0.7415   0.7323   0.7346
                     non-TopN.sgm     0.7575   0.7315   0.7121   0.6968   0.6954
We measure the performance based only on the rated items, as in [10]. Because we focus on the placement of the top-rated items in the rank-ordered list, it is natural to use the Normalized Discounted Cumulative Gain (NDCG) [24] as the performance measure. In this paper, we report the average of NDCG@1 through NDCG@N across all users.
The NDCG at the top-N positions for a user is defined by:

$$\mathrm{NDCG@N} = \frac{\mathrm{DCG@N}}{\mathrm{IDCG@N}} \qquad (7)$$

where DCG@N is the DCG value for the top-N ranked items as described in (1), and IDCG@N is the ideal ranking score, obtained when the ranked list is created by sorting the items in descending order of their implicit feedback values (ratings).
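NDCG@N from (7) can be computed as below, given the relevance labels listed in predicted rank order; the function names are our own:

```python
import math

def dcg_at_n(relevance_in_rank_order, N):
    """DCG over the top-N positions of a ranked list (zero-based positions)."""
    return sum(y / math.log2(pos + 2)
               for pos, y in enumerate(relevance_in_rank_order[:N]))

def ndcg_at_n(relevance_in_rank_order, N):
    """NDCG@N = DCG@N / IDCG@N; the ideal list sorts items by relevance."""
    ideal = sorted(relevance_in_rank_order, reverse=True)
    idcg = dcg_at_n(ideal, N)
    return dcg_at_n(relevance_in_rank_order, N) / idcg if idcg > 0 else 0.0
```

A list that places its only relevant item second instead of first, e.g. `[0, 1]`, is penalized by exactly the log discount of position two.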
IV-B. Comparison of Variants of TopNRank
We compare the performance of LTR models trained with the smoothed and regularized wDCG@N objective using either the sigmoid or the ReLU function for smoothing, with or without the "top-N truncation": (i) TopNRank.ReLU: our proposed TopNRank model trained to optimize wDCG@N smoothed using the ReLU function (Algorithm 2); (ii) non-TopN.ReLU: the LTR model trained to optimize wDCG smoothed using the ReLU function; (iii) TopNRank.sgm: our proposed top-N model trained to optimize wDCG@N smoothed using the sigmoid function (Algorithm 1); and (iv) non-TopN.sgm: the LTR model trained to optimize wDCG smoothed using the sigmoid function.
In these experiments, the number of latent factors $d$ is fixed, and the number of items ranked is set to $N = 20$. The scaling constant $c$ for the sigmoid function and the initialization width $a$ for the ReLU function are set as described in Section III-B1. The regularization coefficient $\lambda$ is tuned on the training data, and the batch size is set to 10% of the users in the training data. All methods are run until either the maximum number of iterations is reached or the sum-of-squares distance between the parameters of two consecutive iterations falls below a convergence threshold.
The results of our experiments are summarized in Table I. Our results clearly show that the TopNRank models with the "top-N truncation" term in the objective function consistently and statistically significantly (based on a paired Student's t-test) outperform their non-TopN counterparts. This confirms our intuition that TopNRank models focus on correctly ordering the top-rated items, and hence are resistant to the (often unreliable) cumulative effect of lower-rated items. The results in Table I also show that TopNRank.ReLU substantially outperforms TopNRank.sgm. Moreover, the performance of TopNRank.sgm is comparable to that of non-TopN.ReLU. We conclude that the ReLU function, with an appropriate choice of initialization, is better able to accurately rank the top-rated items. The runtime of TopNRank.ReLU is significantly lower than that of TopNRank.sgm (results not shown), demonstrating the efficiency of TopNRank.ReLU.
TABLE II

Data sets            Algorithms       NDCG@1   NDCG@3   NDCG@5   NDCG@10  NDCG@20
Amazon Video Games   TopNRank.ReLU    0.8135   0.7964   0.8043   0.8325   0.8383
                     MF-ADG           0.7809   0.7646   0.7762   0.8103   0.8178
                     CLiMF            0.7101   0.7137   0.7388   0.7779   0.7829
                     xCLiMF           0.7090   0.7131   0.7381   0.7776   0.7823
                     ListRank         0.7045   0.7106   0.7367   0.7761   0.7809
                     ListPMF-PL       0.7043   0.7123   0.7376   0.7762   0.7800
MovieLens            TopNRank.ReLU    0.7827   0.7665   0.7548   0.7483   0.7531
                     MF-ADG           0.7301   0.6993   0.6799   0.6681   0.6698
                     CLiMF            0.7459   0.7187   0.7013   0.6910   0.6943
                     xCLiMF           0.7609   0.7406   0.7271   0.7193   0.7236
                     ListRank         0.7657   0.7423   0.7284   0.7206   0.7232
                     ListPMF-PL       0.6981   0.6715   0.6590   0.6568   0.6638
IV-C. Comparison of TopNRank.ReLU with State-of-the-Art Listwise LTR Models
We compare TopNRank.ReLU with several state-of-the-art listwise LTR CF approaches: (i) MF-ADG: an algorithm that optimizes the averaged discounted gain (ADG), obtained by averaging the DCG across all users [10]. Like our method, MF-ADG is designed to work with implicit feedback data sets; its sampling parameter is fixed at 100. (ii) CLiMF: an MF model designed to work with binarized implicit feedback data sets, which optimizes mean reciprocal rank (MRR) [3]. Instead of directly optimizing MRR, CLiMF learns the latent factors by maximizing a smoothed lower bound of MRR. (iii) xCLiMF: an extension of CLiMF that optimizes the expected reciprocal rank (ERR) and is designed to work with graded user ratings [16]. (iv) ListRank: an MF model that optimizes the cross-entropy between the distributions of the observed and predicted ratings using the top-one probability, obtained using the softmax function [13]. (v) ListPMF-PL: a listwise probabilistic matrix factorization method that maximizes the log posterior of the predicted rank order given the observed preference order, using the Plackett-Luce permutation probability [12].

The results of our experiments are summarized in Table II. TopNRank.ReLU consistently outperforms the baseline models on both the Amazon Video Games and MovieLens data sets, regardless of the length of the recommended item lists. Student's t-tests further confirm the significance of these results (not shown). Although TopNRank.ReLU maximizes wDCG@N over the top 20 items, the results show that the model offers better quality of recommendations across the top 1–20 items relative to the baselines. This may be explained in part by the following limitations of the individual methods: CLiMF and xCLiMF are designed to optimize the smoothed reciprocal rank (RR), which does not fully exploit the user ratings because of its emphasis on optimizing only a few of the relevant items for each user; MF-ADG maximizes an approximation of ADG on a small set of sampled data, which may limit the quality of the estimates; ListRank and ListPMF-PL are designed for rating data but assign the same weight to all items with the same rating. Perhaps more importantly, all of the methods except TopNRank.ReLU attempt to optimize the ranking over the entire set of user-rated items, as opposed to only the N top-ranked items, which makes them susceptible to noise in the ratings of low-ranked items.
V. Summary and Discussion
In this paper, we proposed TopNRank, a novel family of listwise Learning-to-Rank models for reliably recommending the N top-ranked items. The proposed models optimize wDCG@N, a variant of the widely used discounted cumulative gain (DCG) objective function, which differs from DCG in two important aspects: (1) it limits the evaluation of DCG to the top N items in the ranked lists, thereby eliminating the impact of low-ranked items on the learned ranking function; and (2) it incorporates weights that allow the model to learn from multiple kinds of implicit user feedback with differing levels of reliability or trustworthiness. Because wDCG@N is nonsmooth, we considered two smooth approximations of wDCG@N, using the traditional sigmoid function and the rectified linear unit (ReLU). We proposed a family of learning-to-rank algorithms (TopNRank) that work with any smooth objective function (e.g., smooth approximations of wDCG@N). We designed TopNRank.ReLU, a more efficient version of TopNRank that exploits the properties of the ReLU function to reduce the computational complexity of TopNRank from quadratic to linear in the average number of items rated by users. The results of our experiments using two widely used benchmarks, namely, the Amazon Video Games data set and the MovieLens data set, demonstrate that: (i) the "top-N truncation" of the objective function substantially improves the ranking quality; (ii) using the ReLU to smooth the wDCG@N objective function yields significant improvements in both ranking quality and runtime compared to the sigmoid function; and (iii) TopNRank.ReLU substantially outperforms the state-of-the-art listwise ranking CF methods (MF-ADG, CLiMF, xCLiMF, ListRank, and ListPMF-PL) in terms of ranking quality.
Some promising directions for further research include: (i) fusing the proposed top-N truncation component and the ReLU smoothing function with other listwise LTR objectives (e.g., MAP, AUC, or MRR); (ii) investigating the complex interaction structure of user-item pairs with the help of deep neural nets; and (iii) extending the proposed model to tensor factorization or factorization machines to incorporate multiple types of features.
Acknowledgment
Dr. Jinlong Hu and Dr. Shoubin Dong were supported in part by the Scientific Research Joint Funds of Ministry of Education of China and China Mobile [No. MCM20150512], and the Natural Science Foundation of Guangdong Province of China [No. 2018A030313309]; Junjie Liang was supported in part by a research assistantship funded by the National Science Foundation through the grant [No. CCF 1518732] to Dr. Vasant G. Honavar. Dr. Vasant Honavar was supported in part by the Edward Frymoyer Endowed Chair in Information Sciences and Technology at Pennsylvania State University, and in part by the Sudha Murty Distinguished Visiting Chair in Neurocomputing and Data Science at the Indian Institute of Science.
References

[1] A. Gunawardana and G. Shani, “A survey of accuracy evaluation metrics of recommendation tasks,” Journal of Machine Learning Research, vol. 10, pp. 2935–2962, 2009.
[2] T.-Y. Liu, “Learning to rank for information retrieval,” Foundations and Trends in Information Retrieval, vol. 3, no. 3, pp. 225–331, 2009.
[3] Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, N. Oliver, and A. Hanjalic, “CLiMF: learning to maximize reciprocal rank with collaborative less-is-more filtering,” in Proceedings of the Sixth ACM Conference on Recommender Systems. ACM, 2012, pp. 139–146.
[4] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, “Neural collaborative filtering,” in Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017, pp. 173–182.

[5] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “BPR: Bayesian personalized ranking from implicit feedback,” in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009, pp. 452–461.
[6] S. Huang, S. Wang, T.-Y. Liu, J. Ma, Z. Chen, and J. Veijalainen, “Listwise collaborative filtering,” in Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2015, pp. 343–352.
[7] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li, “Learning to rank: from pairwise approach to listwise approach,” in Proceedings of the 24th International Conference on Machine Learning. ACM, 2007, pp. 129–136.
[8] F. Xia, T.-Y. Liu, J. Wang, W. Zhang, and H. Li, “Listwise approach to learning to rank: theory and algorithm,” in Proceedings of the 25th International Conference on Machine Learning. ACM, 2008, pp. 1192–1199.
[9] Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, A. Hanjalic, and N. Oliver, “TFMAP: optimizing MAP for top-n context-aware recommendation,” in Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2012, pp. 155–164.
[10] D. Lim, J. McAuley, and G. Lanckriet, “Top-n recommendation with missing implicit feedback,” in Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 2015, pp. 309–312.
 [11] H. Steck, “Gaussian ranking by matrix factorization,” in Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 2015, pp. 115–122.
 [12] J. Liu, C. Wu, Y. Xiong, and W. Liu, “Listwise probabilistic matrix factorization for recommendation,” Information Sciences, vol. 278, pp. 434–447, 2014.
 [13] Y. Shi, M. Larson, and A. Hanjalic, “Listwise learning to rank with matrix factorization for collaborative filtering,” in Proceedings of the fourth ACM conference on Recommender systems. ACM, 2010, pp. 269–272.
 [14] S. Zhang, L. Yao, and A. Sun, “Deep learning based recommender system: A survey and new perspectives,” arXiv preprint arXiv:1707.07435, 2017.
 [15] N. Ifada and R. Nayak, “DoRank: DCG optimization for learningtorank in tagbased item recommendation systems,” in PacificAsia Conference on Knowledge Discovery and Data Mining. Springer, 2015, pp. 510–521.
 [16] Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, and A. Hanjalic, “xCLiMF: optimizing expected reciprocal rank for data with multiple levels of relevance,” in Proceedings of the 7th ACM conference on Recommender systems. ACM, 2013, pp. 431–434.
[17] F. M. Harper and J. A. Konstan, “The MovieLens datasets: History and context,” ACM Transactions on Interactive Intelligent Systems (TiiS), vol. 5, no. 4, p. 19, 2016.
 [18] J. McAuley and A. Yang, “Addressing complex and subjective productrelated queries with customer reviews,” in Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2016, pp. 625–635.
 [19] A. G. Shoro and T. R. Soomro, “Big data analysis: Apache spark perspective,” Global Journal of Computer Science and Technology, vol. 15, no. 1, 2015.
 [20] C. C. Aggarwal, “Modelbased collaborative filtering,” in Recommender Systems. Springer, 2016, pp. 71–138.
 [21] K. Järvelin and J. Kekäläinen, “Cumulated gainbased evaluation of ir techniques,” ACM Transactions on Information Systems (TOIS), vol. 20, no. 4, pp. 422–446, 2002.
 [22] Y. Koren, “Factorization meets the neighborhood: a multifaceted collaborative filtering model,” in Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2008, pp. 426–434.

[23] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[24] K. Järvelin and J. Kekäläinen, “IR evaluation methods for retrieving highly relevant documents,” in Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2000, pp. 41–48.