Top-N Recommender System via Matrix Completion

01/19/2016 ∙ by Zhao Kang, et al. ∙ Southern Illinois University

Top-N recommender systems have been investigated widely both in industry and academia. However, the recommendation quality is far from satisfactory. In this paper, we propose a simple yet promising algorithm. We fill the user-item matrix based on a low-rank assumption and simultaneously keep the original information. To do that, a nonconvex rank relaxation rather than the nuclear norm is adopted to provide a better rank approximation and an efficient optimization strategy is designed. A comprehensive set of experiments on real datasets demonstrates that our method pushes the accuracy of Top-N recommendation to a new level.

Introduction

The growth of online markets has made it increasingly difficult for people to find items which are interesting and useful to them. Top-N recommender systems have been widely adopted by the majority of e-commerce web sites to recommend size-N ranked lists of items that best fit customers' personal tastes and special needs [Linden, Smith, and York2003]. They work by estimating a consumer's response to new items, based on historical information, and suggesting to the consumer novel items for which the predicted response is high. In general, historical information can be obtained explicitly, for example through ratings/reviews, or implicitly, from purchase history or access patterns [Desrosiers and Karypis2011].

Over the past decade, a variety of approaches have been proposed for Top-N recommender systems [Ricci, Rokach, and Shapira2011]. They can be roughly divided into three categories: neighborhood-based collaborative filtering, model-based collaborative filtering, and ranking-based methods. The general principle of neighborhood-based methods is to identify the similarities among users/items [Deshpande and Karypis2004]. For example, item-based k-nearest-neighbor (ItemKNN) collaborative filtering methods first identify a set of similar items for each of the items that the consumer has purchased, and then recommend Top-N items based on those similar items. However, they suffer from low accuracy since they employ few item characteristics.

Model-based methods build a model and then generate recommendations. For instance, the widely studied matrix factorization (MF) methods employ the user-item similarities in their latent space to extract the user-item purchase patterns. The pure singular-value-decomposition-based (PureSVD) matrix factorization method [Cremonesi, Koren, and Turrin2010] characterizes users and items by the most principal singular vectors of the user-item matrix. The weighted regularized matrix factorization (WRMF) method [Pan et al.2008, Hu, Koren, and Volinsky2008] applies a weighting matrix to differentiate the contributions from observed purchase/rating activities and unobserved ones.

The third category of methods relies on ranking/retrieval criteria. Here, Top-N recommendation is treated as a ranking problem. A Bayesian personalized ranking (BPR) [Rendle et al.2009] criterion, which is the maximum posterior estimator from a Bayesian analysis, is used to measure the difference between the rankings of user-purchased items and the remaining items. BPR can be combined with ItemKNN (BPRkNN) and with MF methods (BPRMF). One common drawback of these approaches is low recommendation quality.

Recently, a novel Top-N recommendation method, SLIM [Ning and Karypis2011], has been proposed. From a user-item matrix $A$ of size $m \times n$, it first learns a sparse aggregation coefficient matrix $W \in \mathbb{R}^{n \times n}$ by encoding each item as a linear combination of all other items and solving an $\ell_1$-norm and $\ell_2$-norm regularized optimization problem. Each entry $w_{ij}$ describes the similarity between items $i$ and $j$. SLIM has obtained better recommendation accuracy than the other state-of-the-art methods. However, SLIM can only capture relations between items that are co-purchased/co-rated by at least one user, while an intrinsic characteristic of recommender systems is sparsity, due to the fact that users typically rate only a small portion of the available items.

To overcome the above limitation, LorSLIM [Cheng, Yin, and Yu2014] additionally imposes a low-rank constraint on $W$. It solves the following problem:

$$\min_W \ \frac{1}{2}\|A - AW\|_F^2 + \frac{\beta}{2}\|W\|_F^2 + \lambda\|W\|_1 + z\|W\|_* \quad \text{s.t.} \quad W \ge 0, \ \mathrm{diag}(W) = 0,$$

where $\|W\|_*$ is the nuclear norm of $W$, defined as the sum of its singular values. The low-rank structure is motivated by the fact that a few latent variables explain the items' features in a factor model, so $W$ should be of low rank. After obtaining $W$, the recommendation score of user $i$ for an un-purchased/-rated item $j$ is $\tilde{a}_{ij} = a_i^T w_j$, where $a_i$ is the $i$-th row of $A$ and $w_j$ is the $j$-th column of $W$; thus $\tilde{A} = AW$. LorSLIM can model the relations between items even on sparse datasets and thus improves the performance.
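
To make this scoring rule concrete, here is a small illustrative sketch in Python (our own toy example, not LorSLIM's released code; `A` and `W` are random stand-ins for a learned model):

```python
import numpy as np

# Illustrative sketch of SLIM/LorSLIM-style scoring: given user-item
# matrix A and an item-item coefficient matrix W, the score of user i
# for item j is a_i^T w_j, i.e. A_tilde = A @ W.
rng = np.random.default_rng(0)
A = (rng.random((6, 8)) < 0.3).astype(float)  # toy binary purchase matrix
W = rng.random((8, 8))                        # stand-in for a learned W
np.fill_diagonal(W, 0.0)                      # SLIM-style: no self-similarity
A_tilde = A @ W                               # recommendation scores
```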

To further boost the accuracy of Top-N recommender systems, we first fill in the missing ratings by solving a nonconvex optimization problem, based on the assumption that a user's ratings are affected by only a few factors, so that the resulting rating matrix should be of low rank [Lee et al.2014], and then make the Top-N recommendation. This is different from previous approaches: middle values of the rating ranges, or the average user or item ratings, are commonly utilized to fill in the missing ratings [Breese, Heckerman, and Kadie1998, Deshpande and Karypis2004]; a more reliable approach utilizes content information [Melville, Mooney, and Nagarajan2002, Li and Zaïane2004, Degemmis, Lops, and Semeraro2007], for example, the missing ratings are provided by autonomous agents called filterbots [Good et al.1999], which rate items based on some specific characteristics of their content; and a low-rank matrix factorization approach seeks to approximate the rating matrix by a product of low-rank factors [Yu et al.2009]. Experimental results demonstrate the superior recommendation quality of our approach.

Due to the inherent computational complexity of rank problems, the nonconvex rank function is often relaxed to its convex surrogate, i.e., the nuclear norm [Candès and Recht2009, Recht, Xu, and Hassibi2008]. However, this substitution is not always valid and can lead to a biased solution [Shi and Yu2011, Kang, Peng, and Cheng2015b]. Matrix completion with nuclear norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly [Srebro and Salakhutdinov2010]. Nonconvex rank approximations have therefore received significant attention [Zhong et al.2015, Kang and Cheng2015]. Thus we use the log-determinant (logdet) function to approximate the rank function and design an effective optimization algorithm.

Problem Formulation

The incomplete user-item purchases/ratings matrix is denoted as $M$ of size $m \times n$. $M_{ij}$ is 1 or a positive value if user $i$ has ever purchased/rated item $j$; otherwise it is 0. Our goal is to reconstruct a full matrix $X$, which is supposed to be low-rank. Consider the following matrix completion problem:

$$\min_X \ \log\det\big((X^T X)^{1/2} + I\big) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ \forall (i, j) \in \Omega, \qquad (1)$$

where $\Omega$ is the set of locations corresponding to the observed entries and $I$ is an identity matrix. It is easy to show that $\log\det\big((X^T X)^{1/2} + I\big) = \sum_i \log\big(1 + \sigma_i(X)\big) \le \sum_i \sigma_i(X) = \|X\|_*$, i.e., the logdet term is a tighter rank approximation function than the nuclear norm. It also helps mitigate another inherent disadvantage of the nuclear norm, namely the imbalanced penalization of different singular values [Kang, Peng, and Cheng2015a]. Previously, $\log\det(X + \delta I)$ was suggested to restrict the rank of a positive semidefinite matrix $X$ [Fazel, Hindi, and Boyd2003]; positive semidefiniteness is not guaranteed for a more general $X$, and $\delta$ is also required to be small, which leads to a significantly biased approximation for small singular values. Compared to some other nonconvex relaxations in the literature [Lu et al.2014], our formulation enjoys simplicity and efficacy.
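
As a quick numerical illustration of this inequality (an illustrative sketch; the matrix sizes and rank are arbitrary choices of ours):

```python
import numpy as np

# On a synthetic low-rank matrix, compare the rank, the nuclear norm, and
# the surrogate logdet((X^T X)^(1/2) + I) = sum_i log(1 + sigma_i(X)).
rng = np.random.default_rng(0)
m, n, r = 200, 100, 5
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

sigma = np.linalg.svd(X, compute_uv=False)
print("rank        :", int(np.sum(sigma > 1e-8)))
print("nuclear norm:", sigma.sum())             # sum of singular values
print("logdet      :", np.log1p(sigma).sum())   # <= nuclear norm, since log(1+s) <= s
```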

Methodology

Considering that the user-item matrix is often nonnegative, we add the nonnegativity constraint $X \ge 0$, i.e., element-wise nonnegativity, for easy interpretation of the representation. Let $P_\Omega$ be the orthogonal projection operator onto the span of matrices vanishing outside of $\Omega$, so that

$$\big(P_\Omega(X)\big)_{ij} = \begin{cases} X_{ij}, & (i, j) \in \Omega, \\ 0, & (i, j) \notin \Omega. \end{cases}$$

Problem (1) can be reformulated as

$$\min_X \ \log\det\big((X^T X)^{1/2} + I\big) + l_{\mathbb{R}_+}(X) \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M), \qquad (2)$$

where $l_{\mathbb{R}_+}$ is the indicator function, defined element-wise as

$$l_{\mathbb{R}_+}(x) = \begin{cases} 0, & x \ge 0, \\ +\infty, & \text{otherwise}. \end{cases}$$

Notice that this is a nonconvex optimization problem, which is not easy to solve in general. Here we develop an effective optimization strategy based on the augmented Lagrangian multiplier (ALM) method. By introducing an auxiliary variable $Y$, problem (2) has the following equivalent form:

$$\min_{X, Y} \ \log\det\big((X^T X)^{1/2} + I\big) + l_{\mathbb{R}_+}(Y) \quad \text{s.t.} \quad X = Y, \ P_\Omega(X) = P_\Omega(M), \qquad (3)$$

which has an augmented Lagrangian function of the form

$$\mathcal{L}(X, Y, L) = \log\det\big((X^T X)^{1/2} + I\big) + l_{\mathbb{R}_+}(Y) + \mathrm{tr}\big(L^T (X - Y)\big) + \frac{\mu}{2}\|X - Y\|_F^2, \qquad (4)$$

where $L$ is a Lagrange multiplier and $\mu > 0$ is a penalty parameter.

Then, we can apply the alternating minimization idea to update $X$ and $Y$, i.e., updating one of the two variables with the other fixed.

Given the current point $Y^k$, $L^k$ and $\mu$, we update $X^{k+1}$ by solving

$$\min_X \ \log\det\big((X^T X)^{1/2} + I\big) + \frac{\mu}{2}\left\|X - \Big(Y^k - \frac{L^k}{\mu}\Big)\right\|_F^2. \qquad (5)$$

This can be converted into a set of scalar minimization problems over the singular values, due to the following theorem [Kang et al.2015].

Theorem 1

If $F(Z) = f \circ \sigma(Z)$ is a unitarily invariant function, $\mu > 0$, and the SVD of $A$ is $U \Sigma_A V^T$, then the optimal solution to the following problem

$$\min_Z \ F(Z) + \frac{\mu}{2}\|Z - A\|_F^2 \qquad (6)$$

is $Z^*$ with SVD $U \Sigma_Z^* V^T$, where $\Sigma_Z^* = \mathrm{diag}(\sigma^*)$; moreover, $F(Z) = f(\sigma(Z))$, where $\sigma(Z)$ is the vector of nonincreasing singular values of $Z$. Then $\sigma^*$ is obtained by using the Moreau-Yosida proximity operator $\sigma^* = \mathrm{prox}_{f, \mu}(\sigma_A)$, where $\sigma_A = \mathrm{diag}(\Sigma_A)$ and

$$\mathrm{prox}_{f, \mu}(\sigma_A) := \arg\min_{\sigma \ge 0} \ f(\sigma) + \frac{\mu}{2}\|\sigma - \sigma_A\|_2^2. \qquad (7)$$

According to the first-order optimality condition, the gradient of the objective function of (7) with respect to each singular value should vanish. For the logdet function, $f(\sigma) = \sum_i \log(1 + \sigma_i)$, so we have

$$\frac{1}{1 + \sigma_i} + \mu(\sigma_i - \sigma_{A,i}) = 0. \qquad (8)$$

The above equation is quadratic in $\sigma_i$, namely $\mu\sigma_i^2 + \mu(1 - \sigma_{A,i})\sigma_i + (1 - \mu\sigma_{A,i}) = 0$, and gives two roots

$$\sigma_i = \frac{(\sigma_{A,i} - 1) \pm \sqrt{(\sigma_{A,i} + 1)^2 - 4/\mu}}{2}.$$

If neither root is real and nonnegative, the minimizer will be 0; otherwise, there exists a unique minimizer (the larger root, retained when it attains a lower objective value than 0). Finally, we obtain the update of variable $X$ with

$$X^{k+1} = U \, \mathrm{diag}(\sigma^*) \, V^T. \qquad (9)$$
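
In code, this per-singular-value step takes only a few lines. The sketch below builds on the quadratic above; the name `prox_logdet` is ours, not from the paper's implementation:

```python
import numpy as np

def prox_logdet(sigma_a, mu):
    """Sketch of the scalar step above: solve, for each singular value a,
    argmin_{s >= 0} log(1 + s) + (mu/2) * (s - a)^2
    via the quadratic mu*s^2 + mu*(1 - a)*s + (1 - mu*a) = 0 from Eq. (8)."""
    out = np.zeros_like(np.asarray(sigma_a, dtype=float))
    for i, a in enumerate(sigma_a):
        objective = lambda s: np.log1p(s) + 0.5 * mu * (s - a) ** 2
        disc = (a + 1.0) ** 2 - 4.0 / mu          # discriminant of the quadratic
        candidates = [0.0]
        if disc >= 0.0:
            root = ((a - 1.0) + np.sqrt(disc)) / 2.0  # larger root
            if root > 0.0:
                candidates.append(root)
        out[i] = min(candidates, key=objective)   # keep the lower-objective point
    return out
```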

Then we fix the values at the observed entries and obtain

$$X^{k+1} \leftarrow P_{\Omega^c}\big(X^{k+1}\big) + P_\Omega(M), \qquad (10)$$

where $\Omega^c$ denotes the complement of $\Omega$, i.e., the unobserved locations.

To update $Y$, we need to solve

$$\min_Y \ l_{\mathbb{R}_+}(Y) + \mathrm{tr}\big(L^T(X^{k+1} - Y)\big) + \frac{\mu}{2}\|X^{k+1} - Y\|_F^2, \qquad (11)$$

which yields the updating rule

$$Y^{k+1} = \max\Big(X^{k+1} + \frac{L}{\mu}, \ 0\Big). \qquad (12)$$

Here $\max(\cdot, 0)$ is an element-wise operator. The complete procedure is outlined in Algorithm 1.

Input: Original incomplete data matrix $M$, parameters $\mu^0$ and $\gamma$.
Initialize: $Y = P_\Omega(M)$, $L = 0$.
REPEAT

1:  Obtain $X$ through (10).
2:  Update $Y$ as (12).
3:  Update the Lagrange multiplier by $L = L + \mu(X - Y)$.
4:  Update the parameter $\mu$ by $\mu = \gamma\mu$.

UNTIL stopping criterion is met.

Algorithm 1 Solve (3)
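
Putting the pieces together, the following is a hedged Python sketch of Algorithm 1 under the reconstruction above (the sign conventions, initialization, and stopping rule are our reading of Eqs. (5)-(12), not the authors' released Matlab code); it reuses the `prox_logdet` sketch from the previous section:

```python
import numpy as np

def complete_matrix(M, mu=1e-2, gamma=1.5, max_iter=200, tol=1e-4):
    """ALM sketch for the reconstructed problem (3):
    min logdet((X^T X)^(1/2) + I) + indicator(Y >= 0)
    s.t. X = Y, P_Omega(X) = P_Omega(M)."""
    omega = M > 0                          # nonzero entries form the observed set
    X = M.astype(float).copy()
    Y = X.copy()
    L = np.zeros_like(X)
    for _ in range(max_iter):
        # X-step, Eqs. (5)/(9): singular-value prox of the logdet surrogate.
        U, s, Vt = np.linalg.svd(Y - L / mu, full_matrices=False)
        X = (U * prox_logdet(s, mu)) @ Vt  # equals U @ diag(sigma*) @ V^T
        X[omega] = M[omega]                # Eq. (10): re-impose observed entries
        Y = np.maximum(X + L / mu, 0.0)    # Eq. (12): nonnegative projection
        L = L + mu * (X - Y)               # multiplier update
        mu *= gamma                        # mu grows at rate gamma
        if np.linalg.norm(X - Y) <= tol * max(np.linalg.norm(M), 1.0):
            break
    return Y                               # completed, nonnegative estimate
```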

To use the estimated matrix $\hat{X}$ to make recommendations for user $i$, we simply sort $i$'s non-purchased/-rated items by their scores $\hat{X}_{ij}$ in decreasing order and recommend the Top-N items.
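
This ranking step can be written as a single masked sort (an illustrative sketch; the function name is ours):

```python
import numpy as np

def topn(X_hat, M, N=10):
    """Rank each user's non-purchased/-rated items by completed score
    and return the Top-N item indices per user."""
    scores = X_hat.copy()
    scores[M > 0] = -np.inf     # exclude items the user already has feedback on
    return np.argsort(-scores, axis=1)[:, :N]
```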

Experimental Evaluation

Datasets

dataset #users #items #trns rsize csize density ratings
Delicious 1300 4516 17550 13.50 3.89 0.29% -
lastfm 8813 6038 332486 37.7 55.07 0.62% -
BX 4186 7733 182057 43.49 23.54 0.56% -
ML100K 943 1682 100000 106.04 59.45 6.30% 1-10
Netflix 6769 7026 116537 17.21 16.59 0.24% 1-5
Yahoo 7635 5252 212772 27.87 40.51 0.53% 1-5
  • The “#users”, “#items” and “#trns” columns show the number of users, items and transactions, respectively, in each dataset. The “rsize” and “csize” columns are the average number of ratings per user and per item (i.e., the row density and column density of the user-item matrix), respectively. The “density” column shows the density of each dataset (density = #trns/(#users × #items)). The “ratings” column is the rating range of each dataset with granularity 1.

Table 1: The datasets used in evaluation

We evaluate the performance of our method on six different real datasets whose characteristics are summarized in Table 1. These datasets are from different sources and at different sparsity levels. They can be broadly categorized into two classes.

The first class includes Delicious, lastfm and BX. These three datasets have only implicit feedback (e.g., listening history), i.e., they are represented by binary matrices. In particular, Delicious was derived from the bookmarking and tagging information of the Delicious social bookmarking system (http://www.delicious.com) such that each URL was bookmarked by at least 3 users. Lastfm corresponds to music artist listening information obtained from the last.fm online music system (http://www.last.fm), in which each music artist was listened to by at least 10 users and each user listened to at least 5 artists. BX is a subset of the Book-Crossing dataset (http://www.informatik.uni-freiburg.de/~cziegler/BX/) containing only implicit interactions, such that each book was read by at least 10 users.

The second class contains ML100K, Netflix and Yahoo. All these datasets contain multi-value ratings. Specifically, the ML100K dataset corresponds to movie ratings and is a subset of the MovieLens research project (http://grouplens.org/datasets/movielens/). Netflix is a subset of data extracted from the Netflix Prize dataset (http://www.netflixprize.com/), in which each user rated at least 10 movies. The Yahoo dataset is a subset obtained from Yahoo!Movies user ratings (http://webscope.sandbox.yahoo.com/catalog.php?datatype=r). In this dataset, each user rated at least 5 movies and each movie was rated by at least 3 users.

method Delicious lastfm
params HR ARHR params HR ARHR
ItemKNN 300 - - - 0.300 0.179 100 - - - 0.125 0.075
PureSVD 1000 10 - - 0.285 0.172 200 10 - - 0.134 0.078
WRMF 250 5 - - 0.330 0.198 100 3 - - 0.138 0.078
BPRKNN 1e-4 0.01 - - 0.326 0.187 1e-4 0.01 - - 0.145 0.083
BPRMF 300 0.1 - - 0.335 0.183 100 0.1 - - 0.129 0.073
SLIM 10 1 - - 0.343 0.213 5 0.5 - - 0.141 0.082
LorSLIM 10 1 3 3 0.360 0.227 5 1 3 3 0.187 0.105
Our 250 4 - - 0.382 0.241 0.03 1.5 - - 0.206 0.113
method BX ML100K
params HR ARHR params HR ARHR
ItemKNN 400 - - - 0.045 0.026 10 - - - 0.287 0.124
PureSVD 3000 10 - - 0.043 0.023 100 10 - - 0.324 0.132
WRMF 400 5 - - 0.047 0.027 50 1 - - 0.327 0.133
BPRKNN 1e-3 0.01 - - 0.047 0.028 2e-4 1e-4 - - 0.359 0.150
BPRMF 400 0.1 - - 0.048 0.027 200 0.1 - - 0.330 0.135
SLIM 20 0.5 - - 0.050 0.029 2 2 - - 0.343 0.147
LorSLIM 50 0.5 2 3 0.052 0.031 10 8 5 3 0.397 0.207
Our 1.2e-3 1.3 - - 0.065 0.043 6e-3 2.5 - - 0.428 0.215
method Netflix Yahoo
params HR ARHR params HR ARHR
ItemKNN 200 - - - 0.156 0.085 300 - - - 0.318 0.185
PureSVD 500 10 - - 0.158 0.089 2000 10 - - 0.210 0.118
WRMF 300 5 - - 0.172 0.095 100 4 - - 0.250 0.128
BPRKNN 2e-3 0.01 - - 0.165 0.090 0.02 1e-3 - - 0.310 0.182
BPRMF 300 0.1 - - 0.140 0.072 300 0.1 - - 0.308 0.180
SLIM 5 1.0 - - 0.173 0.098 10 1 - - 0.320 0.187
LorSLIM 10 3 5 3 0.196 0.111 10 1 2 3 0.334 0.191
Our 0.015 1.2 - - 0.226 0.127 5e-3 1.1 - - 0.367 0.218
  • The parameters for each method are as follows. ItemKNN: the number of neighbors $k$; PureSVD: the number of singular values and the number of iterations during SVD; WRMF: the dimension of the latent space and the weight on purchases; BPRKNN: the learning rate and the regularization parameter $\lambda$; BPRMF: the dimension of the latent space and the learning rate; SLIM: the $\ell_2$-norm regularization parameter $\beta$ and the $\ell_1$-norm regularization parameter $\lambda$; LorSLIM: the $\ell_2$-norm regularization parameter $\beta$, the $\ell_1$-norm regularization parameter $\lambda$, the nuclear norm regularization parameter $z$ and the auxiliary parameter $\rho$; Our: the auxiliary parameters $\mu^0$ and $\gamma$. N in this table is 10. Bold numbers indicate the best performance in terms of HR and ARHR for each dataset.

Table 2: Comparison of Top-N recommendation algorithms

Evaluation Methodology

We employ 5-fold Cross-Validation to demonstrate the efficacy of our proposed approach. For each run, each of the datasets is split into training and test sets by randomly selecting one of the non-zero entries for each user to be part of the test set. (We use the same data as in [Cheng, Yin, and Yu2014], with partitioned datasets kindly provided by Yao Cheng.) The training set is used to train a model, and then a size-N ranked list of recommended items is generated for each user. The model is evaluated by comparing each user's recommendation list with that user's item in the test set. For the results reported in this paper, N is equal to 10.
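
The following is an illustrative sketch of this splitting protocol (the function name and seed are ours):

```python
import numpy as np

def leave_one_out_split(M, seed=0):
    """Hide one random observed entry per user as that user's test item."""
    rng = np.random.default_rng(seed)
    train = M.astype(float).copy()
    test_items = np.full(M.shape[0], -1)
    for u in range(M.shape[0]):
        observed = np.flatnonzero(M[u])
        if observed.size > 0:
            j = rng.choice(observed)
            train[u, j] = 0            # hidden interaction becomes the test item
            test_items[u] = j
    return train, test_items
```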

Top-N recommendation is more of a ranking problem than a prediction task. The recommendation quality is measured by the hit-rate (HR) and the average reciprocal hit-rank (ARHR) [Deshpande and Karypis2004]. They directly measure the performance of the model on the ground-truth data, i.e., items for which users have already provided feedback. As pointed out in [Ning and Karypis2011], they are the most direct and meaningful measures in Top-N recommendation scenarios. HR is defined as

$$\mathrm{HR} = \frac{\#\text{hits}}{\#\text{users}}, \qquad (13)$$

where #hits is the number of users whose test item is recommended (i.e., hit) in the size-N recommendation list, and #users is the total number of users. An HR value of 1.0 indicates that the algorithm is always able to recommend the hidden item, whereas an HR value of 0.0 indicates that the algorithm is never able to recommend the hidden item.

A drawback of HR is that it treats all hits equally, regardless of where they appear in the Top-N list. ARHR addresses this by rewarding each hit based on where it occurs in the Top-N list. It is defined as follows:

$$\mathrm{ARHR} = \frac{1}{\#\text{users}} \sum_{i=1}^{\#\text{hits}} \frac{1}{p_i}, \qquad (14)$$

where $p_i$ is the position of the test item in the ranked Top-N list for the $i$-th hit. That is, hits that occur earlier in the ranked list are weighted higher than those that occur later, so ARHR measures how strongly an item is recommended. The highest value of ARHR equals the hit-rate and occurs when all hits occur in the first position, whereas the lowest value equals HR/N, when all hits occur in the last position of the list.
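
Given each user's Top-N list and hidden test item, both metrics reduce to a few lines (a sketch of Eqs. (13) and (14)):

```python
def hr_arhr(rec_lists, test_items):
    """HR = #hits / #users; ARHR additionally weights each hit by the
    reciprocal of its 1-based position p in the Top-N list."""
    hits, reciprocal_sum = 0, 0.0
    for recs, t in zip(rec_lists, test_items):
        recs = list(recs)
        if t in recs:
            hits += 1
            reciprocal_sum += 1.0 / (recs.index(t) + 1)
    n_users = len(test_items)
    return hits / n_users, reciprocal_sum / n_users
```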

Comparison Algorithms

We compare the performance of the proposed method (our implementation is available at https://github.com/sckangz/recom_mc) with seven other state-of-the-art Top-N recommendation algorithms: the item neighborhood-based collaborative filtering method ItemKNN [Deshpande and Karypis2004], two MF-based methods, PureSVD [Cremonesi, Koren, and Turrin2010] and WRMF [Hu, Koren, and Volinsky2008], two ranking/retrieval-criteria-based methods, BPRMF and BPRKNN [Rendle et al.2009], SLIM [Ning and Karypis2011], and LorSLIM [Cheng, Yin, and Yu2014].

Figure 1: Performance for different values of N on (a) Delicious, (b) lastfm, (c) BX, (d) ML100K, (e) Netflix, and (f) Yahoo.

Experimental Results

Top-N recommendation performance

We report the comparison results with the other competing methods in Table 2. The results show that our algorithm performs better than all the other methods across all the datasets. (A bug was found after publication, so the results here update those in the published version; we apologize for any inconvenience caused.) Specifically, in terms of HR, our method outperforms ItemKNN, PureSVD, WRMF, BPRKNN, BPRMF, SLIM and LorSLIM by 41%, 48.14%, 35.40%, 28.69%, 36.57%, 26.26%, and 12.38% on average, respectively, over all six datasets; with respect to ARHR, the improvements over ItemKNN, PureSVD, WRMF, BPRKNN, BPRMF, SLIM and LorSLIM are 48.55%, 60.38%, 48.58%, 37.14%, 49.47%, 31.94%, and 14.15% on average, respectively.

Among the seven state-of-the-art algorithms, LorSLIM is substantially better than the others. Among the remaining six methods, SLIM is slightly better than the rest except on lastfm and ML100K. BPRKNN performs best on average among the remaining five methods. Among the three MF-based models, BPRMF and WRMF have similar performance on most datasets and are much better than PureSVD on all datasets except lastfm and ML100K.

Recommendation for Different Top-N

Figure 1 shows the performance achieved by the various methods for different values of $N$ on all six datasets. It demonstrates that the proposed method outperforms the other algorithms in all scenarios. What is more, the difference in performance between our approach and the other methods is consistently significant. It is interesting to note that LorSLIM, the second most competitive method, may be worse than some of the other methods when $N$ is large.

Matrix Reconstruction

We compare our method with LorSLIM by examining how they reconstruct the user-item matrix. We take ML100K as an example; its density is 6.30% and the mean of its non-zero elements is 3.53. The matrix reconstructed by LorSLIM has a density of 13.61%, and its non-zero values have a mean of 0.046. Of the 6.30% non-zero entries in the original matrix, LorSLIM recovers 70.68%, with a mean value of 0.0665. This demonstrates that much information is lost. On the contrary, our approach fully preserves the original information thanks to the constraint $P_\Omega(X) = P_\Omega(M)$ in our model. Our method fills the missing (zero) entries with values whose mean is 0.554, much higher than 0.046. This suggests that our method recovers the user-item matrix better than LorSLIM, which may explain its superior performance.

Parameter Tuning

Figure 2: Influence of $\mu^0$ and $\gamma$ on HR for the Delicious dataset.

Although our model is parameter-free, we introduce the auxiliary parameter $\mu$ during the optimization. In the alternating direction method of multipliers (ADMM) [Yang and Yuan2013], $\mu$ is fixed, and it is not easy to choose an optimal value that balances the computational cost. Thus, a dynamical $\mu$, increasing at a rate of $\gamma$, is preferred in real applications. $\gamma$ controls the convergence speed: the larger $\gamma$ is, the fewer iterations are needed to reach convergence, but meanwhile we may lose some precision. We show the effects of different initializations $\mu^0$ and $\gamma$ on HR for the Delicious dataset in Figure 2. It illustrates that our experimental results are not sensitive to these parameters, which is reasonable since they are auxiliary parameters that mainly control the convergence speed. In contrast, LorSLIM needs to tune four parameters, which is time-consuming and not easy to do.

Efficiency Analysis

Figure 3: Influence of $\gamma$ on computational time.

The time complexity of our algorithm mainly comes from the SVD. An exact SVD of an $m \times n$ matrix has a time complexity of $O(mn^2)$ (assuming $m \ge n$); however, in this paper we seek a low-rank matrix and thus only need a few principal singular vectors/values. Packages like PROPACK [Larsen2004] can compute a rank-$k$ SVD with a cost of $O(mnk)$, which can be advantageous when $k \ll \min(m, n)$. In fact, our algorithm is much faster than LorSLIM. Among the six datasets, ML100K and lastfm have the smallest and largest sizes, respectively. Our method needs 9s and 5080s on these two datasets, while LorSLIM takes 617s and 32974s. The time is measured on the same machine with an Intel Xeon E3-1240 3.40GHz CPU with 4 cores and 8GB memory, running Ubuntu and Matlab (R2014a). Furthermore, a larger $\gamma$ can speed up our algorithm considerably without losing too much accuracy. This is verified by Figure 3, which shows the computational time of our method on Delicious with varying $\gamma$.
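
For instance, with SciPy's Lanczos-based svds as a stand-in for PROPACK (the matrix sizes below are arbitrary), only the top-k singular triplets are computed:

```python
import numpy as np
from scipy.sparse.linalg import svds

# A rank-k truncated SVD touches only the top-k singular triplets,
# costing roughly O(mnk) versus O(mn^2) for a full SVD when m >= n.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 1500))
U, s, Vt = svds(A, k=20)   # note: svds returns singular values in ascending order
```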

Conclusion

In this paper, we present a matrix completion based method for the Top-N recommendation problem. The proposed method recovers the user-item matrix by solving a rank minimization problem. To better approximate the rank, a nonconvex logdet function is applied. We conduct a comprehensive set of experiments on multiple datasets and compare the performance of our method against that of other state-of-the-art Top-N recommendation algorithms. It turns out that our algorithm generates high-quality recommendations and improves considerably over the other methods, which makes our approach usable in real-world scenarios.

Acknowledgements

This work is supported by US National Science Foundation Grant IIS-1218712. Q. Cheng is the corresponding author.

References

  • [Breese, Heckerman, and Kadie1998] Breese, J. S.; Heckerman, D.; and Kadie, C. 1998. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, 43–52. Morgan Kaufmann Publishers Inc.
  • [Candès and Recht2009] Candès, E. J., and Recht, B. 2009. Exact matrix completion via convex optimization. Foundations of Computational Mathematics 9(6):717–772.
  • [Cheng, Yin, and Yu2014] Cheng, Y.; Yin, L.; and Yu, Y. 2014. LorSLIM: Low-rank sparse linear methods for top-n recommendations. In Data Mining (ICDM), 2014 IEEE International Conference on, 90–99. IEEE.
  • [Cremonesi, Koren, and Turrin2010] Cremonesi, P.; Koren, Y.; and Turrin, R. 2010. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the fourth ACM conference on Recommender systems, 39–46. ACM.
  • [Degemmis, Lops, and Semeraro2007] Degemmis, M.; Lops, P.; and Semeraro, G. 2007. A content-collaborative recommender that exploits wordnet-based user profiles for neighborhood formation. User Modeling and User-Adapted Interaction 17(3):217–255.
  • [Deshpande and Karypis2004] Deshpande, M., and Karypis, G. 2004. Item-based top-n recommendation algorithms. ACM Transactions on Information Systems (TOIS) 22(1):143–177.
  • [Desrosiers and Karypis2011] Desrosiers, C., and Karypis, G. 2011. A comprehensive survey of neighborhood-based recommendation methods. In Recommender systems handbook. Springer. 107–144.
  • [Fazel, Hindi, and Boyd2003] Fazel, M.; Hindi, H.; and Boyd, S. P. 2003. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In American Control Conference, 2003. Proceedings of the 2003, volume 3, 2156–2162. IEEE.
  • [Good et al.1999] Good, N.; Schafer, J. B.; Konstan, J. A.; Borchers, A.; Sarwar, B.; Herlocker, J.; and Riedl, J. 1999. Combining collaborative filtering with personal agents for better recommendations. In AAAI/IAAI, 439–446.
  • [Hu, Koren, and Volinsky2008] Hu, Y.; Koren, Y.; and Volinsky, C. 2008. Collaborative filtering for implicit feedback datasets. In Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on, 263–272. IEEE.
  • [Kang and Cheng2015] Kang, Z.; Peng, C.; and Cheng, Q. 2015. Robust PCA via nonconvex rank approximation. In Data Mining (ICDM), 2015 IEEE International Conference on, 211–220. IEEE.
  • [Kang et al.2015] Kang, Z.; Peng, C.; Cheng, J.; and Cheng, Q. 2015. Logdet rank minimization with application to subspace clustering. Computational intelligence and neuroscience 2015:68.
  • [Kang, Peng, and Cheng2015a] Kang, Z.; Peng, C.; and Cheng, Q. 2015a. Robust subspace clustering via smoothed rank approximation. IEEE Signal Processing Letters 22(11):2088–2092.
  • [Kang, Peng, and Cheng2015b] Kang, Z.; Peng, C.; and Cheng, Q. 2015b. Robust subspace clustering via tighter rank approximation. ACM CIKM’15.
  • [Larsen2004] Larsen, R. M. 2004. PROPACK: Software for large and sparse SVD calculations. Available online: http://sun.stanford.edu/~rmunk/PROPACK.
  • [Lee et al.2014] Lee, J.; Bengio, S.; Kim, S.; Lebanon, G.; and Singer, Y. 2014. Local collaborative ranking. In Proceedings of the 23rd international conference on World wide web, 85–96. ACM.
  • [Li and Zaïane2004] Li, J., and Zaïane, O. R. 2004. Combining usage, content, and structure data to improve web site recommendation. In E-Commerce and Web Technologies. Springer. 305–315.
  • [Linden, Smith, and York2003] Linden, G.; Smith, B.; and York, J. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing 7(1):76–80.
  • [Lu et al.2014] Lu, C.; Tang, J.; Yan, S.; and Lin, Z. 2014. Generalized nonconvex nonsmooth low-rank minimization. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, 4130–4137. IEEE.
  • [Melville, Mooney, and Nagarajan2002] Melville, P.; Mooney, R. J.; and Nagarajan, R. 2002. Content-boosted collaborative filtering for improved recommendations. In AAAI/IAAI, 187–192.
  • [Ning and Karypis2011] Ning, X., and Karypis, G. 2011. SLIM: Sparse linear methods for top-n recommender systems. In Data Mining (ICDM), 2011 IEEE 11th International Conference on, 497–506. IEEE.
  • [Pan et al.2008] Pan, R.; Zhou, Y.; Cao, B.; Liu, N. N.; Lukose, R.; Scholz, M.; and Yang, Q. 2008. One-class collaborative filtering. In Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on, 502–511. IEEE.
  • [Recht, Xu, and Hassibi2008] Recht, B.; Xu, W.; and Hassibi, B. 2008. Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization. In Decision and Control, 2008. CDC 2008. 47th IEEE Conference on, 3065–3070. IEEE.
  • [Rendle et al.2009] Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 452–461. AUAI Press.
  • [Ricci, Rokach, and Shapira2011] Ricci, F.; Rokach, L.; and Shapira, B. 2011. Introduction to recommender systems handbook. Springer.
  • [Shi and Yu2011] Shi, X., and Yu, P. S. 2011. Limitations of matrix completion via trace norm minimization. ACM SIGKDD Explorations Newsletter 12(2):16–20.
  • [Srebro and Salakhutdinov2010] Srebro, N., and Salakhutdinov, R. R. 2010. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems, 2056–2064.
  • [Yang and Yuan2013] Yang, J., and Yuan, X. 2013. Linearized augmented lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation 82(281):301–329.
  • [Yu et al.2009] Yu, K.; Zhu, S.; Lafferty, J.; and Gong, Y. 2009. Fast nonparametric matrix factorization for large-scale collaborative filtering. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, 211–218. ACM.
  • [Zhong et al.2015] Zhong, X.; Xu, L.; Li, Y.; Liu, Z.; and Chen, E. 2015. A nonconvex relaxation approach for rank minimization problems. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
