1. Introduction

Over the past decades, recommender systems have attracted much attention and gained significant accuracy improvements. Recommender systems are widely known as one of the key technologies for various online services, such as E-commerce and social media sites, to predict users' preferences based on their interaction histories. Two kinds of data are widely used to represent interaction histories: explicit feedback data and implicit feedback data. Explicit feedback data consists of numerical "multi-class" scores that users assign to the items they interact with, while implicit feedback data records signals such as "purchase" or "browse" on E-commerce sites, "like" on social media sites, and "click" in advertising; it is binary data indicating whether a user has interacted with a certain item. In real-world applications, implicit feedback data is easier to collect and more generally applicable. However, it is more challenging to analyze, since there are only positive samples and unvoted samples, and we cannot distinguish the negative samples from the unlabeled positive samples among the unvoted ones.
The Bayesian Personalized Ranking (BPR) method (Rendle et al., 2009; He and McAuley, 2016b; Yu et al., 2018) is a widely used optimization criterion on implicit feedback data due to its outstanding performance. However, it has a critical issue: all unvoted samples are equally regarded as negative in BPR. In fact, a user may leave an item unvoted not because he/she dislikes it, but simply because he/she has not seen it yet. Such positive samples are mislabeled as negative ones in existing sampling strategies, which leads to a serious noisy-label problem. To improve sampling quality, we explore noisy-label robust learning in recommendation tasks. We still label each unobserved user-item interaction as a negative sample, yet estimate the probability of it being mislabeled. The likelihood of the observations is factorized into the likelihood of the true labels and the likelihood of the label noise, and we learn both jointly by maximizing the likelihood of the observations. In this way we can recover the true labels from the contaminated observed labels, and make predictions with the recovered true labels.
However, there are several obstacles: (1) The most extensively used optimization method, BPR, cannot be extended to a noisy-label robust version. To deal with this issue, we introduce Bayesian Point-wise Optimization (BPO) as our basic optimization method. (2) The space cost of storing label flipping probabilities for all unobserved user-item interactions is unacceptable. To deal with this issue, we argue that the probability matrix is low-rank and can thus be maintained by Matrix Factorization (MF). (3) The log-likelihood is extensively used as the maximum likelihood estimator, yet the log surrogate function does not work well in this situation. We design a new surrogate function and derivation strategy to address this issue.
As mentioned above, BPR is incompatible with noisy-label robust learning due to its pairwise form, so we introduce a point-wise optimization method, BPO, which is the maximum posterior estimator derived by Bayesian analysis. Similar to BPR, BPO estimates the model parameters by maximizing the likelihood of the observations, hence the model tends to predict users' preferences that conform to the observed labels. We then take label noise into account and propose the Noisy-label robust BPO (NBPO). NBPO also maximizes the likelihood of the observed sample labels, while connecting the true label likelihood with the observed label likelihood through the label noise likelihood. In NBPO, models are supervised by the predicted true labels, thus preference predictions tend to conform to the estimated true labels.
Finally, we validate the effectiveness of our proposed method by comparing it with several baselines on the Amazon.Electronics and Movielens datasets. Extensive experiments show that we improve the performance significantly by exploring noisy-label robust learning.
Specifically, our main contributions are summarized as follows:
We propose the noisy-label robust sampler, NBPO, by taking the noisy label probabilities into consideration. We represent and train the probabilities of noisy labels with a form of MF to reduce the space and time cost.
To learn the model, we propose a novel optimization method with surrogate likelihood function and surrogate gradient. The parameters are updated by stochastic gradient descent (SGD).
We devise comprehensive experiments on two real-world datasets to demonstrate the effectiveness of our proposed methods. The code for this paper is available at https://github.com/Wenhui-Yu/NBPO.
2. Related Work
After the collaborative filtering (CF) model was proposed (Koren, 2009; Sarwar et al., 2001; Goldberg, 1992), recommender systems developed rapidly. Modern recommender systems uncover the underlying latent factors that encode the preferences of users (Koren, 2009; Rendle et al., 2009; Rendle and Schmidt-Thieme, 2010). Among various CF methods, MF (Koren, 2009; Bennett and Lanning, 2007; Koren and Bell, 2011), a special type of latent factor model, is a basic yet highly effective recommender model. In recent years, many variants have been proposed to strengthen the representation ability of recommendation models. Bhargava et al. (2015); Chen et al. (2016); Acar et al. (2011) proposed tensor factorization models for context-aware recommendation. He and McAuley (2016b); McAuley et al. (2015a); Yu et al. (2018); McAuley and Leskovec (2013); Chen et al. (2017a) leveraged various kinds of side information by incorporating features into basic models. He et al. (2016); Zhang et al. (2016) proposed fast algorithms to provide online recommendation. He et al. (2017); He and Chua (2017); Guo et al. (2017) explored neural networks to learn user and item embeddings and how to combine them. He et al. (2018b); Chen et al. (2017b); He et al. (2018a); Yu and Qin (2020) leveraged several advanced networks, such as attention neural networks, convolutional neural networks, and graph convolutional neural networks, to enhance the representation ability. Though widely explored, recommendation on implicit feedback remains a challenging task due to poor negative sampling.
2.1. Sampling on Implicit Feedback Data
There is a large quantity of literature focusing on optimization on implicit feedback data (Rendle et al., 2009; Gantner et al., 2011; Zhao et al., 2014; Yuan et al., 2018; Qiu et al., 2016; Pan and Chen, 2013; Qiu et al., 2014). Rendle et al. (2009) proposed BPR to optimize recommendation models on implicit feedback data, in which item recommendation is treated as a ranking task rather than a regression task. BPR treats all unvoted samples equally as negative samples and aims to maximize the likelihood of pairwise preferences over positive and negative samples. As argued above, many unvoted samples are in fact unlabeled positive samples, so BPR suffers from the noisy-label problem, i.e., some samples are indeed positive yet labeled as "0". To enhance performance on implicit feedback data, many research efforts focus on high-quality negative sampling strategies.
Yuan et al. (2018) argued that selecting negative samples randomly in stochastic gradient descent (SGD) introduces noise into training and deteriorates model performance, and therefore sampled all unvoted entities as negative samples. In this way, only the noise caused by random sampling is handled, while the noisy-label problem in implicit feedback data still exists, since unvoted items are still uniformly sampled as negatives. Gantner et al. (2011) proposed a weighted BPR (WBPR) method that weights each negative item with a confidence coefficient. They argued that popular items left unvoted by a certain user are more likely to be real negative samples, since they are unlikely to have been neglected. The authors then devised a weight coefficient based on item popularity and sampled negative items non-uniformly. However, this weighting strategy is empirical, and the impact of users is ignored.
To improve sampling quality, some literature designed samplers with collaborative information. BPR assumes that all users are independent; Pan and Chen (2013) relaxed this constraint and proposed group preference-based BPR (GBPR), which models the preferences of user groups. Qiu et al. (2014) constructed a preference chain of item groups for each user. Liu et al. (2018) uncovered potential (unvoted) positive samples with collaborative information and enhanced BPR with the preference relationships among positive samples, potential positive samples, and negative ones. They also calculated a weight for each potential positive sample based on its similarity to the positive samples, and finally proposed a collaborative pairwise learning to rank (CPLR) model. CPLR alleviates the noisy-label problem by uncovering and weighting potential positive samples. However, it is also empirical, and the memory-based part of the model does not work well jointly with the model-based part in some cases. Yu and Qin (2019) mined collaborative information with a high-level approach to uncover potential positive samples, yet the eigendecomposition of two large Laplacian matrices is computationally expensive.
There are also some efforts improving sampling quality with additional information. Ding et al. (2018) used browsing history to enrich positive samples. Cao et al. (2007); Liu et al. (2014) constructed list-wise preference relationships among items with explicit feedback data. Zhang et al. (2013); Rendle and Freudenthaler (2014) proposed dynamic negative sampling strategies that maximize the utility of a gradient step by choosing "difficult" negative samples. Hwang et al. (2016) utilized both implicit and explicit feedback data to improve the quality of negative sampling. Yu et al. (2020) replaced negative sampling with transfer learning.
In this paper, we explore the noisy-label robust regression technique to devise an adaptive sampler for negative samples. We weight samples with the probabilities of noisy labels and learn the probabilities jointly with the model. Compared with existing work, our weight mechanism is data-driven and more comprehensive: we weight all user-item tuples and the weight strategy is based on the Bayes formula rather than simple multiplication.
2.2. Noisy-label Robust Learning
Label noise is a critical issue in supervised machine learning tasks: it misleads models and deteriorates performance. There is an increasing amount of research literature aiming to address the problem of learning from samples with noisy class label assignments (Bootkrajang and Kaban, 2012; Elkan and Noto, 2008; Plessis et al., 2014; Yi et al., 2017).
Bootkrajang and Kaban (2012) proposed a noisy-label robust regression, which learns the classifier jointly with estimating the label flipping probabilities. The likelihoods of the true labels and of the observed labels are connected by the flipping probabilities, and the authors estimate all parameters by maximizing the likelihood of the observations. Positive and unlabeled (PU) data can be regarded as a kind of noisy-label data, in which we mainly consider the probability that positive samples are mislabeled as negative ones. Ghasemi et al. (2012) proposed an active learning algorithm for PU data, which separately estimates the probability densities of positive and unlabeled points and then computes the expected value of informativeness, eliminating a hyperparameter and yielding a better measure of informativeness. Plessis et al. (2014) proposed a cost-sensitive classifier, which utilizes a non-convex loss to prevent a superfluous penalty term in the objective function. Hsieh et al. (2015) proposed a matrix completion method for PU data, which can be used in recommendation tasks. However, the impact of different users and items on the label flipping probabilities is neglected: different samples share the same probabilities.
In the recommendation field, the noisy-label problem is more serious than in other supervised learning fields, because in real-world scenarios users vote on only a small proportion of the items they are interested in, so many positive items are mistakenly labeled as negative samples. In this paper, we explore noisy-label robust learning to devise an adaptive sampler for implicit feedback data, with sample-specific label flipping probabilities. We also propose an effective learning strategy for our method.
3. Preliminaries

In this section we introduce the preliminaries of noisy-label robust learning and BPO. Bold uppercase letters refer to matrices. For example, $\mathbf{X}$ is a matrix, $\mathbf{X}_{i*}$ is the $i$-th row of $\mathbf{X}$, $\mathbf{X}_{*j}$ is the $j$-th column of $\mathbf{X}$, and $X_{ij}$ is the value at the $i$-th row and the $j$-th column of $\mathbf{X}$. Bold lowercase letters refer to vectors. For example, $\mathbf{x}$ is a vector, $x_i$ is the $i$-th value of the vector, and $\mathbf{x}_i$ is the $i$-th vector. Italic letters refer to numbers. In this paper, we use $y$/$\mathbf{Y}$ to indicate the observed labels, $r$/$\mathbf{R}$ to indicate the true labels, and $\hat{y}$/$\hat{\mathbf{Y}}$ to indicate the predictions returned by the model.
3.1. Noisy-label Robust Learning
Consider a training dataset containing $N$ samples, $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, where $\mathbf{x}_i$ is a $d$-dimensional feature vector and $y_i \in \{0, 1\}$ is a binary observed label (containing noise). In conventional logistic regression, the log likelihood is defined as:

$$\mathcal{L}(\mathbf{w}) = \sum_{i=1}^{N} \Bigl[ y_i \ln \sigma(\mathbf{w}^{\top}\mathbf{x}_i) + (1 - y_i) \ln \bigl(1 - \sigma(\mathbf{w}^{\top}\mathbf{x}_i)\bigr) \Bigr], \qquad (1)$$

where $\mathbf{w}$ is the parameter vector of the classifier. Presuming all observed labels are correct, we use them to supervise the model training so that the model predicts the observed labels. The probability distribution of the observed label $y_i$ can be represented as $P(y_i = 1 \mid \mathbf{x}_i) = \sigma(\mathbf{w}^{\top}\mathbf{x}_i)$ and $P(y_i = 0 \mid \mathbf{x}_i) = 1 - \sigma(\mathbf{w}^{\top}\mathbf{x}_i)$. Here we leverage the sigmoid function $\sigma(x) = 1/(1 + e^{-x})$ as the surrogate function.
However, if label noise is present in the data, noisy-label robust learning should be leveraged to alleviate its impact. We use a variable $r_i$ to represent the true label of the $i$-th sample, and the probability distribution of the observed label is:

$$P(y_i = k \mid \mathbf{x}_i) = \sum_{j \in \{0,1\}} \gamma_{jk} \, P(r_i = j \mid \mathbf{x}_i),$$

where $\gamma_{jk} = P(y_i = k \mid r_i = j)$ is the probability that a label has flipped from the true label $j$ into the observed label $k$. Instead of Equation (1), we define the likelihood function with label noise as:

$$\mathcal{L}(\mathbf{w}, \boldsymbol{\gamma}) = \sum_{i=1}^{N} \Bigl[ y_i \ln P(y_i = 1 \mid \mathbf{x}_i) + (1 - y_i) \ln P(y_i = 0 \mid \mathbf{x}_i) \Bigr], \qquad (2)$$

where we use the true labels to supervise model training, so the probability distribution of a true label is represented as $P(r_i = 1 \mid \mathbf{x}_i) = \sigma(\mathbf{w}^{\top}\mathbf{x}_i)$ and $P(r_i = 0 \mid \mathbf{x}_i) = 1 - \sigma(\mathbf{w}^{\top}\mathbf{x}_i)$. We learn the parameter $\mathbf{w}$ and the noisy-label probabilities $\gamma_{jk}$ by maximizing $\mathcal{L}$ in Equation (2), and classify a new data point depending on the probability distribution of the true label $P(r \mid \mathbf{x})$.
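The marginalization over the unknown true label can be sketched in a few lines of Python (the function and variable names here are illustrative, not from the paper's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def observed_label_likelihood(score, y, gamma):
    """Likelihood of an observed label y given the classifier score.

    gamma[j][k] = P(observed = k | true = j), the label flipping
    probabilities. P(true = 1 | x) is modeled by sigmoid(score).
    """
    p_true1 = sigmoid(score)          # P(r = 1 | x)
    p_true0 = 1.0 - p_true1           # P(r = 0 | x)
    # Marginalize over the unknown true label r.
    p_obs1 = gamma[1][1] * p_true1 + gamma[0][1] * p_true0
    p_obs0 = gamma[1][0] * p_true1 + gamma[0][0] * p_true0
    return p_obs1 if y == 1 else p_obs0

# With no label noise (gamma is the identity), the observed-label
# likelihood reduces to the conventional logistic likelihood.
identity = [[1.0, 0.0], [0.0, 1.0]]
```

With the identity flipping matrix, `observed_label_likelihood` returns exactly `sigmoid(score)` for a positive label, which recovers Equation (1) as a special case.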
Comparing Equations (1) and (2), we can see that, aiming to minimize the empirical risk on the training set, both conventional regression and noisy-label robust regression maximize the likelihood of the observed labels. In conventional regression, the observed labels are treated as the true labels, while in noisy-label robust regression, the observed labels and the true labels are connected by the noisy-label probabilities based on Bayes' theorem. In this paper, we consider that unlabeled positive samples in implicit feedback data may be mislabeled as negative samples, and thus explore noisy-label robust regression in recommendation tasks to improve the sampling quality. In existing noisy-label robust learning methods (Bootkrajang and Kaban, 2012; Hsieh et al., 2015), for certain $j$ and $k$, all samples share the same label flipping probability $\gamma_{jk}$, while in this paper we argue that the probability of label noise varies from user to user and item to item; thus we maintain a specific label flipping probability for each sample.
3.2. Bayesian Point-wise Optimization
Since the widely-used BPR method is incompatible with noisy-label robust learning, we first introduce BPO instead of BPR as the basic optimization method. Here, we use a binary variable matrix $\mathbf{Y} \in \{0, 1\}^{M \times N}$ to represent the observed labels, where $M$ and $N$ are the numbers of users and items respectively: $Y_{ui} = 1$ if the user $u$ has voted the item $i$, and $Y_{ui} = 0$ otherwise. We aim to predict the missing values of $\mathbf{Y}$ (samples labeled with "0") by reconstructing it in a low-rank form. We use $\mathcal{Y}$ to represent the set of observed interactions in $\mathbf{Y}$.
The Bayes formula is used in the point-wise regression to construct the maximum posterior estimator: $p(\Theta \mid \mathbf{Y}) \propto p(\mathbf{Y} \mid \Theta)\, p(\Theta)$, where $\Theta$ represents the parameters of an arbitrary model (e.g., MF). We assume all samples are independent of each other, hence the likelihood function can be rewritten as a product of single probability distributions:

$$p(\mathbf{Y} \mid \Theta) = \prod_{u,i} \sigma(\hat{y}_{ui})^{Y_{ui}} \bigl(1 - \sigma(\hat{y}_{ui})\bigr)^{1 - Y_{ui}},$$

where $\hat{y}_{ui}$ is the prediction returned by the model. Equation (3.2) gives the formula of $p(\mathbf{Y} \mid \Theta)$, and now we also give the formula of $p(\Theta)$. Assume that the random variables in $\Theta$ follow a normal distribution: $\theta \sim \mathcal{N}(\mathbf{0}, \Sigma_{\Theta})$, where $\theta$ is a column vector concatenated from all columns of $\Theta$, and $\Sigma_{\Theta}$ is the variance-covariance matrix of $\theta$. The probability density of $\Theta$ is:

$$p(\Theta) = \frac{1}{\sqrt{(2\pi)^{|\Theta|} \, |\Sigma_{\Theta}|}} \exp\Bigl(-\frac{1}{2}\, \theta^{\top} \Sigma_{\Theta}^{-1} \theta\Bigr),$$

where $|\Theta|$ is the element number of $\Theta$ (also of $\theta$). To reduce the number of unknown hyperparameters, we set $\Sigma_{\Theta} = \lambda_{\Theta}^{-1} \mathbf{I}$, and we then have $\ln p(\Theta) = -\frac{\lambda_{\Theta}}{2} \|\Theta\|_F^2 + C$. $|\cdot|$ and $\|\cdot\|_F$ in the aforementioned formulas indicate the determinant and the Frobenius norm of the matrix, respectively. The objective function is the log likelihood of the parameters given the observations:

$$\mathcal{L}(\Theta) = \sum_{u,i} \Bigl[ Y_{ui} \ln \sigma(\hat{y}_{ui}) + (1 - Y_{ui}) \ln \bigl(1 - \sigma(\hat{y}_{ui})\bigr) \Bigr] - \frac{\lambda_{\Theta}}{2} \|\Theta\|_F^2 + C, \qquad (4)$$

where $C$ is a constant irrelevant to $\Theta$. We learn the parameters $\Theta$ by maximizing the likelihood function in Equation (4).
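As a concrete illustration, the regularized BPO objective for an MF model can be sketched as follows (a minimal sketch with illustrative names, assuming the sigmoid-of-dot-product prediction described above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpo_log_likelihood(U, V, Y, lam):
    """Regularized BPO objective for an MF model.

    U, V : user/item latent factor matrices.
    Y    : binary observed label matrix.
    lam  : regularization coefficient from the Gaussian prior.
    """
    Y_hat = sigmoid(U @ V.T)                             # predicted P(Y = 1)
    point_ll = Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat)
    reg = -0.5 * lam * (np.sum(U ** 2) + np.sum(V ** 2))  # log prior, up to a constant
    return np.sum(point_ll) + reg
```

Maximizing this quantity with (stochastic) gradient ascent corresponds to the BPO learning described above; in practice one would sum over sampled entries rather than the full matrix.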
4. Noisy-label Robust Sampler
In this section, we propose our noisy-label robust recommendation optimization, NBPO, by incorporating the noisy-label robust learning into BPO. We first give the formulation and then the learning strategy.
4.1. NBPO Formulation
As represented in Equation (2), conventional noisy-label robust regression maintains a set of noisy-label probabilities $\gamma_{jk}$, containing all transition probabilities from true label $j$ to observed label $k$. Note that since the probabilities are defined for all samples, we use $y$/$r$ to indicate an arbitrary element of the matrix $\mathbf{Y}$/$\mathbf{R}$. Since we consider the implicit feedback data in the recommendation context to be a kind of PU data, we set $\gamma_{00} = 1$ and $\gamma_{01} = 0$ directly, and only consider $\gamma_{10}$ and $\gamma_{11}$. As $\gamma_{10} + \gamma_{11} = 1$, we only need to learn one noisy-label probability, $\gamma_{10}$, in NBPO.
In the conventional noisy-label robust regression introduced in Subsection 3.1, the noisy-label probabilities are not sample-specific. However, in recommendation scenarios, the probability of an item being neglected is user- and item-sensitive, i.e., it varies with different items and different users. For example, popular items are less likely to be missed (Gantner et al., 2011), and users who spend more time browsing items have a lower probability of missing what they like. It also depends on user habits and item properties. Considering the aforementioned reasons, we learn different noisy-label probabilities for different samples, i.e., we maintain a probability matrix $\boldsymbol{\Gamma}$ in NBPO, where $\Gamma_{ui}$ denotes the noisy-label probability of the sample $(u, i)$: $\Gamma_{ui} = P(Y_{ui} = 0 \mid R_{ui} = 1)$. Now we rewrite the likelihood function in Equation (3.2) and the log likelihood function in Equation (4).
The likelihood function is:

$$p(\mathbf{Y} \mid \Theta, \boldsymbol{\Gamma}) = \prod_{u,i} \bigl[(1 - \Gamma_{ui})\, \sigma(\hat{y}_{ui})\bigr]^{Y_{ui}} \bigl[\Gamma_{ui}\, \sigma(\hat{y}_{ui}) + 1 - \sigma(\hat{y}_{ui})\bigr]^{1 - Y_{ui}}. \qquad (5)$$
Comparing Equations (3.2) and (5), we conclude that when learning the model, the preference predictions are supervised by the observed labels in BPO, yet by the predicted true labels in NBPO; the observed labels and the true labels are linked by the noisy-label probabilities $\boldsymbol{\Gamma}$. The log likelihood function is:

$$\mathcal{L}(\Theta, \boldsymbol{\Gamma}) = \sum_{u,i} \Bigl[ Y_{ui} \ln \bigl((1 - \Gamma_{ui})\, \sigma(\hat{y}_{ui})\bigr) + (1 - Y_{ui}) \ln \bigl(\Gamma_{ui}\, \sigma(\hat{y}_{ui}) + 1 - \sigma(\hat{y}_{ui})\bigr) \Bigr] - \frac{\lambda_{\Theta}}{2} \|\Theta\|_F^2 + C. \qquad (6)$$
The Expectation-Maximization (EM) algorithm is widely used to solve noisy-label robust learning tasks. Nevertheless, EM is inefficient for large-scale latent variables and is not extensible to deep models, so we aim to design an SGD-based method. However, experiments show that optimizing Equation (6) with SGD directly is rather suboptimal (shown in Figure 9). Inspired by EM, we construct a lower bound of Equation (6):
In Equation (7), we use the Jensen inequality¹, applied to the logarithm of the mixture terms in Equation (6), to obtain the lower bound. (¹ If $\lambda_1, \dots, \lambda_n$ are positive numbers which sum to 1 and $f$ is a real continuous function that is concave, we have the Jensen inequality: $f\bigl(\sum_i \lambda_i x_i\bigr) \ge \sum_i \lambda_i f(x_i)$.)
Obviously, NBPO suffers from an oversized-parameter issue: the matrix $\boldsymbol{\Gamma}$ is too large to store and to learn. To address this issue, we need to reduce the number of probability parameters. We argue that since similar users/items have similar habits/properties, the noisy-label probabilities of similar users/items are linearly dependent; thus $\boldsymbol{\Gamma}$ is a low-rank matrix and can be represented by a CF model $\hat{\boldsymbol{\Gamma}}$ with parameters $\Phi$, where $\hat{\boldsymbol{\Gamma}}$ is the reconstruction of $\boldsymbol{\Gamma}$. In NBPO, $\Theta$ are called model parameters and $\Phi$ are called optimization parameters. We use $\hat{\Gamma}_{ui}$ to replace $\Gamma_{ui}$ in Equation (7), and maximize Equation (7) with respect to both $\Theta$ and $\Phi$ by mini-batch stochastic gradient descent (MSGD).
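The low-rank trick can be sketched as follows; `P` and `Q` are hypothetical factor matrices (not the paper's released code), and the sigmoid squashing is one plausible way to keep the reconstructed probabilities inside (0, 1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Instead of a dense M x N probability matrix, keep two small factor
# matrices and reconstruct entries on demand.
M, N, K = 1000, 2000, 16
rng = np.random.default_rng(0)
P = 0.01 * rng.standard_normal((M, K))   # optimization parameters (user side)
Q = 0.01 * rng.standard_normal((N, K))   # optimization parameters (item side)

def noise_prob(u, i):
    """Reconstructed label flipping probability for sample (u, i)."""
    return sigmoid(P[u] @ Q[i])

# Storage drops from M * N floats to (M + N) * K floats.
```

For the toy sizes above, this replaces 2,000,000 stored probabilities with 48,000 factor entries, and both `P` and `Q` can be updated by the same MSGD steps as the model parameters.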
4.2. NBPO Learning
To learn the model with NBPO, we still face a critical issue: when optimized by the maximum likelihood estimator, the log surrogate function does not fit the situation of noisy-label robust recommendation. To be specific, when learning $\Theta$ and $\Phi$ by maximizing the lower bound represented in Equation (7), we get:
It is obvious that $\Theta$ and $\Phi$ are learnt separately. The log function is widely used to surrogate the likelihood function due to its special properties; for example, it converts multiplication into addition. However, this property degrades the performance of noisy-label robust learning: we want the noisy-label probabilities to control the magnitude of the gradient when updating $\Theta$, while under the log function, learning $\Theta$ and $\Phi$ become two independent procedures. To deal with this issue, we remove the log function from the likelihood in Equation (7) to get a new surrogate likelihood function:
However, a new issue appears: without the log function, the sigmoid surrogate function faces a vanishing gradient problem. Different from the vanishing gradient problem in deep learning, here we use "vanishing gradient" to indicate the phenomenon that the gradient becomes 0 at the two ends of the domain. For example, if we want to maximize $\sigma(x)$ with SGD, we update $x$ by $x \leftarrow x + \eta\, \sigma(x)\bigl(1 - \sigma(x)\bigr)$. However, the gradient becomes 0 when $\sigma(x) \to 0$ or $\sigma(x) \to 1$; that is, $\sigma(x)$ cannot be trained to the maximum with SGD when the current $\sigma(x)$ is very small. To deal with this issue, we use a surrogate differential operator $\tilde{\nabla}$ (where $\tilde{\nabla}_x$ denotes the surrogate gradient with respect to $x$) to replace $\nabla$ for all sigmoid function terms in Equation (8), defined so that the gradient does not vanish at the ends of the domain. The surrogate gradient of Equation (8) is finally given by:
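The vanishing-gradient phenomenon is easy to verify numerically. The snippet below contrasts the plain sigmoid gradient with the gradient of $\ln \sigma(x)$, which does not vanish for small $\sigma(x)$; this illustrates the problem and the kind of behavior a surrogate gradient should restore, not the paper's exact surrogate operator:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_sigmoid(x):
    """Ordinary gradient of sigmoid(x): vanishes at both ends of the domain."""
    s = sigmoid(x)
    return s * (1.0 - s)

def grad_log_sigmoid(x):
    """Gradient of log(sigmoid(x)): stays close to 1 for very negative x,
    so SGD can still move x toward the maximum."""
    return 1.0 - sigmoid(x)

# At x = -10, grad_sigmoid is tiny even though sigmoid(-10) is far from
# its maximum, while grad_log_sigmoid is close to 1 at the same point.
```

This is exactly why removing the log (as done for the surrogate likelihood) requires replacing the plain gradient with a non-vanishing surrogate.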
We then update $\Theta$ and $\Phi$ with the surrogate gradients by gradient ascent.
4.3. Learning MF with NBPO
To show how NBPO optimization works, we now optimize a specific model, MF, with it (denoted as MF-NBPO). MF is a basic yet very effective model. It predicts users' preferences by reconstructing the purchase record matrix in a low-rank form: $\hat{\mathbf{Y}} = \mathbf{U}\mathbf{V}^{\top}$, where $\hat{\mathbf{Y}}$ is the reconstruction of $\mathbf{Y}$; $\mathbf{U}$ and $\mathbf{V}$ are the latent factor matrices of users and items; and $\Theta = \{\mathbf{U}, \mathbf{V}\}$. $\mathbf{U}_{u*}$ indicates the latent factors encoding the preference of user $u$, and $\mathbf{V}_{i*}$ indicates the latent factors encoding the properties of item $i$. $\boldsymbol{\Gamma}$ is also represented by MF: $\hat{\boldsymbol{\Gamma}}$ is the reconstruction of $\boldsymbol{\Gamma}$, built from the latent factor matrices $\mathbf{P}$ and $\mathbf{Q}$, with $\Phi = \{\mathbf{P}, \mathbf{Q}\}$. Figure 1 shows the structure of our MF-NBPO model: $\Theta$ predicts the true labels and $\Phi$ predicts the label flipping probabilities. We then predict the observed labels with both, and finally maximize the likelihood function shown in Equation (8).
The first-order partial derivatives in Equation (9) are:
Inspired by the pairwise optimization strategy (Rendle et al., 2009; He et al., 2017), we adopt a similar negative sampling strategy in our point-wise optimization (BPO and NBPO): we do not select all unvoted samples as negatives when updating the model; we just select $\rho$ unvoted samples randomly for each positive sample, where $\rho$ is the sampling rate. With the gradients given in Equations (9) and (10), we can learn MF-NBPO with MSGD.
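The sampling scheme above can be sketched as follows (a minimal sketch with illustrative names; `rho` is the sampling rate):

```python
import random

def sample_training_pairs(positive_pairs, num_items, rho, seed=0):
    """For each positive (user, item) pair, draw rho random unvoted items
    as negatives, instead of using all unvoted items."""
    rng = random.Random(seed)
    voted = {}
    for u, i in positive_pairs:
        voted.setdefault(u, set()).add(i)
    batch = []
    for u, i in positive_pairs:
        batch.append((u, i, 1))                 # positive sample
        drawn = 0
        while drawn < rho:
            j = rng.randrange(num_items)
            if j not in voted[u]:               # keep only unvoted items
                batch.append((u, j, 0))         # sampled negative
                drawn += 1
    return batch
```

Each epoch thus touches every positive sample once and `rho` fresh negatives per positive, which keeps the per-iteration cost linear in the number of observed interactions.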
To make a prediction, we use the predicted true labels to score all items for a certain user. As aforementioned, the prediction of the true label likelihood is $P(R_{ui} = 1) = \sigma(\hat{y}_{ui})$. For a certain user $u$, we rank all items by this likelihood and return the top-$k$ items. Considering that $\sigma(\cdot)$ increases monotonically, we rank items by descending $\hat{y}_{ui}$. As we can see, the parameters $\Phi$ only assist in learning $\Theta$ and do not contribute to prediction directly.
An important merit of our NBPO is that it is updated by a gradient descent method, hence it is scalable to deep models, which are widely used in real-world applications. In NBPO, we can choose more powerful models such as Neural Matrix Factorization (NMF) (He et al., 2017) and Attentive Collaborative Filtering (ACF) (Chen et al., 2017b) to construct $\hat{\mathbf{Y}}$ and $\hat{\boldsymbol{\Gamma}}$. In this paper, we choose the simplest model, MF, to emphasize the effectiveness of our noisy-label robust optimization method. Learning deep models with NBPO is left for future work.
5. Experiments

In this section, we conduct experiments to demonstrate the effectiveness of our proposed model. We report the performance of several state-of-the-art models and our model on two real-world public datasets to illustrate the precision enhancement. We focus on answering the following research questions:
RQ1: How is the performance of our noisy-label robust recommendation model with point-wise optimization (NBPO)?
RQ2: How is the performance enhancement by taking noisy labels into consideration?
RQ3: How is the effectiveness of our surrogate likelihood function and surrogate gradient?
5.1. Experimental Setup
In this subsection, we introduce the datasets, baselines, evaluation protocols, and parameter tuning strategies.
5.1.1. Datasets

In this paper, we adopt two real-world datasets, Amazon (http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz) and Movielens (http://grouplens.org/datasets/movielens/1m/), to learn all models.
Amazon. This Amazon dataset (He and McAuley, 2016a; McAuley et al., 2015b) consists of user reviews collected from the E-commerce website Amazon.com. In this paper we adopt the Electronics category, which contains the purchase records of electronic products on Amazon.com. We choose the 5-core version (removing users and items with fewer than 5 purchase records).
MovieLens. This Movielens dataset (Harper and Konstan, 2015) was collected through the movie website movielens.umn.edu. This movie rating dataset has been widely used to evaluate CF models. The 1M version is adopted in our experiments.
Both datasets contain explicit feedback (rating data); to construct implicit feedback, we set an interaction to "1" if rated and "0" otherwise. Table 1 shows some statistics of the datasets. As shown in the table, though filtered with 5-core, the sparsity of the Amazon dataset is still extremely high, so we filter it further with 14-core (we select 14 to balance the sparsity and the size). We split each dataset into three subsets randomly: training set (80%), validation set (10%), and test set (10%). We train models on the training sets, determine all hyperparameters on the validation sets, and finally report the performance on the test sets. Cold items and users (items and users with no records in the training set) in the validation and test sets are removed.
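The k-core filtering and binarization steps can be sketched as follows (a minimal reference implementation, not the authors' preprocessing code):

```python
def k_core_filter(interactions, k):
    """Iteratively drop users and items with fewer than k interactions.

    interactions: a set of (user, item) pairs from the rating data,
    where every rated pair is treated as an implicit '1'.
    """
    records = set(interactions)
    while True:
        user_cnt, item_cnt = {}, {}
        for u, i in records:
            user_cnt[u] = user_cnt.get(u, 0) + 1
            item_cnt[i] = item_cnt.get(i, 0) + 1
        kept = {(u, i) for u, i in records
                if user_cnt[u] >= k and item_cnt[i] >= k}
        if kept == records:   # fixed point: every user and item has >= k records
            return kept
        records = kept
```

The loop is needed because removing a sparse user can push an item below the threshold, and vice versa; filtering repeats until a fixed point is reached.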
5.1.2. Baselines

We adopt the following methods as baselines for performance comparison to demonstrate the feasibility and effectiveness of our model.
ItemPop: This method ranks items based on their popularity. It is a non-personalized method to benchmark the recommendation performances.
ItemKNN: This is the standard item-based CF method (Sarwar et al., 2001). We use this memory-based CF model to benchmark the performances of model-based CF models.
BPR: This Bayesian Personalized Ranking method is the most widely used ranking-based method for implicit feedback (Rendle et al., 2009). It regards unvoted samples as negative samples uniformly and maximizes the likelihood of users’ preference over a pair of positive and negative samples.
WBPR: This Weighted Bayesian Personalized Ranking method (Gantner et al., 2011) is an extension of BPR. WBPR improves the quality of negative sampling depending on item popularity. Considering that popular items are unlikely to be neglected, WBPR gives larger confidence weights to negative samples with higher popularity.
ShiftMC: Plessis et al. (2014) proposed a density function to deal with the noisy-label problem. Following (Plessis et al., 2014), Hsieh et al. (2015) proposed the Shifted Matrix Completion method by exploring the density function in a CF model. ShiftMC is the state-of-the-art recommendation model for PU data.
Strictly speaking, the methods proposed in our paper (BPO, NBPO) and several baselines (BPR, WBPR, ShiftMC) are optimization methods that can be used to optimize any recommendation model, such as MF, Factorization Machine (FM), Neural Collaborative Filtering (NCF), etc. In this paper, we validate the effectiveness of all optimization methods by training an MF model with them; the resulting models should thus be denoted "MF-BPR", "MF-WBPR", "MF-ShiftMC", "MF-BPO", and "MF-NBPO". In the rest of this paper, we omit "MF-" for conciseness.
5.1.3. Evaluation Protocols
To evaluate the performance of our proposed model and the baselines in the implicit feedback context, we rank all items for each user in the validation/test set and recommend the top-$k$ items to the user. We then adopt two metrics, $F_1$-score and normalized discounted cumulative gain (NDCG), to evaluate the recommendation quality.
$F_1$-score, defined as the harmonic mean of precision and recall, is extensively used to test the accuracy of a binary classifier. NDCG is a position-sensitive metric widely used to measure ranking quality. We recommend the top-$k$ items and calculate the metrics for each user, and finally use the average metrics over all users to measure the performance of the models.
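For reference, the two metrics can be computed per user as follows (a minimal sketch; `recommended` is the ranked top-$k$ list and `relevant` is the user's held-out item set, both illustrative names):

```python
import math

def f1_at_k(recommended, relevant, k):
    """F1 = harmonic mean of precision@k and recall@k."""
    top = recommended[:k]
    hits = len(set(top) & relevant)
    if hits == 0:
        return 0.0
    precision = hits / k
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

def ndcg_at_k(recommended, relevant, k):
    """Position-sensitive gain, normalized by the ideal ranking (binary relevance)."""
    dcg = sum(1.0 / math.log2(pos + 2)
              for pos, item in enumerate(recommended[:k])
              if item in relevant)
    ideal = sum(1.0 / math.log2(pos + 2)
                for pos in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0
```

Averaging these per-user values over all users gives the reported metrics.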
5.1.4. Parameter Setting
In this subsection, we introduce the detailed parameter tuning strategy. The maximum iteration number is set to 200. In each iteration, we enumerate all positive samples and randomly select negative samples for each positive one to train the model, and then test it. We tune all models according to the performance of recommending top-2 items on the validation set. For a fair comparison, all models in our experiments are tuned with the same strategy: the learning rate $\eta$ and regularization coefficient $\lambda$ are determined by grid search, first over a coarse-grained range and then over a fine-grained range centered on the result of the coarse tuning. The additional hyperparameters of NBPO are fixed in this stage and then determined by fine-grained grid search. The batch size and the sampling rate $\rho$ are also determined by grid search in predefined ranges. We evaluate different numbers of latent factors for $\Theta$ and $\Phi$ in predefined ranges, respectively. We conduct our experiments by predicting the top-$k$ items for users.
5.2. Performance of NBPO (RQ1)
In Figure 2, we repeat each model 10 times and report the average performance. To focus on our model, the curves of some uncompetitive baselines, such as ItemPop and ItemKNN, are not completely shown, or even not shown at all. Comparing Figures 2(a)(b) and (c)(d), it is obvious that the dataset with higher sparsity yields lower performance. ItemPop is a very weak baseline since it is rough and simple; it cannot be shown (completely) in Figure 2 due to its poor performance. Utilizing collaborative information, all personalized methods outperform ItemPop dramatically. Among these CF models, ItemKNN is a rule-based recommendation strategy and thus empirical, and it only explores one-order connections in the user-item graph. Compared with ItemKNN, the model-based CF models, i.e., BPR, WBPR, ShiftMC, and NBPO, explore high-order collaborative information and thus gain further enhancement in most cases. An interesting observation is that in Figure 2(a), all learning models peak at a small $k$ while ItemKNN peaks at a larger $k$. The reason may be that the learning models are tuned according to $F_1$-score@2 and thus may not achieve the best performance when $k$ is large. This also leads to another phenomenon: the gaps among these learning models shrink as $k$ increases, since they are not well-tuned for top-20 item recommendation. To get the best performance for large $k$, we can retune the models according to $F_1$-score@20.
By finding credible negative samples, WBPR achieves better performance than BPR. However, the weighting mechanism of WBPR is empirical and rough, so the enhancement is very limited: WBPR outperforms BPR by 3.75% in the best case on $F_1$-score and by 4.05% on NDCG. Taking the label noise of the PU data into consideration, ShiftMC performs the best among the baselines: it outperforms BPR by 6.26% on $F_1$-score and 4.66% on NDCG in the best case. However, its label noise probability is not sample-specific, so there is still room for improvement. Benefiting from the sample-specific label noise probabilities and the novel optimization strategy, the improvement of NBPO is significant: it outperforms ShiftMC by 7.28% and 6.79% on $F_1$-score and NDCG, respectively, in the best case.
To make a fair comparison, we tune all models with the same strategy (see Subsection 5.1.4). To report the results of model tuning, we show how F1-score@2 varies with different hyperparameters.
The sensitivity to the learning rate and the regularization coefficient is shown in Figure 3. To save space, we only report the fine-tuning of NBPO. From Figure 3 we observe that NBPO achieves its best performance at , on the Amazon dataset and , on the Movielens dataset. We also report the best learning rate and regularization coefficient for the other models: on Amazon, BPR, WBPR, and ShiftMC all achieve their best performance when and , while on Movielens they achieve their best performance when and , 0.5, 0.5, respectively.
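The tuning procedure described here amounts to a plain grid search over the learning rate and the regularization coefficient. A minimal sketch, with `train_fn` and `eval_fn` standing in for the actual training routine and the validation F1-score@2 evaluation (both are placeholders, not the paper's code):

```python
from itertools import product

def grid_search(train_fn, eval_fn, lrs, lambdas):
    """Exhaustive grid search: train one model per (learning rate,
    regularization coefficient) pair and keep the configuration with
    the best validation score."""
    best_score, best_cfg = float("-inf"), None
    for lr, lam in product(lrs, lambdas):
        model = train_fn(lr=lr, reg=lam)
        score = eval_fn(model)
        if score > best_score:
            best_score, best_cfg = score, (lr, lam)
    return best_cfg, best_score
```

In practice each grid point would itself be averaged over repeated runs, as in Figure 2.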
A model's representation ability depends on the number of latent dimensions (in NBPO, users' preference is modeled with only, and is just used to help learn the model, so the representation ability depends on rather than ). The impact of the number of latent dimensions is shown in Figure 4. Comparing Figures 4(a) and 4(b), performance clearly increases with the number of latent dimensions on the Movielens dataset while remaining stable on the Amazon dataset. This may be because Amazon suffers from a more serious sparsity problem, so models overfit more easily, and a stronger representation ability (more latent dimensions) may worsen the issue. All models achieve their best performance when on the Amazon dataset and on the Movielens dataset.
We also tune the negative sampling rate for all models. As shown in Figure 5, BPR, WBPR, ShiftMC, and NBPO perform best when , 4, 3, 3, respectively, on the Amazon dataset, and when on the Movielens dataset. An interesting observation is that, compared with the baselines, NBPO gains more improvement from tuning the sampling rate. The reason may be that the negative sampling quality is better in our NBPO model, so sampling more negative items can boost performance, while in BPR sampling more negative items aggravates the noisy-label problem, so the improvement is limited.
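The sampling rate tuned here controls how many unvoted items are drawn as negatives per observed positive. A minimal sketch of this sampling step (uniform sampling with rejection; the names are ours, and real implementations typically resample every training epoch):

```python
import random

def sample_negatives(user_pos, n_items, rho, rng=random):
    """For each observed positive item of a user, draw `rho` unvoted
    items as negative samples. `user_pos` is the set of the user's
    interacted item ids; `n_items` is the catalog size."""
    negatives = []
    for _ in range(rho * len(user_pos)):
        j = rng.randrange(n_items)
        while j in user_pos:  # resample until an unvoted item is hit
            j = rng.randrange(n_items)
        negatives.append(j)
    return negatives
```

Under BPR every sampled `j` is treated as a true negative; NBPO instead attaches a learned mislabeling likelihood to each of them.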
5.3. Effectiveness of Noisy-label Robust Learning in Recommendation (RQ2)
In this subsection, we validate the effectiveness of exploring noisy-label robust learning in recommendation tasks. NBPO is compared against the basic optimization method BPO, and the performance is reported in Figure 6. By modifying noisy labels, NBPO gains a considerable accuracy improvement: it outperforms BPO by 8.08% and 16.49% in the best case on the Amazon and Movielens datasets, respectively.
We also show some details of NBPO tuning. The sensitivity of NBPO to the two regularization coefficients and is analyzed in Figure 7. As illustrated, NBPO achieves its best performance when and on the Amazon dataset and and on the Movielens dataset. From Figure 7 we also observe that NBPO is more sensitive to than to , possibly because NBPO models users' preference with the model parameters while the optimization parameters only assist the optimization, so the former contribute to the performance more directly.
The impact of the number of latent dimensions is illustrated in Figure 8. NBPO performs best when and 500 on the Amazon and Movielens datasets, respectively.
A very interesting observation is that when , our model still outperforms BPR. In this situation, Equation (9) degenerates to:
In BPO and BPR, we train to and to (where is the positive sample and is the negative sample). Updated by the gradient in Equation (11), is trained to while is trained to , where denotes the inverse function of . This is a simple way to weaken the negative samples: by setting in NBPO, we can gain a performance enhancement without any additional time or space consumption.
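The weakening effect described above can be illustrated with a point-wise logistic loss and soft targets. This is our own sketch, not the paper's exact formulation: a constant mislabeling probability softens the target of every unvoted sample, so the optimal score of a negative sample becomes the inverse sigmoid of that probability rather than negative infinity.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pointwise_targets(noise_prob):
    """Targets for a point-wise logistic loss. A positive sample is
    trained toward 1; an unvoted sample is trained toward `noise_prob`
    instead of a hard 0, since with that probability its true label is
    positive. (Illustrative: the paper learns sample-specific
    probabilities rather than fixing one constant.)"""
    return 1.0, noise_prob

def logistic_loss(score, target):
    """Cross-entropy between the soft target and sigmoid(score)."""
    p = sigmoid(score)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))
```

The cross-entropy is minimized when `sigmoid(score)` equals the target, so unvoted samples are pushed toward a finite score instead of being driven arbitrarily negative.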
5.4. Effectiveness of the Surrogate Likelihood Function and Surrogate Gradient (RQ3)
In this subsection, we validate the effectiveness of our key optimization techniques, the surrogate likelihood function and the surrogate-gradient-based SGD method, by comparing the following three models:
Figure 9 compares our NBPO variants optimized by different methods. NBPO-o performs poorly since the original objective cannot be optimized well with vanilla MSGD. NBPO-s suffers from the gradient-vanishing problem and performs the worst. To deal with this new issue, we further propose the surrogate gradient to update the model and optimization parameters. Enhanced by the surrogate likelihood function and the surrogate gradient, NBPO-ss performs the best of the three models on all metrics and both datasets.
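The gradient-vanishing issue and the idea of a surrogate update can be illustrated as follows. Note this is an illustrative surrogate (lower-bounding the sigmoid derivative so updates never stall), not the paper's exact construction:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def true_grad(x):
    """d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)); this is nearly
    zero for large |x|, which is the vanishing-gradient issue."""
    s = sigmoid(x)
    return s * (1 - s)

def surrogate_grad(x, floor=0.05):
    """Illustrative surrogate: keep the derivative bounded away from
    zero so SGD updates remain non-negligible in saturated regions."""
    return max(true_grad(x), floor)
```

The surrogate agrees with the true derivative near zero, where optimization behaves well, and only intervenes in the saturated regions where the true gradient vanishes.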
6. Conclusion and Future Work
In this paper, we investigated the effectiveness of noisy-label robust learning in the recommendation domain. We first proposed BPO as our basic optimization method, which maximizes the likelihood of observations, and then devised the NBPO model by introducing noisy-label robust learning into BPO. In NBPO, we constructed the maximum likelihood estimator from the likelihood of users' preference and the likelihood of label flipping, and then estimated the model parameters (user and item embeddings) and the optimization parameters (label-flipping likelihoods) by maximizing the estimator. To deal with the oversize issue of the optimization parameters, we represented the likelihood matrix with MF.
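The low-rank treatment of the oversize issue can be sketched as follows: instead of one free mislabeling likelihood per (user, item) pair, the likelihood matrix is factorized into two small factor matrices. The sizes and parameter names here are illustrative, not the paper's:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(0)
n_users, n_items, d = 100, 200, 8  # illustrative sizes

# A full likelihood matrix would need n_users * n_items free
# parameters; the MF representation needs only d * (n_users + n_items).
P = [[rng.gauss(0, 0.1) for _ in range(d)] for _ in range(n_users)]
Q = [[rng.gauss(0, 0.1) for _ in range(d)] for _ in range(n_items)]

def noise_prob(u, i):
    """Sample-specific mislabeling likelihood from the low-rank
    factors, squashed into (0, 1) by the sigmoid."""
    return sigmoid(sum(pu * qi for pu, qi in zip(P[u], Q[i])))
```

Because the factors are shared across interactions, the likelihoods generalize to user-item pairs never sampled during training.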
To be extensible to deep models for real-world applications and to improve efficiency, we proposed an SGD-based optimization method. However, vanilla SGD performs unsatisfactorily in optimizing our maximum likelihood estimator. To address this gap, inspired by the EM algorithm, we maximized a lower bound of the original objective function and designed a surrogate likelihood function and a surrogate gradient for the updates. Extensive experiments on challenging real-world datasets show that our model improves the sampling quality and outperforms state-of-the-art models significantly.
For future work, we are interested in validating the effectiveness of NBPO with more advanced models, such as Factorization Machines (FM) (Rendle, 2011) and deep structures like Neural Matrix Factorization (NMF) (He et al., 2017) and Attentive Collaborative Filtering (ACF) (Chen et al., 2017b). To handle the incompatibility issue of BPR, we proposed BPO as the basic optimization method, yet it performs worse than BPR on its own; we will devise a stronger basic optimization method, or combine noisy-label robust learning with other widely used optimization methods, to improve the performance. Finally, noting that PU data is common in information retrieval tasks such as text retrieval, web search, and social networks, we are interested in extending the proposed optimization strategy to these fields.
- Acar et al. (2011) Evrim Acar, Tamara G. Kolda, and Daniel M. Dunlavy. 2011. All-at-once Optimization for Coupled Matrix and Tensor Factorizations. Computing Research Repository (CoRR) (2011).
- Bennett and Lanning (2007) James Bennett and Stan Lanning. 2007. The netflix prize. In Proceedings of Knowledge Discovery and Data Mining (KDD) Cup and Workshop. New York, NY, USA, 35.
- Bhargava et al. (2015) Preeti Bhargava, Thomas Phan, Jiayu Zhou, and Juhan Lee. 2015. Who, What, When, and Where: Multi-Dimensional Collaborative Recommendations Using Tensor Factorization on Sparse User-Generated Data. In International World Wide Web Conference (WWW ’15). 130–140.
- Bootkrajang and Kaban (2012) Jakramate Bootkrajang and Ata Kaban. 2012. Label-Noise Robust Logistic Regression and Its Applications. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML ’12). 143–158.
- Cao et al. (2007) Zhe Cao, Tao Qin, Tie Yan Liu, Ming Feng Tsai, and Hang Li. 2007. Learning to rank:from pairwise approach to listwise approach. In International Conference on Machine Learning (ICML ’07). 129–136.
- Chen et al. (2017b) Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. 2017b. Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention. In International Conference on Research and Development in Information Retrieval (SIGIR ’17). 335–344.
- Chen et al. (2016) Xu Chen, Zheng Qin, Yongfeng Zhang, and Tao Xu. 2016. Learning to Rank Features for Recommendation over Multiple Categories. In International Conference on Research and Development in Information Retrieval (SIGIR ’16). 305–314.
- Chen et al. (2017a) Xu Chen, Yongfeng Zhang, Qingyao Ai, Hongteng Xu, Junchi Yan, and Zheng Qin. 2017a. Personalized Key Frame Recommendation. In International Conference on Research and Development in Information Retrieval (SIGIR ’17). 315–324.
- Ding et al. (2018) Jingtao Ding, Fuli Feng, Xiangnan He, Guanghui Yu, Yong Li, and Depeng Jin. 2018. An Improved Sampler for Bayesian Personalized Ranking by Leveraging View Data. In International World Wide Web Conference (WWW ’18). 13–14.
- Elkan and Noto (2008) Charles Elkan and Keith Noto. 2008. Learning classifiers from only positive and unlabeled data. In International Conference on Knowledge Discovery and Data Mining (KDD ’08). 213–220.
- Gantner et al. (2011) Zeno Gantner, Lucas Drumond, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2011. Bayesian personalized ranking for non-uniformly sampled items. In Proceedings of Knowledge Discovery and Data Mining (KDD) Cup and Workshop (KDDCup ’11).
- Ghasemi et al. (2012) Alireza Ghasemi, Hamid R. Rabiee, Mohsen Fadaee, Mohammad T. Manzuri, and Mohammad H. Rohban. 2012. Active Learning from Positive and Unlabeled Data. In IEEE International Conference on Data Mining Workshops (ICDM ’12). 244–250.
- Goldberg (1992) David Goldberg. 1992. Using collaborative filtering to weave an information tapestry. Communications of the ACM (1992), 61–70.
- Guo et al. (2017) Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A Factorization-Machine based Neural Network for CTR Prediction. In International Joint Conference on Artificial Intelligence (IJCAI ’17). 1725–1731.
- Harper and Konstan (2015) F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. (2015), 19:1–19:19.
- He and McAuley (2016a) Ruining He and Julian McAuley. 2016a. Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering. In Proceedings of the 25th International Conference on World Wide Web (WWW ’16). 507–517.
- He and McAuley (2016b) Ruining He and Julian McAuley. 2016b. VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback. In AAAI Conference on Artificial Intelligence (AAAI ’16). 144–150.
- He and Chua (2017) Xiangnan He and Tat-Seng Chua. 2017. Neural Factorization Machines for Sparse Predictive Analytics. In International Conference on Research and Development in Information Retrieval (SIGIR ’17). 355–364.
- He et al. (2018a) Xiangnan He, Xiaoyu Du, Xiang Wang, Feng Tian, Jinhui Tang, and Tat-Seng Chua. 2018a. Outer Product-based Neural Collaborative Filtering. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI ’18). 2227–2233.
- He et al. (2018b) Xiangnan He, Zhenkui He, Jingkuan Song, Zhenguang Liu, Yu Gang Jiang, and Tat Seng Chua. 2018b. NAIS: Neural Attentive Item Similarity Model for Recommendation. IEEE Transactions on Knowledge & Data Engineering (2018), 1–1.
- He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural Collaborative Filtering. In International World Wide Web Conference (WWW ’17). 173–182.
- He et al. (2016) Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua. 2016. Fast Matrix Factorization for Online Recommendation with Implicit Feedback. In International Conference on Research and Development in Information Retrieval (SIGIR ’16). 549–558.
- Hsieh et al. (2015) Cho-Jui Hsieh, Nagarajan Natarajan, and Inderjit Dhillon. 2015. PU Learning for Matrix Completion. In Proceedings of the 32nd International Conference on Machine Learning (ICML ’15). 2445–2453.
- Hwang et al. (2016) W. Hwang, J. Parc, S. Kim, J. Lee, and D. Lee. 2016. “Told you i didn’t like it”: Exploiting uninteresting items for effective collaborative filtering. In 2016 IEEE 32nd International Conference on Data Engineering (ICDE ’16). 349–360.
- Koren (2009) Yehuda Koren. 2009. The bellkor solution to the netflix grand prize. Netflix prize documentation (2009), 1–10.
- Koren and Bell (2011) Yehuda Koren and Robert Bell. 2011. Advances in Collaborative Filtering. Springer US. 145–186 pages.
- Liu et al. (2018) Hongzhi Liu, Zhonghai Wu, and Xing Zhang. 2018. CPLR: Collaborative Pairwise Learning to Rank for Personalized Recommendation. Knowledge-Based Systems (2018).
- Liu et al. (2014) Juntao Liu, Caihua Wu, Yi Xiong, and Wenyu Liu. 2014. List-wise probabilistic matrix factorization for recommendation. Information Sciences (2014), 434–447.
- McAuley and Leskovec (2013) Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM Conference on Recommender Systems (RecSys ’13). 165–172.
- McAuley et al. (2015a) Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015a. Image-Based Recommendations on Styles and Substitutes. In International Conference on Research and Development in Information Retrieval (SIGIR ’15). 43–52.
- McAuley et al. (2015b) Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015b. Image-Based Recommendations on Styles and Substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’15). 43–52.
- Pan and Chen (2013) Weike Pan and Li Chen. 2013. GBPR: group preference based Bayesian personalized ranking for one-class collaborative filtering. In International Joint Conference on Artificial Intelligence (IJCAI ’13). 2691–2697.
- Plessis et al. (2014) M. C. Du Plessis, G. Niu, and M. Sugiyama. 2014. Analysis of learning from positive and unlabeled data. In International Conference on Neural Information Processing Systems (NIPS ’14). 703–711.
- Qiu et al. (2016) Huihuai Qiu, Guibing Guo, Jie Zhang, Zhu Sun, Hai Thanh Nguyen, and Yun Liu. 2016. TBPR: Trinity Preference Based Bayesian Personalized Ranking for Multivariate Implicit Feedback. In International Conference on User Modeling, Adaptation, and Personalization (UMAP ’16). 305–306.
- Qiu et al. (2014) Shuang Qiu, Jian Cheng, Ting Yuan, Cong Leng, and Hanqing Lu. 2014. Item Group Based Pairwise Preference Learning for Personalized Ranking. In International Conference on Research and Development in Information Retrieval (SIGIR ’14). 1219–1222.
- Rendle (2011) Steffen Rendle. 2011. Factorization Machines. In International Conference on Data Mining (ICDM ’11). 995–1000.
- Rendle and Freudenthaler (2014) Steffen Rendle and Christoph Freudenthaler. 2014. Improving Pairwise Learning for Item Recommendation from Implicit Feedback. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining (WSDM ’14). 273–282.
- Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Conference on Uncertainty in Artificial Intelligence (UAI ’09). 452–461.
- Rendle and Schmidt-Thieme (2010) Steffen Rendle and Lars Schmidt-Thieme. 2010. Pairwise Interaction Tensor Factorization for Personalized Tag Recommendation. In International Conference on Web Search and Data Mining (WSDM ’10). 81–90.
- Sarwar et al. (2001) Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In International World Wide Web Conference (WWW ’01). 285–295.
- Yi et al. (2017) Jinfeng Yi, Cho Jui Hsieh, Kush Varshney, Lijun Zhang, and Yao Li. 2017. Positive-Unlabeled Demand-Aware Recommendation. Computing Research Repository (CoRR) (2017).
- Yu et al. (2020) Wenhui Yu, Xiao Lin, Junfeng Ge, Wenwu Ou, and Zheng Qin. 2020. Semi-supervised Collaborative Filtering by Text-enhanced Domain Adaptation. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’20).
- Yu and Qin (2019) Wenhui Yu and Zheng Qin. 2019. Spectrum-enhanced Pairwise Learning to Rank. In International World Wide Web Conference (WWW ’19). 2247–2257.
- Yu and Qin (2020) Wenhui Yu and Zheng Qin. 2020. Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters. In Proceedings of the 37th International Conference on Machine Learning (ICML ’20).
- Yu et al. (2018) Wenhui Yu, Huidi Zhang, Xiangnan He, Xu Chen, Li Xiong, and Zheng Qin. 2018. Aesthetic-based Clothing Recommendation. In International World Wide Web Conference (WWW ’18). 649–658.
- Yuan et al. (2018) Fajie Yuan, Xin Xin, Xiangnan He, Guibing Guo, Weinan Zhang, Tat-Seng Chua, and Joemon M. Jose. 2018. : Learning Embeddings from Positive Unlabeled Data with BGD. In Conference on Uncertainty in Artificial Intelligence (UAI ’18).
- Zhang et al. (2016) Hanwang Zhang, Fumin Shen, Wei Liu, Xiangnan He, Huanbo Luan, and Tat Seng Chua. 2016. Discrete Collaborative Filtering. In International Conference on Research and Development in Information Retrieval (SIGIR ’16). 325–334.
- Zhang et al. (2013) Weinan Zhang, Tianqi Chen, Jun Wang, and Yong Yu. 2013. Optimizing top-n collaborative filtering via dynamic negative item sampling. In International Conference on Research and Development in Information Retrieval (SIGIR ’13). 785–788.
- Zhao et al. (2014) Tong Zhao, Julian McAuley, and Irwin King. 2014. Leveraging Social Connections to Improve Personalized Ranking for Collaborative Filtering. In ACM International Conference on Information and Knowledge Management (CIKM ’14). 261–270.