1. Introduction and Related Work
The field of recommender systems has experienced extensive growth in many research directions over the last decade (Adomavicius and Tuzhilin, 2005; Ricci et al., 2015), including the area of multi-criteria recommendations (Adomavicius and Kwon, 2015; Sahoo et al., 2012; Jannach et al., 2012, 2014). For example, several prominent websites, from Zagat to TripAdvisor, collect multi-criteria ratings to measure the quality of the items shown on their sites, which can subsequently be used for recommendation purposes.
Previous researchers have tried to improve the accuracy of multi-criteria recommender systems in various ways, including Support Vector Regression (Jannach et al., 2012) or modeling the halo effect (Sahoo et al., 2012). However, most of these methods do not take into account the information contained in user reviews, which could potentially alleviate the sparsity problem and improve the quality of recommendations, as pointed out in (Chen et al., 2015). Some researchers propose to utilize aspect information (Bauman et al., 2017; Musto et al., 2017; Cheng et al., 2018; Wang et al., 2010) from user reviews for multi-criteria recommendations. Although useful, these papers do not consider the latent semantic information contained in user reviews, which is beneficial for understanding users' true experiences and expectations (Zhang et al., 2019). Besides, user reviews contain high-dimensional user feedback information beyond aspects, while the multi-criteria ratings collected by online platforms are limited to low-dimensional pre-defined criteria, which may not fully represent the multiplicity of user experiences. Moreover, some of the multi-criteria ratings might be missing, thus limiting the performance of explicit multi-criteria rating methods (Adomavicius and Kwon, 2015).
To address these concerns, it is natural to map the user reviews into latent embeddings and incorporate these embeddings into the recommendation process. However, typical review embeddings are noisy and high-dimensional, which significantly increases the difficulty of computation and optimization. To resolve this issue, prior literature proposed compressing text embeddings with reasonable semantic segmentation into low-dimensional discrete vectors (Chen et al., 2018; Shu and Nakayama, 2017), showing that the compressed vectors achieve better performance in sentiment analysis and machine translation tasks. In this paper, we extend this idea from text analysis to multi-criteria recommender systems and propose to use these "compressed vectors" as latent multi-criteria ratings for recommendation purposes.
Specifically, we propose to extract latent multi-criteria ratings from the user reviews using the variational autoencoder (Kingma and Welling, 2013). In particular, we map the reviews into latent embeddings to represent high-dimensional user experiences, and then perform embedding compression (e.g., using the Gumbel-Softmax reparameterization (Jang et al., 2016; Maddison et al., 2014)) to compress them into low-dimensional discrete vectors, which constitute the latent multi-criteria ratings for recommendations. We empirically validate the proposed method on three datasets and demonstrate that our approach outperforms several important baselines consistently and significantly in terms of various recommendation accuracy measures.
Note that the proposed latent multi-criteria rating method has the following advantages over methods using multi-criteria ratings explicitly provided by the users:
It is not limited by the pre-defined criteria to model user experiences.
It does not require collecting multi-criteria feedback from the users.
It does not need to deal with the missing values problem for multi-criteria ratings.
It captures latent interactions between users and items (using autoencoders), thus providing several benefits reported in (Zhang et al., 2019).
In this paper, we make the following contributions. We propose a novel method that automatically generates latent multi-criteria ratings from user reviews by combining autoencoding and embedding compression techniques for multi-criteria recommendations. We also empirically demonstrate that our approach outperforms the selected baseline models consistently and significantly on three datasets across various experimental settings.
2. The Proposed Model

In this section, we introduce the proposed model for latent multi-criteria recommendations, which combines autoencoding and embedding compression techniques, as presented in Figure 1. In Stage 1, we use the variational autoencoder to project the user reviews onto a latent continuous space. In Stage 2, we utilize the embedding compression technique to compress the embedding vectors obtained in the previous stage into discrete latent ratings. Finally, in Stage 3, we apply various multi-criteria recommendation algorithms to the latent ratings to produce recommended items.
2.1. Stage 1: Review Encoding
To project the discrete user reviews into latent continuous embeddings, we follow the idea of autoencoding (Sutskever et al., 2014) and implement bidirectional GRU (Cho et al., 2014) neural networks as the encoder and the decoder respectively. Compared with classical models like the vanilla RNN or LSTM, the GRU is computationally more efficient while still capturing semantic meanings well (Chung et al., 2014).
During the training process, every word in the review is mapped to its corresponding word index in the pre-defined vocabulary $V$. Both the input and the output of the constructed autoencoder are sequences of indexes. To illustrate the GRU learning procedure, we denote by $W_z, U_z$ and $W_r, U_r$ the weight matrices of the current information and the past information for the update gate and the reset gate respectively. $x_t$ is the index vector input at timestep $t$, $h_t$ stands for the output (hidden-state) vector, $z_t$ denotes the update gate status, and $r_t$ represents the status of the reset gate. The hidden state $h_t$ at timestep $t$ is obtained following these equations:

$$z_t = \sigma(W_z x_t + U_z h_{t-1}),$$
$$r_t = \sigma(W_r x_t + U_r h_{t-1}),$$
$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1})),$$
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t.$$
By iteratively calculating the hidden state at every timestep, we obtain the final hidden state $h_T$ at the end of the review, which constitutes the review embedding that captures the latent semantic information of the user review.
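The recurrence above can be sketched in a few lines. The following NumPy toy (a unidirectional, single-layer version with random illustrative weights and dimensions, not the paper's bidirectional implementation) shows how a review's token vectors are folded into a final hidden state:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU step following the update/reset-gate equations above."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)               # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev))   # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_tilde         # new hidden state

def encode_review(token_vectors, params, hidden_dim):
    """Run the GRU over a token sequence; the final hidden state
    serves as the review embedding."""
    h = np.zeros(hidden_dim)
    for x_t in token_vectors:
        h = gru_step(x_t, h, params)
    return h

# Toy dimensions: 4-dim word vectors, 3-dim hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0
          else rng.standard_normal((d_h, d_h)) for i in range(6)]
review = rng.standard_normal((5, d_in))   # 5 tokens
emb = encode_review(review, params, d_h)
print(emb.shape)
```

Since each step forms a convex combination of the previous state and a tanh candidate, the resulting embedding components stay bounded in (-1, 1).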
2.2. Stage 2: Embedding Compression
Note that the review embeddings obtained in Stage 1 are noisy and high-dimensional, which significantly affects the performance of the recommender system. This motivates us to utilize embedding compression methods for efficient learning. We denote by $E \in \mathbb{R}^{N \times H}$ the embedding matrix consisting of $N$ embedding vectors with $H$ dimensions, where $N$ refers to the total number of reviews in the dataset. Given arbitrary numbers $K$ and $D$, our goal is to find discrete codes of dimension $D$ ($D \ll H$) that take integer values ranging from 1 to $K$ in each latent dimension.
Inspired by the Gumbel-Softmax reparameterization method (Jang et al., 2016; Shu and Nakayama, 2017), we conduct the embedding compression process using the compositional code learning method that parameterizes the discrete codes onto continuous distributions. The Gumbel-Softmax technique (Gumbel, 1954; Maddison et al., 2014) provides a simple and efficient way to draw samples $z$ from a categorical distribution with class probabilities $\pi_1, \dots, \pi_K$:

$$z = \mathrm{one\_hot}\left(\arg\max_i \left[g_i + \log \pi_i\right]\right),$$

where $g_1, \dots, g_K$ are drawn i.i.d. from the Gumbel(0, 1) distribution and $\mathrm{one\_hot}(\cdot)$ stands for the one-hot function that transforms the data into a binary one-hot encoding. We use the softmax function as the continuous differentiable approximation to the argmax function, so that the generated sample vectors $y$ become:

$$y_i = \frac{\exp\left((\log \pi_i + g_i)/\tau\right)}{\sum_{j=1}^{K} \exp\left((\log \pi_j + g_j)/\tau\right)}$$

for $i = 1, \dots, K$, where $\tau$ represents the temperature of the softmax function. Therefore, as discussed in (Shu and Nakayama, 2017), if we reverse the entire sampling process described above, we can learn the discrete codes from our continuous embeddings.
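As a concrete illustration, the sampling step can be sketched in NumPy. The function below (the names and the toy distribution are ours) perturbs log-probabilities with Gumbel(0, 1) noise and applies a temperature-scaled softmax:

```python
import numpy as np

def gumbel_softmax_sample(log_probs, tau, rng):
    """Draw a relaxed one-hot sample from a categorical distribution:
    perturb log-probabilities with Gumbel(0, 1) noise, then apply a
    temperature-scaled softmax as a differentiable argmax surrogate."""
    # Gumbel(0, 1) noise via inverse transform sampling.
    g = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    y = (log_probs + g) / tau
    y = y - y.max()            # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()   # normalized soft sample on the simplex

rng = np.random.default_rng(42)
probs = np.array([0.1, 0.6, 0.3])   # class probabilities pi_i
sample = gumbel_softmax_sample(np.log(probs), tau=0.5, rng=rng)
print(sample.round(3))
```

Lowering the temperature `tau` pushes the sample toward a hard one-hot vector, while higher temperatures spread the mass more evenly over the classes.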
Specifically, given the number of latent dimensions $D$, we first apply matrix factorization (Koren et al., 2009) to obtain the top-$K$ factors of the embedding matrix: $E \approx \sum_{j=1}^{D} A^{(j)} D^{(j)}$, where $A^{(j)}$ is the basis matrix for the $j$-th component. Therefore, the learning of the discrete codes is equivalent to the learning of a set of one-hot vectors $d^{(j)}_i \in \{0, 1\}^K$ such that each review embedding satisfies $e_i \approx \sum_{j=1}^{D} A^{(j)} d^{(j)}_i$. Furthermore, we assume that the discrete vectors are sampled from a prior distribution via the Gumbel-Softmax reparameterization method (Shu and Nakayama, 2017) and learned by minimizing the reconstruction loss with respect to the original embedding matrix:

$$\mathcal{L} = \frac{1}{N} \left\| \hat{E} - E \right\|_F^2,$$

where $\hat{E}$ stands for the embedding matrix reconstructed from the compressed codes. We optimize the prior parameters and the basis matrices $A^{(j)}$. Finally, the compressed vector for each review embedding is obtained by applying $\arg\max$ to the one-hot vectors $d^{(j)}_i$.
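The decoding side of this scheme can be sketched as follows; the codebook shapes and code values below are illustrative, assuming $D$ codebooks of $K$ basis vectors each:

```python
import numpy as np

def reconstruct_embedding(codes, codebooks):
    """Rebuild an embedding from its D discrete codes: each code selects
    one of the K basis vectors in the corresponding codebook, and the
    selected vectors are summed."""
    D = codebooks.shape[0]
    return sum(codebooks[j, codes[j]] for j in range(D))

rng = np.random.default_rng(0)
D, K, H = 4, 8, 16                           # 4 codes, 8-way, 16-dim embeddings
codebooks = rng.standard_normal((D, K, H))   # basis matrices A^(j)
codes = np.array([2, 5, 0, 7])               # one integer code per latent dimension
e_hat = reconstruct_embedding(codes, codebooks)
print(e_hat.shape)
```

The memory saving comes from storing only the D small integers per review plus the shared codebooks, instead of the full H-dimensional float vector.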
2.3. Stage 3: Multi-Criteria Recommendation
Having compressed the continuous review embeddings into discrete latent codes as described in Stage 2, we set the dimension $D$ of the discrete codes to match the dimension of the collected multi-criteria ratings, and set the range $K$ of each latent dimension to match the range of the organic multi-criteria ratings. In that sense, we can treat these compressed vectors as latent multi-criteria ratings and utilize them for recommendation purposes, shown as Stage 3 in Figure 1. Note that, unlike the traditional case, the latent multi-criteria ratings are not specified by the users but obtained from the reviews as described in the previous stages. We conduct the multi-criteria recommendations using the state-of-the-art methods introduced in (Adomavicius and Kwon, 2015).
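For instance, one simple way to plug the latent ratings into a KNN-style multi-criteria recommender (a sketch under our own naming, not necessarily the exact similarity used in (Adomavicius and Kwon, 2015)) is to compare users by the cosine similarity of their latent rating vectors:

```python
import numpy as np

def multicriteria_similarity(r_u, r_v):
    """Cosine similarity between two users' latent multi-criteria rating
    vectors; usable as a drop-in similarity for KNN-style recommenders
    that would otherwise compare only overall ratings."""
    num = float(np.dot(r_u, r_v))
    den = float(np.linalg.norm(r_u) * np.linalg.norm(r_v))
    return num / den if den > 0 else 0.0

# Latent ratings: D = 4 criteria, each an integer in 1..K (here K = 5).
user_a = np.array([4, 2, 5, 3], dtype=float)
user_b = np.array([5, 1, 4, 3], dtype=float)
sim = multicriteria_similarity(user_a, user_b)
print(round(sim, 3))
```

Neighborhoods formed with this similarity exploit all D latent criteria at once, rather than collapsing user taste into a single overall-rating dimension.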
To summarize, we propose to uniquely combine the method of variational autoencoders with the embedding compression techniques to learn the latent multi-criteria ratings from user reviews for multi-criteria recommendation purposes. In the next section, we experimentally demonstrate the superiority and effectiveness of the proposed approach.
3. Experiments and Results
3.1. Experimental Settings
We implement the model on the following datasets: the Yelp Challenge Dataset (https://www.yelp.com/dataset), Rounds 8 and 11, for restaurant recommendations, and the TripAdvisor Dataset (http://www.cs.cmu.edu/~jiweil/html/hotel-review.html) for hotel recommendations; all contain user reviews as well as multi-criteria feedback. The difference between the two Yelp datasets is the time when the data was collected: the Round 8 dataset was collected in 2016, while the Round 11 dataset was collected in 2018. To avoid the sparsity and cold-start problems, we only consider users who rated at least five items, restaurants rated by at least ten users, and hotels rated by at least five users. The basic statistics of the filtered datasets are shown in Table 1. (Note that the unfiltered datasets are much larger.)
To demonstrate the effectiveness of the proposed latent multi-criteria rating (LatentMC) model, we select several multi-criteria rating models for comparison, including:
MC: the multi-criteria ratings explicitly specified by the users in the three datasets.
Overall: the overall ratings in the three datasets.
Embedding: the uncompressed review embeddings obtained in Stage 1 (see Figure 1) as latent multi-criteria ratings in the three datasets.
Aspect: the extracted aspect ratings (Bauman et al., 2017) as multi-criteria ratings in the three datasets.
Also, to evaluate recommendation performance, we implement the state-of-the-art multi-criteria recommendation models introduced in (Adomavicius and Kwon, 2015), including KNN, SlopeOne, CoCluster, Support Vector Regression (SVR) and Aggregate Function. The first three models are adjusted to their multi-criteria recommendation versions by calculating similarities using multi-criteria ratings instead of only overall ratings.
3.2. Experimental Results
We conduct 5-fold cross-validation recommendation experiments and report the performance results on the test set. As shown in Table 2, our proposed LatentMC model outperforms the baselines consistently and significantly across different datasets, performance measures, and algorithms. For example, using the Aggregate algorithm, the Pre@1, Pre@5, Rec@1, and Rec@5 measures for the three datasets improve by 1.73%, 1.56%, 6.05%, 1.29%, 1.91%, 2.32%, 4.03%, 2.00%, 2.52%, 4.86%, 5.47%, and 1.99% respectively, compared to the second-best baseline models. In general, we obtain over 2% accuracy improvements in most cases.
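For reference, the Pre@k and Rec@k measures reported above can be computed per user as follows (a minimal sketch with hypothetical item IDs):

```python
def precision_recall_at_k(recommended, relevant, k):
    """Pre@k and Rec@k for one user: the fraction of the top-k
    recommendations that are relevant, and the fraction of relevant
    items that appear in the top-k."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = ["i3", "i7", "i1", "i9", "i4"]   # ranked recommendation list
relevant = {"i1", "i4", "i8"}                  # held-out test items
p_at_5, r_at_5 = precision_recall_at_k(recommended, relevant, 5)
print(p_at_5, r_at_5)  # 0.4 and 2/3: 2 of 5 recommendations hit, 2 of 3 relevant items found
```

The dataset-level numbers in Table 2 would then be averages of these per-user values over the test folds.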
Furthermore, we make the following observations. First, compared with the recommendation results directly using uncompressed review embeddings, the latent compressed multi-criteria rating method achieves better performance. The improvement comes from the compression process, which cleans up the noisy high-dimensional embedding vectors and extracts the essence of the reviews. Second, we observe that the review-based multi-criteria recommendation models perform better than the non-review models, even without using the explicit multi-criteria rating information. This indicates that user reviews contain richer, higher-dimensional information than the low-dimensional multi-criteria ratings; it is therefore crucial to properly model user reviews in the recommendation process. Finally, the latent rating method outperforms the explicit rating models because it captures the latent semantic information within user reviews, which supports the general advantages of using latent methods in text analysis (Zhang et al., 2019).
To conclude, the latent multi-criteria ratings generated by the proposed model achieve significantly better performance for multi-criteria recommendations in comparison to the alternative methods. In addition, the approach has the following natural advantages over classical multi-criteria recommendation approaches. First, it is not limited by pre-defined criteria or hampered by missing values when modeling user experiences. Second, it does not require collecting multi-criteria feedback from the users. Third, it captures latent interactions between users and items, which provides several benefits reported in (Zhang et al., 2019).
As future work, we would like to further improve the latent multi-criteria rating generation process. We also plan to study the semantic meaning and interpretability of the latent multi-criteria ratings.
- Adomavicius and Kwon (2015) Gediminas Adomavicius and YoungOk Kwon. 2015. Multi-criteria recommender systems. In Recommender systems handbook. Springer, 847–880.
- Adomavicius and Tuzhilin (2005) Gediminas Adomavicius and Alexander Tuzhilin. 2005. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge & Data Engineering 6 (2005), 734–749.
- Bauman et al. (2017) Konstantin Bauman, Bing Liu, and Alexander Tuzhilin. 2017. Aspect based recommendations: Recommending items with the most valuable aspects based on user reviews. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 717–725.
- Chen et al. (2015) Li Chen, Guanliang Chen, and Feng Wang. 2015. Recommender systems based on user reviews: the state of the art. User Modeling and User-Adapted Interaction 25, 2 (2015), 99–154.
- Chen et al. (2018) Ting Chen, Martin Renqiang Min, and Yizhou Sun. 2018. Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations. arXiv preprint arXiv:1806.09464 (2018).
- Cheng et al. (2018) Zhiyong Cheng, Ying Ding, Lei Zhu, and Mohan Kankanhalli. 2018. Aspect-Aware Latent Factor Model: Rating Prediction with Ratings and Reviews. arXiv preprint arXiv:1802.07938 (2018).
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014).
- Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014).
- Gumbel (1954) Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications. NBS Applied Mathematics Series 33 (1954).
- Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 (2016).
- Jannach et al. (2012) Dietmar Jannach, Zeynep Karakaya, and Fatih Gedikli. 2012. Accuracy improvements for multi-criteria recommender systems. In Proceedings of the 13th ACM conference on electronic commerce. ACM, 674–689.
- Jannach et al. (2014) Dietmar Jannach, Markus Zanker, and Matthias Fuchs. 2014. Leveraging multi-criteria customer feedback for satisfaction analysis and improved recommendations. Information Technology & Tourism 14, 2 (2014), 119–149.
- Jolliffe (2011) Ian Jolliffe. 2011. Principal component analysis. Springer.
- Kingma and Welling (2013) Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013).
- Koren et al. (2009) Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 8 (2009), 30–37.
- Maddison et al. (2014) Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Information Processing Systems. 3086–3094.
- Musto et al. (2017) Cataldo Musto, Marco de Gemmis, Giovanni Semeraro, and Pasquale Lops. 2017. A Multi-criteria Recommender System Exploiting Aspect-based Sentiment Analysis of Users’ Reviews. In Proceedings of the eleventh ACM conference on recommender systems. ACM, 321–325.
- Ricci et al. (2015) Francesco Ricci, Lior Rokach, and Bracha Shapira. 2015. Recommender systems: introduction and challenges. In Recommender systems handbook. Springer, 1–34.
- Sahoo et al. (2012) Nachiketa Sahoo, Ramayya Krishnan, George Duncan, and Jamie Callan. 2012. Research note—the halo effect in multicomponent ratings and its implications for recommender systems: The case of yahoo! movies. Information Systems Research 23, 1 (2012), 231–246.
- Shu and Nakayama (2017) Raphael Shu and Hideki Nakayama. 2017. Compressing Word Embeddings via Deep Compositional Code Learning. arXiv preprint arXiv:1711.01068 (2017).
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. 3104–3112.
- Wang et al. (2010) Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 783–792.
- Zhang et al. (2019) Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys (CSUR) 52, 1 (2019), 5.