Product reviews, which are primarily textual, are an important information source for consumers making purchase decisions. Hence, it makes great economic sense to quantify the quality of reviews and present consumers with the most useful reviews in an informative manner. Growing efforts from both academia and industry have been invested in the task of review helpfulness prediction [Martin and Pu2014, Yang et al.2015, Yang et al.2016, Liu et al.2017a].
Pioneering work hypothesizes that helpfulness is an underlying property of the text, and uses handcrafted linguistic features to study it. For example, [Yang et al.2015] and [Martin and Pu2014] examined semantic features like LIWC, INQUIRER, and GALC. Subsequently, aspect- [Yang et al.2016] and argument-based [Liu et al.2017a] features are demonstrated to improve the prediction performance.
Inspired by the remarkable performance of Convolutional Neural Networks (CNNs) on many natural language processing tasks, here we employ a CNN for the review helpfulness prediction task. To enhance the performance of a vanilla CNN on this task, besides word-level representations, we further leverage multi-granularity information, i.e., character- and topic-level representations. Character-level representations are notably beneficial for alleviating the out-of-vocabulary problem [Ballesteros et al.2015, Kim et al.2016, Chen et al.2018], while aspect distributions provide another semantic view on the words [Yang et al.2016].
One research question here is whether embeddings should be treated equally in the CNN. Intuitively, different words contribute to the helpfulness of a review with different intensity or importance. For example, descriptive or semantic words (such as “great battery life” or “versatile function”) are more informative than general background words like “phone”. Correspondingly, we propose a mechanism called word-level gating to weight embeddings (the gates are applied over all three types of word representations, i.e., character-, word-, and topic-based, for all words). Gating mechanisms have been commonly used to control the amount by which a unit updates its activation or content in recurrent neural networks [Chung et al.2014]. Our word-level gates are learned automatically within the model and help differentiate important from unimportant words. The resulting model is referred to as Embedding-Gated CNN (EG-CNN).
The gating mechanism benefits the CNN in two ways. First, extensive experiments show that our proposed EG-CNN model greatly outperforms hand-crafted features, ensembles of hand-crafted features, and vanilla CNN models. Second, the gating mechanism scores the relevance of each word's input representation, providing insightful word-level interpretations of the prediction results: the larger a gate's weight, the more relevant the corresponding word is to review helpfulness.
It is common that some product domains/categories have rich user reviews while others do not. For example, the “Electronics” domain from the Amazon.com Review Dataset [McAuley and Leskovec2013] has more than 354k labeled reviews, while “Watches” has fewer than 10k. Exploiting cross-domain relationships to systematically transfer knowledge from related domains with sufficient labeled data will benefit the task on domains with limited reviews. It is worth noting that existing studies on this task either focus on a single product category or largely ignore the inner correlations between different domains. In previous work, some features are domain-specific while others can be shared. For example, image quality features are only useful for cameras [Yang et al.2016], while semantic features and argument-based features are applicable to all domains [Yang et al.2015, Liu et al.2017a].
Domain relationships must therefore be established before knowledge can be transferred properly in our task; otherwise, transferring knowledge from a wrong source domain may backfire. We thus provide a holistic solution to both domain correlation learning and knowledge transfer by incorporating a domain relationship learning module in our framework. Experiments show that our final model can correctly tap into domain correlations and facilitate knowledge transfer between correlated domains to further boost performance.
We define review helpfulness prediction as a regression task: predicting the helpfulness score of a given review. The ground truth is determined using the “a of b” approach: if a of the b users who voted consider a review helpful, its helpfulness score is a/b.
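The "a of b" label above can be sketched as a simple ratio; the function name below is illustrative, not from the paper:

```python
def helpfulness_score(helpful_votes, total_votes):
    """Fraction of voters who found the review helpful ("a of b")."""
    if total_votes == 0:
        raise ValueError("review has no helpfulness votes")
    return helpful_votes / total_votes

score = helpfulness_score(7, 10)  # 7 of 10 voters found the review helpful
```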
Formally, we consider a cross-domain review helpfulness prediction task where we have a set of labeled reviews from a set of source domains and a target domain. We seek to transfer knowledge from domains with rich data to the target domain. For a review $x_i$, our goal is to predict its helpfulness score $y_i$, where $d_i$ is the domain label indicating which domain the instance comes from.
2.1 Word, Character, and Aspect Representations
A review consists of a sequence of words, i.e., $x = (w_1, \dots, w_L)$. Following the CNN model in [Kim2014], we first look up the embedding of each word from an embedding matrix $E \in \mathbb{R}^{|V| \times d}$, where $|V|$ is the vocabulary size and $d$ is the embedding dimension, so that word $w_i$ maps to an embedding $e_i \in \mathbb{R}^d$. The resulting word embedding matrix is then fed into a convolutional neural network to obtain an output representation. This is a typical word-embedding-based model.
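A minimal sketch of the lookup step, with a toy vocabulary and random embedding values standing in for learned ones (all names here are hypothetical):

```python
import random

random.seed(0)
VOCAB = {"<unk>": 0, "great": 1, "battery": 2, "life": 3}
EMB_DIM = 4
# Toy embedding matrix E (|V| x d): one row per vocabulary word.
E = [[random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)] for _ in VOCAB]

def lookup(tokens):
    """Map each token to its embedding row; OOV words fall back to <unk>."""
    return [E[VOCAB.get(t, VOCAB["<unk>"])] for t in tokens]

review_matrix = lookup(["great", "battery", "life", "span"])  # "span" is OOV
```

The out-of-vocabulary fallback here is exactly the weakness that the character-level representations below are meant to mitigate.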
In many applications, such as text classification [Bojanowski et al.2016] and machine reading comprehension [Seo et al.2016], it is beneficial to enrich word embeddings with subword information. Inspired by this, we use a character embedding layer to enrich word representations with character-level information. Specifically, the characters of the $i$-th word are embedded into vectors and then fed into another convolutional neural network to obtain a fixed-size vector $c_i$.
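A toy sketch of the character-level path: the character embeddings below are arbitrary stand-ins, and a single summing "filter" with max-over-time pooling replaces the learned convolution, just to show how a fixed-size vector arises from words of any length:

```python
# Hypothetical 2-d character embeddings (values are arbitrary stand-ins).
CHAR_EMB = {c: [(ord(c) % 7) / 10.0, (ord(c) % 5) / 10.0]
            for c in "abcdefghijklmnopqrstuvwxyz"}
PAD = [0.0, 0.0]

def char_representation(word, window=3):
    """Toy char-CNN: embed characters, apply one summing "filter" to each
    window of characters, then max-pool over time into a fixed-size vector."""
    emb = [CHAR_EMB.get(c, PAD) for c in word.lower()]
    if len(emb) < window:                      # pad very short words
        emb += [PAD] * (window - len(emb))
    feats = [[sum(col) for col in zip(*emb[i:i + window])]
             for i in range(len(emb) - window + 1)]
    return [max(f[d] for f in feats) for d in range(len(PAD))]
```

Note that the output dimension depends only on the filter, not on the word length, which is what lets this vector be stacked with the word embedding.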
A recent work [Yang et al.2016] shows that extracting the aspect/topic distribution from raw textual content helps review helpfulness prediction. The reason is that many helpful reviews tend to talk about certain aspects of a product, like ‘brand’, ‘functionality’, or ‘price’. Inspired by this, we enrich our word representations with aspect distributions. We adopt the model in [Yang et al.2016] to learn an aspect-word distribution $A \in \mathbb{R}^{K \times |V|}$, where $K$ is the number of aspects and $|V|$ is the vocabulary size. A word-aspect representation is obtained by row-wise normalization of $A$. Then, for each word $w_i$ in an input review, we obtain its aspect representation $a_i$ by looking up the corresponding column of the normalized matrix.
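The normalization-and-lookup step can be sketched as follows; the vocabulary, aspect labels, and matrix values are all invented for illustration:

```python
WORDS = ["price", "cheap", "battery", "screen"]
# Hypothetical aspect-word matrix A (K=2 aspects x |V|=4 words).
A = [[5.0, 4.0, 0.5, 0.5],   # aspect 0: price-related words
     [0.5, 0.5, 6.0, 3.0]]   # aspect 1: hardware-related words

def row_normalize(M):
    """Turn each aspect row into a distribution over the vocabulary."""
    return [[v / sum(row) for v in row] for row in M]

A_norm = row_normalize(A)

def aspect_repr(word):
    """K-dim aspect vector of a word: its column in the normalized matrix."""
    j = WORDS.index(word)
    return [A_norm[k][j] for k in range(len(A_norm))]
```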
Formally, for an input review $x$, we obtain its representation as:

$$X = [e_1 \oplus c_1 \oplus a_1; \dots; e_L \oplus c_L \oplus a_L],$$

where $e_i$, $c_i$, and $a_i$ represent the word-level, character-level, and topic-level representations of the $i$-th word respectively, and $\oplus$ is a stacking (concatenation) operator. Note that $L$ (fixed in this paper) is the sentence length limit. Sentences shorter than $L$ words are padded, while sentences longer than $L$ words are truncated.
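The stacking plus pad/truncate step above can be sketched as below; the length limit and all dimensions are placeholders, not the paper's settings:

```python
L_MAX = 6  # hypothetical sentence length limit; the paper fixes its own value

def stack_review(word_vecs, char_vecs, topic_vecs, limit=L_MAX):
    """Concatenate the three per-word representations row-wise, then pad
    short reviews with zero rows or truncate long ones to the limit."""
    rows = [w + c + t for w, c, t in zip(word_vecs, char_vecs, topic_vecs)]
    width = len(rows[0])
    rows = rows[:limit]
    rows += [[0.0] * width] * (limit - len(rows))
    return rows

# A two-word review with 3-d word, 2-d char, and 2-d topic vectors.
X = stack_review([[1.0] * 3] * 2, [[2.0] * 2] * 2, [[3.0] * 2] * 2)
```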
2.2 Embedding-gated CNN (EG-CNN)
Some words play more important roles in review helpfulness prediction than others; for example, descriptive or semantic words (such as “great battery life” or “versatile function”) are more informative than general background words like “phone”. Hence, we propose to weight the input word embeddings, using a gating mechanism to weight each word in our model. The word-level gate is obtained by feeding the input embeddings to a gating layer, which is essentially a fully-connected layer with weight $W_g$ and bias $b_g$.
Formally, for the input representation $x_i$ of the $i$-th word, we obtain its gated representation as follows:

$$g_i = \sigma(W_g x_i + b_g), \qquad \tilde{x}_i = g_i \odot x_i,$$

where $\sigma$ is a sigmoid activation function and $\odot$ denotes element-wise multiplication.
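A minimal sketch of one word-level gate; for simplicity this version assumes a scalar gate per word (a single-row $W_g$), whereas the model's gating layer may be higher-dimensional:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gated_embedding(x, W_g, b_g):
    """Word-level gate g = sigmoid(W_g . x + b_g); returns (g * x, g).
    A scalar gate per word is a simplifying assumption of this sketch."""
    g = sigmoid(sum(w * v for w, v in zip(W_g, x)) + b_g)
    return [g * v for v in x], g

gated, g = gated_embedding([1.0, 2.0], [0.5, -0.25], 0.0)
```

Because the gate value is exposed, it can be read off directly to rank word importance, which is the interpretability benefit discussed earlier.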
Next, we stack a 2-D convolutional layer and a 2-D max-pooling layer on the matrix $X$ to obtain the hidden representation. Multiple filters are used. For each filter $j$, we obtain a hidden representation:

$$r_j = \mathrm{CNN}_{h \times d}^{(m)}(X),$$

where $h$ is the window size, $d$ is the embedding dimension, $m$ is the channel size, and $\mathrm{CNN}$ represents a convolutional layer followed by a max-pooling layer. All the representations $r_j$ are then concatenated to form the final representation $r$. We refer to our base model as Embedding-Gated CNN (EG-CNN); EG-CNN learns a hidden feature representation $r$ for an input $x$.
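The convolution-then-max-pool pattern can be sketched with plain lists (one filter = one flattened weight vector over an $h$-row window; real CNNs add channels and nonlinearities, which are omitted here):

```python
def conv_maxpool(X, filt, h):
    """Slide a window of h rows over X, dot the flattened window with the
    filter weights, and max-pool over time into a single feature."""
    scores = []
    for i in range(len(X) - h + 1):
        flat = [v for row in X[i:i + h] for v in row]
        scores.append(sum(a * b for a, b in zip(flat, filt)))
    return max(scores)

def multi_filter_features(X, filters):
    """Concatenate each filter's pooled output into the final vector r."""
    return [conv_maxpool(X, f, h) for f, h in filters]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]          # 3 words, 2-d embeddings
r = multi_filter_features(X, [([1.0] * 4, 2),      # window size h = 2
                              ([1.0, 2.0], 1)])    # window size h = 1
```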
2.3 Cross-Domain Relationship Learning
If we treat all the domains as a single domain, we can build a unified model for our task. Specifically, our target is to optimize the following objective:

$$\min_{\Theta} \sum_{d} \sum_{i} \ell\big(f_o(r_i^{(d)}),\, y_i^{(d)}\big) + \lambda\, \mathcal{R}(\Theta), \quad (5)$$

where $f_o$ is the output layer, $r_i^{(d)}$ is the representation of the $i$-th input from domain $d$, $y_i^{(d)}$ is the corresponding label, and $\mathcal{R}(\Theta)$ is a regularization term.
The formulation in Eqn. (5) is limited because it does not take domain differences into consideration. To utilize multi-domain knowledge, we extend the method above to a multi-domain setting in which we assume an output layer $W_d$ for each domain $d$. While still using a unified model to learn universal feature representations, our new approach has two output layers, a shared layer $W_s$ and a domain-specific layer $W_d$, to model domain commonalities and differences respectively.
Furthermore, we explicitly model a domain correlation matrix $\Omega$, where $\Omega_{ij}$ is the correlation between domains $i$ and $j$. Following the matrix-variate distribution setting of [Zhang and Yeung2010], our objective includes minimizing the trace $\mathrm{tr}(W \Omega^{-1} W^{\top})$, where the columns of $W$ stack the domain-specific output layers. Thus, when domains $i$ and $j$ are close, i.e., $W_i$ is close to $W_j$, the model tends to learn a large $\Omega_{ij}$ in order to minimize the trace. In all, our objective is defined as follows:

$$\min_{\Theta, \Omega} \sum_{d} \sum_{i} \ell\big((W_s + W_d)^{\top} r_i^{(d)},\, y_i^{(d)}\big) + \lambda_1\, \mathcal{R}(\Theta) + \lambda_2\, \mathrm{tr}(W \Omega^{-1} W^{\top}),$$

where $\mathrm{tr}(\cdot)$ is the trace of a matrix, $\mathcal{R}(\Theta)$ is a regularization term, and $\lambda_1$, $\lambda_2$ are weight coefficients.
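The trace regularizer can be sketched as follows. For convenience the sketch stores each domain's output weights as a row of `W` and computes the equivalent $\mathrm{tr}(\Omega^{-1} W W^{\top})$; it also takes $\Omega^{-1}$ directly rather than inverting a learned $\Omega$, to stay dependency-free:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def relation_penalty(W, Omega_inv):
    """tr(Omega^{-1} W W^T), with each row of W one domain's output weights.
    Equals tr(W' Omega^{-1} W'^T) for W' = W^T, as in the objective above."""
    gram = matmul(W, [list(col) for col in zip(*W)])  # m x m inner products
    return trace(matmul(Omega_inv, gram))
```

With $\Omega^{-1}$ set to the identity (no learned correlation), the penalty reduces to the squared norm of the domain weights, which matches the degenerate fully-shared setting discussed below.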
Our final model is presented in Figure 1, where we use EG-CNN as the base model and further consider cross-domain correlation and multi-domain training. Note that if we set $\Omega$ to an identity matrix (no domain correlation) and drop $W_s$ (no shared output layer), the multi-domain setting degenerates to the fully-shared setting in [Mou et al.2016]. The limitation of the fully-shared setting is that it ignores domain relationships. In practice, however, we may expect “Electronics” to be helpful to the “Home” and “Cellphones” domains but not as helpful to the “Watches” domain. With our model, we seek to automatically capture such domain relationships and use them to boost performance.
The topic size is set to 100. For EG-CNN, the activation function is ReLU, the channel size is set to 128, and AdaGrad [Duchi et al.2011] is used for training with an initial learning rate of 0.08.
Following previous work, all experimental results are evaluated using the correlation coefficient between the predicted helpfulness scores and the ground-truth scores. The ground-truth scores are computed from the dataset via the “a of b” approach, i.e., the percentage of consumers who consider a review helpful.
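The evaluation metric is the standard Pearson correlation coefficient, which can be computed without external libraries:

```python
def pearson(pred, gold):
    """Pearson correlation between predicted and ground-truth scores."""
    n = len(pred)
    mp, mg = sum(pred) / n, sum(gold) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(pred, gold))
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    sg = sum((g - mg) ** 2 for g in gold) ** 0.5
    return cov / (sp * sg)
```

In practice a library routine such as `scipy.stats.pearsonr` gives the same value along with a p-value.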
3.1 Comparison with Linguistic Feature Baselines and CNN Models
Our proposed EG-CNN model is compared with the following baselines:
Fusion: an ensemble model with “STR”, “UGR”, “LIWC”, and “INQ” features [Yang et al.2015];
Fusion (+ASP): the Fusion model with additional “ASP” features [Yang et al.2016];
CNN: the vanilla CNN model [Kim2014] with word-level embeddings only;
CNN (+char): the vanilla CNN model with additional character-based representations [Chen et al.2018];
CNN (+char, +topic): the vanilla CNN model with both character- and topic-based representations;
EG-CNN: our final model, combining word-level, character-level, and topic-level representations with a gating mechanism.
Table 2 shows several interesting observations that validate the motivation behind this work. First, all CNN-based models consistently outperform the non-CNN models, indicating their expressiveness over handcrafted features. Second, the CNN variant with character-based representations outperforms the word-only CNN when data is relatively scarce (e.g., the “Watches” and “Phones” domains) but loses its edge on domains with abundant data (e.g., “Electronics”): with less data, the out-of-vocabulary (OOV) problem is more severe, so character-based representations are more beneficial. Third, adding topic-based representations consistently helps further. Last but not least, our proposed EG-CNN outperforms all CNN variants, which justifies the necessity of the embedding gates.
3.2 Comparison with Cross-domain Models
To evaluate the effectiveness of our domain relationship learning, we compare our proposed full model against the following two baselines: the target-only model that uses only data from the target domain, and the fully-shared model that uses a fully shared neural network [Mou et al.2016] for all domains. All three models use EG-CNN as the base model.
In all experiments, our model consistently achieves better results than both the target-only and fully-shared models, supporting the effectiveness of cross-domain relationship learning. The improvement is greatest on domains with the fewest labeled reviews, e.g., the “Watches” domain, which has the least data.
Interestingly, the fully-shared model performs much worse than the target-only model in the “Home” domain. This can be explained by domain shift: because some domains are related while others are not, incorporating data from less related domains can hardly help, especially when the target domain (such as “Home”) already has sufficient data for the target-only model to perform well.
4 Related Work
Recent studies on review helpfulness prediction extract handcrafted features from review texts. For example, [Yang et al.2015] and [Martin and Pu2014] examined semantic features like LIWC, INQUIRER, and GALC. Subsequently, aspect- [Yang et al.2016] and argument-based [Liu et al.2017a] features were demonstrated to improve prediction performance. These methods require prior knowledge and human effort in feature engineering and may not be robust for new domains. In this work, we employ CNNs [Kim2014, Zhang et al.2015] for the task, which can automatically extract deep features from raw text content. As character-level representations are notably beneficial for alleviating the out-of-vocabulary problem [Ballesteros et al.2015, Kim et al.2016], while aspect distributions provide another semantic view on words [Yang et al.2016], we further enrich the CNN's word representations with multi-granularity information, i.e., character- and aspect-based representations. As different words contribute to the task with different importance, we weight the word representations with word-level gates. Gating mechanisms have been commonly used in recurrent neural networks to control the amount by which a unit updates its activation or content, and have been demonstrated to be effective [Chung et al.2014, Dhingra et al.2016]. Our word-level gates help differentiate important from unimportant words. The resulting model, referred to as embedding-gated CNN, is shown to significantly outperform existing models.
It is common that some domains have rich user reviews while others do not. To help domains with limited data, we study cross-domain learning (transfer learning [Pan and Yang2010] or multi-task learning [Zhang and Yang2017]) for this task. Transfer learning and multi-task learning have been extensively studied in the last decade. With the popularity of deep learning, many neural-network (NN) based methods have been proposed for transfer learning [Yosinski et al.2014]. A typical framework uses a shared NN to learn shared features for both source and target domains [Mou et al.2016, Yang et al.2017]. Another approach uses both a shared NN and domain-specific NNs to derive shared and domain-specific features [Liu et al.2017b]. A multi-task relationship learning method, able to uncover the relationships between domains, is introduced in [Zhang and Yeung2010]. Inspired by this, we adopt the relationship-learning module into our EG-CNN framework to model the correlation between different domains.
To the best of our knowledge, our work is the first to propose a gating mechanism in CNNs and to study cross-domain relationship learning for review helpfulness prediction.
In this work, we tackled review helpfulness prediction using two new techniques, i.e., embedding-gated CNN and cross-domain relationship learning. We built our base model on a CNN with word-, character- and topic-based representations. On top of this model, domain relationships were learned to better transfer knowledge across domains. The experiments showed that our model significantly outperforms the state of the art.
- [Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A Smith. Improved transition-based parsing by modeling characters instead of words with lstms. arXiv preprint arXiv:1508.00657, 2015.
- [Bojanowski et al.2016] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. CoRR, abs/1607.04606, 2016.
- [Chen et al.2018] Cen Chen, Yinfei Yang, Jun Zhou, Xiaolong Li, and Forrest Sheng Bao. Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 602–607. Association for Computational Linguistics, 2018.
- [Chung et al.2014] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
- [Dhingra et al.2016] Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.
- [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
- [Kim et al.2016] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. In AAAI, pages 2741–2749, 2016.
- [Kim2014] Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
- [Liu et al.2017a] Haijing Liu, Yang Gao, Pin Lv, Mengxue Li, Shiqiang Geng, Minglan Li, and Hao Wang. Using argument-based features to predict and analyse review helpfulness. In EMNLP, pages 1358–1363, Copenhagen, Denmark, September 2017.
- [Liu et al.2017b] Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. Adversarial multi-task learning for text classification. In ACL, 2017.
- [Martin and Pu2014] Lionel Martin and Pearl Pu. Prediction of helpful reviews using emotions extraction. In Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI’14), pages 1551–1557, 2014.
- [McAuley and Leskovec2013] J McAuley and J Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. Proceedings of the 7th ACM conference on Recommender systems - RecSys ’13, pages 165–172, 2013.
- [Mou et al.2016] Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. How transferable are neural networks in nlp applications? In EMNLP, 2016.
- [Pan and Yang2010] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359, 2010.
- [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
- [Seo et al.2016] Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603, 2016.
- [Yang et al.2015] Yinfei Yang, Yaowei Yan, Minghui Qiu, and Forrest Bao. Semantic analysis and helpfulness prediction of text for online product reviews. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 38–44, Beijing, China, July 2015. Association for Computational Linguistics.
- [Yang et al.2016] Yinfei Yang, Cen Chen, and Forrest Sheng Bao. Aspect-based helpfulness prediction for online product reviews. In 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI), pages 836–843, Nov 2016.
- [Yang et al.2017] Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. Transfer learning for sequence tagging with hierarchical recurrent networks. ICLR, 2017.
- [Yosinski et al.2014] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
- [Zhang and Yang2017] Yu Zhang and Qiang Yang. A survey on multi-task learning. arXiv preprint arXiv:1707.08114, 2017.
- [Zhang and Yeung2010] Yu Zhang and Dit-Yan Yeung. A convex formulation for learning task relationships in multi-task learning. In UAI, 2010.
- [Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657, 2015.