A Failure of Aspect Sentiment Classifiers and an Adaptive Re-weighting Solution

11/04/2019 ∙ by Hu Xu, et al.

Aspect-based sentiment classification (ASC) is an important task in fine-grained sentiment analysis. Deep supervised ASC approaches typically model this task as pair-wise classification: they take an aspect and a sentence containing the aspect and output the polarity of the aspect in that sentence. However, we discovered that many existing approaches fail to learn an effective ASC classifier and instead behave more like sentence-level sentiment classifiers, because they have difficulty handling sentences with different polarities for different aspects. This paper first demonstrates this problem using several state-of-the-art ASC models. It then proposes a novel and general adaptive re-weighting (ARW) scheme that adjusts the training to dramatically improve ASC for such complex sentences. Experimental results show that the proposed framework is effective.


1 Introduction

Aspect-based sentiment classification (ASC) is an important task in fine-grained sentiment analysis Hu and Liu (2004); Liu (2015), which aims to detect the opinion expressed about an aspect (of an opinion target). It requires not only fine-grained annotation of aspects and their associated opinions, but also more sophisticated classification methods. Unlike document-level sentiment classification, where opinion terms appear frequently in a document and the overall sentiment/opinion of the document is therefore easier to detect Pang et al. (2002); Liu (2015), detecting aspect-level sentiment in short text (e.g., a sentence) requires a more accurate understanding of very fine-grained opinion expressions and the correct association of each of them with its opinion target (or aspect). For example, "The screen is good but not the battery" requires detecting two fine-grained and contrastive opinions within the same sentence: a positive opinion towards "screen" and a negative opinion towards "battery". We found that existing ASC models have great difficulty correctly classifying such contrastive opinions in these sentences.

Review Sentence                            | Sent.-level | Asp.-level
The screen is good.                        | pos         | screen: pos
The screen is good and also the battery.   | pos         | screen: pos; battery: pos
The screen is good but not the battery.    | contrastive | screen: pos; battery: neg
Table 1: A few sentences for ASC with both sentence-level (sent.-level) polarity and aspect-level (asp.-level) polarity: the first two sentences can leverage the sentence-level polarity to answer the aspect-level polarity correctly, but not the last (contrastive) sentence.

Deep supervised ASC approaches typically model ASC as a memory network Weston et al. (2014); Sukhbaatar et al. (2015); Tang et al. (2016). Given two inputs, a sentence s and an aspect term a appearing in s, they build a model f(a, s) = y, where y is the opinion (or sentiment) about a. From the perspective of classification, this formulation is essentially a pair-wise classification task that takes a pair of inputs (a, s) and predicts the class y. One challenge of pair-wise classification is the quadratic space of combinations introduced by the two inputs. This requires a huge number of critical training examples to inform the model what the learning task is and what kinds of interactions between the two inputs are necessary for that task.
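As a concrete, purely illustrative sketch, a BERT-style pair-wise ASC model typically packs the sentence and the aspect into one input sequence before classification; the function name and exact format below are our own assumptions, not the encoding of any specific baseline:

```python
def encode_pair(aspect: str, sentence: str) -> str:
    """Pack an (aspect, sentence) pair into one sequence, BERT-style.

    A pair-wise model f(a, s) = y then classifies this single sequence
    into {positive, negative, neutral} for the given aspect.
    """
    return f"[CLS] {sentence} [SEP] {aspect} [SEP]"

pair = encode_pair("battery", "The screen is good but not the battery.")
```

Note that the same sentence appears once per aspect, so one contrastive sentence yields multiple training pairs with different labels.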

For ASC, we discovered that the available datasets may not provide such rich interactions for effective supervision. In fact, we observed that a lack of sentences with contrastive opinions (we call them contrastive sentences for brevity) can cause an ASC classifier to degrade into a sentence-level sentiment classifier (SSC), as intuitively explained in Table 1. By "contrastive", we mean that two or more different opinions are associated with different aspects appearing in the same sentence. After all, when the training examples contain only sentences whose aspects all share the same opinion (or polarity), the pair-wise model (or a human) can totally ignore the aspect part and use the sentence alone to classify the aspect-level opinion correctly with the overall sentence-level opinion. Contrastive sentences are crucial for ASC, but they are infrequent, as we will see in the Dataset Analysis section. As a result, contrastive sentences are largely ignored in training and only weakly evaluated in testing. This results in the failure of current ASC models to correctly classify contrastive opinions, as shown in the Experiments section. In fact, this is a general issue for most machine learning models, where the majority wins and dominates the training process, and the rare but important examples can easily be ignored and may even be treated as noise, as seen in many class-imbalance and machine-learning-fairness problems. For example, the object detection problem Shrivastava et al. (2016); Lin et al. (2017) in computer vision can easily end up with long-tailed and imbalanced classes of examples, given that it is almost impossible to manually rebalance the objects appearing within an image.

In this paper, we assume that available datasets can easily and unintentionally become imbalanced by following the natural distributions in reviews. We propose to apply a weight to each training example, representing the importance of that example during training. We investigate different methods of computing weights and propose a training scheme called adaptive re-weighting, which dynamically keeps the system focused on examples from contrastive sentences. We show that a model trained with such a scheme can dramatically improve the classification of examples from contrastive sentences, while still keeping competitive or even better performance on the full set of testing examples.

The main contribution is two-fold: (1) It identifies an issue that plagues existing ASC methods, which is clearly manifested in contrastive sentences. Such sentences are essential for the ASC task but are largely ignored. (2) It proposes a re-weighting solution that resolves the issue and improves performance on both contrastive sentences and the full set of testing examples.

2 Dataset Analysis

We adopt the popular SemEval 2014 Task 4 datasets (http://alt.qcri.org/semeval2014/task4) to demonstrate how rare contrastive sentences are. These datasets cover two domains: laptop and restaurant. We further demonstrate in the experiments that normal training on such datasets results in poor performance on contrastive sentences.

                         Laptop  Restaurant
Training Set
  #Sentence                3045     2000
  #Aspect                  2358     1743
  #Positive                 987     2164
  #Negative                 866      805
  #Neutral                  460      633
  #Sent. /w Asp.           1462     1978
  #Contrastive Sent.        165      319
  %Contrastive Sent.      11.3%    16.1%
Testing Set
  #Sentence                 800      676
  #Aspect                   654      622
  #Positive                 341      728
  #Negative                 128      196
  #Neutral                  169      196
  #Sent. /w Asp.            411      600
  #Contrastive Sent.         38       80
  %Contrastive Sent.       9.2%    13.3%
Table 2: Summary of SemEval14 Task 4 on aspect sentiment classification. #Sentence: number of sentences; #Aspect: number of aspects; #Positive, #Negative, and #Neutral: number of aspects with positive, negative, and neutral opinions, respectively; #Sent. /w Asp.: number of sentences with at least one aspect that is associated with a positive, negative, or neutral opinion; #Contrastive Sent.: number of sentences with aspects associated with different opinions; %Contrastive Sent.: percentage of contrastive sentences among sentences with at least one aspect.

As shown in Table 2, we first examine the overall statistics of these datasets. We decompose these statistics to gain deeper insights into what may lead to a failed ASC classifier. We notice that although these datasets contain a moderate number of training sentences for laptop, the proportion of sentences with at least one aspect (and thus with opinion polarities) is less than 50%, as shown by #Sent. /w Asp.

Further, as discussed in the introduction, we are particularly interested in contrastive sentences, which have more than one aspect associated with different opinions (#Contrastive Sent.). Those sentences carry the critical training examples (information) for ASC, because the remaining examples have only one polarity per sentence (even with two or more aspects), so the overall sentence-level opinion can be applied as the aspect-level opinion, effectively downgrading ASC to SSC (sentence-level sentiment classification).

We notice that contrastive sentences are rare in both the training and test sets of both domains. Laptop suffers even more from the shortage of contrastive sentences because of its shortage of sentences with at least one aspect. If we consider their percentage (%Contrastive Sent.), the training set of restaurant has only about 16% and laptop only about 11%. With SSC-like examples dominating the training set, a machine learning model trained on such a set is susceptible to ignoring the aspect and mostly performing SSC.
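The statistics above follow directly from the (sentence, aspect, polarity) annotations: a sentence is contrastive when its aspects carry more than one distinct polarity. A minimal sketch (field names and the toy records are ours):

```python
from collections import defaultdict

def contrastive_stats(records):
    """records: iterable of (sentence_id, aspect, polarity) triples.

    Returns (#sentences with >= 1 aspect, #contrastive sentences,
    fraction of contrastive sentences among sentences with aspects).
    """
    polarities = defaultdict(set)
    for sent_id, _aspect, polarity in records:
        polarities[sent_id].add(polarity)
    with_aspect = len(polarities)
    # Contrastive: aspects of the sentence carry more than one polarity.
    contrastive = sum(1 for p in polarities.values() if len(p) > 1)
    return with_aspect, contrastive, contrastive / with_aspect

records = [
    (1, "screen", "pos"),
    (2, "screen", "pos"), (2, "battery", "pos"),
    (3, "screen", "pos"), (3, "battery", "neg"),
]
```

Applied to the SemEval14 annotations, this kind of pass yields the %Contrastive Sent. rows of Table 2.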

                         Laptop  Restaurant
Contrastive Test Set
  #Contrastive Sent.         78       80
  #Aspect                   203      228
  #Positive                  72       85
  #Negative                  71       60
  #Neutral                   60       83
Table 3: Summary of the Contrastive Test Set.

What is worse, the test set for the laptop domain contains only 38 contrastive sentences. This further poses a challenge for evaluating the ASC capability on laptop, as only contrastive sentences can evaluate the true capability of ASC. To solve this problem, two annotators were asked to follow the annotation instructions of SemEval 2014 Task 4 and annotate additional contrastive sentences (so as to have a similar total number of contrastive sentences as restaurant) from laptop reviews He and McAuley (2016). Disagreements were discussed until agreement was reached. The main complaint from the annotators is that finding such sentences takes a lot of time, as they are infrequent. By combining the additional contrastive sentences with the contrastive sentences from the original test set, we form a new contrastive test set dedicated to testing the true ASC capability of ASC classifiers. Note that there is no change to the training set for the laptop domain and no change to either the training or the test set of the restaurant domain. The final statistics of the contrastive test set are shown in Table 3. To simplify our description, we refer to the original test set as the full test set. Note that we DO NOT add the extra contrastive sentences to the full test set, to keep the results comparable with existing approaches. We evaluate the failure of existing ASC classifiers on the contrastive test set in the Experiments section, and discuss our example re-weighting scheme, which focuses training on rare contrastive sentences, in the next section.

3 Adaptive Re-Weighting

In this section, we first describe the motivation for developing a new training scheme instead of following the canonical training process. Then we describe the general idea of designing the adaptive re-weighting (ARW) scheme and the detailed scheme afterward.

3.1 Motivation

Given that examples from contrastive sentences are rare, the first question one may ask is how a deep learning model learns from those rare examples under the existing training process. Existing research has shown that rare and noisy examples are seldom optimized at the early stage of training (e.g., during the first few epochs) Gao and Jojic (2016). Intuitively, in the beginning, the losses from the majority examples dominate the total loss, and they determine the direction of parameter updates based on their gradients; the losses from the majority examples thus become smaller over the next few iterations. At a later stage, although the loss from a rare example can be larger than that from a majority example, the rare example still may not contribute enough to the total loss, because the loss in each batch is averaged over all examples, even though the losses from the majority examples are smaller by then. Also, as the rare examples can be rather diverse, it is unlikely that a similar rare example will appear in another batch to have a similar impact. In the worst case, the rare examples' losses are only taken care of once the optimizer starts to overfit minor details of the majority examples. When the validation process kicks in for early stopping, which aims to avoid overfitting, it may stop training before the rare examples are well optimized. To demonstrate this observation, we show in the experiments how many incorrectly classified training examples come from contrastive sentences.
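As a toy numeric illustration (the numbers are invented for exposition, not taken from our experiments): with a batch-averaged loss, a single hard rare example contributes little even when its individual loss is large.

```python
# 31 well-fit majority examples and 1 poorly-fit rare example in a batch of 32.
losses = [0.05] * 31 + [2.0]
batch_loss = sum(losses) / len(losses)   # mean loss over the batch
rare_share = 2.0 / len(losses)           # the rare example's contribution

# batch_loss is about 0.111; the rare example adds only 2.0/32 = 0.0625,
# so the gradient signal is still dominated by the majority examples.
```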

Given this unwanted behavior of optimization, a natural idea for solving the rare-example problem is to detect the examples from contrastive sentences at an earlier stage of training and increase (or rebalance) their losses well before the validation process finds the best model. One natural way to increase those losses is to give higher weights to the examples from contrastive sentences that are not optimized well. The total loss (per batch of optimization) is then the weighted sum of the losses of the examples within that batch. This process of adjusting example weights should be dynamic in nature, because a used-to-be-easy example can become incorrect later and vice versa. For example, in "The screen is good but not the battery.", increasing the loss for aspect "battery" can make "screen" incorrect later, which in turn requires increasing the loss for "screen". Further, note that although the model can easily identify contrastive sentences from the polarity labels during preprocessing/training, it has no access to which example comes from a contrastive sentence during validation or testing. Tackling those sentences must therefore be done automatically during training.

[Algorithm LABEL:alg:arw (the ARW training scheme) appears here as a float; Section 3.2.1 gives a line-by-line description.]

3.2 Proposed Training Scheme

Given the above analysis, we aim to design an adaptive scheme that keeps adjusting the weights of the losses of examples from contrastive sentences (which are known in the training set). Increasing losses can be modeled by assigning a weight w_i to each training example i, so that the total loss L is computed as the weighted sum of the example-wise losses; an example with a higher weight thus contributes more to L. As deep learning models are typically trained batch-by-batch, we define the total loss as the loss over a batch. Let l_1, ..., l_n be the example-wise losses for the n examples within a batch; then L = Σ_i (w_i / Σ_j w_j) l_i. Since a batch is randomly drawn from the training set, we re-normalize the weights within the batch to avoid fluctuation caused by randomly drawing examples with weights of different magnitudes.
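A minimal sketch of this batch-level weighted loss in pure Python, with the within-batch re-normalization made explicit (the function name is ours; losses and weights are assumed to be plain lists):

```python
def batch_weighted_loss(losses, weights):
    """Weighted total loss for one batch.

    Weights are re-normalized within the batch so the total loss does
    not fluctuate with the magnitude of the randomly sampled weights.
    """
    total_w = sum(weights)
    return sum((w / total_w) * l for w, l in zip(weights, losses))

# With uniform weights this reduces to the ordinary batch mean:
loss = batch_weighted_loss([1.0, 3.0], [1.0, 1.0])  # 2.0
```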

Then the next issue is when and how to adjust the weights. We assume a uniform distribution over weights at the beginning, i.e., w_i = 1/n. A natural point for adjusting the weight of each example is the end of each training epoch: every example has been consumed once, and the model can then focus on the examples from contrastive sentences that are not treated well (incorrectly classified). To adjust the weights, the first step is to find the incorrect examples, e_i = 1 if ŷ_i ≠ y_i and e_i = 0 otherwise, where ŷ_i is the prediction of the i-th training example from the current model and y_i is the ground truth. Then we pick those incorrect examples that are from contrastive sentences via an indicator variable c_i, which tells whether the sentence is a contrastive sentence or not. Computing c_i requires preprocessing to know which sentences have more than one polarity of aspects (the training data contains this information). For research question 5 (RQ5) in the experiments, we perform an ablation study on whether this term is important. Note that existing research (e.g., Lin et al. (2017) in object detection) favors a continuous loss-based weighting function over a correctness-based weighting function. We find that a correctness-based weighting function is better at deciding which examples should improve when we address RQ4 in the experiments.

Now we estimate the overall weighted error rate ε to detect whether the current model tends to make more mistakes on contrastive sentences. Note that the reason for using the weighted error rate instead of the plain error rate is that the weighted error rate reflects the hardness of optimizing examples from contrastive sentences rather than simple example-level errors. We detail the formula in the next subsection. When the weighted error rate is high (e.g., ε > 0.5), instead of increasing the weights of incorrect examples from contrastive sentences, we probably need to reduce them so as to avoid learning too much noise. Lastly, the weight adjustment for incorrect examples from contrastive sentences is determined by the (correct-versus-incorrect) ratio (1 − ε)/ε: when this ratio is larger than 1, multiplying by it increases the weights, and otherwise decreases them. We also introduce a weight assignment factor δ, a hyperparameter controlling whether the model should favor even larger weight adjustments (e.g., δ > 1) or not (e.g., δ = 1). We detail the proposed ARW algorithm in the next subsection.

3.2.1 ARW Algorithm

The proposed ARW algorithm is shown in Algorithm LABEL:alg:arw. In Line 1, it initializes the weights of all training examples uniformly. Lines 2-14 pass through the training data epoch-by-epoch and update the example weights at the end of each epoch. Specifically, Line 3 retrieves one randomly sampled batch of aspects a, sentences s, polarity labels y, and their (current) corresponding weights w. Line 6 makes a forward pass on the aspects a and sentences s. Then we compute the example-wise loss l_i for each training example in the batch. Line 7 computes the weighted loss and re-normalizes the weights throughout the batch to get the total loss L. Line 8 does normal backpropagation and parameter updating as in ordinary neural network training. Line 10 gets the predictions on the training set. Line 11 first discovers the hard examples, represented by the indicators e_i and c_i, and then computes the weighted error rate ε = Σ_i w_i e_i c_i / Σ_i w_i. Line 12 computes the log of the correct-incorrect ratio, α = log((1 − ε)/ε); α > 0 indicates increasing the weights and α < 0 means decreasing them. Lastly, in Line 13, we only adjust the weights of incorrect examples from contrastive sentences, w_i ← w_i · exp(δ α e_i c_i); the weights of correctly classified (easy) examples are always multiplied by 1. As a result, Algorithm LABEL:alg:arw keeps track of the weights of all training examples and always focuses on adjusting the weights of incorrect examples from contrastive sentences. We also perform a normal validation process after each epoch (omitted from Algorithm LABEL:alg:arw for brevity).

4 Experiments

Our experiment consists of two parts: (1) show the failure of existing approaches and (2) demonstrate the effectiveness of the ARW scheme. We focus on the following research questions (RQs):
RQ1: How do existing ASC systems perform on the contrastive sentences in the test data (the Contrastive Test Set)?
RQ2: What is the performance of an ASC model trained with manually assigned, fixed higher weights for contrastive sentences only?
RQ3: How does the proposed ARW system perform compared with the above baselines?
RQ4: How does a loss-based weighting function (such as the well-known focal loss Lin et al. (2017)) perform compared to ARW?
RQ5: How important is the contrastive-sentence indicator term (in Lines 11 and 13), given that it needs preprocessing to determine which sentences are contrastive?
RQ6: Can ARW tackle more examples from contrastive sentences before early stopping (via the validation set)?

4.1 Failure of Existing Approaches

4.1.1 ASC Baselines

To demonstrate existing ASC systems’ difficulty with contrastive sentences, we used a range of ASC baselines and tested their performance on examples from contrastive sentences (contrastive test set). We evaluate all baselines on both accuracy (Acc.) and macro F1 (MF1).

RAM Chen et al. (2017) (the first four baselines are adopted from https://github.com/songyouwei/ABSA-PyTorch). This system proposes a multiple-attention mechanism to capture sentiment features separated by a long distance, so that it is more robust against irrelevant information. The weighted-memory and attention mechanism not only helps avoid labor-intensive feature engineering but also provides a tailor-made memory for the different opinion targets of a sentence.
AOA Huang et al. (2018). This system introduces an attention-over-attention (AOA) neural network, which models aspects and sentences jointly and explicitly captures the interaction between the aspects and the sentence context.
MGAN Li et al. (2018b). This method combines fine-grained and coarse-grained attention mechanisms into the MGAN framework. It also has an aspect alignment loss to depict the aspect-level interactions among aspects that share the same context.
TNET Li et al. (2018a). This system employs a CNN layer to extract salient features from the transformed word representations originating from a bi-directional RNN layer. Between the two layers, TNET has a component that generates target-specific representations of words while incorporating a mechanism for preserving the original contextual information from the RNN layer.
BERT-DK Xu et al. (2019) (https://github.com/howardhsu/BERT-for-RRC-ABSA). This is a BERT-based model Devlin et al. (2018) that recently achieved state-of-the-art results on ASC. Based on BERT, it first performs masked language modeling and next sentence prediction on the pre-trained BERT weights using domain (laptop or restaurant) reviews, and is then fine-tuned on supervised ASC data. We choose BERT-DK for its easy-to-understand implementation without extra supervised tasks (such as reading comprehension) and for its performance. We further challenge this model by removing the aspects from the test examples, which requires no architecture change. In this way, the BERT-DK model has no way to check the aspect during testing; we want to see whether its performance on the Full Test Set is affected much. Note that this is not a traditional sentence-level classifier, as the training process is still under the ASC task.

                                             Laptop            Rest.
                                          Acc.    MF1      Acc.    MF1
RAM Chen et al. (2017)
  on Full Test Set                        74.49   71.35    80.23   70.8
  on Contrastive Test Set                 41.87   38.65    52.19   55.19
AOA Huang et al. (2018)
  on Full Test Set                        74.5    -        81.2    -
  on Contrastive Test Set                 42.86   33.53    42.98   33.66
MGAN Li et al. (2018b)
  on Full Test Set                        75.39   72.47    81.25   71.94
  on Contrastive Test Set                 46.8    43.38    53.95   57.64
TNET Li et al. (2018a)
  on Full Test Set                        76.54   71.75    80.69   71.27
  on Contrastive Test Set                 49.75   49.86    56.58   58.05
BERT-DK Xu et al. (2019)
  on Full Test Set                        76.9    73.65    84.21   76.2
  on Full Test Set w/o aspect             76.0    73.05    80.03   72.95
  on Contrastive Test Set                 51.13   50.04    65.53   66.92
BERT-DK + Manual Re-weighting
  on Full Test Set                        75.41   71.99    84.36   76.35
  on Contrastive Test Set                 53.45   52.76    68.03   69.51
BERT-DK + Focal Loss Lin et al. (2017)
  on Full Test Set                        76.33   73.24    84.57   76.56
  on Contrastive Test Set                 51.48   50.43    66.4    67.14
BERT-DK + ARW
  on Full Test Set                        73.71   69.63    84.5    77.58
  on Contrastive Test Set                 57.29   56.53    73.99   74.63
BERT-DK + ARW w/ manual initial weighting
  on Full Test Set                        70.08   65.89    84.48   77.41
  on Contrastive Test Set                 55.37   54.68    75.31   75.81
BERT-DK + ARW w/o contrastive indicator
  on Full Test Set                        77.23   73.81    85.35   78.46
  on Contrastive Test Set                 61.08   60.34    71.84   72.66
Table 4: Performance of the ASC baselines and the proposed ARW scheme on both the Full Test Set and the Contrastive Test Set; the BERT-DK model is additionally tested on examples with the aspects removed (on Full Test Set w/o aspect).

4.1.2 Baseline Result Analysis

From Table 4, we can see that the existing ASC classifiers perform poorly on the contrastive test sets, which contain only real ASC examples. To answer RQ1, we find that all baselines drop significantly on both accuracy (Acc.) and F1, whereas most existing models reach more than 70% on both metrics on the full test set. Lastly, when the aspects are dropped from the input (on Full Test Set w/o aspect), the BERT-DK ASC classifier drops only a little and remains comparable to the other baselines on the full test set. Since this experiment has no access to the aspects but only to the review sentences, it indicates that the model DOES NOT rely much on aspects when doing aspect-level sentiment classification.

4.2 ARW

The results in the above subsection justify the need for evaluating ASC on the contrastive test set and for improving performance on that set. Since an ideal ASC model should also remain fully functional on non-contrastive sentences, we still evaluate ARW and the baselines on the full test set. In this set of experiments, we compare ARW with various re-weighting schemes.

4.2.1 Compared Methods

We use BERT-DK as the base model to compare the following re-weighting schemes.
+Manual Re-weighting This baseline uses pre-defined weights for examples from contrastive / non-contrastive sentences. To answer RQ2, a natural way to balance examples from contrastive and non-contrastive sentences is to use the example counts as weights. Let n be the total number of training examples and m the number of training examples from contrastive sentences; we give examples from contrastive sentences weight (n − m)/n and all other examples weight m/n, so examples from contrastive sentences receive the higher weights. These weights are again re-normalized within a batch. Note that we also experimented with a number of other manual weighting schemes, and this method performs best.
+Focal Loss To answer RQ4, we leverage the well-known focal loss from object detection. The weight for each example is computed as (1 − p)^γ, where p is the predicted probability of the ground-truth label (from softmax) and γ is a hyper-parameter. We search over this hyper-parameter and report results with the best value found.
+ARW This is the proposed training scheme, intended to answer RQ3.
+ARW w/ manual initial weighting We further investigate using +Manual Re-weighting's weighting function for the initial weights and then applying ARW for adaptive re-weighting.
+ARW w/o contrastive indicator This is the proposed training scheme without access to the preprocessed labels for contrastive sentences, intended to answer RQ5. Note that this variant discovers all incorrect examples, which may include examples from contrastive sentences. We search over the weight assignment factor and report results with the best value found.
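For concreteness, the two fixed weighting functions above can be sketched as follows; the function names are ours, the manual scheme follows the inverse-count description above, and the focal form follows Lin et al. (2017) (the default γ below is illustrative, not the value selected by our search):

```python
def manual_weight(is_contrastive, n_total, n_contrastive):
    """Fixed inverse-count weights: examples from contrastive sentences get
    the larger weight (n - m)/n, all others m/n (re-normalized per batch)."""
    m = n_contrastive
    return (n_total - m) / n_total if is_contrastive else m / n_total

def focal_weight(p_true, gamma=2.0):
    """Focal-loss-style weight (1 - p)^gamma, where p_true is the softmax
    probability assigned to the ground-truth label."""
    return (1.0 - p_true) ** gamma
```

Note that manual_weight is static for the whole run, and focal_weight depends only on the prediction confidence; neither adapts to whether a mistake comes from a contrastive sentence, which is the gap ARW targets.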

4.2.2 Hyper-parameters

For all methods, we use the Adam optimizer with a learning rate of 3e-5 and a batch size of 32. For model selection, we hold out 150 examples from the training set as a validation set. We found experimentally that ARW takes longer to converge than ordinary training of a BERT-based model: for the laptop domain it typically converges at the 8th or 9th epoch, and for the restaurant domain at the 5th or 6th. So we set the maximum number of epochs to 12. All results are averaged over 10 runs.

4.2.3 ARW Result Analysis

                                               Laptop  Restaurant
# total examples                                 2163      3452
BERT-DK
  # incorrect examples from contra. sent.         148       228
BERT-DK + ARW w/o contrastive indicator
  # incorrect examples from contra. sent.          47       201
Table 5: Number of incorrectly predicted training examples from contrastive sentences (# incorrect examples from contra. sent.) in one run of training; training is early-stopped via the validation set.

The results are also shown in Table 4. To answer RQ2, we observe that manual re-weighting improves performance on the contrastive test sets by about 3% for both laptop and restaurant. After manual re-weighting, performance on the full test set improves on restaurant but drops slightly on laptop. The reason could be that the manual weights are not ideal for learning and may overemphasize the rare examples from contrastive sentences in the laptop training data.

To answer RQ3 and RQ5, we find that BERT-DK + ARW w/o the contrastive indicator mostly outperforms the other baselines. Compared with BERT-DK, the improvement is around 10% for laptop and 6% for restaurant. Regarding overall performance on the full test set, BERT-DK + ARW w/o the contrastive indicator shows a marked improvement in the restaurant domain; when we examine the examples, the contribution largely comes from neutral examples. Its performance on laptop is slightly better than BERT-DK. One reason could be that examples from contrastive sentences are too rare relative to annotation errors in laptop, so the model learns some annotation errors. Overall, these numbers indicate that BERT-DK + ARW w/o the contrastive indicator still functions well under the traditional evaluation of ASC, while significantly improving performance on contrastive sentences, which truly test aspect-level sentiment classification ability. Further, we notice that both BERT-DK + ARW and BERT-DK + ARW w/ manual initial weighting tend to overfit the examples from contrastive sentences: they drop considerably on sentences with a single polarity for laptop. For restaurant, BERT-DK + ARW w/ manual initial weighting has the best performance on the contrastive test set, indicating that manual re-weighting yields a better weight initialization than the uniform initialization in BERT-DK + ARW.

To answer RQ4, we notice that focal loss does not perform very well on our problem: its performance on the contrastive test set is only slightly better than BERT-DK. We believe the reason is that the numeric value of the probability cannot explicitly distinguish whether the model is making a mistake on an example, and thus provides poor weights for examples from contrastive sentences.

To answer RQ6, we further investigate the behavior of both BERT-DK and BERT-DK + ARW w/o the contrastive indicator when training is early-stopped by the validation set, as shown in Table 5. We notice that the normal training of a deep learning model (BERT-DK) naturally leaves more examples from contrastive sentences unresolved, explaining why BERT-DK performs poorly on the contrastive test set. BERT-DK + ARW w/o the contrastive indicator clearly takes care of more examples from contrastive sentences before the validation set finds the best model.

4.2.4 Error Analysis

Regarding errors on the Contrastive Test Set, we noticed that, given the limited number of contrastive sentences in training, some implicit sentiment transitions (i.e., switching without a word like "but") are hard to learn (e.g., "The screen is great and I can live with the keyboard's slightly smaller size."). Also, contrastive sentences with neutral polarity may be harder, because there may be no explicit transition but just one aspect with a pos/neg opinion and one aspect with no opinion (neutral). We believe using larger unlabeled corpora for training could benefit the contrastive test set; we leave that to future work. For the Full Test Set, diverse and rare opinion expressions remain a very challenging problem. Further, some fine-grained or uncommon opinion expressions are hard even for human annotators to recognize, resulting in annotation errors.

5 Related Work

Aspect sentiment classification (ASC) Hu and Liu (2004) is an important task in sentiment analysis Pang et al. (2002); Liu (2015). It differs from document- or sentence-level sentiment classification (SSC) Pang et al. (2002); Kim (2014); He and Zhou (2011); He et al. (2011) in that it focuses on the fine-grained opinion about each specific aspect Shu et al. (2017); Xu et al. (2018). It is studied either as a single task or as a joint learning task together with aspect extraction Wang et al. (2017b); Li and Lam (2017); Li et al. (2019). The problem has been widely addressed with neural networks in recent years Dong et al. (2014); Nguyen and Shirai (2015); Li et al. (2018a). Memory networks and attention mechanisms are extensively applied to ASC, e.g., Tang et al. (2016); Wang et al. (2016a, b); Ma et al. (2017); Chen et al. (2017); Tay et al. (2018); He et al. (2018a); Liu et al. (2018); Xu et al. (2019). Memory networks Weston et al. (2014); Sukhbaatar et al. (2015) are a type of neural network that typically takes two inputs and learns the interactions between them via attention mechanisms Bahdanau et al. (2014). ASC has also been studied in transfer learning or domain adaptation settings, such as leveraging large-scale corpora that are unlabeled or weakly labeled (e.g., using the overall rating of a review as the label) Xu et al. (2019); He et al. (2018b); Xu et al. (2018), and transferring from other tasks/domains Li et al. (2018b); Wang et al. (2018a, b).

Contrastive opinions were studied as a topic modeling problem in Ibeke et al. (2017) to discover contrastive opinions on the same opinion target from different holders, as in discussions. However, to the best of our knowledge, existing approaches and evaluations do not focus on contrastive sentences in aspect-based sentiment classification, i.e., sentences that express opposite opinions on different aspects from the same opinion holder. Yet such sentences truly reveal the capability of ASC models.

The rare instance problem can be regarded as an imbalanced data problem in machine learning in general. Most existing studies on imbalanced data focus on imbalanced classes or skewed class distributions, e.g., classes with very few examples Huang et al. (2016); Buda et al. (2018); Tantithamthavorn et al. (2018); Johnson and Khoshgoftaar (2019). Object detection Shrivastava et al. (2016); Lin et al. (2017) is a popular computer vision problem involving long-tailed and imbalanced classes of examples, since it is almost impossible to manually re-balance the objects that appear within an image. In Lin et al. (2017), loss-based weights are proposed to automatically adjust example weights without explicitly re-balancing the complex class distribution.
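The loss-based weighting idea of Lin et al. (2017) (focal loss) can be sketched as follows. This is a minimal illustration only; the function name and interface are ours, not from that paper, and in practice the loss is applied per class over batched logits.

```python
import math

def focal_loss(p_correct, gamma=2.0):
    """Focal loss (Lin et al., 2017): scales cross-entropy by (1 - p)^gamma,
    so well-classified (easy) examples contribute far less to the loss.

    p_correct: the model's predicted probability for the true class.
    gamma: focusing parameter; gamma = 0 recovers standard cross-entropy.
    """
    return -((1.0 - p_correct) ** gamma) * math.log(p_correct)

# An easy example (p = 0.9) is down-weighted much more than a hard one (p = 0.1):
easy = focal_loss(0.9)
hard = focal_loss(0.1)
```

With gamma = 2, the easy example's loss shrinks by a factor of (1 - 0.9)^2 = 0.01 relative to cross-entropy, which is what lets training focus on the rare, hard examples without manually re-balancing the data.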

Our example re-weighting algorithm is related to AdaBoost Freund and Schapire (1997), a well-known ensemble algorithm that makes predictions collectively via a sequence of weak classifiers. When building each weak classifier, the weak learner focuses on the examples that were classified wrongly by the previous classifier, and a weighted vote of the weak classifiers serves as the final ensemble classifier. Our work is different: we build only one classifier rather than a sequence as in AdaBoost, and our model is not an ensemble. Our weight updating also differs from AdaBoost’s, as we update weights in each epoch of training. However, our approach shares AdaBoost’s spirit of discovering the weaknesses of the current model on the training set: we improve the training of a deep learning model by adaptively discovering incorrectly classified examples, which cover contrastive sentences, and giving them higher weights in subsequent training. We also note that AdaBoost is rarely used in deep learning Schwenk and Bengio (2000); Mosca and Magoulas (2017), probably because deep learning models are complex and are not weak learners.
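To make the contrast with AdaBoost concrete, one epoch-level re-weighting step in the spirit described above can be sketched roughly as follows. This is a hypothetical sketch, not the paper's exact ARW update rule: the multiplicative `boost` factor and the normalization step are illustrative assumptions.

```python
def update_weights(weights, correct_mask, boost=1.5, normalize=True):
    """Epoch-level example re-weighting sketch (ARW-like, not the exact rule):
    examples the current model misclassifies have their weights increased,
    so the next epoch's weighted loss focuses on them.

    weights: per-example training weights.
    correct_mask: True where the model classified the example correctly.
    boost: factor applied to misclassified examples (an assumed value).
    """
    new_w = [w if ok else w * boost for w, ok in zip(weights, correct_mask)]
    if normalize:
        s = sum(new_w)
        new_w = [w * len(new_w) / s for w in new_w]  # keep mean weight at 1
    return new_w

# The third example was misclassified, so it now carries the largest weight:
w = update_weights([1.0, 1.0, 1.0, 1.0], [True, True, False, True])
```

Unlike AdaBoost, no new classifier is trained per round: the same model keeps training, with the weighted loss steering it toward the hard (often contrastive) examples.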

Example (or instance) (re-)weighting is also leveraged in transfer learning and domain adaptation Jiang and Zhai (2007); Foster et al. (2010); Xia et al. (2013); Wang et al. (2017a) and sentiment analysis Pappas and Popescu-Belis (2014), but both the purpose and the method of weighting are entirely different. Re-weighting is also commonly used to deal with noise in the training data; there, however, the goal is to down-weight possibly noisy training examples Rebbapragada and Brodley (2007), not to up-weight hard but critical examples during training as we do.

6 Conclusion

In this work, we observed a key failure of existing ASC classifiers: they have great difficulty classifying contrastive sentences with multiple aspects and multiple different opinions, which are, in fact, the true test of aspect sentiment classifiers. We further showed that this difficulty is mainly caused by the rarity of contrastive sentences in training data. One solution is to assign higher weights to such examples during training. However, instead of assigning higher weights manually, we proposed an automatic adaptive method, ARW, which discovers incorrectly classified examples covering contrastive sentences at a certain stage of training and adaptively assigns them higher weights. Experimental results show that our method is highly effective in handling contrastive sentences, which are crucial for the ASC task, while also performing very well on the full test set.

References

  • D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §5.
  • M. Buda, A. Maki, and M. A. Mazurowski (2018) A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks 106, pp. 249–259. Cited by: §5.
  • P. Chen, Z. Sun, L. Bing, and W. Yang (2017) Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 conference on empirical methods in natural language processing, pp. 452–461. Cited by: §4.1.1, Table 4, §5.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §4.1.1.
  • L. Dong, F. Wei, C. Tan, D. Tang, M. Zhou, and K. Xu (2014) Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 2: Short papers), Vol. 2, pp. 49–54. Cited by: §5.
  • G. Foster, C. Goutte, and R. Kuhn (2010) Discriminative instance weighting for domain adaptation in statistical machine translation. In Proceedings of the 2010 conference on empirical methods in natural language processing, pp. 451–459. Cited by: §5.
  • Y. Freund and R. E. Schapire (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences 55 (1), pp. 119–139. Cited by: §5.
  • T. Gao and V. Jojic (2016) Sample importance in training deep neural networks. Cited by: §3.1.
  • R. He, W. S. Lee, H. T. Ng, and D. Dahlmeier (2018a) Effective attention modeling for aspect-level sentiment classification. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 1121–1131. Cited by: §5.
  • R. He, W. S. Lee, H. T. Ng, and D. Dahlmeier (2018b) Exploiting document knowledge for aspect-level sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Cited by: §5.
  • R. He and J. McAuley (2016) Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pp. 507–517. Cited by: §2.
  • Y. He, C. Lin, and H. Alani (2011) Automatically extracting polarity-bearing topics for cross-domain sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 123–131. Cited by: §5.
  • Y. He and D. Zhou (2011) Self-training from labeled features for sentiment analysis. Information Processing & Management 47 (4), pp. 606–616. Cited by: §5.
  • M. Hu and B. Liu (2004) Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 168–177. Cited by: §1, §5.
  • B. Huang, Y. Ou, and K. M. Carley (2018) Aspect level sentiment classification with attention-over-attention neural networks. In International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, pp. 197–206. Cited by: §4.1.1, Table 4.
  • C. Huang, Y. Li, C. Change Loy, and X. Tang (2016) Learning deep representation for imbalanced classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5375–5384. Cited by: §5.
  • E. Ibeke, C. Lin, A. Wyner, and M. H. Barawi (2017) Extracting and understanding contrastive opinion through topic relevant sentences. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Taipei, Taiwan, pp. 395–400. External Links: Link Cited by: §5.
  • J. Jiang and C. Zhai (2007) Instance weighting for domain adaptation in NLP. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, pp. 264–271. External Links: Link Cited by: §5.
  • J. M. Johnson and T. M. Khoshgoftaar (2019) Survey on deep learning with class imbalance. Journal of Big Data 6 (1), pp. 27. Cited by: §5.
  • Y. Kim (2014) Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1746–1751. Cited by: §5.
  • X. Li, L. Bing, W. Lam, and B. Shi (2018a) Transformation networks for target-oriented sentiment classification. arXiv preprint arXiv:1805.01086. Cited by: §4.1.1, Table 4, §5.
  • X. Li, L. Bing, P. Li, and W. Lam (2019) A unified model for opinion target extraction and target sentiment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 6714–6721. Cited by: §5.
  • X. Li and W. Lam (2017) Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2886–2892. Cited by: §5.
  • Z. Li, Y. Wei, Y. Zhang, X. Zhang, X. Li, and Q. Yang (2018b) Exploiting coarse-to-fine task transfer for aspect-level sentiment classification. arXiv preprint arXiv:1811.10999. Cited by: §4.1.1, Table 4, §5.
  • T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In ICCV, pp. 2980–2988. Cited by: §1, §3.2, Table 4, §4, §5.
  • B. Liu (2015) Sentiment analysis: mining opinions, sentiments, and emotions. Cambridge University Press. Cited by: §1, §5.
  • Q. Liu, H. Zhang, Y. Zeng, Z. Huang, and Z. Wu (2018) Content attention model for aspect based sentiment analysis. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pp. 1023–1032. Cited by: §5.
  • D. Ma, S. Li, X. Zhang, and H. Wang (2017) Interactive attention networks for aspect-level sentiment classification. arXiv preprint arXiv:1709.00893. Cited by: §5.
  • A. Mosca and G. D. Magoulas (2017) Deep incremental boosting. arXiv preprint arXiv:1708.03704. Cited by: §5.
  • T. H. Nguyen and K. Shirai (2015) PhraseRNN: phrase recursive neural network for aspect-based sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 2509–2514. External Links: Link, Document Cited by: §5.
  • B. Pang, L. Lee, and S. Vaithyanathan (2002) Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pp. 79–86. Cited by: §1, §5.
  • N. Pappas and A. Popescu-Belis (2014) Explaining the stars: weighted multiple-instance learning for aspect-based sentiment analysis. In Proceedings of the 2014 Conference on Empirical Methods In Natural Language Processing (EMNLP), pp. 455–466. Cited by: §5.
  • U. Rebbapragada and C. E. Brodley (2007) Class noise mitigation through instance weighting. In European Conference on Machine Learning, pp. 708–715. Cited by: §5.
  • H. Schwenk and Y. Bengio (2000) Boosting neural networks. Neural computation 12 (8), pp. 1869–1887. Cited by: §5.
  • A. Shrivastava, A. Gupta, and R. Girshick (2016) Training region-based object detectors with online hard example mining. In CVPR, pp. 761–769. Cited by: §1, §5.
  • L. Shu, H. Xu, and B. Liu (2017) Lifelong learning CRF for supervised aspect extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada, pp. 148–154. External Links: Link, Document Cited by: §5.
  • S. Sukhbaatar, J. Weston, R. Fergus, et al. (2015) End-to-end memory networks. In Advances in neural information processing systems, pp. 2440–2448. Cited by: §1, §5.
  • D. Tang, B. Qin, and T. Liu (2016) Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 214–224. External Links: Link, Document Cited by: §1, §5.
  • C. Tantithamthavorn, A. E. Hassan, and K. Matsumoto (2018) The impact of class rebalancing techniques on the performance and interpretation of defect prediction models. IEEE Transactions on Software Engineering. Cited by: §5.
  • Y. Tay, L. A. Tuan, and S. C. Hui (2018) Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §5.
  • R. Wang, M. Utiyama, L. Liu, K. Chen, and E. Sumita (2017a) Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1482–1488. Cited by: §5.
  • S. Wang, G. Lv, S. Mazumder, G. Fei, and B. Liu (2018a) Lifelong learning memory networks for aspect sentiment classification. In 2018 IEEE International Conference on Big Data (Big Data), pp. 861–870. Cited by: §5.
  • S. Wang, S. Mazumder, B. Liu, M. Zhou, and Y. Chang (2018b) Target-sensitive memory networks for aspect sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 957–967. Cited by: §5.
  • W. Wang, S. J. Pan, D. Dahlmeier, and X. Xiao (2016a) Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 616–626. Cited by: §5.
  • W. Wang, S. J. Pan, D. Dahlmeier, and X. Xiao (2017b) Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Thirty-First AAAI Conference on Artificial Intelligence, Cited by: §5.
  • Y. Wang, M. Huang, L. Zhao, et al. (2016b) Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pp. 606–615. Cited by: §5.
  • J. Weston, S. Chopra, and A. Bordes (2014) Memory networks. arXiv preprint arXiv:1410.3916. Cited by: §1, §5.
  • R. Xia, X. Hu, J. Lu, J. Yang, and C. Zong (2013) Instance selection and instance weighting for cross-domain sentiment classification via pu learning. In Twenty-Third International Joint Conference on Artificial Intelligence, Cited by: §5.
  • H. Xu, B. Liu, L. Shu, and P. S. Yu (2018) Double embeddings and CNN-based sequence labeling for aspect extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia, pp. 592–598. External Links: Link, Document Cited by: §5.
  • H. Xu, B. Liu, L. Shu, and P. S. Yu (2019) BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, Cited by: §4.1.1, Table 4, §5.
  • H. Xu, B. Liu, L. Shu, and P. S. Yu (2018) Lifelong domain word embedding via meta-learning. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 4510–4516. Cited by: §5.
  • H. Xu, B. Liu, L. Shu, and P. S. Yu (2019) Review conversational reading comprehension. arXiv preprint arXiv:1902.00821. Cited by: §5.