Aspect-based sentiment analysis (ABSA), also referred to as target-based sentiment analysis (TBSA), is an important research area in natural language processing. Consider the example in Figure 1: in the sentence "The ambience was nice, but the service was not so great.", the aspect terms (AT) are "ambience/service" and the opinion terms (OT) are "nice/not so great". Traditionally, there are three fundamental subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Recent works combine two of these subtasks and have achieved great progress; for example, they extract (AT, OT) pairs, or extract ATs together with their corresponding sentiment polarities (SP). More recently, work that addresses all related ABSA subtasks within a unified framework has attracted increasing interest.
For convenience, we use the following abbreviations for the ABSA subtasks, as illustrated in Figure 1:
AE: AT extraction
OE: OT extraction
SC: aspect-level sentiment classification
AESC: AT extraction and sentiment classification (itself also referred to as aspect-based sentiment analysis, ABSA)
AOE: aspect-oriented OT extraction (also referred to as target-oriented opinion word extraction, TOWE)
Pair: (AT, OT) pair extraction
Triple: (AT, OT, SP) triple extraction.
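To make these task formats concrete, the running example can be encoded as follows (a minimal Python sketch; the tuple representation is ours, not a dataset format):

```python
# Running example from Figure 1, encoded as (AT, OT, SP) triples.
sentence = "The ambience was nice, but the service was not so great."

triples = [
    ("ambience", "nice", "Positive"),
    ("service", "not so great", "Negative"),
]

# The simpler subtasks are projections of the triples:
aspect_terms = [a for a, _, _ in triples]   # AE
opinion_terms = [o for _, o, _ in triples]  # OE
pairs = [(a, o) for a, o, _ in triples]     # Pair
aesc = [(a, s) for a, _, s in triples]      # AESC
```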
We mainly focus on triple extraction since it is the hardest of all ABSA subtasks. peng2020knowing proposed a unified framework to extract (AT, OT, SP) triples. However, it is computationally inefficient: the framework has two stages and has to train three separate models.
In this paper, we propose a joint training framework that handles all ABSA subtasks described in Figure 1 with one single model. We use BERT devlin2019bert as our backbone network and use a span-based model to detect the start/end positions of ATs/OTs in a sentence. Span-based methods have been shown to outperform traditional sequence-tagging methods on extraction tasks hu-etal-2019-open. Following this idea, we use a heuristic multi-span decoding algorithm based on the non-maximum suppression algorithm (NMS) Rosenfeld:1971:ECD.
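This decoding step can be sketched as follows (a minimal reimplementation in the spirit of hu-etal-2019-open; the span-length bound, top-k cutoff, and variable names are our assumptions, and the logit threshold is the dataset-tuned quantity discussed in the experiments):

```python
def multi_span_decode(start_logits, end_logits, threshold, max_span_len=8, top_k=20):
    """Greedy multi-span decoding: score spans by start+end logits, then
    suppress overlapping lower-scored spans (non-maximum suppression)."""
    n = len(start_logits)
    # Enumerate candidate (start, end) spans with a bounded length.
    candidates = [
        (s, e, start_logits[s] + end_logits[e])
        for s in range(n)
        for e in range(s, min(s + max_span_len, n))
    ]
    candidates.sort(key=lambda c: c[2], reverse=True)

    selected = []
    for s, e, score in candidates[:top_k]:
        if score < threshold:
            break  # candidates are sorted, so the rest score even lower
        # Keep the span only if it does not overlap a higher-scored one.
        if all(e < s2 or e2 < s for s2, e2, _ in selected):
            selected.append((s, e, score))
    return [(s, e) for s, e, _ in selected]
```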
We convert the original triple extraction task into two machine reading comprehension (MRC) problems. MRC methods are known to be effective when a pre-trained BERT model is used; the reason might be that BERT is usually pre-trained with the next-sentence prediction objective, which captures pairwise sentence relations. Theoretically, the triple extraction task can be decomposed into the subtasks AE, AOE, and SC. Thus, we use the left MRC to handle AE and the right MRC to handle AOE and SC. Our main contributions in this paper are as follows:
We show that the triple extraction task can be jointly trained with three objectives.
We propose a dual-MRC framework that can handle all subtasks in ABSA (as illustrated in Table 1).
We conduct experiments to compare our proposed framework with prior methods on these tasks. Experimental results show that our proposed method outperforms the state-of-the-art methods.
Aspect-based sentiment analysis (ABSA) has been widely studied since it was first proposed in kddHuL04. In this section, we review existing work on ABSA organized by subtask.
SC. Various neural models have been proposed for this task in recent years. The core idea of these works is to capture the intricate relationship between an aspect and its context by designing various neural architectures, such as CNNs HuangC18; LamLSB18, RNNs tang-etal-2016-effective; ZhangZV16; RuderGB16, attention-based networks MaLZW17; DuSWQLXL19; WangHZZ16; GuZHS18; YangTWXC17, and memory networks TangQL16; ChenSBY17; FanGD0XW18. SunHQ19 convert SC into a BERT sentence-pair classification task, which achieves state-of-the-art results on this task.
AE. As a prerequisite of SC, AE aims to identify all aspect terms in a sentence kddHuL04; pontiki-etal-2014-semeval and is usually regarded as a sequence labeling problem LiBLLY18; XuLSY18; HeLND17. Besides, MaLWXW19 and LiCQLS20 formulate AE as a sequence-to-sequence learning task and also achieve impressive results.
AESC. In order to make AESC meet the needs of practical use, plenty of previous works try to solve AE and SC simultaneously. Simply merging AE and SC in a pipeline manner leads to an error-propagation problem MaLW18. Some works li2019unified; LiBZL19 attempt to extract aspects and predict the corresponding sentiment polarities jointly through sequence tagging with a unified tagging scheme. However, these approaches are inefficient due to the compositionality of candidate labels LeeKP016 and may suffer from the sentiment inconsistency problem. ZhouHGHH19 and hu-etal-2019-open utilize span-based methods to conduct AE and SC at the span level rather than the token level, which overcomes the sentiment inconsistency problem. It is worth noting that the information of opinion terms is under-exploited in these works.
OE. Opinion term extraction (OE) is widely employed as an auxiliary task to improve the performance of AE YuJX19; DBLP:conf/aaai/WangPDX17; PanW18, SC he-etal-2019-interactive, or both chen-qian-2020-relation. However, the extracted ATs and OTs in these works are not paired; as a result, they cannot explain the cause of the polarity assigned to an aspect.
AOE. The task AOE fan2019target has been proposed for pair-wise aspect and opinion term extraction, in which the aspect terms are given in advance. fan2019target design an aspect-fused sequence tagging approach for this task. DBLP:conf/aaai/WuZDHC20 utilize a transfer learning method that leverages latent opinion knowledge from auxiliary datasets to boost the performance of AOE.
Pair. ZhaoHZLX20 proposed the Pair task to extract aspect-opinion pairs from scratch. They develop a span-based multi-task framework, which first enumerates all candidate spans and then constructs two classifiers to identify the types of spans (i.e., aspect or opinion terms) and the relationships between spans.
Triple. peng2020knowing defined the triple extraction task for ABSA, which aims to extract all possible aspect terms as well as their corresponding opinion terms and sentiment polarities. The method proposed in peng2020knowing is a two-stage framework. The first stage contains two separate modules: a unified sequence tagging model for AE and SC, and a graph convolutional neural network (GCN) for OE. In the second stage, all possible aspect-opinion pairs are enumerated and a binary classifier is constructed to judge whether an aspect term and an opinion term match each other. The main difference between our work and peng2020knowing is that we regard all subtasks as question-answering problems and propose a unified framework based on a single model.
Joint Training for Triple Extraction
In this section, we focus on the triple extraction task; the other subtasks can be regarded as special cases of it. Let $x = (x_1, \dots, x_n)$ be an input sentence with maximum length $n$. Let $T = \{(a, o, s)\}$ be the set of annotated triples for the input sentence $x$, where $s \in \{\text{Positive}, \text{Neutral}, \text{Negative}\}$ and $(a, o, s)$ refers to an (aspect term, opinion term, sentiment polarity) triple. For the training set $\mathcal{D} = \{(x, T)\}$, we want to maximize the likelihood
$$\mathcal{L}(\mathcal{D}) = \prod_{(x, T) \in \mathcal{D}} \; \prod_{(a, o, s) \in T} p(a, o, s \mid x).$$
Consider the log-likelihood for a single triple $(a, o, s)$:
$$\log p(a, o, s \mid x) = \log p(a \mid x) + \log p(o, s \mid x, a) = \log p(a \mid x) + \log p(o \mid x, a) + \log p(s \mid x, a).$$
The last equation holds because the opinion term $o$ and the sentiment polarity $s$ are conditionally independent given the sentence $x$ and the aspect term $a$. (Note that $(x, a)$ has all the information needed to determine $s$; the term $o$ does not bring additional information as it can be implied by $(x, a)$, therefore $p(s \mid x, a, o) = p(s \mid x, a)$.)
Summing the above equation over all triples $(a, o, s) \in T$, we obtain the log-likelihood of the following form:
$$\log p(T \mid x) = \underbrace{\sum_{(a, o, s) \in T} \log p(a \mid x)}_{\text{AE}} + \underbrace{\sum_{(a, o, s) \in T} \log p(o \mid x, a)}_{\text{AOE}} + \underbrace{\sum_{(a, o, s) \in T} \log p(s \mid x, a)}_{\text{SC}},$$
where the first term is repeated (once per triple) in order to match the other two terms. From this decomposition, we conclude that the triple extraction task Triple can be converted into the joint training of AE, AOE, and SC.
Now we propose our joint training dual-MRC framework. As illustrated in Figure 2, our model consists of two parts. Both parts use BERT devlin2019bert as their backbone model to encode the context information. Recall that BERT is a multi-layer bidirectional Transformer-based language representation model. Let $n$ denote the sentence length and $d$ the hidden dimension. Suppose the last-layer outputs for all tokens are $H^{\ell}, H^{r} \in \mathbb{R}^{n \times d}$, which are used for extraction; throughout, superscripts $\ell/r$ refer to the left/right part and subscripts $s/e$ refer to the start/end token. Suppose the output of BERT at the [CLS] token is $h_{\text{CLS}} \in \mathbb{R}^{d}$, which is used for classification.
The goal of the left part is to extract all ATs from the given text, i.e., the task AE. As discussed previously, span-based methods have proven effective for extraction tasks. We follow the idea in hu-etal-2019-open and compute the logits and probabilities for the start/end positions of ATs:
$$g^{\ell}_{s} = H^{\ell} w^{\ell}_{s}, \quad p^{\ell}_{s} = \operatorname{softmax}(g^{\ell}_{s}), \qquad g^{\ell}_{e} = H^{\ell} w^{\ell}_{e}, \quad p^{\ell}_{e} = \operatorname{softmax}(g^{\ell}_{e}),$$
where $w^{\ell}_{s}, w^{\ell}_{e} \in \mathbb{R}^{d}$ are trainable weights and the softmax is taken over all tokens. Define the extraction loss of the left part as
$$\mathcal{L}^{\ell} = -\big(y^{\ell}_{s}\big)^{\top} \log p^{\ell}_{s} - \big(y^{\ell}_{e}\big)^{\top} \log p^{\ell}_{e},$$
where $y^{\ell}_{s}$ and $y^{\ell}_{e}$ are the ground-truth start and end positions for ATs.
The goal of the right part is to extract all OTs and determine the sentiment polarity with respect to a given specific AT. Similarly, we obtain the logits and probabilities for the start/end positions of OTs:
$$g^{r}_{s} = H^{r} w^{r}_{s}, \quad p^{r}_{s} = \operatorname{softmax}(g^{r}_{s}), \qquad g^{r}_{e} = H^{r} w^{r}_{e}, \quad p^{r}_{e} = \operatorname{softmax}(g^{r}_{e}),$$
where $w^{r}_{s}, w^{r}_{e} \in \mathbb{R}^{d}$ are trainable weights and the softmax is applied over all tokens. Define the extraction loss of the right part as
$$\mathcal{L}^{r} = -\big(y^{r}_{s}\big)^{\top} \log p^{r}_{s} - \big(y^{r}_{e}\big)^{\top} \log p^{r}_{e},$$
where $y^{r}_{s}$ and $y^{r}_{e}$ are the ground-truth start and end positions for the OTs given a specific AT.
In addition, for the right part, we also obtain the sentiment polarity from the [CLS] representation:
$$p^{\text{cls}} = \operatorname{softmax}\big(W h_{\text{CLS}} + b\big),$$
where $W$ and $b$ are trainable. The cross-entropy loss for the classification is
$$\mathcal{L}^{\text{cls}} = -\big(y^{\text{cls}}\big)^{\top} \log p^{\text{cls}},$$
where $y^{\text{cls}}$ is the true label for the sentiment polarity. We then minimize the final joint training loss
$$\mathcal{L} = \alpha \mathcal{L}^{\ell} + \beta \mathcal{L}^{r} + \gamma \mathcal{L}^{\text{cls}},$$
where $\alpha, \beta, \gamma$ are hyper-parameters that control the contributions of the three objectives.
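For concreteness, here is a condensed PyTorch sketch of the two span heads and the classification head with the joint loss. It is a sketch under our assumptions (a single shared encoder, one gold span per example in the loss, and invented layer names), not the authors' released code:

```python
import torch.nn as nn
from transformers import BertModel

class DualMRC(nn.Module):
    """Span heads for the left/right MRCs plus a sentiment head on [CLS]."""

    def __init__(self, name="bert-base-uncased", num_polarities=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)  # shared encoder (assumption)
        d = self.bert.config.hidden_size
        self.w_start = nn.Linear(d, 1)               # token-level start logits
        self.w_end = nn.Linear(d, 1)                 # token-level end logits
        self.w_cls = nn.Linear(d, num_polarities)    # sentiment logits

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state  # (B, n, d)
        start_logits = self.w_start(h).squeeze(-1)   # (B, n)
        end_logits = self.w_end(h).squeeze(-1)       # (B, n)
        polarity_logits = self.w_cls(h[:, 0])        # (B, 3) from [CLS]
        return start_logits, end_logits, polarity_logits

ce = nn.CrossEntropyLoss()

def joint_loss(left_out, right_out, y, alpha=1.0, beta=1.0, gamma=1.0):
    """L = alpha*L_left + beta*L_right + gamma*L_cls (weights assumed).
    For brevity this assumes one gold span per example; the paper's loss
    sums over all gold start/end positions."""
    loss_left = ce(left_out[0], y["at_start"]) + ce(left_out[1], y["at_end"])
    loss_right = ce(right_out[0], y["ot_start"]) + ce(right_out[1], y["ot_end"])
    loss_cls = ce(right_out[2], y["polarity"])
    return alpha * loss_left + beta * loss_right + gamma * loss_cls
```

The same encoder is run twice per example, once with the left query and once with the right query; only the query text differs between the two passes.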
MRC Dataset Conversion
As illustrated in Figure 3, the original triple annotations have to be converted before they are fed into the joint training dual-MRC model. Both MRCs use the input sentence as their context. The left MRC is constructed with the query
"Find the aspect terms in the text."
The answer to the left MRC is all ATs in the text. Given an AT, the right MRC is constructed with the query
"Find the sentiment polarity and opinion terms for AT in the text."
The output of the right MRC is all OTs and the sentiment polarity with respect to the given AT. An important point is that the number of right MRCs equals the number of ATs; therefore, the left MRC is repeated that many times.
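A small Python sketch of this conversion (the query wording follows our reconstruction above; grouping multiple OTs under one AT is an assumption about the data format):

```python
from collections import defaultdict

def convert_to_mrc(sentence, triples):
    """Turn one (AT, OT, SP)-annotated sentence into dual-MRC examples."""
    left = {
        "query": "Find the aspect terms in the text.",
        "context": sentence,
        "answers": sorted({a for a, _, _ in triples}),  # all ATs
    }
    grouped = defaultdict(lambda: {"opinions": [], "polarity": None})
    for at, ot, sp in triples:
        grouped[at]["opinions"].append(ot)
        grouped[at]["polarity"] = sp
    rights = [
        {
            "query": f"Find the sentiment polarity and opinion terms "
                     f"for {at} in the text.",
            "context": sentence,  # the same context, repeated per AT
            "answers": g["opinions"],
            "polarity": g["polarity"],
        }
        for at, g in grouped.items()
    ]
    return left, rights
```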
For Triple, we point out some differences between the training and inference processes. During training, the ground-truth ATs are known, so the right MRC can be constructed from them; thus, training is end-to-end. During inference, however, the ATs are the output of the left MRC. Therefore, we run the two MRCs as a pipeline, as in Algorithm 1.
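A rough sketch of this pipeline (our paraphrase of Algorithm 1; `run_mrc` and `decode_span` are hypothetical helpers standing in for tokenization plus a forward pass and for mapping token spans back to surface text, and `multi_span_decode` is the earlier sketch):

```python
POLARITIES = ["Positive", "Neutral", "Negative"]

def triple_pipeline(sentence, model, threshold):
    """Left MRC -> ATs, then one right MRC per extracted AT."""
    # Stage 1: extract aspect terms with the left query.
    s_log, e_log, _ = run_mrc(model, "Find the aspect terms in the text.",
                              sentence)  # hypothetical helper
    triples = []
    for at_span in multi_span_decode(s_log, e_log, threshold):
        at = decode_span(sentence, at_span)  # hypothetical helper
        # Stage 2: extract OTs and the polarity for this aspect term.
        query = (f"Find the sentiment polarity and opinion terms "
                 f"for {at} in the text.")
        s_log, e_log, pol_log = run_mrc(model, query, sentence)
        polarity = POLARITIES[int(pol_log.argmax())]
        for ot_span in multi_span_decode(s_log, e_log, threshold):
            triples.append((at, decode_span(sentence, ot_span), polarity))
    return triples
```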
The inference process for the other tasks is similar. The task AE uses the span output of the left MRC. AOE and SC use the span and classification outputs of the right MRC. AESC and Pair use a combination of them. Please refer to Table 1 for details.
The original datasets are from the SemEval challenges pontiki-etal-2014-semeval; pontiki-etal-2015-semeval; pontiki-etal-2016-semeval, where ATs and the corresponding sentiment polarities are labeled. We evaluate our framework on three public datasets derived from them.
The first dataset is from DBLP:conf/aaai/WangPDX17, where labels for opinion terms are additionally annotated. All datasets share a fixed training/test split. The second dataset is from fan2019target, where (AT, OT) pairs are labeled. The third dataset is from peng2020knowing, where (AT, OT, SP) triples are labeled. A small number of samples with overlapping ATs and OTs are corrected. Also, a fraction of the training set is randomly selected as the validation set. The detailed statistics for the three sets of datasets are shown in Table 2, Table 3, and Table 4.
Subtasks and Baselines
There exist three research lines in ABSA, each with its own data annotations, ABSA subtasks, baselines, and experimental settings. To fairly compare our proposed framework with previous baselines, we specify them clearly for each research line.
Using the dataset from DBLP:conf/aaai/WangPDX17, the following baselines were evaluated for AE, OE, SC and AESC:
SPAN-BERT hu-etal-2019-open is a pipeline method for AESC that takes BERT as the backbone network. A span-boundary detection module is used for AE, followed by a polarity classifier based on span representations for SC.
IMN-BERT he-etal-2019-interactive is an extension of IMN he-etal-2019-interactive with BERT as the backbone. IMN is a multi-task learning method involving joint training for AE and SC. A message-passing architecture is introduced in IMN to boost the performance of AESC.
RACL-BERT chen-qian-2020-relation is a stacked multi-layer network based on the BERT encoder and is the state-of-the-art method for AESC. A relation propagation mechanism is utilized in RACL to capture the interactions between subtasks (i.e., AE, OE, SC).
Using the dataset from fan2019target, the following baselines were evaluated for AOE:
IOG fan2019target is the first model proposed to address AOE, which adopts six different BLSTMs to extract corresponding opinion terms for aspects given in advance.
LOTN DBLP:conf/aaai/WuZDHC20 is the state-of-the-art method for AOE, which transfers latent opinion information from external sentiment classification datasets to improve performance.
Using the dataset from peng2020knowing, the following baselines were evaluated for AESC, Pair and Triple:
RINANTE dai-song-2019-neural is a weakly supervised co-extraction method for AE and OE that makes use of the dependency relations between words in a sentence.
CMLA DBLP:conf/aaai/WangPDX17 is a multi-layer attention network for AE and OE, where each layer consists of a couple of attentions with tensor operators.
Li-unified-R peng2020knowing is a modified variant of Li-unified li2019unified, which was originally designed for AESC via a unified tagging scheme. Li-unified-R only adapts the original OE module for opinion term extraction.
Peng-two-stage peng2020knowing is a two-stage framework with separate models for different subtasks in ABSA and is the state-of-the-art method for Triple.
We use BERT-Base-Uncased (https://github.com/google-research/bert) or BERT-Large-Uncased as the backbone model, depending on the baselines; please refer to devlin2019bert for details of BERT. We train for 3 epochs with the Adam optimizer, using learning-rate warm-up over the initial training steps, a fixed batch size, and dropout. The results are not sensitive to the hyper-parameters $\alpha, \beta, \gamma$ of the final joint training loss, so we fix them across our experiments. The results are, however, very sensitive to the logit thresholds of the heuristic multi-span decoding algorithm hu-etal-2019-open, which are therefore manually tuned on each dataset; other hyper-parameters are kept at their default values. All experiments are conducted on a single Tesla V100 GPU.
For all tasks in our experiments, we use precision (P), recall (R), and F1 scores as evaluation metrics; a predicted term is counted as correct only if it exactly matches a gold term. (Following chen-qian-2020-relation, we use F1 as the metric for aspect-level sentiment classification.)
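Concretely, the exact-match metric can be computed as follows (a minimal sketch; terms are compared as whole strings or tuples):

```python
def exact_match_prf(predicted, gold):
    """Precision/recall/F1 where a prediction counts only on exact match."""
    pred, true = set(predicted), set(gold)
    tp = len(pred & true)                      # exact matches
    p = tp / len(pred) if pred else 0.0
    r = tp / len(true) if true else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# e.g. one of two gold pairs recovered:
# exact_match_prf([("service", "not so great")],
#                 [("service", "not so great"), ("ambience", "nice")])
# -> (1.0, 0.5, 0.666...)
```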
(Table 8: example sentences with the ground truth and the predictions of our model, Peng-two-stage, Li-unified-R, and CMLA.)
As mentioned previously, there are three research lines with different datasets, ABSA subtasks, baselines, and experimental settings. For each research line, we keep the same dataset and experimental setting, compare our proposed dual-MRC framework with the baselines, and present our results in Table 5, Table 6, and Table 7.
First, we compare our proposed method for AE, SC, and AESC on the dataset from DBLP:conf/aaai/WangPDX17. OE is not applicable to our proposed framework (if needed, we could train a separate model with the query "Find the opinion terms in the text." for OE). Since the pair-wise (AT, OT) relations are not annotated in this dataset, we use the right part of our model for classification only. A fraction of the training set is randomly selected as the validation set. The results, averaged over 5 runs with random initialization, are shown in Table 5. We adopt BERT-Large-Uncased as our backbone model since the baselines use it too. All the baselines are BERT-based, and our results rank first or second among them. Recall that our approach is inspired by SPAN-BERT, which is a strong baseline for extraction tasks. Our results are close to SPAN-BERT on AE. However, with the help of MRC, we achieve much better results on SC and AESC.
Second, we compare our proposed method for AOE on the dataset from fan2019target, where the pair-wise (AT, OT) relations are annotated. This task can be viewed as a special case of our proposed full model. The results are shown in Table 6. BERT-Base-Uncased is used as our backbone model. Although the result on 16res is slightly lower than LOTN, most of our results significantly outperform the previous baselines, indicating that our model has an advantage in matching ATs and OTs. In particular, our model performs much better than the baselines on 14lap, probably due to the domain difference between the laptop comments (14lap) and the restaurant comments (14res/15res/16res).
Third, we compare our proposed method for AESC, Pair, and Triple on the dataset from peng2020knowing. The full model of our proposed framework is implemented. The results are shown in Table 7. BERT-Base-Uncased is used as our backbone model. Our results significantly outperform the baselines, especially in the precision scores for extracting the pair-wise (AT, OT) relations. Note that Li-unified-R and Peng-two-stage both use the unified tagging scheme. For extraction tasks, span-based methods outperform the unified tagging scheme, probably because determining start/end positions is easier than determining the label of every token. More precisely, under the unified tagging scheme there are 7 possible choices for each token (B-POS, B-NEU, B-NEG, I-POS, I-NEU, I-NEG, O), so there are $7^{n}$ possible taggings in total for a sentence of length $n$. For span-based methods, there are 4 possible choices for each token (IS-START, NOT-START, IS-END, NOT-END), giving $4^{n}$ possibilities in total. Our proposed method combines MRC and span-based extraction, and it yields large improvements for Pair and Triple.
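As a rough back-of-the-envelope illustration (our own numbers, not reported by any baseline): for a sentence of length $n = 20$, the unified tagging scheme admits
$$7^{20} \approx 8.0 \times 10^{16}$$
label sequences, while the span-based scheme admits
$$4^{20} \approx 1.1 \times 10^{12},$$
a search space smaller by more than four orders of magnitude.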
Analysis on Joint Learning
We give some analysis of the effectiveness of joint learning. The experimental results on the dataset from peng2020knowing are shown in Table 9. Overall, adding one or two learning objectives does not change the F1 scores much. However, joint learning is more efficient, and it can handle more tasks with one single model.
(Table 9: ablation results, mean ± standard deviation, on the dataset from peng2020knowing.)
For the task AESC, we compare the results with and without the span-based extraction output from the right part of our model. By jointly learning to extract the opinion terms for a given aspect, the result of aspect-level sentiment classification improves slightly. This makes sense because the extracted OTs are useful for identifying the sentiment polarity of the given AT.
For the task Pair, we compare the results with and without the classification output from the right part of our model. The F1 scores for OT extraction decrease slightly when the sentiment classification objective is added. The reason might be that a sentiment polarity can point to multiple OTs in a sentence, some of which are not paired with the given AT.
To validate the effectiveness of our model, we compare our method on exactly the same three examples as the baseline peng2020knowing, since its source code is not public. The results are shown in Table 8.
The first example shows that our MRC-based approach performs better in matching ATs and OTs. Peng's approach matches "tuna" and "too dry" by mistake, while our approach converts the matching problem into an MRC problem. The second example shows that the span-based extraction method is good at detecting entity boundaries: our approach successfully detects "log on" while Peng's approach detects "log" by mistake. Moreover, the sentiment classification result indicates that our MRC-based approach is also good at SC.
We plot in Figure 4 the attention matrices from our fine-tuned model between the input text and the query. As we can see, "opinion term" has high attention scores with "fresh", and "sentiment" has high attention scores with "food/fresh/hot". As a result, the queries can capture important information for the task via self-attention.
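For readers who want to reproduce this kind of plot, a minimal sketch with the HuggingFace transformers library is given below. It uses a generic pre-trained BERT and an invented example sentence, whereas the paper inspects its own fine-tuned model:

```python
import torch
import matplotlib.pyplot as plt
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

query = "Find the sentiment polarity and opinion terms for food in the text."
text = "the food was fresh and hot"           # invented example
inputs = tokenizer(query, text, return_tensors="pt")

with torch.no_grad():
    attentions = model(**inputs).attentions   # tuple: one tensor per layer

# Average the heads of the last layer: (1, heads, n, n) -> (n, n).
att = attentions[-1].squeeze(0).mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

plt.imshow(att.numpy(), cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.tight_layout()
plt.savefig("attention.png")
```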
In this paper, we propose a joint training dual-MRC framework that handles all subtasks of aspect-based sentiment analysis (ABSA) in one shot, where the left MRC handles aspect term extraction and the right MRC handles aspect-oriented opinion term extraction and sentiment classification. The original datasets are converted and fed into the dual-MRC for joint training. We conduct experiments on three research lines, covering different ABSA subtasks and baselines. Experimental results indicate that our proposed framework outperforms all compared baselines.