A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis

01/04/2021 ∙ by Yue Mao, et al.

Aspect based sentiment analysis (ABSA) involves three fundamental subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Early works focused on solving only one of these subtasks individually. Some recent work focused on solving a combination of two subtasks, e.g., extracting aspect terms along with sentiment polarities, or extracting aspect and opinion terms in pairs. More recently, the triple extraction task has been proposed, i.e., extracting (aspect term, opinion term, sentiment polarity) triples from a sentence. However, previous approaches fail to solve all subtasks in a unified end-to-end framework. In this paper, we propose a complete solution for ABSA. We construct two machine reading comprehension (MRC) problems and solve all subtasks by jointly training two BERT-MRC models with parameter sharing. We conduct experiments on these subtasks, and results on several benchmark datasets demonstrate the effectiveness of our proposed framework, which significantly outperforms existing state-of-the-art methods.





Aspect based sentiment analysis (ABSA), also referred to as target based sentiment analysis (TBSA), is an important research area in natural language processing. Consider the example in Figure 1: in the sentence “The ambience was nice, but the service was not so great.”, the aspect terms (AT) are “ambience/service” and the opinion terms (OT) are “nice/not so great”. Traditionally, there are three fundamental subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Recent research works aim at combinations of two subtasks and have achieved great progress. For example, they extract (AT, OT) pairs, or extract ATs with the corresponding sentiment polarities (SP). More recently, work that aims to solve all related ABSA subtasks with a unified framework has attracted increasing interest.

Figure 1: An illustrative example of ABSA subtasks.

For convenience, we assume the following abbreviations of ABSA subtasks as illustrated in Figure 1:

  • AE: AT extraction

  • OE: OT extraction

  • SC: aspect-level sentiment classification

  • AESC (also referred to as aspect based sentiment analysis, ABSA): AT extraction and sentiment classification

  • AOE (also referred to as target oriented opinion word extraction, TOWE): aspect-oriented OT extraction

  • Pair: (AT, OT) pair extraction

  • Triple: (AT, OT, SP) triple extraction.

We mainly focus on the task of extracting triples since it is the hardest among all ABSA subtasks. peng2020knowing proposed a unified framework to extract (AT, OT, SP) triples. However, it is computationally inefficient, as the framework has two stages and has to train three separate models.

In this paper, we propose a joint training framework to handle all ABSA subtasks (described in Figure 1) in one single model. We use BERT devlin2019bert as our backbone network and use a span based model to detect the start/end positions of ATs/OTs in a sentence. Span based methods outperform traditional sequence tagging based methods for extraction tasks hu-etal-2019-open. Following this idea, a heuristic multi-span decoding algorithm based on non-maximum suppression (NMS) is used.
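As an illustration, here is a minimal sketch (not the authors' released code) of such an NMS-style multi-span decoder: candidate spans are scored by the sum of their start and end logits and selected greedily while overlapping candidates are suppressed. The threshold and span-length cap are illustrative hyperparameters.

```python
def multi_span_decode(start_logits, end_logits, threshold=0.0, max_span_len=8):
    """NMS-style heuristic multi-span decoding.

    Scores every candidate span by start_logit + end_logit, keeps the
    highest-scoring spans, and suppresses any span overlapping one
    already selected.
    """
    n = len(start_logits)
    # Enumerate candidate spans (i <= j) under a length cap, keep those above threshold.
    candidates = [
        (start_logits[i] + end_logits[j], i, j)
        for i in range(n)
        for j in range(i, min(i + max_span_len, n))
        if start_logits[i] + end_logits[j] > threshold
    ]
    candidates.sort(reverse=True)  # best score first
    selected = []
    for _, i, j in candidates:
        # Non-maximum suppression: keep (i, j) only if disjoint from all selected spans.
        if all(j < si or sj < i for si, sj in selected):
            selected.append((i, j))
    return sorted(selected)
```

The per-dataset tuning of logit thresholds mentioned later in the paper corresponds to the `threshold` parameter here.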


We convert the original triple extraction task to two machine reading comprehension (MRC) problems. MRC methods are known to be effective when a pre-trained BERT model is used. The reason might be that BERT is usually pre-trained with next sentence prediction, which captures pairwise sentence relations. Theoretically, the triple extraction task can be decomposed into the subtasks AE, AOE and SC. Thus, we use the left MRC to handle AE and the right MRC to handle AOE and SC. Our main contributions in this paper are as follows:

  • We show the triple extraction task can be jointly trained with three objectives.

  • We propose a dual-MRC framework that can handle all subtasks in ABSA (as illustrated in Table 1).

  • We conduct experiments to compare our proposed framework on these tasks. Experimental results show that our proposed method outperforms the state-of-the-art methods.

Subtasks Left-MRC: extraction Right-MRC: classification Right-MRC: extraction
AE ✓ - -
SC - ✓ -
AOE - - ✓
AESC ✓ ✓ -
Pair ✓ - ✓
Triple ✓ ✓ ✓
Table 1: Our proposed dual-MRC can handle all ABSA subtasks.

Related Work

Aspect-based sentiment analysis (ABSA) has been widely studied since it was first proposed in kddHuL04. In this section, we present existing works on ABSA according to related subtasks.

SC. Various neural models have been proposed for this task in recent years. The core idea of these works is to capture the intricate relationship between an aspect and its context by designing various neural architectures, such as CNNs HuangC18; LamLSB18, RNNs tang-etal-2016-effective; ZhangZV16; RuderGB16, attention-based networks MaLZW17; DuSWQLXL19; WangHZZ16; GuZHS18; YangTWXC17, and memory networks TangQL16; ChenSBY17; FanGD0XW18. SunHQ19 convert SC to a BERT sentence-pair classification task, which achieves state-of-the-art results on this task.

AE. As a precursor task to SC, AE aims to identify all aspect terms in a sentence kddHuL04; pontiki-etal-2014-semeval and is usually regarded as a sequence labeling problem LiBLLY18; XuLSY18; HeLND17. In addition, MaLWXW19 and LiCQLS20 formulate AE as a sequence-to-sequence learning task and also achieve impressive results.

AESC. In order to make AESC meet the needs of practical use, plenty of previous works make efforts to solve AE and SC simultaneously. Simply merging AE and SC in a pipeline manner leads to an error-propagation problem MaLW18. Some works li2019unified; LiBZL19 attempt to extract aspects and predict the corresponding sentiment polarities jointly through sequence tagging with a unified tagging scheme. However, these approaches are inefficient due to the compositionality of candidate labels LeeKP016 and may suffer from the sentiment inconsistency problem. ZhouHGHH19 and hu-etal-2019-open utilize span-based methods to conduct AE and SC at the span level rather than the token level, which overcomes the sentiment inconsistency problem. It is worth noting that the information of opinion terms is under-exploited in these works.

OE. Opinion term extraction (OE) is widely employed as an auxiliary task to improve the performance of AE YuJX19; DBLP:conf/aaai/WangPDX17; PanW18, SC he-etal-2019-interactive, or both chen-qian-2020-relation. However, the extracted ATs and OTs in these works are not paired; as a result, they cannot explain the polarity assigned to an aspect.

AOE. The task AOE fan2019target has been proposed for pair-wise aspect and opinion term extraction, in which the aspect terms are given in advance. fan2019target design an aspect-fused sequence tagging approach for this task. DBLP:conf/aaai/WuZDHC20 utilize a transfer learning method that leverages latent opinion knowledge from auxiliary datasets to boost the performance of AOE.
Pair. ZhaoHZLX20 proposed the Pair task to extract aspect-opinion pairs from scratch. They develop a span-based multi-task framework, which first enumerates all candidate spans and then constructs two classifiers to identify the type of each span (i.e., aspect or opinion term) and the relationship between spans.

Triple. peng2020knowing defined the triple extraction task for ABSA, which aims to extract all possible aspect terms as well as their corresponding opinion terms and sentiment polarities. The method proposed in peng2020knowing is a two-stage framework. The first stage contains two separate modules: a unified sequence tagging model for AE and SC, and a graph convolutional neural network (GCN) for OE. In the second stage, all possible aspect-opinion pairs are enumerated, and a binary classifier is constructed to judge whether an aspect term and an opinion term match each other. The main difference between our work and peng2020knowing is that we regard all subtasks as question-answering problems and propose a unified framework based on a single model.

Proposed Framework

Joint Training for Triple Extraction

In this section, we focus on the triple extraction task; the other subtasks can be regarded as special cases of it. Given a sentence $x$ with max-length $n$ as the input, let $T = \{(a, o, s)\}$ be the set of annotated triples for $x$, where $s \in \{\text{Positive}, \text{Neutral}, \text{Negative}\}$ and $(a, o, s)$ refers to (aspect term, opinion term, sentiment polarity). For the training set $\mathcal{D} = \{(x, T)\}$, we want to maximize the likelihood

$$\max_{\theta} \sum_{(x, T) \in \mathcal{D}} \; \sum_{(a, o, s) \in T} \log p_{\theta}(a, o, s \mid x).$$

Consider the log-likelihood for a single triple $(a, o, s)$:

$$\log p(a, o, s \mid x) = \log p(a \mid x) + \log p(o, s \mid x, a) = \log p(a \mid x) + \log p(o \mid x, a) + \log p(s \mid x, a).$$

The last equation holds because the opinion term $o$ and the sentiment polarity $s$ are conditionally independent given the sentence $x$ and the aspect term $a$. (Note that $(x, a)$ has all the information needed to determine $s$; the term $o$ does not bring additional information, as it can be implied by $(x, a)$, therefore $p(s \mid x, a, o) = p(s \mid x, a)$.)

Summing the above equation over all triples and normalizing both sides, we obtain a log-likelihood of the form

$$\mathcal{L} = \mathcal{L}_{AE} + \mathcal{L}_{AOE} + \mathcal{L}_{SC},$$

where $\mathcal{L}_{AE}$, $\mathcal{L}_{AOE}$ and $\mathcal{L}_{SC}$ collect the terms $\log p(a \mid x)$, $\log p(o \mid x, a)$ and $\log p(s \mid x, a)$, respectively; the first term is repeated in order to match with the other two terms. From this decomposition, we may conclude that the triple extraction task Triple can be converted to the joint training of AE, SC and AOE.
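A quick numerical sanity check of this decomposition, with made-up probabilities rather than model outputs: under the conditional-independence assumption, the triple log-likelihood splits exactly into the three per-subtask terms.

```python
import math

# Toy conditional probabilities for one triple (a, o, s) given sentence x.
p_a = 0.8          # p(a | x)     -> AE objective
p_o_given_a = 0.7  # p(o | x, a)  -> AOE objective
p_s_given_a = 0.9  # p(s | x, a)  -> SC objective

# Conditional independence: p(o, s | x, a) = p(o | x, a) * p(s | x, a)
p_triple = p_a * p_o_given_a * p_s_given_a

lhs = math.log(p_triple)
rhs = math.log(p_a) + math.log(p_o_given_a) + math.log(p_s_given_a)
assert abs(lhs - rhs) < 1e-12  # the log-likelihood splits exactly
```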

Dual-MRC Framework

Figure 2: Proposed joint training dual-MRC framework.

Now we are going to propose our joint training dual-MRC framework. As illustrated in Figure 2, our model consists of two parts. Both parts use BERT devlin2019bert as their backbone model to encode the context information. Recall that BERT is a multi-layer bidirectional Transformer based language representation model. Let $n$ denote the sentence length and $d$ denote the hidden dimension. Suppose the last-layer outputs for all tokens are $h^{\ell}, h^{r} \in \mathbb{R}^{n \times d}$, which are used for extraction, where the superscripts $\ell/r$ refer to the left/right part and $s/e$ (used below) refer to the start/end token. Suppose the output of BERT at the [CLS] token is $h_{\mathrm{[CLS]}} \in \mathbb{R}^{d}$, which is used for classification.

The goal of the left part is to extract all ATs from the given text, i.e., the task AE. As discussed previously, span based methods have proven effective for extraction tasks. Following the idea in hu-etal-2019-open, for the left part we obtain the logits and probabilities for the start/end positions

$$g^{\ell,s} = h^{\ell} w^{s}, \quad p^{\ell,s} = \mathrm{softmax}(g^{\ell,s}), \qquad g^{\ell,e} = h^{\ell} w^{e}, \quad p^{\ell,e} = \mathrm{softmax}(g^{\ell,e}),$$

where $w^{s}, w^{e} \in \mathbb{R}^{d}$ are trainable weights and the softmax is taken over all tokens. Define the extraction loss of the left part as the cross entropy

$$\mathcal{L}^{\ell}_{\mathrm{ext}} = -\sum_{i} \log p^{\ell,s}_{y^{s}_{i}} - \sum_{j} \log p^{\ell,e}_{y^{e}_{j}},$$

where $y^{s}$ and $y^{e}$ are the ground-truth start and end positions of the ATs.
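The span extraction head described above is standard; here is a minimal plain-Python sketch, where the logits (computed from BERT token representations in the actual model) are replaced by given numbers:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def span_extraction_loss(start_logits, end_logits, gold_starts, gold_ends):
    """Cross entropy over token positions for the gold start/end indices."""
    p_start, p_end = softmax(start_logits), softmax(end_logits)
    return -(sum(math.log(p_start[i]) for i in gold_starts)
             + sum(math.log(p_end[j]) for j in gold_ends))
```

The loss shrinks as the gold positions receive higher logits, which is exactly what training the extraction head does.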

The goal of the right part is to extract all OTs and to find the sentiment polarity with respect to a given AT. Similarly, we obtain the logits and probabilities for the start/end positions

$$g^{r,s} = h^{r} v^{s}, \quad p^{r,s} = \mathrm{softmax}(g^{r,s}), \qquad g^{r,e} = h^{r} v^{e}, \quad p^{r,e} = \mathrm{softmax}(g^{r,e}),$$

where $v^{s}, v^{e} \in \mathbb{R}^{d}$ are trainable weights and the softmax is applied over all tokens. Define the extraction loss of the right part as

$$\mathcal{L}^{r}_{\mathrm{ext}} = -\sum_{i} \log p^{r,s}_{z^{s}_{i}} - \sum_{j} \log p^{r,e}_{z^{e}_{j}},$$

where $z^{s}$ and $z^{e}$ are the true start and end positions of the OTs given the specific AT.

In addition, for the right part, we also obtain the sentiment polarity distribution

$$p^{\mathrm{cls}} = \mathrm{softmax}(W_{\mathrm{cls}}\, h_{\mathrm{[CLS]}}),$$

where $W_{\mathrm{cls}} \in \mathbb{R}^{3 \times d}$ is a trainable weight. The cross entropy loss for the classification is

$$\mathcal{L}_{\mathrm{cls}} = -\log p^{\mathrm{cls}}_{y^{\mathrm{cls}}},$$

where $y^{\mathrm{cls}}$ is the true label for the sentiment polarity. Then we want to minimize the final joint training loss

$$\mathcal{L} = \alpha\, \mathcal{L}^{\ell}_{\mathrm{ext}} + \beta\, \mathcal{L}^{r}_{\mathrm{ext}} + \gamma\, \mathcal{L}_{\mathrm{cls}},$$

where $\alpha, \beta, \gamma$ are hyper-parameters that control the contributions of the objectives.
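Likewise, the classification head and the weighted joint objective can be sketched in plain Python (stand-ins for the BERT [CLS] logits; the weight values below are placeholders, not the paper's settings):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classification_loss(cls_logits, gold_label):
    """Cross entropy over the three polarity classes (POS, NEU, NEG)."""
    return -math.log(softmax(cls_logits)[gold_label])

def joint_loss(loss_left_ext, loss_right_ext, loss_cls,
               alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the three training objectives."""
    return alpha * loss_left_ext + beta * loss_right_ext + gamma * loss_cls
```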

MRC Dataset Conversion

As illustrated in Figure 3, the original triple annotations have to be converted before they are fed into the joint training dual-MRC model. Both MRCs use the input sentence as their context. The left MRC is constructed with the query

“Find the aspect terms in the text.” (15)

Then the answer to the left MRC is the set of all ATs in the text. Given an AT, the right MRC is constructed with the query

“Find the opinion terms for AT in the text.” (16)

The output of the right MRC is all OTs and the sentiment polarity with respect to the given AT. An important point is that the number of right MRCs equals the number of ATs; therefore, the left MRC context is repeated that number of times.
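To make the conversion concrete, here is a sketch assuming triples are stored as (aspect term, opinion term, polarity) tuples; `convert_to_mrc` is a hypothetical helper, not the authors' code, and the query strings are illustrative paraphrases:

```python
def convert_to_mrc(sentence, triples):
    """Convert one (sentence, triples) sample into left/right MRC examples.

    `triples` is a list of (aspect_term, opinion_term, polarity) tuples.
    """
    aspect_terms = sorted({a for a, _, _ in triples})
    left = {
        "query": "Find the aspect terms in the text.",
        "context": sentence,
        "answers": aspect_terms,
    }
    rights = []
    for at in aspect_terms:
        matching = [(o, s) for a, o, s in triples if a == at]
        rights.append({
            "query": f"Find the opinion terms for {at} in the text.",
            "context": sentence,
            "opinion_answers": [o for o, _ in matching],
            # One polarity per aspect term in this annotation scheme.
            "polarity": matching[0][1],
        })
    return left, rights
```

Note that one right-MRC example is produced per aspect term, which is why the context is repeated.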

Figure 3: Dataset conversion

Inference Process

For Triple, we point out some differences between the training and inference processes. During training, the ground-truth ATs are known, so the right MRC can be constructed from these ATs; the training process is therefore end-to-end. During inference, however, the ATs are the output of the left MRC. Therefore, we run the two MRCs in a pipeline, as in Algorithm 1.

Input: sentence $x$
Output: triples $T$
Initialize $T \leftarrow \emptyset$;
Feed $x$ with the query described in (15) as the left MRC, and output the AT candidates $A$;
If $A = \emptyset$, return $T$;
for $a \in A$ do
       Feed $x$ with the query described in (16) as the right MRC, and output the sentiment polarity $s$ and the OTs $O$; add $(a, o, s)$ to $T$ for each $o \in O$;
end for
Return $T$.
Algorithm 1: The inference process for triple extraction of the dual-MRC framework
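The pipeline can be sketched as follows, with `left_mrc` and `right_mrc` as placeholder callables standing in for the two trained MRC heads:

```python
def dual_mrc_inference(sentence, left_mrc, right_mrc):
    """Pipeline inference for triple extraction (cf. Algorithm 1).

    `left_mrc(sentence)` returns candidate aspect terms;
    `right_mrc(sentence, at)` returns (polarity, opinion_terms).
    """
    triples = []
    for at in left_mrc(sentence):                     # left MRC: aspect terms
        polarity, opinions = right_mrc(sentence, at)  # right MRC: per-AT outputs
        triples.extend((at, ot, polarity) for ot in opinions)
    return triples
```

If the left MRC returns no aspect terms, the loop body never runs and an empty list is returned, matching the early-return case of the algorithm.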

The inference processes of the other tasks are similar. The task AE uses the span output from the left MRC. AOE and SC use the span and classification outputs from the right MRC. AESC and Pair use a combination of them. Please refer to Table 1 for details.



Datasets

The original datasets are from the SemEval challenges pontiki-etal-2014-semeval; pontiki-etal-2015-semeval; pontiki-etal-2016-semeval, where ATs and the corresponding sentiment polarities are labeled. We evaluate our framework on three public datasets derived from them.

The first dataset is from DBLP:conf/aaai/WangPDX17, where labels for opinion terms are annotated. All datasets share a fixed training/test split. The second dataset is from fan2019target, where (AT, OT) pairs are labeled. The third dataset is from peng2020knowing, where (AT, OT, SP) triples are labeled. A small number of samples with overlapping ATs and OTs are corrected. Also, 20% of the data from the training set are randomly selected as the validation set. The detailed statistics for the three sets of datasets are shown in Table 2, Table 3 and Table 4.

Dataset 14res 14lap 15res
#s #a #o #s #a #o #s #a #o
train 3044 3699 3484 3048 2373 2504 1315 1199 1210
test 800 1134 1008 800 654 674 685 542 510
Table 2: Dataset statistics annotated by DBLP:conf/aaai/WangPDX17. #s, #a and #o denote the numbers of sentences, ATs and OTs.
Dataset 14res 14lap 15res 16res
#s #a #s #a #s #a #s #a
train 1627 2643 1158 1634 754 1076 1079 1512
test 500 865 343 482 325 436 329 457
Table 3: Dataset statistics annotated by fan2019target. #s and #a denote the numbers of sentences and ATs.
Dataset 14res 14lap 15res 16res
#s #p #s #p #s #p #s #p
train 1300 2145 920 1265 593 923 842 1289
dev 323 524 228 337 148 238 210 316
test 496 862 339 490 318 455 320 465
Table 4: Dataset statistics annotated by peng2020knowing. #s and #p denote the numbers of sentences and (AT, OT) pairs.

Subtasks and Baselines

There are three research lines in ABSA, each with different data annotations, ABSA subtasks, baselines and experimental settings. To compare our proposed framework fairly with previous baselines, we specify them clearly for each research line.

Using the dataset from DBLP:conf/aaai/WangPDX17, the following baselines were evaluated for AE, OE, SC and AESC:

  • SPAN-BERT hu-etal-2019-open is a pipeline method for AESC which takes BERT as the backbone network. A span boundary detection module is used for AE, then followed by a polarity classifier based on span representations for SC.

  • IMN-BERT he-etal-2019-interactive is an extension of IMN he-etal-2019-interactive with BERT as the backbone. IMN is a multi-task learning method involving joint training for AE and SC. A message-passing architecture is introduced in IMN to boost the performance of AESC.

  • RACL-BERT chen-qian-2020-relation is a stacked multi-layer network based on a BERT encoder and is the state-of-the-art method for AESC. A relation propagation mechanism is utilized in RACL to capture the interactions between subtasks (i.e., AE, OE, SC).

Using the dataset from fan2019target, the following baselines were evaluated for AOE:

  • IOG fan2019target is the first model proposed to address AOE, which adopts six different BLSTMs to extract corresponding opinion terms for aspects given in advance.

  • LOTN DBLP:conf/aaai/WuZDHC20 is the state-of-the-art method for AOE, which transfers latent opinion information from external sentiment classification datasets to improve performance.

Using the dataset from peng2020knowing, the following baselines were evaluated for AESC, Pair and Triple:

  • RINANTE dai-song-2019-neural is a weakly supervised co-extraction method for AE and OE which makes use of the dependency relations of words in a sentence.

  • CMLA DBLP:conf/aaai/WangPDX17 is a multilayer attention network for AE and OE, where each layer consists of a couple of attentions with tensor operators.

  • Li-unified-R peng2020knowing is a modified variant of Li-unified li2019unified, which is originally designed for AESC via a unified tagging scheme. Li-unified-R only adapts the original OE module for opinion term extraction.

  • Peng-two-stage peng2020knowing is a two-stage framework with separate models for different subtasks in ABSA and is the state-of-the-art method for Triple.

Model Settings

We use BERT-Base-Uncased or BERT-Large-Uncased (https://github.com/google-research/bert) as the backbone model, depending on the baselines; please refer to devlin2019bert for details of BERT. We train with the Adam optimizer for 3 epochs, using learning-rate warm-up over the first training steps and a fixed batch size and dropout probability. The results are not sensitive to the hyperparameters $\alpha, \beta, \gamma$ of the final joint training loss, so we fix them in all our experiments. The results are very sensitive to the logit thresholds of the heuristic multi-span decoding algorithm hu-etal-2019-open, so these are manually tuned on each dataset; all other hyperparameters are kept at their defaults. All experiments are conducted on a single Tesla-V100 GPU.

14res 14lap 15res
AE OE SC AESC AE OE SC AESC AE OE SC AESC
SPAN-BERT 86.71 - 71.75 73.68 82.34 - 62.50 61.25 74.63 - 50.28 62.29
IMN-BERT 84.06 85.10 75.67 70.72 77.55 81.00 75.56 61.73 69.90 73.29 70.10 60.22
RACL-BERT 86.38 87.18 81.61 75.42 81.79 79.72 73.91 63.40 73.99 76.00 74.91 66.05
Dual-MRC 86.60 - 82.04 75.95 82.51 - 75.97 65.94 75.08 - 73.59 65.08
Table 5: Results for AE, SC and AESC on the datasets annotated by DBLP:conf/aaai/WangPDX17. OE is not applicable to our proposed framework. All tasks are evaluated with F1. Baseline results are taken directly from chen-qian-2020-relation. Our model is based on BERT-Large-Uncased. A portion of the training set is randomly selected as the validation set. The results are the average scores of 5 runs with random initialization.
14res 14lap 15res 16res
P R F1 P R F1 P R F1 P R F1
IOG 82.38 78.25 80.23 73.43 68.74 70.99 72.19 71.76 71.91 84.36 79.08 81.60
LOTN 84.00 80.52 82.21 77.08 67.62 72.02 76.61 70.29 73.29 86.57 80.89 83.62
Dual-MRC 89.79 78.43 83.73 78.21 81.66 79.90 77.19 71.98 74.50 86.07 80.77 83.33
Table 6: Results for AOE on the datasets annotated by fan2019target. Baseline results are directly taken from DBLP:conf/aaai/WuZDHC20. Our model is based on BERT-Base-Uncased.
14res 14lap 15res 16res
P R F1 P R F1 P R F1 P R F1
AESC RINANTE 48.97 47.36 48.15 41.20 33.20 36.70 46.20 37.40 41.30 49.40 36.70 42.10
CMLA 67.80 73.69 70.62 54.70 59.20 56.90 49.90 58.00 53.60 58.90 63.60 61.20
Li-unified-R 73.15 74.44 73.79 66.28 60.71 63.38 64.95 64.95 64.95 66.33 74.55 70.20
Peng-two-stage 74.41 73.97 74.19 63.15 61.55 62.34 67.65 64.02 65.79 71.18 72.30 71.73
Dual-MRC 76.84 76.31 76.57 67.45 61.96 64.59 66.84 63.52 65.14 69.18 72.59 70.84
Pair RINANTE 42.32 51.08 46.29 34.40 26.20 29.70 37.10 33.90 35.40 35.70 27.00 30.70
CMLA 45.17 53.42 48.95 42.10 46.30 44.10 42.70 46.70 44.60 52.50 47.90 50.00
Li-unified-R 44.37 73.67 55.34 52.29 52.94 52.56 52.75 61.75 56.85 46.11 64.55 53.75
Peng-two-stage 47.76 68.10 56.10 50.00 58.47 53.85 49.22 65.70 56.23 52.35 70.50 60.04
Dual-MRC 76.23 73.67 74.93 65.43 61.43 63.37 72.43 58.90 64.97 77.06 74.41 75.71
Triple RINANTE 31.07 37.63 34.03 23.10 17.60 20.00 29.40 26.90 28.00 27.10 20.50 23.30
CMLA 40.11 46.63 43.12 31.40 34.60 32.90 34.40 37.60 35.90 43.60 39.80 41.60
Li-unified-R 41.44 68.79 51.68 42.25 42.78 42.47 43.34 50.73 46.69 38.19 53.47 44.51
Peng-two-stage 44.18 62.99 51.89 40.40 47.24 43.50 40.97 54.68 46.79 46.76 62.97 53.62
Dual-MRC 71.55 69.14 70.32 57.39 53.88 55.58 63.78 51.87 57.21 68.60 66.24 67.40
Table 7: Results for AESC, Pair and Triple on the datasets annotated by peng2020knowing. Baseline results are directly taken from peng2020knowing. Our model is based on BERT-Base-Uncased.

Evaluation Metrics

For all tasks in our experiments, we use precision (P), recall (R), and F1 scores as the evaluation metrics; a predicted term is counted as correct only if it exactly matches a gold term. Following chen-qian-2020-relation, we use F1 as the metric for aspect-level sentiment classification.
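Exact-match evaluation can be sketched as follows (`exact_match_prf` is a simplified, hypothetical stand-in for the evaluation script):

```python
def exact_match_prf(predicted, gold):
    """Precision, recall and F1 under exact matching of predicted terms."""
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)          # exact matches only
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```

Under this metric a near-miss such as “services” vs. “service” counts as fully wrong, which is why boundary detection matters so much for span extraction.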

Example 1: “Rice is too dry, tuna was n’t so fresh either.”
    Ground truth: (Rice, too dry, NEG), (tuna, was n’t so fresh, NEG)
    Our model: (Rice, too dry, NEG), (tuna, was n’t so fresh, NEG)
    Peng-two-stage: (Rice, too dry, NEG), (tuna, was n’t so fresh, NEG), (Rice, was n’t so fresh, NEG) ✗, (tuna, too dry, NEG) ✗
    Li-unified-R: (Rice, dry, POS) ✗, (Rice, n’t, POS) ✗, (tuna, dry, POS) ✗, (tuna, fresh, POS) ✗
    CMLA: (Rice, dry, POS) ✗, (tuna, dry, POS) ✗

Example 2: “I am pleased with the fast log on, speedy WiFi connection and the long battery life.”
    Ground truth: (log on, pleased, POS), (log on, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS)
    Our model: (log on, pleased, POS), (log on, fast, POS), (WiFi connection, speedy, POS), (WiFi connection, pleased, POS) ✗, (battery life, long, POS)
    Peng-two-stage: (log, pleased, POS) ✗, (log, fast, POS) ✗, (WiFi connection, speedy, POS), (battery life, long, POS)
    Li-unified-R: (WiFi connection, speedy, POS), (battery life, long, POS)
    CMLA: (WiFi connection, speedy, POS), (WiFi connection, long, POS) ✗, (battery life, fast, POS) ✗, (battery life, long, POS)

Example 3: “The service was exceptional - sometime there was a feeling that we were served by the army of friendly waiters.”
    Ground truth: (service, exceptional, POS), (waiters, friendly, POS)
    Our model: (service, exceptional, POS), (waiters, friendly, POS)
    Peng-two-stage: (service, exceptional, POS), (waiters, friendly, POS)
    Li-unified-R: (service, exceptional, POS), (waiters, friendly, POS), (service, feeling, POS) ✗
    CMLA: (service, exceptional, POS), (waiters, friendly, POS)

Table 8: Case study of the task Triple. Predictions that do not match the ground truth are marked with ✗. The three examples are exactly the same as the ones selected by peng2020knowing.

Main Results

As mentioned previously, there are three research lines with different datasets, ABSA substasks, baselines and experimental settings. For each research line, we keep the same dataset and experimental setting, and compare our proposed dual-MRC framework with the baselines and present our results in Table 5, Table 6 and Table 7.

First, we compare our proposed method for AE, SC and AESC on the dataset from DBLP:conf/aaai/WangPDX17. OE is not applicable to our proposed framework (if needed, we can train a separate model with the query “Find the opinion terms in the text.” for OE). Since the pair-wise (AT, OT) relations are not annotated in this dataset, we use the right part of our model for classification only. A portion of the training set is randomly selected as the validation set. The results, averaged over 5 runs with random initialization, are shown in Table 5. We adopt BERT-Large-Uncased as our backbone model since the baselines also use it. All baselines are BERT based, and our results rank first or second compared with them. Recall that our approach is inspired by SPAN-BERT, which is a strong baseline for extraction tasks. Our results are close to SPAN-BERT on AE. However, with the help of MRC, we achieve much better results on SC and AESC.

Second, we compare our proposed method for AOE on the dataset from fan2019target, where the pair-wise (AT, OT) relations are annotated. This task can be viewed as a special case of our proposed full model. The results are shown in Table 6; BERT-Base-Uncased is used as the backbone. Although the result on 16res is slightly lower than LOTN, most of our results significantly outperform the previous baselines, indicating that our model has an advantage in matching ATs and OTs. In particular, our model performs much better than the baselines on 14lap, probably due to the domain difference between laptop comments (14lap) and restaurant comments (14res/15res/16res).

Third, we compare our proposed method for AESC, Pair and Triple on the dataset from peng2020knowing. The full model of our proposed framework is implemented. The results are shown in Table 7; BERT-Base-Uncased is used as the backbone. Our results significantly outperform the baselines, especially in the precision scores for extracting the pair-wise (AT, OT) relations. Note that Li-unified-R and Peng-two-stage both use the unified tagging scheme. For extraction tasks, span based methods outperform the unified tagging scheme, probably because determining the start/end positions is easier than determining the label for every token. More precisely, under the unified tagging scheme there are 7 possible labels for each token (B-POS, B-NEU, B-NEG, I-POS, I-NEU, I-NEG, O), so there are $7^{n}$ total choices for a sentence of $n$ tokens. For span based methods, there are 4 possible choices for each token (IS-START or NOT-START, combined with IS-END or NOT-END), so there are $4^{n}$ total choices. Our proposed method combines MRC and span based extraction, and it yields large improvements on Pair and Triple.
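This label-space comparison is simple arithmetic and easy to verify for a concrete sentence length:

```python
n = 10  # tokens in a sentence

# Unified tagging: one of {B-POS, B-NEU, B-NEG, I-POS, I-NEU, I-NEG, O} per token.
unified_choices = 7 ** n

# Span-based: two independent binary decisions (is-start, is-end) per token,
# i.e. 4 combinations per token.
span_choices = 4 ** n  # = (2 ** n) * (2 ** n)

assert span_choices < unified_choices  # the span label space is far smaller
```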

Analysis on Joint Learning

We now analyze the effectiveness of joint learning. The experimental results on the dataset from peng2020knowing are shown in Table 9. Overall, adding one or two learning objectives does not change the F1 scores much. However, joint learning is more efficient, since it can handle more tasks with one single model.

Task Setting 14res 14lap 15res 16res
AESC w/o right-MRC extraction 76.31 63.95 65.43 69.48
AESC full joint model 76.57 (+0.26) 64.59 (+0.64) 65.14 (−0.29) 70.84 (+1.36)
Pair w/o right-MRC classification 76.33 65.26 65.21 76.61
Pair full joint model 74.93 (−1.40) 63.37 (−1.89) 64.97 (−0.24) 75.71 (−0.90)
AE left-MRC only 82.80 78.35 78.22 82.16
AE full joint model 82.93 (+0.13) 77.31 (−1.04) 76.08 (−2.14) 81.20 (−0.96)
Table 9: Results of the analysis of joint learning for AESC, Pair and AE on the dataset from peng2020knowing. Values in parentheses are the F1 differences relative to the corresponding reduced setting.

For the task AESC, we compare the results with or without the span based extraction output from the right part of our model. By jointly learning to extract the opinion terms for a given aspect, the result of aspect-level sentiment classification is improved a little bit. It makes sense because extracted OTs are useful for identifying the sentiment polarity of the given AT.

For the task Pair, we compare the results with or without the classification output from the right part of our model. The F1 scores for OT extraction decrease slightly when the sentiment classification objective is added. The reason might be that a sentiment polarity can correspond to multiple OTs in a sentence, some of which are not paired with the given AT.

Case Study

To validate the effectiveness of our model, we compare our method on exactly the same three examples used in the baseline peng2020knowing, as its source code is not public. The results are shown in Table 8.

The first example shows that our MRC based approach performs better at matching ATs and OTs: Peng's approach matches “tuna” and “too dry” by mistake, while our approach converts the matching problem into an MRC problem. The second example shows that the span based extraction method is good at detecting entity boundaries: our approach successfully detects “log on”, while Peng's approach detects “log” by mistake. Moreover, the sentiment classification results indicate that our MRC based approach is also good at SC.

In Figure 4, we plot the attention matrices between the input text and the query from our fine-tuned model. As we can see, “opinion term” has high attention scores with “fresh”, and “sentiment” has high attention scores with “food/fresh/hot”. As a result, the queries capture important information for the task via self-attention.

Figure 4: An example of attention matrices for the input text and query.


Conclusion

In this paper, we propose a joint training dual-MRC framework that handles all subtasks of aspect based sentiment analysis (ABSA) in one shot, where the left MRC is for aspect term extraction and the right MRC is for aspect-oriented opinion term extraction and sentiment classification. The original datasets are converted and fed into the dual-MRC for joint training. We conduct experiments across three research lines, each with its own ABSA subtasks and baselines. Experimental results indicate that our proposed framework outperforms all compared baselines.