This repository contains the source code for the paper "Aspect Sentiment Triplet Extraction using Reinforcement Learning", published at CIKM 2021.
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting triplets of aspect terms, their associated sentiments, and the opinion terms that provide evidence for the expressed sentiments. Previous approaches to ASTE usually simultaneously extract all three components, or first identify the aspect and opinion terms and then pair them up to predict their sentiment polarities. In this work, we present a novel paradigm, ASTE-RL, that regards the aspect and opinion terms as arguments of the expressed sentiment in a hierarchical reinforcement learning (RL) framework. We first focus on the sentiments expressed in a sentence, then identify the target aspect and opinion terms for each sentiment. This takes into account the mutual interactions among the triplet's components while improving exploration and sample efficiency. Furthermore, this hierarchical RL setup enables us to deal with multiple and overlapping triplets. In our experiments, we evaluate our model on existing datasets from the laptop and restaurant domains and show that it achieves state-of-the-art performance. The implementation of this work is publicly available at https://github.com/declare-lab/ASTE-RL.
Existing methods, such as CMLA+ (Wang et al., 2017), RINANTE+ (Dai and Song, 2019), Li-unified-R (Li et al., 2019a), WhatHowWhy (Peng et al., 2020), OTE-MTL (Zhang et al., 2020), GTS (Wu et al., 2020), JET (Xu et al., 2020), TOP (Huang et al., 2021) and BMRC (Chen et al., 2021), are mainly divided into simultaneous and sequential methods. Early works (Wang et al., 2017; Dai and Song, 2019; Li et al., 2019a; Peng et al., 2020; Zhang et al., 2020; Wu et al., 2020) usually employ a two-staged approach in which they simultaneously extract aspect terms with sentiments and opinion terms, and subsequently decode the triplets through triplet classification or pairwise matching. Recent works (Huang et al., 2021; Chen et al., 2021) have shifted towards a more restrictive, multi-stage, sequential extraction process that can potentially capture more mutual dependencies and correlations among the triplet's components while forgoing the triplet decoding stage.
In this work, we tackle the ASTE task using a novel paradigm, ASTE-RL, in which we consider the aspect and opinion terms as arguments of the sentiments expressed in a sentence. Unlike previous approaches, which usually extract all three components simultaneously or first identify the aspect and opinion terms and then pair them up to predict their sentiment polarities, we propose a hierarchical reinforcement learning (RL) framework (Takanobu et al., 2019; Duan et al., 2020) in which we first consider the sentiment polarities and then identify their associated opinion and aspect terms using separate RL processes. This process is repeated to extract all triplets present in a sentence. With this hierarchical RL setup, the model handles multiple and overlapping triplets, and models the interactions between the three components effectively. Inspired by the recent success of the multi-turn machine reading comprehension (MRC) framework (Li et al., 2019b; Chen et al., 2021), we also incorporate its ideas to further improve these mutual interactions.
We divide our framework ASTE-RL into three components: 1) aspect-oriented sentiment classification, 2) opinion term extraction, and 3) aspect term extraction. For the sentiment classification component, the sentiment is expressed towards the aspect term and has four possible labels: POS, NEG, NEU, and O (no sentiment). Our opinion and aspect extraction components are sequence labeling models with a BIO tagging scheme (Ramshaw and Marcus, 1999). With this BIO scheme, we have three different labels to tag an input sequence for the opinion/aspect terms: B (beginning), I (inside), and O (outside). For a given sentence with n tokens, ASTE-RL aims to output a set of triplets, where each triplet consists of the tagging labels for its opinion term, the tagging labels for its aspect term, and its sentiment polarity.
The three components are structured in a two-level hierarchy (Takanobu et al., 2019). In the higher level, we have the sentiment indicator. During the sequential scan of a sentence, an agent decides at each token position whether it has gathered sufficient information to mark that position as indicative of a sentiment expressed towards an aspect term. If not, the agent marks the position as O. Otherwise, it marks the position as POS, NEG, or NEU. In the latter case, the agent launches two subtasks in the lower level, opinion extraction and aspect extraction, to identify the terms as arguments of the sentiment via sequence labeling. Upon completion, the agent returns to the high-level sentiment indication process and continues the sequential scan of the sentence. This process is well suited to be formulated as a semi-Markov decision process (Sutton et al., 1999b): 1) a high-level RL process detects a sentiment indicator in a sentence; 2) two low-level RL processes identify the opinion and aspect terms separately for the corresponding sentiment.
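The two-level control flow described above can be sketched as follows. This is a minimal illustration with hypothetical stub policies standing in for the trained high- and low-level networks; all function names and the toy decision rules are our own.

```python
# Minimal sketch of the hierarchical scan (stub policies; names are illustrative).
def high_level_option(token):
    # A trained policy would score {POS, NEG, NEU, O} from the state;
    # this stub marks one hard-coded sentiment word.
    return "POS" if token == "great" else "O"

def low_level_span(tokens, pos, subtask):
    # A trained policy would emit B/I/O tags; this stub returns a token span.
    if subtask == "opinion":
        return (pos, pos)                      # the sentiment-bearing word itself
    return (max(pos - 1, 0), max(pos - 1, 0))  # the preceding word as the aspect

def extract_triplets(tokens):
    triplets = []
    for t, tok in enumerate(tokens):           # high-level sequential scan
        option = high_level_option(tok)
        if option != "O":                      # sentiment detected: launch subtasks
            opinion = low_level_span(tokens, t, "opinion")
            aspect = low_level_span(tokens, t, "aspect")
            triplets.append((aspect, opinion, option))
            # control then returns here and the scan continues
    return triplets

print(extract_triplets(["food", "great"]))  # [((0, 0), (1, 1), 'POS')]
```

Because the scan resumes after each pair of subtasks, the same loop naturally yields several triplets for a sentence with several sentiment indicators.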
The high-level RL policy aims to detect the aspect-oriented sentiments in a sentence. This can be seen as an RL policy over options, where options are high-level actions (Sutton et al., 1999b).
Option: The option is selected from {POS, NEG, NEU, O}, where O indicates that no sentiment is expressed towards any aspect term.
State: The state at each time step t is represented by: 1) the current hidden state h_t, 2) the embedding of the current part-of-speech (POS) tag, 3) the sentiment polarity vector, and 4) the high-level state at the previous time step. To obtain the POS tag for each token in a sentence, we pass the sentence through the spaCy (https://spacy.io/) POS tagger. The sentiment polarity vector is the embedding of the latest option selected from {POS, NEG, NEU, O}. Both the POS tag and sentiment embeddings are learned parameters of the model. Hence, writing p_t for the POS tag embedding, v_t for the sentiment polarity vector and s_{t-1} for the previous high-level state, the state is formally represented by

    s_t = f(W [h_t; p_t; v_t; s_{t-1}]),

where f is a non-linear function implemented by an MLP and W is a learned weight matrix. The hidden state h_t is obtained from a pre-trained BERT model (Devlin et al., 2018) with Whole Word Masking, fine-tuned on the SQuAD v1.1 training set (Rajpurkar et al., 2016). Specifically, we first pass the query "Which tokens indicate sentiments relating pairs of aspect spans and opinion spans?" together with the review sentence into the BERT tokenizer to get the final input sequence. We then feed this input into the BERT model, and h_t is the output vector from the BERT model that corresponds to the token at position t.
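As a concrete illustration, the query and review sentence are packed into a single BERT-style input sequence. The sketch below shows the layout only; a whitespace split is used as a stand-in for the real WordPiece tokenizer, and the function name is our own.

```python
def build_bert_input(query, sentence):
    # [CLS] query tokens [SEP] sentence tokens [SEP]; whitespace split is a
    # stand-in for the actual BERT tokenizer.
    return ["[CLS]"] + query.split() + ["[SEP]"] + sentence.split() + ["[SEP]"]

inp = build_bert_input(
    "Which tokens indicate sentiments relating pairs of aspect spans and opinion spans?",
    "The food was great",
)
# h_t is then the BERT output vector at the position of the t-th sentence token.
```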
Policy: The stochastic policy for sentiment detection takes the high-level state as input and specifies a probability distribution over the options.
Reward: At every time step, when the sampled option is executed, the environment provides an intermediate reward that reflects whether the option agrees with the gold sentiment annotation at that position.
If a sentiment expressed towards an aspect term is detected at a time step (i.e., the sampled option is not O), the agent launches the two subtasks as low-level RL processes. When the subtasks are completed, the agent returns to the high-level RL process. Otherwise, the agent continues its sequential scan of the sentence until the option for the last word is sampled. When all options have been sampled (i.e., at the end of the combined hierarchical RL process), the high-level process receives a final reward that measures how well the predicted triplets match the gold triplets.
Every time the high-level policy detects an aspect-oriented sentiment, two low-level policies extract the corresponding opinion and aspect terms, respectively and separately, for that sentiment. In this subsection, we generalize the RL elements so that they apply to both low-level RL processes, unless otherwise stated.
Action: The action at every time step is to assign a tag to the current word. The action is selected from {B, I, O}, following a BIO tagging scheme. The symbols B and I represent the beginning and inside of an opinion/aspect term respectively, while the symbol O represents the unmarked label.
State: Similar to the high-level policy, the state at each time step is represented by: 1) the current hidden state, 2) the current POS tag embedding, 3) the opinion/aspect tag vector, and 4) the low-level state at the previous time step. To enhance the interactions between the sentiment and its associated opinion/aspect terms, we add a context vector to the state at each time step, derived from the sentiment state representation assigned to the latest option.
We also add the output vector from the BERT model for the [CLS] token, h_[CLS], when computing the low-level states. Hence, writing g_t for the opinion/aspect tag vector, c_t for the context vector and s'_{t-1} for the previous low-level state, the state is formally represented by

    s'_t = f'(W' [h_t; h_[CLS]; p_t; g_t; c_t; s'_{t-1}]).
Note that the representations used to compute the first low-level states for the opinion and aspect extractions are initialized differently; these separate initializations help us capture interactions between the triplet's components. Here f' is a non-linear function implemented by an MLP, while the two initializations are computed by separate single linear layers. The hidden state is obtained in the same way as in the high-level RL process, but the queries are changed: "What is the opinion span for the sentiment indicated at …?" for opinion term extraction and "What is the aspect span for the sentiment indicated at …?" for aspect term extraction.
Policy: The stochastic policy for opinion/aspect extraction specifies a probability distribution over the actions given the low-level state and the high-level option that launched the current subtask.
Reward: At every time step, when the sampled action is executed, the intermediate reward is computed as the prediction error over the gold labels, scaled by a weight that depends on the gold aspect/opinion tag type. This enables the model to learn a policy that emphasizes the prediction of B and I tags and avoids trivially predicting only O tags. When all actions have been sampled, the low-level processes receive a final reward.
There are also negative rewards in the cases where the low-level processes produce impossible predictions, namely when no B tag or more than one B tag is present, i.e., when no opinion/aspect term or more than one is identified for a predicted triplet. Note that the low-level rewards are non-zero only when the option from the high-level process is correctly predicted.
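A possible form of this tag-weighted intermediate reward is sketched below. The exact weights are not given in this section, so the values here are illustrative assumptions, as is the sign convention of penalizing mistakes with the negated weight.

```python
TAG_WEIGHTS = {"B": 1.0, "I": 1.0, "O": 0.2}  # assumed: O is down-weighted

def low_level_reward(action, gold_tag):
    # +w for a correct tag, -w for an incorrect one; because O carries a small
    # weight, a policy that trivially predicts all-O earns little reward.
    w = TAG_WEIGHTS[gold_tag]
    return w if action == gold_tag else -w
```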
We learn the high-level policy by maximizing the expected total reward at each time step as the agent samples trajectories following it; likewise, we learn the low-level policies by maximizing their expected total rewards as the agent samples trajectories following them. We then optimize all policies using policy gradient methods (Sutton et al., 1999a) with the REINFORCE algorithm (Williams, 1992; Takanobu et al., 2019).
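In a REINFORCE-style update, each log-probability is weighted by the (discounted) return from that step onward; a minimal helper for computing those returns over one sampled trajectory (the discount factor here is an illustrative assumption, not a value from the paper):

```python
def discounted_returns(rewards, gamma=0.99):
    # R_t = r_t + gamma * R_{t+1}, computed backwards over one trajectory.
    # The REINFORCE update then ascends sum_t R_t * grad log pi(a_t | s_t).
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return returns[::-1]
```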
We pre-train our ASTE-RL models for 40 epochs with a learning rate of 2e-5. During pre-training, we give our model the ground-truth options or actions at every time step to limit the exploration of the agent, which is necessary due to the high-dimensional state space in our setup. This prevents the agent from exploring too many unreasonable cases, e.g. an I tag preceding a B tag, and learning too slowly. We then fine-tune the best model (chosen based on the Dev score) with the RL policy for 15 epochs with a learning rate of 5e-6, sampling 5 trajectories for each data point during RL fine-tuning.
We initialize the BERT parameters from pre-trained weights (Devlin et al., 2018) and update them during training for this task. We set the dimension of the sentiment polarity and opinion/aspect tag embeddings at 300 and the dimension of the POS embeddings at 25; these embeddings are randomly initialized and updated during training. We set the dimension of the high- and low-level state vectors at 300. We apply dropout (Srivastava et al., 2014) after the non-linear activations during training, with a dropout rate of 0.5. We train our models in mini-batches of size 16 and optimize the model parameters using the Adam optimizer (Kingma and Ba, 2014).
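For reference, the training configuration described above can be collected in one place. The values are transcribed from this section; the dictionary and its key names are our own convenience naming, not part of the released code.

```python
# Hyperparameters as described in the text (key names are illustrative).
TRAIN_CONFIG = {
    "pretrain_epochs": 40,
    "pretrain_lr": 2e-5,
    "rl_finetune_epochs": 15,
    "rl_finetune_lr": 5e-6,
    "rl_trajectories_per_example": 5,
    "sentiment_tag_embed_dim": 300,   # sentiment polarity / opinion-aspect tags
    "pos_embed_dim": 25,              # part-of-speech embeddings
    "state_dim": 300,                 # high- and low-level state vectors
    "dropout": 0.5,
    "batch_size": 16,
    "optimizer": "Adam",
}
```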
We use the ASTE-Data-V2 dataset (https://github.com/xuuuluuu/SemEval-Triplet-data) curated by Xu et al. (2020) to show the effectiveness of ASTE-RL in two different domains of English reviews, namely the laptop and restaurant domains. 14Rest, 15Rest and 16Rest are datasets of the restaurant domain, while 14Lap is of the laptop domain. We include the statistics of the four datasets in ASTE-Data-V2 in Table 2, where #sentence represents the number of sentences, and #positive, #negative, and #neutral represent the numbers of triplets with positive, negative, and neutral sentiment polarities respectively.
We process the sentences with BERT's WordPiece tokenizer (Wu et al., 2016) to make them work for ASTE-RL. Since WordPiece tokenization may break the tokens in the original dataset into subwords, we need to align the opinion/aspect term annotations with our BIO tagging scheme. We tag every subword that corresponds to the opinion/aspect term tokens in the original annotations with I, except for the first one, which we tag with B.
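The subword alignment can be sketched as follows, assuming each original token comes with its gold B/I/O tag and its list of WordPiece subwords (the function name is our own):

```python
def align_bio(word_tags, word_pieces):
    # word_tags:   one B/I/O tag per original token
    # word_pieces: the WordPiece subwords of each original token
    out = []
    for tag, pieces in zip(word_tags, word_pieces):
        for i in range(len(pieces)):
            if tag == "O":
                out.append("O")
            elif i == 0:
                out.append(tag)   # first subword keeps the original B or I
            else:
                out.append("I")   # remaining subwords continue the term
    return out

print(align_bio(["B", "O"], [["play", "##sta", "##tion"], ["rocks"]]))
# ['B', 'I', 'I', 'O']
```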
We follow the evaluation metrics of Xu et al. (2020) for our experiments. An extracted triplet is correct only if the entire aspect term, opinion term, and sentiment polarity all match a ground-truth triplet. We report precision, recall and F1 score based on this criterion.
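Under this exact-match criterion, the metrics reduce to set operations over predicted and gold triplets. A minimal sketch, where the span and polarity encodings are our own illustrative choices:

```python
def triplet_prf1(pred, gold):
    # Each triplet is (aspect_span, opinion_span, sentiment); a prediction
    # counts as correct only if all three parts match a gold triplet exactly.
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [((1, 1), (3, 3), "POS"), ((5, 6), (8, 8), "NEG")]
pred = [((1, 1), (3, 3), "POS")]
print(triplet_prf1(pred, gold))  # (1.0, 0.5, 0.6666666666666666)
```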
Table 3: Results on ASTE-Data-V2. Each group of columns reports the Dev F1 ("-" where not reported), followed by test precision (P), recall (R) and F1; the four groups correspond, from left to right, to 14Lap, 14Rest, 15Rest and 16Rest.

| Model | Dev | P | R | F1 | Dev | P | R | F1 | Dev | P | R | F1 | Dev | P | R | F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| WhatHowWhy (Peng et al., 2020) | - | 37.38 | 50.38 | 42.87 | - | 43.24 | 63.66 | 51.46 | - | 48.07 | 57.51 | 52.32 | - | 46.96 | 64.24 | 54.21 |
| OTE-MTL (Zhang et al., 2020) | - | 54.26 | 41.07 | 46.75 | - | 63.07 | 58.25 | 60.56 | - | 60.88 | 42.68 | 50.18 | - | 65.65 | 54.28 | 59.42 |
| GTS (Wu et al., 2020) | - | 58.02 | 40.11 | 47.43 | - | 71.41 | 53.00 | 60.84 | - | 64.57 | 44.33 | 52.57 | - | 70.17 | 55.95 | 62.26 |
| JET (Xu et al., 2020) | 48.26 | 54.84 | 34.44 | 42.31 | 53.14 | 66.76 | 49.09 | 56.58 | 55.06 | 59.77 | 42.27 | 49.52 | 58.45 | 63.59 | 50.97 | 56.59 |
| JET (Xu et al., 2020) | 45.83 | 55.98 | 35.36 | 43.34 | 53.54 | 61.50 | 55.13 | 58.14 | 60.97 | 64.37 | 44.33 | 52.50 | 60.90 | 70.94 | 57.00 | 63.21 |
| GTS (Wu et al., 2020) | - | 57.12 | 53.42 | 55.21 | - | 71.76 | 59.09 | 64.81 | - | 54.71 | 55.05 | 54.88 | - | 65.89 | 66.27 | 66.08 |
| JET (Xu et al., 2020) | 50.40 | 53.53 | 43.28 | 47.86 | 56.00 | 63.44 | 54.12 | 58.41 | 59.86 | 68.20 | 42.89 | 52.66 | 60.67 | 65.28 | 51.95 | 57.85 |
| JET (Xu et al., 2020) | 48.84 | 55.39 | 47.33 | 51.04 | 56.89 | 70.56 | 55.94 | 62.40 | 64.78 | 64.45 | 51.96 | 57.53 | 63.75 | 70.42 | 58.37 | 63.83 |
| TOP (Huang et al., 2021) | - | 57.84 | 59.33 | 58.58 | - | 63.59 | 73.44 | 68.16 | - | 54.53 | 63.30 | 58.59 | - | 63.57 | 71.98 | 67.52 |
| BMRC (Chen et al., 2021) | 56.08 | 65.91 | 52.15 | 58.18 | 62.83 | 72.17 | 65.43 | 68.64 | 72.47 | 62.48 | 55.55 | 58.79 | 70.91 | 69.87 | 65.68 | 67.35 |
| ASTE-RL (pre-training only) | 57.35 | 62.00 | 55.84 | 58.73 | 64.50 | 69.70 | 69.23 | 69.47 | 72.84 | 63.31 | 61.61 | 62.44 | 71.50 | 64.76 | 70.74 | 67.57 |
We compare the performance of ASTE-RL against the following baselines: (i) WhatHowWhy: Peng et al. (2020) proposed a multi-layer LSTM neural architecture for the co-extraction of aspect terms with sentiments and opinion terms, with a Graph Convolutional Network (Kipf and Welling, 2016) component that captures dependency information to enhance the co-extraction. (ii) OTE-MTL: Zhang et al. (2020) proposed a multi-task learning framework to jointly extract aspect and opinion terms while parsing word-level sentiment dependencies, before conducting a triplet decoding process. We use results from Huang et al. (2021) for OTE-MTL's performance on ASTE-Data-V2. (iii) GTS: Wu et al. (2020) proposed an end-to-end grid tagging framework and a grid inference strategy to exploit mutual indication between opinion factors. We use results from Huang et al. (2021) for GTS' performance on ASTE-Data-V2, and report them for two variants: bidirectional LSTM (BiLSTM) and BERT. (iv) JET: Xu et al. (2020) proposed a position-aware tagging scheme for triplet extraction that encodes information about sentiment polarities and the distances between the start position of the aspect term and the opinion term's start and end positions, or vice versa. We report the results for two variants: BiLSTM and BERT. (v) TOP: Huang et al. (2021) proposed a two-stage method to enhance correlations between aspect and opinion terms: aspect and opinion terms are first extracted with sequence labeling, and artificial tags are added to each pair to establish the correlation; a sentiment polarity is then identified for each pair using the resulting representations. (vi) BMRC: Chen et al. (2021) proposed a transformation of the ASTE task into a multi-turn MRC task and a bidirectional MRC framework to address it, using non-restrictive, restrictive and sentiment classification queries in a three-turn process to extract triplets. We train and test BMRC on ASTE-Data-V2 over 5 runs with different random seeds.
The experimental results are shown in Table 3. We observe that BERT-based models (their results are in the rows directly above ASTE-RL's results in Table 3) generally perform better than the non-BERT models; hence, we only experiment with BERT for our ASTE-RL model. We select our best model for each dataset based on its Dev F1 score. For reproducibility, we report the testing results averaged over 5 runs with different random seeds. ASTE-RL outperforms existing baselines on all four datasets, and significantly outperforms them on the 15Rest dataset. Compared to the second-best performance on each dataset, we observe an average improvement of 1.68% F1 score across all four datasets, and an improvement of 3.93% on 15Rest. We also observe that our model strikes a balance between the TOP and BMRC models in terms of precision and recall, and hypothesize that this balance can be flexibly shifted to fit dataset requirements if we generalize the F1 score to the F_beta score, the weighted harmonic mean of precision and recall.
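The generalization alluded to here is the standard F_beta score, in which recall is weighted beta times as much as precision (beta = 1 recovers the usual F1):

```python
def f_beta(p, r, beta=1.0):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R);
    # beta > 1 favours recall, beta < 1 favours precision.
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

print(f_beta(0.5, 0.5))        # 0.5
print(f_beta(0.5, 0.5, 2.0))   # 0.5
```

When precision and recall are equal, every beta yields the same score; the choice of beta only matters when the two diverge, which is exactly the precision/recall trade-off discussed above.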
In Table 3, we also report the results for ASTE-RL without the RL fine-tuning step. In this setting, we pre-train ASTE-RL for 40 epochs as usual and then run another 15 epochs with a learning rate of 5e-6 (as used in the RL fine-tuning step). Compared to the RL fine-tuning setting with multinomial sampling, this setting has lower F1 scores, with an average decrease of 0.51% over 5 runs with different random seeds. In this setting, our model achieves slightly higher recall, but precision is significantly lower across all four datasets. This might be because multinomial sampling encourages more exploration after the initial pre-training of 40 epochs.
Table 4 shows the results of ASTE-RL and BMRC in complex situations where there are multiple or overlapping triplets in a sentence. For the multiple-triplet scenario, we observe a performance increase for 14Rest, 15Rest and 16Rest and a decrease for 14Lap compared to the case where only one triplet is present in a sentence. For the overlapping-triplet scenario, we observe a performance increase for 15Rest and a decrease for 14Lap, 14Rest and 16Rest.
In general, we observe that ASTE-RL can handle multiple and overlapping triplets in a sentence consistently well due to its hierarchical RL setup, as compared to BMRC. There is a total decrease of 4.76% for multiple triplet extraction for ASTE-RL across all four datasets as compared to 16.21% for BMRC, and a total decrease for overlapping triplet extraction of 16.16% for ASTE-RL as compared to 34.38% for BMRC.
In this work, we propose a novel model, ASTE-RL, based on a hierarchical reinforcement learning (RL) paradigm for aspect sentiment triplet extraction (ASTE). In this paradigm, we treat the aspect and opinion terms as arguments of the sentiment polarities. We decompose the ASTE task into a hierarchy of three subtasks: high-level sentiment polarity extraction, and low-level opinion and aspect term extraction. This approach is effective at modeling the interactions between the three tasks and at handling multiple and overlapping triplets. We also incorporate multi-turn MRC elements into our model to further improve these interactions. Our proposed model achieves state-of-the-art performance on four challenging datasets for the ASTE task.
This project is supported by the DSO grant no. RTDST190702 awarded to SUTD titled Complex Question Answering.
Li, X., Bing, L., Li, P., and Lam, W. (2019a). A unified model for opinion target extraction and target sentiment prediction. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI. https://doi.org/10.1609/aaai.v33i01.33016714
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (1999a). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, Vol. 99. Citeseer, 1057–1063.