All areas of human life are affected by people's opinions. Given the sheer number of reviews and other opinions on the Internet, there is a need to automate the extraction of relevant information. For machines, however, measuring sentiment is not an easy task, because natural language is highly ambiguous at all levels and thus difficult to process. For instance, a single word can hardly convey the whole meaning of a statement. Moreover, computers often fail to distinguish literal from figurative meaning, or incorrectly handle complex linguistic phenomena such as sarcasm, humor, and negation.
In this paper, we take a closer look at two factors that make automatic opinion mining difficult: the problem of representing text information, and sentiment analysis (SA) itself. In particular, we leverage contextual embeddings, which convey a word's meaning depending on the context in which it occurs. Furthermore, we build a hierarchical multi-layer classifier based on the Transformer encoder architecture, relying primarily on a self-attention mechanism and bi-attention. The proposed sentiment classification model is language-independent, which is especially useful for low-resource languages (e.g. Polish).
We evaluate our methods on various standard datasets, which allows us to compare our approach against current state-of-the-art models for three languages: English, Polish and German. We show that our approach is comparable to the best performing sentiment classification models and, importantly, in two cases yields significant improvements over the state of the art.
The paper is organized as follows: Section 2 presents the background and related work. Section 3 describes our proposed method. Section 4 discusses datasets, experimental setup, and results. Section 5 concludes this paper and outlines the future work.
2 Related Work
Sentiment classification has been one of the most active research areas in natural language processing (NLP) and has become one of the most popular downstream tasks for evaluating the performance of neural network (NN) based models. The task itself encompasses several different opinion-related tasks, and hence tackles many challenging NLP problems; see e.g. [17, 21].
2.1 Sentiment Analysis Approaches
The first fully-formed techniques for SA emerged around two decades ago and remained prevalent for several years, until deep learning methods entered the stage. The most straightforward method is based on counting the positive and negative words in a piece of text: the text is assumed to have positive polarity if it contains more positive than negative terms, and vice versa. Of course, this term-counting method is often insufficient, and improved methods were subsequently proposed. Various studies have shown that one can determine the polarity of an unknown word by calculating its co-occurrence statistics. Moreover, classical solutions to the SA problem are often based on lexicons. Traditional lexicon-based SA leverages word lists that are pre-annotated with positive and negative sentiment. For many years, lexicon-based approaches were therefore the method of choice when there was an insufficient amount of labeled data to train a classifier in a fully supervised way.
In general, ML algorithms are popular methods for determining sentiment polarity; the first ML models were applied to SA in the early 2000s. Over the years, various NN architectures have been introduced in the field of SA; in particular, recursive neural networks, recurrent neural networks (RNNs) [28, 29, 14], and convolutional neural networks (CNNs) [10, 12] have become the most prevalent choices.
2.2 Vector Representations of Words
One of the principal concepts in linguistics states that related words can be used in similar ways. Importantly, words may have different meanings in different contexts. Nevertheless, until recently the dominant approach (e.g. word2vec, GloVe) was to learn a single representation per word, which had to capture all of its possible meanings.
However, a new set of methods to learn dynamic representations of words has lately emerged [19, 8, 25, 26, 5]. These approaches allow each word representation to capture what the word means in a particular context: while every word token has its own vector, that vector depends on a variable-length sequence of nearby words (i.e. the context). A context vector is obtained by feeding a neural network with these context word vectors and encoding them into a single fixed-length vector.
ULMFiT was the very first method to induce contextual word representations by harnessing the power of language modeling. The authors proposed to learn contextual embeddings by pre-training a language model (LM) and then performing task-specific fine-tuning. The ULMFiT architecture is based on a vanilla 3-layer Long Short-Term Memory (LSTM) network without any attention mechanism.
Another recently introduced contextual embedding model is ELMo (Embeddings from Language Models). Similarly to ULMFiT, this model uses word-level tokens. ELMo contextual embeddings are "deep", as they are a function of all of the hidden states. Concretely, context-sensitive features are extracted by a 2-layer bidirectional LSTM language model, with one stack reading left-to-right and the other right-to-left. The contextual representation of each word is thus the concatenation of the left-to-right and right-to-left representations, as well as the initial embedding (see Fig. 1).
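Following Peters et al., the layer combination described above can be written compactly as a task-specific weighted sum over the biLM layers:

```latex
\mathrm{ELMo}_k^{task} = \gamma^{task} \sum_{j=0}^{L} s_j^{task}\, \mathbf{h}_{k,j}^{LM}
```

where $\mathbf{h}_{k,0}^{LM}$ is the context-independent token embedding, $\mathbf{h}_{k,j}^{LM}$ ($j \geq 1$) concatenates the forward and backward LSTM states at layer $j$, $s^{task}$ are softmax-normalized weights, and $\gamma^{task}$ is a scalar that scales the whole vector.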
The most recent model, BERT, is architecturally more sophisticated: it is a multi-layer masked LM based on the Transformer network that utilizes sub-word tokens. However, since we are bound to use word-level tokens in our sentiment classifier, we leverage the ELMo model for obtaining contextual embeddings. More specifically, by means of ELMo we feed our classifier with context-aware embeddings of the input sequence. Hence, in this setting we do not perform any fine-tuning of ELMo on a downstream task.
2.3 Self-Attention Deep Neural Networks
The attention mechanism was introduced in 2014 and has since been applied successfully to various computer vision (e.g. visual explanation) and NLP (e.g. machine translation) tasks. The mechanism is often used as an extra source of information added on top of a CNN or LSTM model to enhance the extraction of sentence embeddings [6, 16]. However, this scenario is not applicable to sentiment classification, since the model receives only a single sentence as input, hence there is no such extra information.
Self-attention (or intra-attention) is an attention mechanism that computes a representation of a sequence by relating different positions of a single sequence. Previous work on sentiment classification has not extensively covered attention-based neural network models (especially ones using the Transformer architecture), although some papers have appeared recently [2, 15].
3 The Proposed Approach
Our model builds upon the Transformer encoder, which has provided significant improvements for the neural machine translation task. Unlike RNN- or CNN-based models, the Transformer is able to learn dependencies between distant positions. In this paper we show that attention-based models are also suitable for other NLP tasks, such as learning distributed representations and sentiment analysis, and can thereby improve overall accuracy.
The architecture of the TSA model and steps to train it can be summarized as follows:
At the very beginning there is a simple text pre-processing method that performs text clean-up and splits text into tokens.
We use contextual word representations to represent text as real-valued vectors.
After embedding the text into real-valued vectors, the Transformer network maps the input sequence into hidden states using self-attention.
Next, a bi-attention mechanism is utilized to estimate the interdependency between representations.
A single-layer LSTM together with self-attentive pooling computes the pooled representations.
A joint representation for the inputs is later passed to a fully-connected neural network.
Finally, a softmax layer is used to determine sentiment of the text.
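The steps above can be sketched end to end. In the following toy walk-through, every stage is replaced by a trivial stand-in (the encoder is an identity map, pooling is a mean, and the classifier weights are hypothetical placeholders), so only the data flow and tensor shapes are shown, not the trained model:

```python
import numpy as np

def _softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tsa_forward(embeddings):
    """Toy sketch of the TSA data flow. Steps 1-2 (clean-up, tokenization,
    contextual embedding) are assumed done: `embeddings` is (seq_len, d)."""
    seq_len, d = embeddings.shape
    # 3. Transformer encoder maps inputs to hidden states via self-attention
    #    (stand-in: identity; a real encoder would transform the sequence)
    hidden = embeddings
    # 4. bi-attention estimates the interdependency between representations
    affinity = hidden @ embeddings.T                    # (seq_len, seq_len)
    attended = _softmax(affinity, axis=-1) @ embeddings
    # 5. LSTM + self-attentive pooling -> fixed-size vector (stand-in: mean)
    pooled = attended.mean(axis=0)                      # (d,)
    # 6. fully-connected layer (hypothetical uniform weights, 3 classes)
    W = np.ones((d, 3)) / d
    # 7. softmax determines the sentiment distribution
    return _softmax(pooled @ W)
```

With the uniform stand-in weights every class receives the same score; the point is only to make the shapes at each stage concrete.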
3.1 Embeddings and Encoded Positional Information
Non-recurrent models, such as deep self-attention networks, do not process the input sequence sequentially. Hence, by themselves they cannot record the position of each word in a sequence, which is an inherent limitation of every such model. The Transformer addresses this need by encoding the position of each word in the input sequence in extra vectors (so-called positional encoding vectors) and adding them to the input embeddings. There are many approaches to embedding position information, such as learned or fixed positional encodings (PE), or the recently introduced relative position representations (RPR). The original Transformer used sine and cosine functions of different frequencies.
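For reference, the original sinusoidal encodings (sine on even dimensions, cosine on odd ones, with geometrically decreasing frequencies) can be computed as follows:

```python
import numpy as np

def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings from the original Transformer:
    PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))"""
    positions = np.arange(max_len)[:, None]              # (max_len, 1)
    dims = np.arange(d_model)[None, :]                   # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                     # (max_len, d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                # even dimensions
    pe[:, 1::2] = np.cos(angles[:, 1::2])                # odd dimensions
    return pe
```

The resulting matrix is simply added to the input embeddings before the first encoder layer.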
In this work, we explore the effectiveness of applying a modified approach to incorporate positional information into the model, namely using RPR instead of PE. Furthermore, we use global average pooling in order to average the output of the last self-attention layer and prepare the model for the final classification layer.
3.2 The Transformer Encoder
The input sequence is combined with word and positional embeddings, which provide a time signal, and fed into an encoder block. Query (Q), key (K) and value (V) matrices are calculated and passed to a self-attention layer. Next, layer normalization is applied and residual connections provide additional context. Further, a final dense layer of vocabulary size generates the output of the encoder. The fully-connected feed-forward network within the model is a single-hidden-layer network with a ReLU activation in between:
$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)\, W_2 + b_2$$
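The position-wise feed-forward network described above is a two-matrix transformation with a ReLU in between; a minimal sketch (weights would normally be learned):

```python
import numpy as np

def feed_forward(x, W1, b1, W2, b2):
    """Position-wise FFN: FFN(x) = max(0, x @ W1 + b1) @ W2 + b2.
    Applied identically and independently to every position in the sequence."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```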
3.3 Self-Attention Layer
The self-attention block in the encoder is called multi-head self-attention. A self-attention layer allows each position in the encoder to access all positions in the previous layer of the encoder immediately, and in the first layer all positions in the input sequence. The multi-head self-attention layer employs $h$ parallel self-attention layers, called heads, with different $Q$, $K$, $V$ matrices obtained for each head. In a nutshell, the attention mechanism in the Transformer architecture relies on scaled dot-product attention, which is a function of $Q$ and a set of $K$-$V$ pairs. The attention is computed in the following order. First, the query matrix is multiplied by the transposed key matrix and scaled by the factor $\sqrt{d_k}$ (Eq. 2):
$$\mathrm{score}(Q, K) = \frac{QK^{T}}{\sqrt{d_k}}$$
Next, the attention weights are produced by applying the softmax function to the scaled inner products:
$$A = \operatorname{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)$$
Finally, each head is the attention-weighted sum of the values, and the heads are concatenated and projected:
$$\mathrm{head}_i = A_i V_i, \qquad \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^{O}$$
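The scaled dot-product attention described in this section can be sketched in a few lines; this is a minimal NumPy version for a single head, ignoring masking and the learned per-head projections:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (batch, seq_len, d_k). Returns the attended values and the
    attention weight matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)   # QK^T / sqrt(d_k)
    weights = softmax(scores, axis=-1)                 # rows sum to 1
    return weights @ V, weights
```

Multi-head attention simply runs this function $h$ times on separately projected $Q$, $K$, $V$, then concatenates the results and applies the output projection $W^O$.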
3.4 Masking and Pooling
Similar to other sources of data, the datasets used for training and evaluating our models contain sequences of different lengths. The most common approach in the literature involves finding the maximal sequence length in the dataset/batch and padding shorter sentences with trailing zeroes. In the proposed TSA model, we deal with variable-length sequences by using masking and self-attentive pooling, an approach inspired by the BCN model of McCann et al. Thanks to this mechanism, we are able to fit sequences of different lengths into the final fixed-size vector required for computing the sentiment score. The self-attentive pooling layer is applied directly after the encoder block.
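A minimal sketch of masked self-attentive pooling over padded sequences (the scoring vector `w` stands in for the learned attention parameters; all names are illustrative):

```python
import numpy as np

def masked_self_attentive_pool(H, mask, w):
    """H: (seq_len, d) hidden states; mask: (seq_len,) with 1 for real tokens
    and 0 for padding; w: (d,) scoring vector (learned in the real model).
    Returns a fixed-size (d,) vector regardless of sequence length."""
    scores = H @ w                                        # (seq_len,) raw scores
    scores = np.where(mask.astype(bool), scores, -1e9)    # mask out padding
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                                   # weights over real tokens
    return alpha @ H                                      # weighted sum of states
```

Padded positions receive a large negative score, so after the softmax their weight is effectively zero and the pooled vector depends only on the real tokens.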
4 Experiments
4.1 Datasets
In this work, we compare sentiment analysis results on four benchmark datasets in three languages. All datasets come with an original split into training, dev and test sets. Below we describe these datasets in more detail.
|Dataset||Classes||Train||Dev||Test||Domain||Language|
|PolEmo 2.0-IN||5||5,783||723||722||medical, hotels||Polish|
Stanford Sentiment Treebank (SST)
This collection of movie reviews from rottentomatoes.com is annotated for binary (SST-2) and fine-grained (SST-5) sentiment classification. SST-2 divides reviews into two groups, positive and negative, while SST-5 distinguishes five review types: very positive, positive, neutral, negative, and very negative. The dataset consists of 11,855 single sentences and is widely used in the NLP community.
PolEmo 2.0
The dataset comprises online reviews from the education, medicine and hotel domains. There are two separate test sets, allowing for in-domain (medicine and hotels) and out-of-domain (products and university) evaluation. The dataset comes with the following sentiment labels: strong positive, weak positive, neutral, weak negative, strong negative, and ambiguous.
GermEval 2017
This dataset contains customer reviews of the railway operator Deutsche Bahn published on social media and various web pages. Customers expressed feedback regarding the service of the railway company (e.g. travel experience, timetables) by rating it as positive, negative, or neutral.
4.2 Experimental Setup
Pre-processing of input datasets is kept to a minimum as we perform only tokenization when required. Furthermore, even though some datasets, such as SST or GermEval, provide additional information (i.e. phrase, word or aspect-level annotations), for each review we only extract text of the review and its corresponding rating.
The model is implemented in Python, using PyTorch (https://pytorch.org) and AllenNLP (https://allennlp.org). Moreover, we use pre-trained word embeddings: ELMo and GloVe. Specifically, we use the following ELMo models: Original (https://allennlp.org/elmo), Polish and German. In the ELMo+GloVe+BCN model we use the following 300-dimensional GloVe embeddings: English (http://nlp.stanford.edu/data/glove.840B.300d.zip), Polish and German (https://wikipedia2vec.github.io/wikipedia2vec/pretrained). To simplify our approach when training the sentiment classifier, we follow a setting very similar to the vanilla Transformer: we use the same optimizer, Adam, with the same hyperparameter values. We incorporate four types of regularization during training: dropout, embedding dropout, residual dropout, and attention dropout. We use 2 encoder layers. In addition, we employ label smoothing and, for the RPR parameters, set a fixed clipping distance.
4.3 Results and Discussion
In Table 2, we summarize experimental results achieved by our model and other state-of-the-art systems reported in the literature by their respective authors.
|Model||SST-2||SST-5||PolEmo 2.0-IN||GermEval|
|Constituency Tree-LSTM ||88.0||51.0||-||-|
|Polish BERT ||-||-||88.1||-|
We observe that our models, the baseline and ELMo+TSA, are competitive with state-of-the-art systems for all three languages. More importantly, the presented accuracy scores indicate that the TSA model achieves the best results for two languages (Polish and German). Also noteworthy, Table 2 contains two models that use some variant of the Transformer: SSAN+RPR uses the Transformer encoder for the classifier, while Polish BERT employs a Transformer-based language model. One reason why we achieve a higher score on the SST dataset might be that the authors of SSAN+RPR used word2vec embeddings, whereas we employ ELMo contextual embeddings. Moreover, our TSA model uses not only self-attention (as in SSAN+RPR) but also a bi-attention mechanism, which should also provide performance gains over standard architectures.
In conclusion, comparing the results of the models leveraging contextual embeddings (CoVe+BCN, Polish BERT, ELMo+GloVe+BCN and ELMo+TSA) with the rest of the reported models, which use traditional distributional word vectors, we note that the former category of sentiment classification systems demonstrates remarkably better results.
5 Conclusion and Future Work
We have presented a novel architecture, based on the Transformer encoder with relative position representations. Unlike existing models, this work proposes a model relying solely on a self-attention mechanism and bi-attention. We show that our sentiment classifier model achieves very good results, comparable to the state of the art, even though it is language-agnostic. Hence, this work is a step towards building a universal, multi-lingual sentiment classifier.
In the future, we plan to evaluate our model on benchmarks for other languages as well. It is particularly interesting to analyze the behavior of our model on low-resource languages. Finally, other promising research avenues worth exploring relate to unsupervised cross-lingual sentiment analysis.
-  KLEJ benchmark (accessed 2020-01-20). Cited by: §4.3, Table 2.
-  (2018) Self-attention: a better building block for sentiment analysis neural network classifiers. In Proceedings of the 9th EMNLP Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pp. 130–139. Cited by: §2.3, §4.3, Table 2.
-  (2014) Neural machine translation by jointly learning to align and translate. arXiv. Cited by: §2.3.
-  (2019) A repository of Polish NLP resources. GitHub (accessed 2020-01-20). Cited by: §4.2.
-  (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. Cited by: §2.2, §2.2, §4.3.
-  (2016) Attentive pooling networks. arXiv. Cited by: §2.3.
-  (1957) A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis. Cited by: §2.2.
-  (2018) Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 328–339. Cited by: §2.2, §2.2.
-  (2019) ELMo embeddings for Polish. CLARIN-PL digital repository. Cited by: §4.2.
-  (2014) A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pp. 655–665. Cited by: §2.1, Table 2.
-  (2006) Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence 22, pp. 110–125. Cited by: §2.1.
-  (2014) Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1746–1751. Cited by: §2.1, Table 2.
-  (2019) Multi-level sentiment analysis of PolEmo 2.0: extended corpus of multi-domain consumer reviews. In Proceedings of the 23rd Conference on Computational Natural Language Learning, pp. 980–991. Cited by: §4.1.
-  (2016) Ask me anything: dynamic memory networks for natural language processing. In Proceedings of the 33rd International Conference on International Conference on Machine Learning, Vol. 48, pp. 1378–1387. Cited by: §2.1, Table 2.
-  (2018) Importance of self-attention for sentiment analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP, pp. 267–275. Cited by: §2.3.
-  (2017) A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, Cited by: §2.3.
-  (2012) Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, Morgan & Claypool Publishers. Cited by: §2.
-  (2019) German ELMo model (accessed 2020-01-20). Cited by: §4.2.
-  (2017) Learned in translation: contextualized word vectors. In Advances in Neural Information Processing Systems 30, pp. 6294–6305. Cited by: §2.2, §3.4, Table 2.
-  (2013) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pp. 3111–3119. Cited by: §2.2, §4.3.
-  (2016) Sentiment analysis: detecting valence, emotions, and other affectual states from text. In Emotion measurement, pp. 201–237. Cited by: §2.
-  (2002) Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pp. 79–86. Cited by: §2.1.
-  (2014) Global belief recursive neural networks. In Advances in Neural Information Processing Systems 27, pp. 2888–2896. Cited by: §2.1.
-  (2014) Glove: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1532–1543. Cited by: §2.2, §4.2.
-  (2018) Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2227–2237. Cited by: §2.2, §2.2, §4.2, §4.3.
-  (2018) Improving language understanding by generative pre-training. Cited by: §2.2.
-  (2018) Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 464–468. Cited by: §3.1.
-  (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642. Cited by: §2.1, §4.1, Table 2.
-  (2015) Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pp. 1556–1566. Cited by: §2.1, Table 2.
-  (2010) From frequency to meaning: vector space models of semantics. J. Artif. Intell. Res. 37, pp. 141–188. Cited by: §2.1.
-  (2002) Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 417–424. Cited by: §2.1.
-  (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, pp. 5998–6008. Cited by: §1, §2.3, §3.
-  (2017) GermEval 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback. In Proceedings of the GermEval 2017, pp. 1–12. Cited by: §4.1, Table 2.