Deep Memory Networks for Attitude Identification

01/16/2017 ∙ by Cheng Li, et al. ∙ University of Michigan

We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral. Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and reversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other -- the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and the state-of-the-art deep learning models.




1 Introduction

In many scenarios, it is critical to identify people’s attitudes (“the way you think and feel about someone or something,” as defined by Merriam-Webster) towards a set of entities. Examples include companies who want to know customers’ opinions about their products, governments who are concerned with public reactions to policy changes, and financial analysts who identify daily news that could potentially influence the prices of securities. In a more general case, attitudes towards all entities in a knowledge base may be tracked over time for various in-depth analyses.

Different from a sentiment which might not have a target (e.g., “I feel happy”) or an opinion which might not have a polarity (e.g., “we should do more exercise”), an attitude can be roughly considered as a sentiment polarity towards a particular entity (e.g., “WSDM is a great conference”). Therefore, the task of attitude identification has been conventionally decomposed into two separate subtasks: target detection that identifies whether an entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards the identified target, usually into three categories: positive, negative, and neutral.

Solving the two subtasks back-to-back is by no means unreasonable, but it may not be optimal. Specifically, intrinsic interactions between the two subtasks may be neglected in such a modularized pipeline. Indeed, signals identified in the first subtask, both the words that refer to the target and the positions of these words, could provide useful information for the polarity of sentiments. For example, the identified target in the sentence “this Tiramisu cake is ___” indicates a high probability that a sentimental word would appear in the blank, and that word is highly likely to be related to flavor or price. On the other hand, sentimental expressions identified in the second subtask and their positions could in turn provide feedback to the first task and signal the existence of a target. For example, the positive sentiment in “___ the new Keynote is user friendly” provides good evidence that “Keynote” is a software product (the target) instead of a speech (not the target). In addition, models learned for certain targets and their sentiments may share some important dimensions with each other while differing on other dimensions. For example, two targets food and service may share many sentimental expressions, but the sentence “we have been waiting for food for one hour” is clearly about the service rather than the food. Failure to utilize these interactions (both between tasks and among targets) may compromise the performance of both subtasks.

Recent developments in deep learning have provided the opportunity for a better alternative to modularized pipelines, in which machine learning and natural language processing tasks can be solved in an end-to-end manner. With a carefully designed multi-layer neural network, learning errors backpropagate from upper layers to lower layers, which enables deep interactions between the learning of multi-grained representations of the data or of multiple subtasks. Indeed, deep learning has recently been applied to target-specific sentiment analysis (mostly the second subtask of attitude identification) and achieved promising performance, where a given target is assumed to have appeared exactly once in a piece of text and the task is to determine the polarity of this text [38, 45, 36]. A deep network structure learns the dependency between the words in the context and the target word.

In another related topic known as multi-aspect sentiment analysis, where the goal is to learn fine-grained sentiments on different aspects of a target, some methods have attempted to model aspects and sentiments jointly. Aspects are often assumed to be mentioned explicitly in text, so that the related entities can be extracted through supervised sequence labeling methods [21, 19, 46]; aspects mentioned implicitly can be extracted as fuzzy representations through unsupervised methods such as topic models [22, 41, 32]. While unsupervised methods suffer from low accuracy, it is usually difficult for supervised methods, like support vector machines (SVMs) [17], to interleave aspect extraction and sentiment classification.

In this paper, we show that the accuracy of attitude identification can be significantly improved by effectively modeling the interactions between subtasks and among targets. The problem can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. The proposed model, called AttNet, also allows different targets to interact with each other by sharing a common semantic space while simultaneously keeping their own spaces, making it possible for all targets to be learned in a unified model. The proposed deep memory network outperforms models that do not consider the subtask or target interactions, including conventional supervised learning methods and state-of-the-art deep learning models.

The rest of the paper is organized as follows. Section 2 summarizes the related literature. In Section 3, we describe how the deep neural network is designed to incorporate the interaction both between subtasks and among targets. We present the design and the results of empirical experiments in Section 4 and Section 5, and then conclude the paper in Section 6.

2 Related work

Sentiment analysis has been a very active area of research [27, 30]. While sentiment in general does not need to have a specific target, the notion of attitude is usually concerned with a sentiment towards a target entity (someone or something). As one category of sentiment analysis, there is much existing work related to attitude identification, which generally takes place in three domains: multi-aspect sentiment analysis in product reviews, stance classification in online debates, and target-dependent sentiment classification in social media posts. Below we categorize existing work by the problem settings, e.g., whether the target is required to be explicitly mentioned.

Explicitly tagged targets. There has been a body of work that classifies the sentiment towards a particular target that is explicitly mentioned and tagged in text, mostly applied to social media text such as Tweets. Due to the short length of Tweets, many models assume that targets appear exactly once in every post. Jiang et al. [14] developed seven rule-based target-dependent features, which are fed to an SVM classifier. Dong et al. [6] proposed an adaptive recursive neural network that propagates sentiment signals from sentiment-bearing words to specific targets on a dependency tree. Vo et al. [38] split a Tweet into a left context and a right context according to a given target, and used pre-trained word embeddings and neural pooling functions to extract features. Zhang et al. [45] extended this idea by using gated recursive neural networks. The paper most relevant to ours is Tang et al. [36], which applied Memory Networks [35] to the task of multi-aspect sentiment analysis. Aspects are given as inputs, assuming that they have already been annotated in the text. Their memory network beat all LSTM-based networks but did not outperform SVM with hand-crafted features.

Model structures for target-dependent sentiment classification heavily rely on the assumption that the target appears in the text explicitly, and exactly once. These models could degenerate when a target is implicitly mentioned or mentioned multiple times. Additionally, they do not consider the interactions between the subtasks (target detection and sentiment classification) or among the targets.

Given target, one per instance.

In the problem of stance classification, the target, mentioned explicitly or implicitly, is given but not tagged in a piece of text. The task is only to classify the sentiment polarity towards that target. Most methods train a specific classifier for each target and report performance separately per target. Many researchers focus on the domain of online debates. They utilized various features based on n-grams, part of speech, syntactic rules, and dialogic relations between posts [40, 10, 7, 31]. The workshop SemEval-2016 presented a task on detecting stance from tweets [24], where an additional category is added for the given target, indicating the absence of sentiment towards the target. Mohammad et al. [25] beat all teams by building an SVM classifier for each target.

As stance classification deals with only one given target per instance, it fails to consider the interaction between target detection and sentiment classification. Furthermore, the interplay among targets is ignored by training a separate model per target.

Explicit targets, not tagged. In the domain of product reviews, a specific aspect of a product can be considered a target of attitudes. When the targets appear in a review but are not explicitly tagged, they need to be extracted first. Most work focuses on extracting explicitly mentioned aspects. Hu et al. [12] extracted product aspects via association mining, and expanded seed opinion terms by using synonyms and antonyms in WordNet. When supervised learning approaches are taken, both aspect extraction and polarity classification can be cast as binary classification problems [17], or as a sequence labeling task solved using sequence learning models such as conditional random fields (CRFs) [21, 19] or hidden Markov models (HMMs).


Implicit targets. There are studies that attempt to address the situation where aspects are implicitly mentioned. Unsupervised learning approaches like topic modeling treat aspects as topics, so that topics and sentiment polarity can be jointly modeled [22, 41, 32]. The workshop SemEval-2015 announced a task on aspect-based sentiment analysis [28], which separates aspect identification and polarity classification into two subtasks. For aspect identification, top teams cast aspect category extraction as a multi-class classification problem with features based on n-grams, parse trees, and word clusters.

Although aspect identification and polarity classification are modeled jointly here, it is hard to train unsupervised methods in an end-to-end way and to directly optimize the task performance.

Deep learning for sentiment analysis. In the general domain of sentiment analysis, there has been an increasing amount of attention on deep learning approaches. In particular, Bespalov et al. [1] used Latent Semantic Analysis to initialize the word embedding, representing each document as a linear combination of n-gram vectors. Glorot et al. [9] applied Denoising Autoencoders for domain adaptation in sentiment classification. A set of models has been proposed to learn the compositionality of phrases based on the representations of children in the syntactic tree [33, 34, 11]. These methods require parse trees as input for each document; however, parsing does not work well on user-generated content, e.g., tweets [8]. Liu et al. [20] used recurrent neural networks to extract explicit aspects in reviews.

Compared to the existing approaches, our work develops a novel deep learning architecture that emphasizes the interplay between target detection and polarity classification, and the interaction among multiple targets. These targets can be explicitly or implicitly mentioned in a piece of text and do not need to be tagged a priori.

3 AttNet for Attitude Identification

We propose an end-to-end neural network model to interleave the target detection task and the polarity classification task. The target detection task is to determine whether a specific target occurs in a given context, either explicitly or implicitly. The polarity classification task is to decide the attitude of the given context towards the specific target, if the target occurs in the context. Formally, a target detection classifier is a function mapping pairs of contexts and targets into binary labels, $f_{TD}: (\text{context}, \text{target}) \mapsto \{\text{present}, \text{absent}\}$. A polarity classifier is a function mapping pairs of contexts and targets into three attitude labels, $f_{PC}: (\text{context}, \text{target}) \mapsto \{\text{positive}, \text{negative}, \text{neutral}\}$. For example, given the context “if everyone has guns, there would be just mess” and the target gun control, the correct label is present for the target detection task and positive for polarity classification.

Our model builds on the insight that the target detection task and the polarity classification task are deeply coupled in several ways.

  • The polarity classification depends on the target detection because the polarity is meaningful only if the target occurs in the context. Reversely, the polarity classification task provides indirect supervision signals for the target detection task. For example, if the attitude label positive is provided for a context-target pair, the target must have occurred in the context following the definition. Such indirect supervision signals are useful especially when the target only occurs in the context implicitly.

  • The signal words in the target detection and the polarity classification task are usually position-related: the signal words to determine the polarity are usually the surrounding words of the signal words to detect the target. Moreover, when a context has multiple targets, the signal words usually cluster for different targets [12, 30].

  • Different targets interact in both the target detection task and the polarity classification task. Intuitively, some context words could mean the same thing for many targets, while other context words could mean different things for different targets.

Specifically, our model introduces several techniques building on the interaction between the target detection task and the polarity classification task accordingly.

  • The output of the target detection is concatenated as part of the input of the polarity classification task to allow polarity classification to be conditioned on target detection. Polarity classification labels are also used to train the target detection classifier by back-propagating the errors of the polarity classification to the target detection end-to-end.

  • The attention of polarity classification over context words is preconditioned by the attention of target detection. The polarity classification task benefits from this preconditioning especially when there are multiple targets in the context.

  • Target-specific projection matrices are introduced to allow some context words to have similar representations among targets and other context words to have distinct representations. These matrices are all learned in an end-to-end fashion.

We propose a deep memory network model, called the AttNet, which implements the above motivation and ideas. In the rest of this section, we give a brief introduction to the memory network model, followed by a description of a single layer version of the model. Then we extend the expressiveness and capability of the model by stacking multiple layers.

3.1 Background: Memory Networks

As one of the recent developments of deep learning, memory networks [35] have been successfully applied to language modeling, question answering, and aspect-level sentiment analysis [36], generating superior performance over alternative deep learning methods, e.g., LSTM.

Given a context (or document, e.g., “we have been waiting for food for one hour”) and a target (e.g., service ), a memory network layer converts the context into a vector representation by computing a weighted sum of context word vector representations. The weight is a score that measures the relevance between the context word and the target (e.g., a higher score between the words waiting and service). The vector representation of the context is then passed to a classifier for target detection or polarity classification. An attractive property is that all parameters, including the target embeddings, context word embeddings and scores, are end-to-end trainable without additional supervision signals.
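The weighted-sum mechanism of a single memory-network layer can be sketched in a few lines of NumPy. This is a minimal illustration with random placeholder embeddings, not the trained model; the shapes `d` and `n_words` are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_words = 8, 5                       # embedding size, context length

query = rng.normal(size=d)              # target embedding vector
memory = rng.normal(size=(n_words, d))  # context word embeddings

# Relevance of each context word to the target: inner product + softmax.
scores = memory @ query
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # soft attention over context words

# Context representation: attention-weighted sum of the word vectors.
output = weights @ memory               # shape (d,)
```

In the actual model, `output` is what gets passed to a classifier, and all quantities (target embeddings, word embeddings, and the resulting scores) are trained end-to-end.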

AttNet improves the original memory network models for attitude identification by (1) interleaving the target detection and polarity classification subtasks and (2) introducing target-specific projection matrices in representation learning, without violating the end-to-end trainability.

3.2 Single Layer AttNet

We begin by describing AttNet in the single layer case, shown in Figure 1. Hereafter for simplicity, we refer to the task of target detection as TD, and polarity classification as PC.

Figure 1: A single layer version of AttNet. Key submodules are numbered and correspondingly detailed in the text.
(1) Target Embedding

Each query target is represented as a one-hot vector $q \in \{0,1\}^{N}$, where $N$ is the number of targets. All targets share a target embedding matrix $B \in \mathbb{R}^{d \times N}$, where $d$ is the embedding dimensionality. The matrix converts a target into its embedding vector $u = Bq$, which is used as the input for the TD task.

(2) Input Representation and Attention for TD

We compute match scores between the context (or document) and the target for content-based addressing. The context is first converted into a sequence of one-hot vectors $x_1, \dots, x_n$, where $x_i \in \{0,1\}^{V}$ is the one-hot vector for the $i$-th word in the context and $V$ is the number of words in the dictionary. The entire set of $\{x_i\}$ is then embedded into a set of input representation vectors by:

$$m_i = T_q^{[1]} A^{[1]} x_i,$$

where $A^{[1]} \in \mathbb{R}^{d \times V}$ is the word embedding matrix shared across targets, the superscript $[1]$ stands for the TD task, and $T_q^{[1]} \in \mathbb{R}^{d \times d}$ is a target-specific projection matrix for target $q$, which allows context words to share some semantic dimensions for some targets while varying for others.

In the embedding space, we compute the match scores between the target input representation $u$ and each context word representation $m_i$ by taking the inner product followed by a softmax, $p_i^{[1]} = \mathrm{softmax}(u^{\top} m_i)$, where $\mathrm{softmax}(z_i) = e^{z_i} / \sum_j e^{z_j}$. In this way, $p^{[1]}$ is a soft attention (or probability) vector defined over the context words.
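The target-specific projection step can be sketched as follows. All shapes and values here are hypothetical placeholders (random matrices standing in for the learned `A` and per-target projections `T`); the point is that every target applies its own linear map to the shared embeddings before attention is computed:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, d, n_targets, n_words = 50, 8, 3, 6

A = rng.normal(size=(d, vocab))          # shared word embedding matrix
T = rng.normal(size=(n_targets, d, d))   # one projection matrix per target

word_ids = rng.integers(0, vocab, size=n_words)  # context as word indices
target_id = 1                                    # the queried target

# m_i = T_q . A . x_i : shared embedding, then target-specific projection
m = (T[target_id] @ A[:, word_ids]).T    # (n_words, d)

u = rng.normal(size=d)                   # target input representation
scores = m @ u                           # inner products with each word
p = np.exp(scores - scores.max())
p /= p.sum()                             # soft attention over context words
```

Because `A` is shared while `T[target_id]` is not, two targets can agree on some semantic dimensions of a word and disagree on others, which is the interaction among targets the model is designed to capture.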

(3) Output Representation for TD

A different embedding matrix $C^{[1]} \in \mathbb{R}^{d \times V}$ is introduced for flexibility in computing the output representations of context words by:

$$c_i = T_q^{[1]} C^{[1]} x_i.$$

The response output vector is then a sum over the outputs $c_i$, weighted by the attention vector from the input: $o = \sum_i p_i^{[1]} c_i$.

(4) Interleaving TD and PC

In the single layer case, the sum of the output vector and the target query embedding is then passed to the PC task, $u^{[2]} = o + u$.

(5) Input Representation and Attention for PC

Similar to the TD task, we convert the entire set of $\{x_i\}$ into input representation vectors by:

$$m_i^{[2]} = T_q^{[2]} A^{[2]} x_i,$$

where $A^{[2]}$ is the input embedding matrix for PC. We use separate embedding matrices $A^{[1]}$ and $A^{[2]}$ for TD and PC, as the words could have different semantics in the two tasks. For similar reasons, we use different projection matrices $T_q^{[1]}$ and $T_q^{[2]}$ for the two tasks.

Given the polarity input representation $m_i^{[2]}$, we also compute a soft attention over the context words for polarity identification, $p_i^{[2]} = \mathrm{softmax}((u^{[2]})^{\top} m_i^{[2]})$.

(6) Output Representation for PC

There is also one corresponding output vector in PC for each $x_i$:

$$c_i^{[2]} = T_q^{[2]} C^{[2]} x_i,$$

where $C^{[2]}$ is the polarity output embedding matrix. It has been observed that sentiment-bearing words are often close to the target [12, 30]. Based on this observation, the attentions, or positions of important words that identify the target in the first module, can provide prior knowledge for learning the attention of the second module. Therefore we compute the final attention vector as a function of the original attentions of both tasks:

$$\tilde{p}^{[2]} = p^{[2]} + \lambda\, M(p^{[1]}),$$

where $\lambda$ controls the importance of the second term, $p^{[1]}$ is the attention vector of the TD module, and $M(\cdot)$ is a moving average function which shifts attention from words of high values to their surrounding neighbors. The output vector is $o^{[2]} = \sum_i \tilde{p}_i^{[2]} c_i^{[2]}$.
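The attention combination can be sketched numerically. A simple 3-point moving average is used here as one possible choice of the smoothing function (the paper does not fix its window), and `lam` stands for the weight of the TD prior; the attention values are toy numbers:

```python
import numpy as np

def moving_average(p, window=3):
    """Spread attention mass from high-valued words to their neighbors."""
    kernel = np.ones(window) / window
    return np.convolve(p, kernel, mode="same")

p_td = np.array([0.05, 0.05, 0.8, 0.05, 0.05])  # TD attention (peaked)
p_pc = np.array([0.2, 0.2, 0.2, 0.2, 0.2])      # raw PC attention
lam = 0.5                                       # weight of the TD prior

p_final = p_pc + lam * moving_average(p_td)
p_final /= p_final.sum()                        # renormalize to a distribution
```

The smoothed TD peak boosts not only the target word itself but also its neighbors, which is where sentiment-bearing words tend to sit.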

(7) Prediction for TD and PC

To predict whether a target is present, the sum of the output vector of target detection and the target query vector is passed through a weight matrix $W^{[1]} \in \mathbb{R}^{2 \times d}$ (2 is the number of classes: present, absent) and a softmax operator to produce the predicted label, a vector of class probabilities: $\hat{y}^{[1]} = \mathrm{softmax}(W^{[1]}(o + u))$.

Similarly, the sum of the output vector of PC and its input vector is passed through a weight matrix $W^{[2]} \in \mathbb{R}^{3 \times d}$ and a softmax operator to produce the predicted attitude label vector, $\hat{y}^{[2]} = \mathrm{softmax}(W^{[2]}(o^{[2]} + u^{[2]}))$.

3.3 Multiple Layer AttNet

We now extend our model to stacked multiple layers. Figure 2 shows a three layer version of our model. The layers are stacked in the following way:

Functionality of Each Layer

For TD, the input to the $k$-th layer is the sum of the output and the input from the $(k-1)$-th layer, followed by a sigmoid nonlinearity:

$$u_k = \sigma(H(u_{k-1} + o_{k-1})),$$

where $\sigma$ is the sigmoid function and $H \in \mathbb{R}^{d \times d}$ is a learnable linear mapping matrix shared across layers. For the PC task, the input to the first layer is the transformed sum from the last layer of the TD module, $u_1^{[2]} = \sigma(H(u_K + o_K))$, where $K$ is the number of stacked layers in the TD task. Thus the prediction of polarity depends on the output of the TD task, and reversely the TD task benefits from indirect supervision from the PC task through backward propagation of errors. Similarly for PC, the input to the $k$-th layer is the sum of the output and the input from the $(k-1)$-th layer, followed by a sigmoid nonlinearity: $u_k^{[2]} = \sigma(H(u_{k-1}^{[2]} + o_{k-1}^{[2]}))$.
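The layer-stacking recurrence can be sketched as follows. Here `hop` stands in for one attention layer over a fixed random memory, and `H` is a random placeholder for the learned shared mapping (scaled down so the sigmoid does not saturate); none of these values come from the trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d, n_words, n_layers = 8, 5, 3

H = rng.normal(size=(d, d)) * 0.1       # linear map shared across layers
memory = rng.normal(size=(n_words, d))  # context word representations

def hop(u):
    """One memory layer: attention over memory, weighted sum (o_k)."""
    s = memory @ u
    p = np.exp(s - s.max())
    p /= p.sum()
    return p @ memory

u = rng.normal(size=d)                  # initial target representation
for _ in range(n_layers):               # u_k = sigmoid(H (u_{k-1} + o_{k-1}))
    o = hop(u)
    u = sigmoid(H @ (u + o))
```

After the loop, `u` plays the role of the last-layer TD representation that is handed to the PC module.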

Figure 2: A three layer version of our model. Both the TD and PC modules have three stacked layers.
Attention for PC

In the single layer case, the attention for PC is based on that of the TD module. When layers are stacked, all layers of the first module collectively identify important attention words to detect the target. Therefore we compute the averaged attention vector across all layers in the TD module, $\bar{p}^{[1]} = \frac{1}{K} \sum_{k=1}^{K} p_k^{[1]}$. Accordingly, for the $k$-th layer of the PC module, the final attention vector is $\tilde{p}_k^{[2]} = p_k^{[2]} + \lambda\, M(\bar{p}^{[1]})$, and the output vector is $o_k^{[2]} = \sum_i \tilde{p}_{k,i}^{[2]} c_{k,i}^{[2]}$.

Tying Embedding and Projection Matrices

The embedding matrices and projection matrices are constrained to ease training and reduce the number of parameters [35]. The embedding matrices and the projection matrices are shared across layers. Specifically, using the subscript $k$ to denote the parameters of the $k$-th layer, for any layer $k$ we have $A_k^{[1]} = A^{[1]}$, $C_k^{[1]} = C^{[1]}$, $A_k^{[2]} = A^{[2]}$, $C_k^{[2]} = C^{[2]}$, and $T_{q,k} = T_q$ for both tasks.

Predictions for TD and PC

The prediction stage is similar to the single-layer case, with the prediction based on the output of the last layer: $o_K$ and $u_K$ (for TD) and $o_K^{[2]}$ and $u_K^{[2]}$ (for PC). For the TD task, $\hat{y}^{[1]} = \mathrm{softmax}(W^{[1]}(o_K + u_K))$, while for PC, $\hat{y}^{[2]} = \mathrm{softmax}(W^{[2]}(o_K^{[2]} + u_K^{[2]}))$.

3.4 End-to-End Multi-Task Training

We use the cross entropy loss to train our model end-to-end given a set of training data $\{(d_i, q_i, y_i^{[1]}, y_i^{[2]})\}_{i=1}^{n}$, where $d_i$ is the $i$-th context (or document), $q_i$ is the $i$-th target, and $y_i^{[1]}$ and $y_i^{[2]}$ are the ground-truth labels for the TD and the PC tasks respectively. The training minimizes the objective function:

$$L = -\sum_{i=1}^{n} \sum_{c} \mathbb{1}[y_i^{[1]} = c] \log \hat{y}_i^{[1]}[c] \;-\; \sum_{i=1}^{n} \mathbb{1}[y_i^{[1]} = \text{present}] \sum_{c} \mathbb{1}[y_i^{[2]} = c] \log \hat{y}_i^{[2]}[c],$$

where $\hat{y}_i^{[1]}$ is the vector of predicted probabilities for each class of TD, $\hat{y}_i^{[1]}[c]$ selects the $c$-th element of $\hat{y}_i^{[1]}$, and $\mathbb{1}[y_i^{[1]} = \text{present}]$ equals 1 if $y_i^{[1]}$ equals the class present and 0 otherwise. Note that when a target is not mentioned in a given context, the polarity term plays no role in the objective because the value of $\mathbb{1}[y_i^{[1]} = \text{present}]$ is zero.
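The masking behavior of the joint objective can be sketched with toy predicted distributions (the probabilities and labels below are made up for illustration; the class orderings are an assumption of this sketch):

```python
import numpy as np

def cross_entropy(pred, label):
    """Negative log probability of the true class."""
    return -np.log(pred[label])

# Toy batch: (TD probs over {absent=0, present=1},
#             PC probs over {neg=0, neutral=1, pos=2}, td_label, pc_label)
batch = [
    (np.array([0.1, 0.9]), np.array([0.1, 0.2, 0.7]), 1, 2),  # present, pos
    (np.array([0.8, 0.2]), np.array([0.3, 0.4, 0.3]), 0, 1),  # absent
]

loss = 0.0
for td_pred, pc_pred, td_label, pc_label in batch:
    loss += cross_entropy(td_pred, td_label)
    if td_label == 1:             # polarity term only if target is present
        loss += cross_entropy(pc_pred, pc_label)
```

For the second example the target is absent, so its polarity prediction contributes nothing to `loss`, exactly as the indicator term in the objective dictates.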

4 Experiment Setup

In the experiments, we compare AttNet to conventional approaches and alternative deep learning approaches on three real-world data sets, and we show the superior performance of our model. We also experiment with variants of AttNet to assess the contribution of the key components of our model.

4.1 Data Sets

We examine AttNet on three domains that are related to attitude classification: online debates (Debates), multi-aspect sentiment analysis on product reviews (Review), and stance in tweets (Tweets).

Debates. This data set is from the Internet Argument Corpus version 2. It consists of political debates on three Internet forums, including 4forums and ConvinceMe. In these forums, a person can initiate a debate by posting a topic and taking a position such as favor vs. against. Examples of topics are gun control, death penalty, and abortion. Other users participate in these debates by posting their arguments for one of the sides.

Tweets. This data set comes from a task of the workshop SemEval-2016 on detecting stance from tweets [24]. Targets are mostly related to ideology, e.g., atheism and feminist movement. Since there are fewer than 10 tweets with a neutral stance, we only consider positive and negative attitudes and discard the neutral tweets.

Review. This data set includes reviews of restaurants and laptops from SemEval 2014 [29] and 2015 [28], where subtasks of identifying aspects and classifying sentiments are provided. We merge two years’ data to enlarge the data set, and only include aspects that are annotated in both years.

To guarantee enough training and test instances, for all data sets we filter out targets mentioned in fewer than 100 documents. The original train-test split is used if provided; otherwise we randomly sample 10% of the data into the test set. We further randomly sample 10% of the training data for validation. Text pre-processing includes stopword removal and tokenization by the CMU Twitter NLP tool [8]. The details of the data sets are shown in Table 1.

Data set Set #docs #pos #neg #neutral #absent
Debates train 24352 13891 10711 0 0
val 2706 1530 1203 0 0
test 3064 1740 1371 0 0
Tweets train 2614 682 1253 0 679
val 291 71 142 0 78
test 1249 304 715 0 230
Review train 5485 2184 1222 210 2336
val 610 260 121 17 277
test 1446 496 455 60 634

#pos means the number of documents with positive sentiment for each target. If one document contains positive sentiment towards two targets, it will be counted twice. #absent counts the number of documents without any attitude towards any target.

Table 1: Statistics of each data set.

4.2 Metrics

For our problem, each data set has multiple targets, and each target can be classified into one of four outcomes: absent (does not exist), neutral, positive, and negative. If we treat each outcome of each target as one category, we can adopt common metrics for multi-class classification. Since most targets do not appear in most instances, we have a highly skewed class distribution, where measures like accuracy are not good choices.


Apart from precision, recall, and AUC, we also use the macro-average F-measure [44]. Let $R_c$ and $P_c$ be the recall and precision for a particular category $c$: $R_c = \frac{TP_c}{TP_c + FN_c}$ and $P_c = \frac{TP_c}{TP_c + FP_c}$, where $TP_c$, $FP_c$, and $FN_c$ are the numbers of true positives, false positives, and false negatives for category $c$. Given $R_c$ and $P_c$, the F-score of category $c$ is computed as $F_c = \frac{2 P_c R_c}{P_c + R_c}$. The macro-average F-score is obtained by taking the average over all categories. The final precision and recall are also averaged over the individual categories. There is another micro-averaged F-measure, which in our setting is equivalent to accuracy; therefore, we do not include it.
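The macro-average computation can be written directly from the per-category counts. The counts below are toy values (one perfectly classified category and one with precision = recall = 0.5), not results from the paper:

```python
def macro_f1(counts):
    """counts: list of (tp, fp, fn) tuples, one per category."""
    f_scores = []
    for tp, fp, fn in counts:
        p = tp / (tp + fp) if tp + fp else 0.0   # precision P_c
        r = tp / (tp + fn) if tp + fn else 0.0   # recall R_c
        f = 2 * p * r / (p + r) if p + r else 0.0
        f_scores.append(f)
    # Average F over categories, each category weighted equally.
    return sum(f_scores) / len(f_scores)

# Category 1: perfect (F = 1.0); category 2: P = R = 0.5 (F = 0.5).
score = macro_f1([(10, 0, 0), (5, 5, 5)])  # -> 0.75
```

Because every category contributes equally, rare target-outcome categories are not drowned out by the dominant absent class, which is exactly why the macro average suits this skewed setting.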

4.3 Baselines

We compare baseline methods from two large categories: conventional methods and alternative deep learning methods.

Each baseline method has various configurations, based on whether: (1) it trains a single model or two separate models for the target detection and polarity classification subtasks, and (2) it trains one universal model for all targets or separate models for different targets. To distinguish the configurations, we append -sgl when a single model is used for the two subtasks, and -sep when separate models are used for each subtask. Taking SVM as an example, SVM-sgl directly classifies each target into four classes: absent, neutral, positive, and negative. In contrast, SVM-sep first classifies each target into two classes, absent and present, and uses a second model to classify the polarity: neutral, positive, or negative. Moreover, we append -ind when individual targets are trained with separate models, or -all when one model is trained for all targets.

4.3.1 Conventional baselines

SVM+features. SVM using a set of hand-crafted features has achieved the state-of-the-art performance in stance classification of SemEval 2016 task [25], online debates [10], and aspect-based sentiment analysis [36]. SVM has also demonstrated superior performance in document-level sentiment analysis compared with conditional random field methods [42]. Therefore we include all features from these methods that are general across domains, and use a linear kernel SVM implemented by LIBSVM [2] for classification. We list the set of features:

Document info: basic counting features of a document, including the number of characters, the number of words, the average words per document and the average word length.

N-grams: word unigrams, bigrams, and trigrams. We insert symbols that represent the start and end of a document to capture cue words [40].


Sentiment lexicons: the number of positive and negative words counted from the NRC Emotion Lexicon [26], the Hu and Liu Lexicon [12], and the MPQA Subjectivity Lexicon [43].

Target: presence of the target phrase in the text. Furthermore, if the target is present, we generate a set of target-dependent features following [14]. For example, for the target iPhone in the text “I love iPhone”, a feature love_arg is generated.

POS: the number of occurrences of each part-of-speech tag (POS).

Syntactic dependency: a set of triples obtained by the Stanford dependency parser [5]. More specifically, each triple is of the form $(rel, w_i, w_j)$, where $rel$ represents the grammatical relation between words $w_i$ and $w_j$, e.g., $w_i$ is the subject of $w_j$.

Generalized dependency: the first word of the dependency triple is “backed off” to its part-of-speech tag [39]. Additionally, words that appear in sentiment lexicons are replaced by positive or negative polarity equivalents [39].

Embedding: the element-wise averages of the word vectors for all the words in a document. We use three types of word embeddings. Two of them are from studies on target-dependent sentiment classification [38, 45]: the skip-gram embeddings of Mikolov et al. [23] and the sentiment-driven embeddings of Tang et al. [37]. The first type of embedding is trained on 5 million unlabeled tweets that contain emoticons, which guarantees that more sentiment-related tweets are included. The second type of embedding has 50 dimensions and is publicly available. The third type of embedding is also 50-dimensional, released by Collobert et al. [4] and trained on English Wikipedia.

Word cluster: the number of occurrences of word clusters for all words in the text. We perform K-means clustering on the word vectors.

Apart from the two standard SVM model configurations, SVM-sep-ind and SVM-sgl-ind, we also compare with a hybrid model, SVM-cmb-ind, whose prediction is absent if SVM-sep-ind says so, and otherwise follows the decisions of SVM-sgl-ind. (SVM-sgl-all and SVM-sep-all suffer performance degeneration due to interference between different targets; we do not include their results for simplicity.)

4.3.2 Deep Learning Baselines

BiLSTM, MultiBiLSTM and Memnet. We also compare to the bidirectional LSTM (BiLSTM) model, the state-of-the-art in target-dependent sentiment classification [45]. Their variant of the BiLSTM model assumes that the given target always appears exactly once and can be tagged in text by starting and ending offsets. When this assumption fails, their model is equivalent to a standard BiLSTM. We include the standard multi-layered bidirectional LSTM (MultiBiLSTM) [13] as an extension. Recently, Tang et al. [36] applied memory networks (Memnet) to multi-aspect sentiment analysis. Their results show that the memory network performs comparably with feature-based SVM and outperforms all LSTM-related methods on their tasks.

CNN and ParaVec. We include related deep learning techniques from beyond the sentiment analysis domain: convolutional neural networks (CNN) [15] and ParaVec [18]. ParaVec requires a huge amount of training data to reach decent performance, so we enhance the ParaVec model by training over the merged training set of all data sets, plus the 5 million unlabeled tweets mentioned above.

Parser-dependent deep learning methods have also been applied to sentiment analysis [33, 34, 11]. These models are of limited use for our attitude identification problem for two reasons. First, they tend to work well with phrase-level sentiment labels, but only document-level sentiment labels are provided in our problems. Second, their parsers do not extend well to user-generated content, such as tweets and debates [8]. Our preliminary results show that these methods perform poorly on our problems, so we do not include their results.

For all deep learning methods, we report their -sep-all and -sgl-all versions. Unlike SVM, deep methods perform quite well when a single model is used for all targets, casting the problem as multi-task multi-class classification. Though not scalable, for the strongest baselines (BiLSTM and MultiBiLSTM) we additionally train a separate model for each target; since -sep-ind works better than -sgl-ind, we only report the former. The variants of memory networks are detailed below.

4.4 Variants of AttNet

To assess the contribution of the key components of our model, we construct a competing model, AttNet-. Unlike our proposed model, AttNet- replaces the target-specific projection matrices with identity matrices that are fixed during training. Thus the AttNet- model interleaves the target detection and polarity classification subtasks but does not consider the interactions among targets. We refer to our proposed model as AttNet, which allows the projection matrices to be learned during training, so that word semantics can vary across targets.

For AttNet-, we report two settings in our experiments: AttNet-ind and AttNet-all. The former makes all targets share the same embedding, while the latter separates the embedding space completely for each target, i.e., targets are trained on separate models.
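The distinction between the ablated and full models can be sketched as follows (dimensions and matrix values are made up for illustration; in the real model the AttNet projections are learned by backpropagation rather than sampled):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_targets = 4, 3
shared_emb = rng.standard_normal((10, d))  # shared embeddings for 10 words

# AttNet: one projection per target (learnable); AttNet-: identity, frozen
proj_attnet  = {t: rng.standard_normal((d, d)) for t in range(n_targets)}
proj_ablated = {t: np.eye(d) for t in range(n_targets)}

def target_repr(word_id, target, proj):
    """Target-specific word representation: project the shared embedding."""
    return shared_emb[word_id] @ proj[target]
```

Under the identity projection every target sees exactly the same word semantics, which is precisely the interaction among targets that AttNet- gives up.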

Hyper-parameters Tweets Review Debates
L1 coeff 1e-6 1e-4 1e-6
L2 coeff 1e-4 1e-8 1e-8
init learning rate 0.05 0.01 0.005
#layers(target) 4 4 3
#layers(sentiment) 4 8 6
prior attention 0.5 0.1 0.5
Table 2: Hyper-parameters for our method AttNet.
Table 3: Performance of competing methods (F-score, AUC, precision, and recall on the Tweets, Review, and Debates data sets): AttNet achieves top performance.

4.5 Training Details

All hyper-parameters are tuned to obtain the best F-score on the validation set. The candidate embedding size set is for LSTM-related methods, SVM and CNN. The candidate number of clusters for K-means is . The candidate relaxing parameter C for the SVM model is . The CNN model has three convolutional filter sizes, whose candidates are , , , , and the candidate number of filters is . For ParaVec, we experiment with both the skip-gram and bag-of-words models, and select the hidden layer size from .

We explored three weight-initialization methods for the word embeddings of the LSTM-related and CNN baselines: (1) sampling weights from a zero-mean Gaussian with 0.1 standard deviation; (2) initializing from the pre-trained embedding matrix; and (3) using a fixed pre-trained embedding matrix.
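The three strategies can be sketched in one helper that returns the initial embedding matrix together with a flag saying whether it should be updated during training (function name and return convention are ours):

```python
import numpy as np

def init_embeddings(vocab_size, dim, pretrained=None, mode="gaussian", seed=0):
    """Three init strategies for an embedding matrix:
    'gaussian'   -- N(0, 0.1^2) random weights, trainable;
    'pretrained' -- copy of pre-trained vectors, trainable;
    'frozen'     -- pre-trained vectors kept fixed (excluded from updates).
    Returns (weights, trainable)."""
    rng = np.random.default_rng(seed)
    if mode == "gaussian":
        return rng.normal(0.0, 0.1, size=(vocab_size, dim)), True
    if mode in ("pretrained", "frozen"):
        return pretrained.copy(), mode == "pretrained"
    raise ValueError(mode)

E_rand, trainable = init_embeddings(5, 3)
pre = np.ones((5, 3))
E_frozen, frz_trainable = init_embeddings(5, 3, pretrained=pre, mode="frozen")
```

In a deep learning framework the `trainable` flag would translate into whether the embedding parameters are passed to the optimizer.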

Memory network models, including our model, are initialized by sampling weights from a zero-mean Gaussian with unit standard deviation. The candidate number of memory layers is . The prior attention parameter of our model is selected from . The capacity of memory, which has limited impact on performance, is restricted to 100 words without further tuning. A null symbol is used to pad all documents to this fixed size. To reduce model complexity, the projection matrices are initialized so that each column is a one-hot vector.
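The one-hot-column initialization and the null-symbol padding can be sketched as follows (helper names and the choice of null id are ours):

```python
import numpy as np

def init_projection(d, seed=0):
    """d x d projection matrix whose every column is a one-hot vector,
    with the hot row chosen at random per column."""
    rng = np.random.default_rng(seed)
    P = np.zeros((d, d))
    P[rng.integers(0, d, size=d), np.arange(d)] = 1.0
    return P

def pad_doc(token_ids, capacity=100, null_id=0):
    """Pad (or truncate) a document to the fixed memory capacity."""
    return (token_ids + [null_id] * capacity)[:capacity]

P = init_projection(8)
padded = pad_doc([3, 1, 4], capacity=10)
```

Starting from one-hot columns means each projection initially just permutes or duplicates embedding dimensions, a low-complexity starting point that training can then refine.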

Deep learning models are optimized with Adam [16]. The initial learning rate is selected from , and the L1 and L2 regularization coefficients are selected from . The hyper-parameters of our model AttNet for the different data sets are listed in Table 2.

5 Experiment results

5.1 Overall Performance

The overall performance of all competing methods on all data sets is shown in Table 3. (Performance on the Review data set is lower than on the other two because Review involves three polarities, while the others involve only two, as shown in Table 1.) Evaluating with F-score and AUC, we make the following observations. Our method AttNet significantly outperforms all competing methods. This empirically confirms that interleaving the target detection and polarity classification subtasks, combined with target-specific representations, benefits attitude identification.

The variants of our model, AttNet-all and AttNet-ind, already gain significant improvements over the strongest baselines on all data sets. More importantly, the two methods significantly outperform the Memnet-sgl-all and Memnet-sep-all baselines, which do not interleave the subtasks. These findings indicate that interleaving the subtasks indeed improves attitude identification performance, whereas separating the two subtasks leads to performance degeneration.

Our model AttNet also outperforms its variants, AttNet-all and AttNet-ind, on all data sets. The advantage of AttNet comes from the target-specific projection matrices used in representation learning, since these matrices are the only difference between AttNet and AttNet-. Although this improvement is not as marked as that from interleaving the subtasks, it is still significant. This confirms that attitude identification benefits from learned representations that share the same semantics for some targets but vary for others.

Examining the precision and recall results, we find that the superior performance of our model comes mainly from a significant improvement in recall, although on the Debates data set both precision and recall improve significantly.

5.2 Performance on Subtasks

We have established that our models outperform competing methods on all data sets. To further attribute this improvement, we evaluate our models on the two subtasks, target detection and polarity classification, with results given in Tables 4 and 5 respectively. Since different configurations of the same method behave similarly, we only present the results where separate models are trained for each task. Table 4 shows that the target detection task is relatively easy, as all methods achieve quite high scores; this also means it is hard to improve much further on this task. In terms of precision and recall, SVM performs quite well on precision, especially for the Review data set, while most deep learning methods focus more on enhancing recall. When both precision and recall are considered, most deep learning methods are still better, as the F-scores show.

Table 4: Performance on target detection for -sep models.

The second task is evaluated only on documents with ground-truth sentiment towards particular targets, with F-scores averaged over all targets and the three sentiment classes: positive, negative, and neutral. We make several notes about this evaluation. (1) To achieve a high score on the second task, it is still important to correctly classify the presence of a target. (2) The scores of all methods on the second task are generally low, because the classifier might predict a target as absent even though the ground-truth class can only be one of the three sentiment classes. (3) It is possible for a method to outperform SVM on both tasks yet obtain close results when the two tasks are evaluated together; this follows from our evaluation protocol for the second task, where a document is included only when it expresses sentiment towards a particular target.
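A sketch of this evaluation, using scikit-learn for illustration: restricting `labels` to the three sentiment classes means an "absent" prediction for a document that does hold a sentiment simply counts as a miss, as described in note (2).

```python
from sklearn.metrics import f1_score

def polarity_macro_f1(y_true, y_pred):
    """F-score macro-averaged over the three sentiment classes (the paper
    further averages over targets). Predictions outside the label set,
    such as 'absent', only hurt recall of the true class."""
    return f1_score(y_true, y_pred,
                    labels=["positive", "negative", "neutral"],
                    average="macro")

y_true = ["positive", "negative", "neutral", "positive"]
y_pred = ["positive", "absent",   "neutral", "negative"]
score = polarity_macro_f1(y_true, y_pred)
```

Here the per-class F1 values are 2/3 (positive), 0 (negative, missed as absent), and 1 (neutral), so the macro average is 5/9.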

The results in Table 5 show that the percentage improvement over SVM is much higher than on the first task. Intuitively, the sentiment task requires better modeling of the non-linear interaction between the target and the context, whereas for the target detection task the presence of certain signal words may be enough.

Table 5: Performance on polarity classification for -sep models.

5.3 Training Time Analysis

To measure the training speed of each model, we train all deep learning methods on a server with a single TITAN X GPU. SVM is trained on the same server with a 2.40 GHz CPU and 120 GB of RAM. All methods are trained sequentially without parallelization.

SVM can finish training in less than one hour, but its required training time increases linearly with the number of targets.

For all deep learning methods, the number of epochs required for training is generally very close, around 20 epochs averaged over all data sets.

Comparing training time per epoch, ParaVec and CNN are much faster than the other methods (less than 5 seconds per epoch), though their effectiveness is a problem. When all targets share a single model, LSTM runs at about 200 seconds per epoch, while standard memory networks run at about 150 seconds per epoch. In many tasks, e.g., language modeling, memory networks are much faster than LSTM because of LSTM's expensive recursive operations. In our problem setting, however, every target has to be forwarded one by one for each document, lowering the efficiency of memory networks. When individual targets are trained on separate LSTMs, far more training time is required (about 1000 seconds per epoch).

AttNet consumes 200 seconds per epoch. Compared to standard memory networks, AttNet adds some overhead by introducing the interaction between subtasks and by adding a projection matrix, but this overhead is small.

The efficiency of the deep learning methods could be improved by parallelization. Since there is already much work on this topic that increases speed without sacrificing effectiveness, we do not pursue this direction further.

Summary: the experiments demonstrate that the proposed deep memory network, AttNet, and its variants outperform conventional supervised learning methods. This is promising but perhaps not surprising given the success of deep learning in general. It is encouraging that AttNet also improves over the state-of-the-art deep learning architectures. This improvement is statistically significant and is observed for both subtasks and for attitude identification as a whole. The gain in effectiveness does not compromise learning efficiency.

5.4 Visualization of Attention

To better understand the behavior of our model, we compare the attention weights produced by our model AttNet with those of the competing method Memnet.

Figure 3: Visualization of learned attention. Red patches highlighting the top half of the text indicate the model's attention weights in the target detection task, while green patches highlighting the bottom half show those for the polarity classification task. Darker colors indicate higher attention. Truth: service+ means that the ground-truth sentiment towards service is positive, while Predict + given ambience denotes a predicted positive sentiment given the query target ambience.

Figure 3 (a) and (b) show examples of word attentions generated by the different models for the same set of sentences in the test set. In the first sentence, both them and guns are identified as targets by AttNet, while words like mess and policy are found as sentiment words. Though Memnet correctly identifies the existence of an attitude towards gun control, it fails to find the important words for classifying the polarity of the sentiment. This suggests the importance of interleaving the two tasks: successfully identifying the mentioned targets offers clues for finding sentiment words in the second task.

The second sentence is from a restaurant review, with ambience as the query target. The target detection module of AttNet captures the word decor, which signals the presence of the target ambience; the polarity classification module then focuses on extracting sentiment words associated with the target. The baseline Memnet, however, captures both decor and food in the first task, mistakenly considering all sentiments to describe food rather than the ambience, and consequently judges that there is no attitude towards ambience. This example shows the benefit of using the projection matrices to model the interaction and distinction between targets; otherwise the model can easily be confused about which entity the sentiments are expressed towards.

The third sentence shows how our model AttNet determines that the query target drink is absent. The first module highlights words like ports (a wine name) and waitress, and the second module extracts negative sentiment phrases such as not know, which usually describe people rather than drinks. Memnet has almost the same attention distribution as AttNet but still fails to produce the correct prediction. As in the second case, the projection matrices are important for the model to learn the common phrases used to describe different sets of entities.
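The two-row highlighting scheme of Figure 3 can be mimicked in plain text: render each token twice, shading the top row by the target-detection attention and the bottom row by the polarity attention (the weights below are hypothetical, not the model's actual attentions).

```python
def render_attention(tokens, target_attn, polarity_attn, levels=" .:*#"):
    """Return three lines: the tokens, then a shade row per attention head;
    darker symbols correspond to higher attention weights."""
    def shade(w):  # bucket a weight in [0, 1] into a symbol
        return levels[min(int(w * len(levels)), len(levels) - 1)]
    rows = [" ".join(shade(w) * len(t) for w, t in zip(attn, tokens))
            for attn in (target_attn, polarity_attn)]
    return "\n".join([" ".join(tokens)] + rows)

out = render_attention(["great", "decor"], [0.1, 0.9], [0.8, 0.2])
```

With these made-up weights, "decor" is shaded darkest in the target row and "great" darkest in the polarity row, mirroring how the two modules in the paper attend to target words and sentiment words respectively.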

6 Conclusion

Attitude identification, a key problem in modern natural language processing, is concerned with detecting one or more target entities in text and then classifying the sentiment polarity towards them. This problem is conventionally approached by solving the two subtasks separately and usually treating each target separately, which fails to leverage the interplay between the two subtasks and the interactions among the target entities. Our study demonstrates that modeling these interactions in a carefully designed, end-to-end deep memory network significantly improves the accuracy of the two subtasks, target detection and polarity classification, and of attitude identification as a whole. Empirical experiments show that this model outperforms models that do not consider the interactions between the two subtasks or among the targets, including conventional methods and state-of-the-art deep learning models.

This work opens the exploration of interactions among subtasks and among contexts (in our case, targets) for sentiment analysis using an end-to-end deep learning architecture. Such an approach can be easily extended to handle other related problems in this domain, such as opinion summarization, multi-aspect sentiment analysis, and emotion classification. Designing specific network architecture to model deeper dependencies among targets is another intriguing future direction.


Acknowledgments

This work is partially supported by the National Science Foundation under grant numbers IIS-1054199 and SES-1131500.


References

  • [1] D. Bespalov, B. Bai, Y. Qi, and A. Shokoufandeh. Sentiment classification based on supervised latent n-gram analysis. In Proc. of CIKM, 2011.
  • [2] C.-C. Chang and C.-J. Lin. Libsvm: a library for support vector machines. TIST, 2011.
  • [3] N. V. Chawla. Data mining for imbalanced datasets: An overview. In Data mining and knowledge discovery handbook. 2005.
  • [4] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 2011.
  • [5] M.-C. De Marneffe, B. MacCartney, C. D. Manning, et al. Generating typed dependency parses from phrase structure parses. In Proc. of LREC, 2006.
  • [6] L. Dong, F. Wei, C. Tan, D. Tang, M. Zhou, and K. Xu. Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL, 2014.
  • [7] A. Faulkner. Automated classification of stance in student essays: An approach using stance target information and the wikipedia link-based measure. In FLAIRS, 2014.
  • [8] K. Gimpel, N. Schneider, B. O’Connor, D. Das, D. Mills, J. Eisenstein, M. Heilman, D. Yogatama, J. Flanigan, and N. A. Smith. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proc. of ACL, 2011.
  • [9] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proc. of ICML, 2011.
  • [10] K. S. Hasan and V. Ng. Stance classification of ideological debates: Data, models, features, and constraints. In IJCNLP, 2013.
  • [11] K. M. Hermann and P. Blunsom. The role of syntax in vector space models of compositional semantics. In ACL, 2013.
  • [12] M. Hu and B. Liu. Mining and summarizing customer reviews. In Proc. of SIGKDD, 2004.
  • [13] O. Irsoy and C. Cardie. Deep recursive neural networks for compositionality in language. In NIPS, 2014.
  • [14] L. Jiang, M. Yu, M. Zhou, X. Liu, and T. Zhao. Target-dependent twitter sentiment classification. In Proc. of ACL, 2011.
  • [15] Y. Kim. Convolutional neural networks for sentence classification. In Proc. of EMNLP, 2014.
  • [16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
  • [17] N. Kobayashi, R. Iida, K. Inui, and Y. Matsumoto. Opinion mining on the web by extracting subject-aspect-evaluation relations. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, 2006.
  • [18] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
  • [19] F. Li, C. Han, M. Huang, X. Zhu, Y.-J. Xia, S. Zhang, and H. Yu. Structure-aware review mining and summarization. In Proc. of ACL, 2010.
  • [20] P. Liu, S. Joty, and H. Meng. Fine-grained opinion mining with recurrent neural networks and word embeddings. In EMNLP, 2015.
  • [21] D. Marcheggiani, O. Täckström, A. Esuli, and F. Sebastiani. Hierarchical multi-label conditional random fields for aspect-oriented opinion mining. In ECIR, 2014.
  • [22] Q. Mei, X. Ling, M. Wondra, H. Su, and C. Zhai. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proc. of WWW, 2007.
  • [23] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
  • [24] S. M. Mohammad, S. Kiritchenko, P. Sobhani, X. Zhu, and C. Cherry. Semeval-2016 task 6: Detecting stance in tweets. In Proc. of SemEval, 2016.
  • [25] S. M. Mohammad, P. Sobhani, and S. Kiritchenko. Stance and sentiment in tweets. arXiv preprint arXiv:1605.01655, 2016.
  • [26] S. M. Mohammad and P. D. Turney. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proc. of NAACL, 2010.
  • [27] B. Pang and L. Lee. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2008.
  • [28] M. Pontiki, D. Galanis, H. Papageorgiou, S. Manandhar, and I. Androutsopoulos. Semeval-2015 task 12: Aspect based sentiment analysis. In Proc. of SemEval, 2015.
  • [29] M. Pontiki, D. Galanis, J. Pavlopoulos, H. Papageorgiou, I. Androutsopoulos, and S. Manandhar. Semeval-2014 task 4: Aspect based sentiment analysis. In Proc. of SemEval, 2014.
  • [30] A. Popescu and M. Pennacchiotti. Dancing with the stars, nba games, politics: An exploration of twitter users’ response to events. In Proc. of ICWSM, 2011.
  • [31] A. Rajadesingan and H. Liu. Identifying users with opposing opinions in twitter debates. In Social Computing, Behavioral-Cultural Modeling and Prediction. 2014.
  • [32] C. Sauper and R. Barzilay. Automatic aggregation by joint modeling of aspects and values. JAIR, 2013.
  • [33] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. Semantic compositionality through recursive matrix-vector spaces. In Proc. of EMNLP-CoNLL, 2012.
  • [34] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP, 2013.
  • [35] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In NIPS, 2015.
  • [36] D. Tang, B. Qin, and T. Liu. Aspect level sentiment classification with deep memory network. In EMNLP, 2016.
  • [37] D. Tang, F. Wei, N. Yang, M. Zhou, T. Liu, and B. Qin. Learning sentiment-specific word embedding for twitter sentiment classification. In ACL, 2014.
  • [38] D.-T. Vo and Y. Zhang. Target-dependent twitter sentiment classification with rich automatic features. In IJCAI, 2015.
  • [39] M. A. Walker, P. Anand, R. Abbott, and R. Grant. Stance classification using dialogic properties of persuasion. In Proc. of NAACL, 2012.
  • [40] M. A. Walker, P. Anand, R. Abbott, J. E. F. Tree, C. Martell, and J. King. That is your evidence?: Classifying stance in online political debate. Decision Support Systems, 2012.
  • [41] H. Wang, Y. Lu, and C. Zhai. Latent aspect rating analysis without aspect keyword supervision. In Proc. of SIGKDD, 2011.
  • [42] S. Wang and C. D. Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proc. of ACL, 2012.
  • [43] T. Wilson, J. Wiebe, and P. Hoffmann. Recognizing contextual polarity in phrase-level sentiment analysis. In Proc. of HLT/EMNLP, 2005.
  • [44] Y. Yang and X. Liu. A re-examination of text categorization methods. In Proc. of SIGIR, 1999.
  • [45] M. Zhang, Y. Zhang, and D.-T. Vo. Gated neural networks for targeted sentiment analysis. In Proc. of AAAI, 2016.
  • [46] C. Zirn, M. Niepert, H. Stuckenschmidt, and M. Strube. Fine-grained sentiment analysis with structural features. In IJCNLP, 2011.