Semi-Supervised Affective Meaning Lexicon Expansion Using Semantic and Distributed Word Representations

03/28/2017 ∙ by Areej Alhothali, et al. ∙ University of Waterloo

In this paper, we propose an extension to graph-based sentiment lexicon induction methods that incorporates distributed and semantic word representations in building the similarity graph used to expand a three-dimensional sentiment lexicon. We also implemented and evaluated label propagation using four different word representations and similarity metrics. Our comprehensive evaluation of the four approaches, performed on a single data set, demonstrates that all four methods can generate a significant number of new sentiment assignments with high accuracy. The highest correlation (tau = 0.51) and the lowest error (mean absolute error < 1.1) were obtained by combining both the semantic and the distributional features; this model outperformed the distributional-based and semantic-based label-propagation models and approached a supervised algorithm.


1 Introduction

Sentiment analysis (SA) is a rapidly growing area of interest in natural language processing (NLP). Sentiment analysis is useful for a variety of important applications, such as recommendation systems, virtual assistants, and health informatics. Much SA work relies on lexicons mapping words to sentiment, which are either manually annotated or automatically generated from a small set of seed words. Many researchers and companies have explored methods of expanding and re-generating sentiment lexicons to reduce the cost of manual annotation and to compensate for the lack of existing annotated data and the dynamic and fluctuating nature of human emotion. However, most sentiment lexicon expansion methods attach a polarity value (i.e., negative, positive, or neutral) Stone et al. (1968) or a real-valued one-dimensional score Baccianella et al. (2010) to the words. It is well known, however, that one dimension is insufficient to adequately characterise the complexity of emotion Fontaine et al. (2007).

In a large set of cross-cultural studies in the 1950s, Osgood showed that concepts carried a culturally dependent, shared affective meaning that could be characterised to a great extent using three simple dimensions of evaluation (good versus bad), potency (powerful versus powerless), and activity (lively versus quiet) Osgood (1957). This semantic differential scale of evaluation, potency, and activity (EPA) is thought to represent universal and cross-cultural dimensions of affective meaning for words.

Based on this work, several three-dimensional sentiment lexicons have been manually labeled using surveys in different countries Heise (2010). Words in these lexicons are measured on a scale from infinitely bad, powerless, or inactive to infinitely good, powerful, or lively Berger and Zelditch (2002); Heise (2007) (the range of the scale is a historical convention). In these surveys, participants are asked to rate identities (e.g., teacher, mother), behaviors (e.g., help, coach), adjectives (e.g., big, stubborn), institutions (e.g., hospital, school), or scenarios (e.g., combinations of identities, behaviours, adjectives, and institutions) Heise (2010) on 5-item scales ranging from "infinitely negative (e.g., bad/powerless/inactive)" to "infinitely positive (e.g., good/powerful/active)", which are then mapped to the scale. These manual annotation methods are labor-intensive and time-consuming, and they produce a relatively small number of words.

In this paper, we utilize semantic and distributed word representations to expand these three-dimensional sentiment lexicons in a semi-supervised fashion. We also evaluated four different approaches to computing the affinity matrix: using semantic (dictionary-based) features, singular value decomposition (SVD) word embeddings, neural word embeddings, and a combination of neural word embeddings and semantic features. The highest rank correlation scores were obtained using the combined semantic and neural word embedding model when recreating two sentiment lexicons: Warriner et al. (2013) and the General Inquirer Stone et al. (1968). The results also show that, across the three dimensions, the highest rank correlation scores were for evaluation (E) while the lowest were for potency (P). We also evaluated our induced EPA scores against some of the state-of-the-art methods in lexicon expansion, and our method shows an improvement in correlation and F1 score over these algorithms.

Our contributions are fivefold: 1) this is the first work that extensively examines methods of multidimensional lexicon expansion (we compute the evaluation, potency, and activity (valence, dominance, and arousal) scores instead of only the evaluative factor, valence); 2) we propose a label propagation algorithm built upon both semantic and distributed word representations; 3) we perform a comprehensive evaluation of four algorithms against a manually annotated dataset as well as a supervised learning algorithm; 4) we sample seed words from the corpus or dictionary instead of using the commonly used fixed seed words (e.g., good, bad, happy, sad); 5) we create a significantly large three-dimensional lexicon of M words that could be leveraged by researchers in sentiment analysis and social science.

Our proposed approaches 1) reduce the cost of manual annotation of sentiment lexicons, 2) integrate the affective meaning of today's growing vocabulary (e.g., selfie, sexting), and 3) identify and incorporate the variance in attitudes towards words (e.g., same-sex marriage, abortion).

2 Related Work

The lexicon augmentation methods in this study were performed using variations of word representations and similarity metrics. This section provides a short background about the various vector space models that are used.

2.1 Statistical language modeling

A statistical language model (or vector space model, VSM) is a distributional representation of language phenomena estimated by applying statistical techniques to real-world data. Representing language phenomena in terms of such parameters has proven to be useful in various natural language processing (NLP), speech recognition, and information retrieval (IR) tasks. To capture semantic or syntactic properties and represent words by proximity in n-dimensional space, several VSMs have been proposed, ranging from the simple one-hot representation, which treats words as atomic symbols, to neural word embeddings, which represent words in a dense, more compact form.

The most commonly used word representations are distributional word embeddings, which represent a word based on its co-occurrence statistics with other words in a document or corpus Harris (1981); Firth (1957). The dimensionality of this sparse representation can be reduced using singular value decomposition Eckart and Young (1936), latent semantic analysis Landauer and Dumais (1997), or principal component analysis Jolliffe (2002).

Neural word embeddings have recently gained a lot of attention in NLP and deep learning. Neural word embeddings represent words in a low-dimensional, continuous space where each dimension corresponds to semantic or syntactic latent features. Similar to distributional word embeddings, neural word embeddings are usually based upon co-occurrence statistics, but they are more compact, less sensitive to data sparsity, and able to represent an exponential number of word clusters Bengio et al. (2006); Mikolov et al. (2010, 2011).

2.2 Acquisition of Sentiment Lexicon

Similar to other NLP tasks, sentiment lexicon induction can be achieved using two main approaches: corpus-based or thesaurus-based. Turney and Littman Turney et al. (2003) proposed a corpus-based lexicon learning method that first applies TF-IDF weighting to word-context matrices, applies SVD, and then computes the semantic orientation with respect to a set of seed words.

Thesaurus-based methods use lexical relationships, such as the depth of a concept in a taxonomy tree Wu and Palmer (1994) or edge counting Collins and Quillian (1969), to build sentiment lexicons. Similar to Turney's PMI approach, Kamps et al. KAMPS (2004) use a WordNet-based relatedness metric between words and given seed words.

Semi-supervised graph-based models that propagate information over lexical graphs have also been explored. The polarity-propagation or sense-propagation algorithm induces the sentiment polarity of unlabeled words given seed words (positive, negative) and the lexical relationships between them (e.g., WordNet synonyms and antonyms) Strapparava and Valitutti (2004); Esuli and Sebastiani (2006). Some researchers have developed a weighted label propagation algorithm that propagates a continuous sentiment score from seed words to lexically related words Godbole et al. (2007). Velikovich et al. (2010) proposed a web-based graph propagation method to elicit polarity lexicons. The graph is built upon a co-occurrence frequency matrix, with cosine similarities as edges between words and seed words (nodes). Both a positive and a negative polarity magnitude are then computed for each node in the graph, equal to the sum over the maximum weighted paths from every seed word (either positive or negative).

Several recent studies have utilized word embeddings to generate sentiment lexicons, such as a regression model that uses 600-dimensional structured skip-gram word embeddings to create a Twitter-based sentiment lexicon Astudillo et al. (2015). Another study transforms dense word embedding vectors into a lower-dimensional (ultra-dense) representation by training a two-objective gradient descent algorithm on lexicon resources Rothe et al. (2016). A recent study has also proposed a label-propagation-based model that uses word embeddings, built using singular value decomposition (SVD) and PMI, to induce a domain-specific sentiment lexicon Hamilton et al. (2016).

Few studies have looked at multidimensional sentiment lexicon expansion. Kamps et al. KAMPS (2004) use a WordNet-based metric to elicit the semantic orientation of adjectives. The generated lexicon was evaluated against the manually constructed word lists of the Harvard IV-4 General Inquirer Stone et al. (1968). Kamps et al.'s work focuses only on adjectives and assigns them a binary value (either good or bad, potent or impotent, etc.). A three-dimensional sentiment lexicon was extended using a thesaurus-based label propagation algorithm based upon WordNet similarity Alhothali and Hoey (2015), and the results were compared against the Ontario dataset MacKinnon (2006).

3 Method

3.1 Graph-based Label-Propagation

Expanding sentiment lexicons using graph-based propagation algorithms has been pursued previously and found to give higher accuracy in comparison with other standard methods Hu and Liu (2004); Andreevskaia and Bergler (2006); Rao and Ravichandran (2009). To evaluate the effectiveness of graph-based approaches in expanding multidimensional sentiment lexicons, in this paper we use the label propagation algorithm Zhu and Ghahramani (2002); Zhou et al. (2004) combined with four methods for computing word vectors and word similarities. Label propagation algorithms rely on the idea of building a similarity graph with labeled nodes (seed words/paradigm words) and unlabeled nodes (words). The labels or scores of the known nodes (words) are then propagated through the graph to the unlabeled nodes by repeatedly multiplying the weight matrix (affinity matrix) against the vector of labels or scores.

Following the same principle, the graph label propagation algorithm in this paper: 1) creates a set of labeled and unlabeled data points (words) w_i ∈ V, where V is the vocabulary set and y_i is the sentiment (the E, P, A scores) attached to word w_i; 2) constructs an undirected weighted graph G = (V, E, W), where V is a set of vertices (words), E the edges, and W an n × n weight matrix with entries w_ij; 3) computes the random-walk normalized matrix P = D^{-1}W (where D is the degree matrix); 4) initializes the labeled nodes/words with their EPA values and the unlabeled nodes/words with zeroes; and 5) propagates the sentiment scores to adjacent nodes by computing Y ← αPY (weighted by a factor α), clamping the labeled nodes to their initial values after each iteration.
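As a concrete sketch of steps 2-5 above, the following NumPy implementation propagates seed EPA scores over a toy affinity matrix; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def label_propagation(W, Y0, labeled_mask, alpha=1.0, n_iter=100):
    """Propagate seed sentiment scores over a similarity graph.

    W: (n, n) symmetric non-negative affinity matrix
    Y0: (n, 3) initial E, P, A scores (zeros for unlabeled words)
    labeled_mask: boolean (n,) marking seed words, clamped each iteration
    """
    d_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
    P = d_inv[:, None] * W                   # row-normalized (random-walk) matrix
    Y = Y0.copy()
    for _ in range(n_iter):
        Y = alpha * (P @ Y)                  # spread scores to neighbours
        Y[labeled_mask] = Y0[labeled_mask]   # clamp seed words to initial values
    return Y
```

With alpha = 1.0 this is the hard-clamping variant of Zhu and Ghahramani (2002); smaller alpha damps the propagated scores as in Zhou et al. (2004).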

We implemented the label propagation algorithm using four different methods of computing the affinity matrix and word representations. First, a semantic lexicon-based approach in which the graph is built upon the semantic relationships between words (semantic lexicon-based label propagation, or SLLP). Second, a distributional approach in which the vocabulary and weights come from co-occurrence statistics in a corpus (corpus-based label propagation, or CLP). Third, a neural word embedding method (neural word embedding label propagation, or NWELP). Fourth, a combination of the semantic and distributional methods (semantic neural word embedding label propagation, or SNWELP). The following subsections describe these four methods of label propagation.

3.1.1 Semantic Lexicon-based Label Propagation (SLLP)

The SLLP algorithm follows the general principle of the graph-based label propagation approach described in the previous section, but the affinity matrix is computed using semantic features obtained from semantic lexicons. Two semantic lexicons were used in this algorithm: the WordNet dictionary (WN) Miller (1995) and the paraphrase database (PPDB) Ganitkevitch et al. (2013). The SLLP algorithm constructs the vocabulary from the words of the dictionaries and computes and normalizes the weight matrix using the synonym relationships between words. The semantic-based similarity of any pair of words w_i and w_j in the vocabulary is calculated as follows:

w_ij = 1 if w_j ∈ syn(w_i) or w_i ∈ syn(w_j), and 0 otherwise    (1)
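A minimal sketch of one plausible form of this synonym-based weight matrix (a binary indicator over synonym pairs; the helper below and its input format are illustrative, not the paper's implementation):

```python
import numpy as np

def semantic_affinity(vocab, synonyms):
    """Binary synonym-based weight matrix over a vocabulary.

    vocab: list of words; synonyms: dict word -> set of synonym words.
    Returns a symmetric (n, n) matrix with 1.0 for synonym pairs.
    """
    idx = {w: i for i, w in enumerate(vocab)}
    W = np.zeros((len(vocab), len(vocab)))
    for w in vocab:
        for s in synonyms.get(w, ()):
            if s in idx:  # ignore synonyms outside the vocabulary
                W[idx[w], idx[s]] = W[idx[s], idx[w]] = 1.0
    return W
```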

3.1.2 Corpus-based Label Propagation (CLP)

Corpus-based label propagation (CLP) is one of the most commonly used methods for sentiment lexicon generation; it uses co-occurrence statistics aggregated from different corpora (news articles, Twitter, etc.) to build the similarity graph in the label propagation algorithm. We used n-gram features from the Signal Media (SM) one-million news articles dataset, which contains K blog articles and K news articles Corney et al. (2016), and from the North American News (NAN) text corpus Graff (1995), which has K articles from a variety of news sources.

The co-occurrence matrix was computed with a window size of four words. Bigrams with stop words, words shorter than three letters, proper nouns, non-alpha words, and bigrams that do not occur more than ten times were filtered out. These heuristics reduce the set to k and k words for the SM and NAN corpora, respectively. We constructed the word vectors by computing the smoothed positive point-wise mutual information (SPPMI) Levy et al. (2015) of the co-occurrence matrix. This smoothing technique reduces PMI's bias towards rare words and has been found to improve performance on NLP tasks Levy et al. (2015).

SPPMI_α(w, c) = max( log( P(w, c) / ( P(w) · P_α(c) ) ), 0 )    (2)

where P(w, c) is the empirical co-occurrence probability of a pair of words w and c, and P(w) and P_α(c) are the marginal probability of w and the smoothed marginal probability of c, respectively. We use the smoothing exponent α = 0.75, as it has been found to give better results Levy et al. (2015); Mikolov et al. (2013), and we also experiment with the unsmoothed PPMI. The SPPMI matrix M is then factorized with truncated singular value decomposition (SVD) Eckart and Young (1936) as follows:

M = U Σ V^T    (3)

We take the rows of the rank-k truncation as the word representations or word vectors (we used k = 300):

W^SVD = U_k Σ_k    (4)

The affinity matrix is then computed with the cosine similarity between word vectors:

w_ij = (v_i · v_j) / ( ||v_i|| ||v_j|| )    (5)
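The CLP pipeline above (co-occurrence counts → SPPMI → truncated SVD → cosine affinity) can be sketched as follows; the toy matrix sizes and k value are illustrative:

```python
import numpy as np

def sppmi_matrix(counts, alpha=0.75):
    """Smoothed positive PMI (Levy et al., 2015) from raw co-occurrence counts."""
    total = counts.sum()
    p_wc = counts / total                      # joint probabilities
    p_w = counts.sum(axis=1) / total           # word marginals
    ctx = counts.sum(axis=0) ** alpha          # smoothed context counts
    p_c = ctx / ctx.sum()                      # smoothed context marginals
    with np.errstate(divide="ignore"):
        pmi = np.log(p_wc / np.outer(p_w, p_c))
    return np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

def svd_word_vectors(M, k):
    """Rows of U_k * Sigma_k as k-dimensional word vectors (the paper used k=300)."""
    U, S, _ = np.linalg.svd(M)
    return U[:, :k] * S[:k]

def cosine_affinity(V):
    """Pairwise cosine similarity between word vectors (Equation 5)."""
    V = V / np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    return V @ V.T
```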

3.1.3 Neural Word Embeddings Label-propagation (NWELP)

This method uses word embeddings (word vectors) that capture syntactic and semantic properties. We use two pre-trained word embedding models trained on co-occurrence statistics: skip-gram word vectors (SG) Mikolov et al. (2013), trained with the skip-gram model on co-occurrence statistics aggregated from the Google News dataset, and Global Vectors for Word Representation (GloVe) Pennington et al. (2014), trained on co-occurrence statistics aggregated from Wikipedia. The vocabulary in this algorithm is all words in the word embedding set (we filtered out non-alpha words and words that contain digits), and the affinity matrix is computed using the cosine similarity (Equation 5) between word vectors (each of which is a 300-dimensional vector).

3.1.4 Semantic and Neural Word Embeddings Label-propagation (SNWELP)

To improve the results of the NWELP algorithm, we propose SNWELP, a model that combines both the semantic and the distributional information obtained from the neural word embedding models and a semantic lexicon (a dictionary). The SNWELP algorithm constructs the affinity matrix using the neural word embedding features (SG or GloVe) and the semantic features obtained from a semantic lexicon (WN or PPDB). In this case, the vocabulary is the intersection between the words in the lexicon and the words in the filtered embedding set, and each weight w_ij is the average of the cosine similarity score (Equation 5) of the neural word representations and the semantic similarity (Equation 1).
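A sketch of the SNWELP weight construction under these assumptions; the dict-based inputs are a hypothetical format for illustration, and the plain averaging of the two similarity scores follows the description above:

```python
import numpy as np

def snwelp_affinity(embeddings, sem_sim, lexicon_vocab):
    """Combined SNWELP weights over the vocabulary intersection.

    embeddings: dict word -> embedding vector
    sem_sim: dict (word, word) -> semantic similarity score
    lexicon_vocab: words present in the semantic lexicon
    """
    vocab = [w for w in lexicon_vocab if w in embeddings]   # intersection
    V = np.stack([embeddings[w] for w in vocab]).astype(float)
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    cos = V @ V.T                                           # neural similarity
    sem = np.zeros_like(cos)
    for i, wi in enumerate(vocab):
        for j, wj in enumerate(vocab):                      # semantic similarity
            sem[i, j] = sem_sim.get((wi, wj), sem_sim.get((wj, wi), 0.0))
    return vocab, 0.5 * (cos + sem)                         # averaged affinity
```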

3.2 Sampling Methods

Choosing the labeled words (also called paradigm or seed words) is one of the critical factors in graph-based label propagation methods. We used two methods: 1) fixed seed sets (fixed-paradigms), and 2) words sampled from the vocabularies used in the label propagation algorithm (vocabulary-paradigms). The fixed-paradigms set was chosen from Osgood's (1957) research, as shown in Table 1, while the vocabulary-paradigms set was randomly sampled from the corpus vocabulary among the words with the highest and lowest EPA values (words with extreme E, P, or A scores). The objective is to use words at the extremes of each dimension (E, P, and A) as paradigm words, in order to propagate these highly influential EPA scores throughout the graph. The seed words constitute only a small fraction of all words in each algorithm. We tested the fixed-paradigms sets, but the results with the vocabulary-paradigms were significantly better.
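The vocabulary-paradigms sampling can be sketched as below; the cutoff value is a hypothetical stand-in, since the paper's exact threshold is not reproduced here:

```python
def sample_seed_words(epa_lexicon, threshold):
    """Keep words whose E, P, or A score is extreme (|score| >= threshold).

    epa_lexicon: dict word -> (E, P, A) tuple of scores
    threshold: illustrative cutoff for 'extreme' scores
    """
    return {w: epa for w, epa in epa_lexicon.items()
            if max(abs(v) for v in epa) >= threshold}
```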

EPA Seed words
E+={good, nice, excellent, positive, warm, correct, superior}
E-={bad, awful, nasty, negative, cold, wrong, inferior}
P+={powerful, strong, potent, dominant, big, forceful, hard}
P-={powerless, weak, impotent, small, incapable, hopeless, soft}
A+={active, fast, noisy, lively, energetic, dynamic, quick, vital}
A-={quiet, calm, inactive, slow, stagnant, inoperative, passive}
Table 1: Osgood's fixed seed words (+ = positive words, - = negative words)

3.3 Evaluation Metrics

To evaluate the effectiveness of the algorithm in generating a multidimensional sentiment lexicon, we chose the most recent manually annotated affective dictionary Warriner et al. (2013) as a baseline. We use the Warriner et al. (2013) dictionary in the lexicon induction procedure by sampling the paradigm words from it, and we compare the generated lexicon against it. We randomly divided the Warriner et al. (2013) affective dictionary (original-EPA) into EPA-training (one third of the set) and EPA-testing (the remaining two thirds). The seed words for all algorithms are sampled from the EPA-training set only, and all results are presented on the EPA-testing set.

The EPA scores of Warriner et al. (2013) were rescaled to follow the same EPA scale used in the other lexicons we have considered Heise (2010). This scale is the standard scale used by most researchers in sociology who study or measure individuals' emotions towards terms.

Four evaluation metrics were used to compare the induced EPA (EPA-induced) against the manually annotated EPA (EPA-testing): mean absolute error (MAE), Kendall rank correlation (tau), F1-binary (positive and negative), and F1-ternary (positive, neutral, and negative). We used F1-binary to evaluate the binary classification performance of the model (positive and negative), and, similar to most recently proposed studies in the field Hamilton et al. (2016), we computed F1-ternary to measure the ternary classification accuracy (positive, neutral, and negative). To calculate F1-ternary, we used the class-mass normalization (CMN) method Zhu et al. (2003), which rescales the predicted labels f_j(w) for a point w by incorporating the class priors:

ŷ(w) = argmax_j  p_j · f_j(w) / m_j

where m_j is the label mass normalization, equal to the total estimated weight of label j over the unlabeled set, m_j = Σ_u f_j(u), and p_j is the prior probability of label j (computed from the labeled data). This scaling method is known to improve results in comparison with the typical argmax decision function.
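A compact sketch of class-mass normalization as described (names are illustrative; the priors come from the labeled data, and F holds the propagated per-class scores for the unlabeled words):

```python
import numpy as np

def class_mass_normalize(F, priors):
    """Class-mass normalization (Zhu et al., 2003).

    F: (n_points, n_classes) propagated class scores
    priors: (n_classes,) class prior probabilities from the labeled data
    Rescales each class column by prior / estimated class mass, then argmaxes.
    """
    mass = F.sum(axis=0)                           # estimated mass per class
    scaled = F * (priors / np.maximum(mass, 1e-12))
    return scaled.argmax(axis=1)
```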

3.4 Baseline and State-of-the-art Comparison

We compared our induced results against some of the standard state-of-the-art algorithms for inducing valence (evaluation) scores. We implemented the PMI-IR algorithm proposed by Turney et al. (2003), which estimates the sentiment orientation (either positive or negative) of a word by computing the difference between the strength of the word's associations with positive paradigm words and with negative paradigm words, using co-occurrence statistics aggregated from search engine results. We also compare our results against the reported results of Rothe et al. (2016)'s orthogonal transformation of word vectors, and a label spreading algorithm trained on a (domain-specific) SVD word vector model Hamilton et al. (2016). We also experimented with the retrofitted word vector model, which improves neural word embedding vectors using semantic features obtained from lexical resources (WN, PPDB) Faruqui et al. (2014).

To make a fair comparison, we implemented our label propagation algorithm and the retrofitted word vector approach Faruqui et al. (2014) to recreate the General Inquirer lexicon Stone et al. (1966) with valence scores from the Warriner et al. (2013) lexicon, so as to compare our results to Hamilton et al. (2016) and Rothe et al. (2016). We also ignored the neutral class and used the same seed set used by Hamilton et al. (2016) and other researchers in the field. We also compare all the results against the EPA scores obtained from a supervised learning algorithm: we trained a support vector regression (SVR) model on features derived from the skip-gram word embedding model (SG) Mikolov et al. (2013) and the sentiment lexicon resource Warriner et al. (2013). The SVR model uses an RBF kernel and is trained on the full training set (EPA-training).
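The supervised baseline can be sketched with scikit-learn (assuming it is available): one RBF-kernel SVR per affective dimension, trained on word embedding features X and EPA targets Y. The hyperparameters below are library defaults, not the paper's tuned values.

```python
import numpy as np
from sklearn.svm import SVR

def train_epa_svr(X, Y):
    """Fit one RBF-kernel support vector regressor per E/P/A dimension."""
    return [SVR(kernel="rbf").fit(X, Y[:, d]) for d in range(Y.shape[1])]

def predict_epa(models, X):
    """Stack per-dimension predictions into an (n, 3) EPA matrix."""
    return np.stack([m.predict(X) for m in models], axis=1)
```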

Method   Corpus       W     | tau                  | F1-binary         | F1-ternary        | MAE
                            | E      P      A      | E     P     A     | E     P     A     | E     P     A
CLP      SM          5,109  | 0.219  0.0263 0.162  | 0.53  0.44  0.56  | 0.42  0.45  0.44  | 1.1   1.09  0.85
         NAN         4,653  | 0.122  0.060  0.084  | 0.51  0.54  0.54  | 0.50  0.42  0.45  | 1.3   1.0   0.99
SLLP     WN          4,801  | 0.388  0.244  0.329  | 0.72  0.83  0.73  | 0.65  0.60  0.75  | 0.91  0.79  0.71
         PPDB        4,621  | 0.391  0.181  0.309  | 0.73  0.76  0.71  | 0.62  0.60  0.65  | 0.92  0.89  0.79
NWELP    SG          8,072  | 0.437  0.283  0.350  | 0.70  0.80  0.67  | 0.69  0.65  0.79* | 0.84  1.08  0.88
         GloVe       4,867  | 0.430  0.113  0.357  | 0.73  0.81  0.70  | 0.68  0.64  0.78  | 1.09  1.07  0.84
SNWELP   PPDB+GloVe  4,867  | 0.434  0.209  0.360  | 0.74  0.81  0.70  | 0.68  0.64  0.77  | 1.09  1.07  0.84
         WN+GloVe    4,867  | 0.445  0.220  0.366  | 0.75  0.82  0.71  | 0.68  0.64  0.78  | 1.07  1.05  0.84
         PPDB+SG     4,818  | 0.510  0.284  0.459  | 0.76  0.80  0.75  | 0.68  0.64  0.78  | 1.10  0.97  0.84
         WN+SG       5,367  | 0.510  0.291  0.461  | 0.76  0.80  0.75  | 0.68  0.64  0.78  | 1.10  0.95  0.83
SL       WESVR       8,271  | 0.628* 0.422* 0.500* | 0.83* 0.84* 0.78* | 0.72* 0.65* 0.68  | 0.60* 0.60* 0.56*

Table 2: The results of the label propagation algorithms and the supervised learning (SL) method (support vector regression on word embeddings, WESVR) using the sampled seed words, in comparison with the ground-truth EPA values (Method = the algorithm used for lexicon induction, W = the number of induced words that have labels in the dictionary, tau = Kendall's correlation, F1-binary = F1 measure of the binary classification, F1-ternary = F1 scores of the ternary classification, MAE = mean absolute error). The highest scores among the label propagation algorithms are shown in boldface; the highest scores of all algorithms are marked with *.

4 Results

In this section, we present the results of comparing the induced EPA scores from the label propagation algorithms against their corresponding values in EPA-testing. As shown in Table 2, using SVD word embeddings in the CLP algorithm generated the lowest ranking correlation and the highest error rate (MAE) among the label propagation methods. Comparing the induced EPA scores against their true values in the testing set shows that the MAE ranged between 0.99 and 1.3 and the ranking correlation (the p-values for all reported scores are less than 0.001) was less than 0.2, using cosine similarity and the hard clamping assumption. We also experimented with the unsmoothed positive point-wise mutual information (PPMI), but there was no significant difference between the smoothed and the unsmoothed PMI. We also tried different dimensions of the SVD word vectors (k = 100 and k = 300), but there was no significant difference between them either.

The SLLP algorithm, which uses the semantic features obtained from either the WN or PPDB lexicons, generated a total of K words, of which K words are in the testing set (EPA-testing). Comparing the induced EPA scores to their corresponding values in EPA-testing shows an MAE below 1, F1-binary above 0.7, F1-ternary of at least 0.6, and rank correlations up to 0.39 (Table 2), suggesting a reasonable degree of agreement between the EPA scores induced using dictionary-based features and the manually labeled EPA values.

The correlation scores show that neural word embedding label propagation (NWELP) outperformed the semantic-based and corpus-based label propagation algorithms, as shown in Table 2. The MAE and F1 scores of the semantic-based and neural word embedding label propagation were close: the MAE of the neural word embedding models ranged from 0.84 to 1.09, the F1 scores were between 0.67 and 0.80, and tau ranged from 0.113 to 0.437. Comparing the results of the two pre-trained neural word embeddings shows that the skip-gram (SG) model performed better than GloVe. We experimented with different thresholds on the cosine similarities; the results varied considerably with respect to the number of induced words and the accuracy. Higher thresholds provided more accurate results and less noise, but fewer induced words. The results reported in Tables 2 and 3 use a cosine similarity threshold of zero, since the adjacency matrices of both SG and GloVe contain negative values. Combining the semantic and neural word embedding features improved the results, with tau for the evaluation scores (E) ranging between 0.434 and 0.510 and MAE around 1.1. The results of the supervised SVR model significantly outperformed those obtained from the semi-supervised methods, with tau equal to 0.628, 0.422, and 0.500 for E, P, and A, respectively, F1 scores equal to 0.83, 0.84, and 0.78, and MAE close to 0.6; the results of SNWELP, however, were comparable.

Comparing the results across the different affective dimensions (E, P, and A) shows that the rank correlation for potency (P) against its counterpart scores in the testing set was low in comparison with the scores for evaluation (E) and activity (A), in both the semi-supervised algorithms and the supervised algorithm. The rank correlations for the evaluation (E) scores were the highest in all the algorithms, which indicates that words with similar word embeddings have similar evaluation scores. Table 4 shows some of the induced EPA scores and their corresponding values in the Warriner et al. (2013) dataset, including examples of the same words and their induced EPA scores under different word representations. Comparing our induced evaluation scores (E) with some of the state-of-the-art methods, as shown in Table 3, indicates that our label propagation algorithms performed significantly better than Turney and Littman (2002)'s unsupervised method. The results also show that the semantic neural word embedding (SNWELP) model outperformed the Rothe et al. (2016) and Hamilton et al. (2016) approaches. Also, the neural word embedding and semantic neural word embedding algorithms perform better than label propagation using the retrofitted word vectors (the reported results are for the skip-gram model (SG) improved using semantic features obtained from WordNet (WN)) Faruqui et al. (2014).

Method                       tau    F1-ternary   ACC
SNWELP (SG+WN)               0.51   0.67         0.94
(Hamilton et al., 2016)      0.50   0.62         0.93
NWELP (SG)                   0.48   0.67         0.94
(Rothe et al., 2016)         0.44   0.59         0.91
(Faruqui et al., 2014)       0.40   0.62         0.84
(Turney and Littman, 2002)   0.14   0.47         0.55

Table 3: The results of comparing the evaluation (E) dimension of the induced General Inquirer lexicon, using the pre-trained neural word embedding label propagation (NWELP) and semantic neural word embedding label propagation (SNWELP) with fixed seed words, against the results reported by state-of-the-art methods in lexicon induction (tau = Kendall's correlation, ACC = binary accuracy, F1-ternary = ternary F-measure).

Word         Method        Induced EPA           True EPA
injustice WN [-1.9, 0.3, -1.7] [-2.7, 1.6, -1.86]
injustice GloVe [-1.3, 1.4 , -1.8] [-2.7, 1.6, -1.86]
injustice GloVe+WN [-1.4, 0.2, -1.3] [-2.7, 1.6, -1.86]
injustice SG+ WN [-1.9, 0.3, -1.7 ]* [-2.7, 1.6, -1.86]
evil PPDB [-1.3 , 0.05, -1.1] [-2.9, 0.7, -1.5]
evil GLoVe [-2.1, 2.5, -3.1] [-2.9, 0.7, -1.5]
evil GLoVe+PPDB [-1.7 , 0.08, -1.2] [-2.9, 0.7, -1.5]
evil SG+PPDB [-2.1, 0.1, -1.5] [-2.9, 0.7, -1.5]
successful SG [ 2.15, 0.04, 1.6] [2.97, 0.09, 2.9]
successful SG+PPDB [ 2.5, -0.6, 2.0] [2.97, 0.09, 2.9]


Table 4: Some examples of induced EPA scores and their EPA ratings from the original EPA lexicon, induced using label propagation with different word representations (WN = WordNet, PPDB = paraphrase database, SG = skip-gram word vectors, GloVe = global vectors for word representation). The starred example (*) shows no change after adding the neural word vector features.

5 Discussion

Sentiment analysis is a feature engineering problem in which sentiment lexicons play a significant role in improving model accuracy. One of the challenges of sentiment analysis is the increasing number of new words and terms in social media and news sources (e.g., selfie, sexting, photobomb) that do not have a sentiment score attached to them. There is also a need to measure the variance in human attitudes towards some terms over time (e.g., homosexuality, abortion) and to explore other dimensions of human emotion. To overcome these limitations, reduce the cost of manual annotation, and increase the number of annotated terms, we propose an extension and an evaluation of corpus- and thesaurus-based algorithms to automatically induce a three-dimensional sentiment lexicon.

As in many NLP applications, the vast majority of the work in lexicon induction uses distributional word representations (corpus-based statistics). In this study, the corpus-based label propagation (CLP) algorithm generated the least accurate results. Moreover, despite the viability of distributional word representations, exactly what syntactic and semantic information they capture is hard to determine, and it is not clear whether that information is relevant to sentiment at all.

The semantic lexicon-based label propagation (SLLP) performed better than CLP. However, the dictionary-based approach also has some limitations: 1) the synonym relationship can only be computed between words of the same part of speech, and 2) the dictionary has a limited number of words and does not include many words used on social media and the internet in general.

Only one study has experimented with neural word embedding label propagation to expand a one-dimensional sentiment lexicon Hamilton et al. (2016), and it reported results only for an SVD word embedding model. In our study, we report the results of using several neural word embedding models. The results show that our neural word embedding models performed better than the SVD word vector approach. These findings require further analysis and assessment on different corpora.

Combining both the semantic and neural word embedding representations (NWELP) produced better results than the corpus-based or semantic lexicon-based algorithms. The semantic neural word embedding approach yielded higher rank correlation scores and a slightly lower MAE than the semantic lexicon-based and neural word embedding-based algorithms. The results of the semantic neural label propagation algorithm are also comparable with those generated by a supervised learning algorithm (SVR) trained on word embeddings and a sentiment lexicon. The semi-supervised algorithm, however, does not require a large training dataset and allows words to be annotated independently of previously human-coded lexica.
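The section above does not spell out the exact combination rule, but one plausible way to blend the two sources, sketched here entirely under our own assumptions, is to keep an edge wherever the semantic lexicon (e.g., WordNet or PPDB) asserts a relation and to weight that edge by the embedding cosine similarity of the two words:

```python
import numpy as np

def combine_graphs(W_sem, W_emb):
    """Hypothetical blend of a binary synonym-adjacency matrix (W_sem,
    e.g. derived from WordNet or PPDB) with an embedding
    cosine-similarity matrix (W_emb): keep only lexicon-attested edges,
    each weighted by its non-negative embedding similarity."""
    return np.where(W_sem > 0, np.maximum(W_emb, 0.0), 0.0)
```

This kind of blend would let the semantic lexicon filter out spurious distributional neighbors while the embeddings grade the strength of each attested relation.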

6 Conclusion

In this study, we propose an extension to graph-based lexicon induction algorithms to expand sentiment lexicons and explore other dimensions of sentiment. To the best of our knowledge, this study is the first work to expand a multi-dimensional sentiment lexicon and the first to incorporate both semantic and neural word representations in the label propagation algorithm. We also provide an extensive evaluation of label propagation algorithms using a variety of word representations that have been found to provide higher accuracy in many NLP tasks in comparison with other standard methods. The results show that semantic neural word embedding label propagation generates the highest correlations compared with the corpus-based, semantic lexicon-based, and neural word embedding algorithms.

References

  • Alhothali and Hoey (2015) Areej Alhothali and Jesse Hoey. 2015. Good news or bad news: Using affect control theory to analyze readers reaction towards news articles. In Proc. Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT). Denver, CO.
  • Andreevskaia and Bergler (2006) Alina Andreevskaia and Sabine Bergler. 2006. Semantic tag extraction from wordnet glosses. In Proceedings of 5th International Conference on Language Resources and Evaluation (LREC’06). Citeseer.
  • Astudillo et al. (2015) Ramon F Astudillo, Silvio Amir, Wang Ling, Bruno Martins, Mário Silva, Isabel Trancoso, and Rua Alves Redol. 2015. Inesc-id: A regression model for large scale twitter sentiment lexicon induction. SemEval-2015 page 613.
  • Baccianella et al. (2010) Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC. volume 10, pages 2200–2204.
  • Bengio et al. (2006) Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, Springer, pages 137–186.
  • Berger and Zelditch (2002) Joseph Berger and Morris Zelditch. 2002. New Directions in Contemporary Sociological Theories. Rowman & Littlefield.
  • Collins and Quillian (1969) Allan M Collins and M Ross Quillian. 1969. Retrieval time from semantic memory. Journal of verbal learning and verbal behavior 8(2):240–247.
  • Corney et al. (2016) David Corney, Dyaa Albakour, Miguel Martinez, and Samir Moussa. 2016. What do a million news articles look like? In Proceedings of the First International Workshop on Recent Trends in News Information Retrieval co-located with 38th European Conference on Information Retrieval (ECIR 2016), Padua, Italy, March 20, 2016.. pages 42–47.
  • Eckart and Young (1936) Carl Eckart and Gale Young. 1936. The approximation of one matrix by another of lower rank. Psychometrika 1(3):211–218.
  • Esuli and Sebastiani (2006) Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In Proceedings of LREC. volume 6, pages 417–422.
  • Faruqui et al. (2014) Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166 .
  • Firth (1957) John Rupert Firth. 1957. A synopsis of linguistic theory, 1930-1955.
  • Fontaine et al. (2007) Johnny RJ Fontaine, Klaus R Scherer, Etienne B Roesch, and Phoebe C Ellsworth. 2007. The world of emotions is not two-dimensional. Psychological science 18(12):1050–1057.
  • Ganitkevitch et al. (2013) Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In HLT-NAACL. pages 758–764.
  • Godbole et al. (2007) Namrata Godbole, Manja Srinivasaiah, and Steven Skiena. 2007. Large-scale sentiment analysis for news and blogs. ICWSM 7.
  • Graff (1995) David Graff. 1995. North american news text corpus.
  • Hamilton et al. (2016) William L Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. arXiv preprint arXiv:1606.02820 .
  • Harris (1981) Zellig S Harris. 1981. Distributional structure. Springer.
  • Heise (2007) David R Heise. 2007. Expressive order: Confirming sentiments in social actions. Springer.
  • Heise (2010) David R. Heise. 2010. Surveying Cultures: Discovering Shared Conceptions and Sentiments. Wiley.
  • Hu and Liu (2004) Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 168–177.
  • Jolliffe (2002) Ian Jolliffe. 2002. Principal component analysis. Wiley Online Library.
  • Kamps (2004) Jaap Kamps. 2004. Using wordnet to measure semantic orientation of adjectives. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004). pages 1115–1118.
  • Landauer and Dumais (1997) Thomas K Landauer and Susan T Dumais. 1997. A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review 104(2):211.
  • Levy et al. (2015) Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics 3:211–225.
  • MacKinnon (2006) Neil J. MacKinnon. 2006. Mean affective ratings of 2,294 concepts by Guelph University undergraduates, Ontario, Canada. In 2001-3 [Computer file].
  • Mikolov et al. (2010) Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. pages 1045–1048.
  • Mikolov et al. (2011) Tomas Mikolov, Stefan Kombrink, Lukas Burget, JH Cernocky, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, pages 5528–5531.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119.
  • Miller (1995) George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39–41.
  • Osgood (1957) Charles Egerton Osgood. 1957. The measurement of meaning, volume 47. University of Illinois Press.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532–43.
  • Rao and Ravichandran (2009) Delip Rao and Deepak Ravichandran. 2009. Semi-supervised polarity lexicon induction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 675–682.
  • Rothe et al. (2016) Sascha Rothe, Sebastian Ebert, and Hinrich Schütze. 2016. Ultradense word embeddings by orthogonal transformation. arXiv preprint arXiv:1602.07572 .
  • Stone et al. (1968) Philip Stone, Dexter C Dunphy, Marshall S Smith, and DM Ogilvie. 1968. The general inquirer: A computer approach to content analysis. Journal of Regional Science 8(1):113–116.
  • Stone et al. (1966) Philip J Stone, Dexter C Dunphy, and Marshall S Smith. 1966. The general inquirer: A computer approach to content analysis.
  • Strapparava and Valitutti (2004) Carlo Strapparava and Alessandro Valitutti. 2004. Wordnet affect: an affective extension of wordnet. In LREC. volume 4, pages 1083–1086.
  • Turney and Littman (2002) Peter Turney and Michael L Littman. 2002. Unsupervised learning of semantic orientation from a hundred-billion-word corpus.
  • Turney et al. (2003) Peter Turney, Michael L Littman, Jeffrey Bigham, and Victor Shnayder. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems.
  • Velikovich et al. (2010) Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 777–785.
  • Warriner et al. (2013) Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior research methods 45(4):1191–1207.
  • Wu and Palmer (1994) Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 133–138.
  • Zhou et al. (2004) Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. 2004. Learning with local and global consistency. Advances in neural information processing systems 16(16):321–328.
  • Zhu and Ghahramani (2002) Xiaojin Zhu and Zoubin Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical report, Technical Report CMU-CALD-02-107, Carnegie Mellon University.
  • Zhu et al. (2003) Xiaojin Zhu, Zoubin Ghahramani, John Lafferty, et al. 2003. Semi-supervised learning using gaussian fields and harmonic functions. In ICML. volume 3, pages 912–919.