Sentiment Analysis by Joint Learning of Word Embeddings and Classifier

08/14/2017, by Prathusha Kameswara Sarma, et al.

Word embeddings are representations of individual words of a text document in a vector space and they are often useful for performing natural language processing tasks. Current state of the art algorithms for learning word embeddings learn vector representations from large corpora of text documents in an unsupervised fashion. This paper introduces SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via word embeddings. SWESA leverages document label information to learn vector representations of words from a modest corpus of text documents by solving an optimization problem that minimizes a cost function with respect to both word embeddings as well as classification accuracy. Analysis reveals that SWESA provides an efficient way of estimating the dimension of the word embeddings that are to be learned. Experiments on several real world data sets show that SWESA has superior performance when compared to previously suggested approaches to word embeddings and sentiment analysis tasks.


1 Introduction

Representing words in a vector space allows quantification of relationships among words using distance or angle measures. Such vector representations of words are useful for several Natural Language Processing (NLP) tasks. The general idea when learning word embeddings is to estimate the underlying probability distribution of a word from a given corpus of text documents. Most probabilistic models for learning semantic word embeddings, including neural network based models such as the current state of the art word2vec algorithm and its derivatives Mikolov et al. (2013b); Le and Mikolov (2014), are unsupervised and perform well when trained on billions of text documents. Results from the word2vec algorithm show that, in addition to capturing precise syntactic and semantic information, the word embeddings obtained from these algorithms exhibit a linear structure particularly well suited to analogy tasks.

This paper focuses on sentiment analysis for problem domains where obtaining large amounts of data is problematic. A typical example is data obtained from discussion forums that are part of digital health intervention treatments; such treatments have demonstrated effectiveness in substance use disorders Mohr et al. (2013); Moore et al. (2011). Textual data obtained from these discussion forums is rich in sentiments such as determination, pleasure, anger and fear. The goal of the intervention treatment is to prevent relapse in users via timely intervention facilitated by human moderators and machine learning algorithms. Though forum moderators can monitor and provide support when participants are struggling, considerable labor is involved in reviewing each text message and deciding its risk level.

By analyzing textual data for sentiment, efficient algorithms can be developed for predicting relapse. However, this data poses two challenges: (i) the amount of unlabeled data is small, since the number of active users is modest and the number of posts they make in the on-line forum is modest (on the order of a few thousand); (ii) obtaining labels for this data is hard, as human moderator expertise is needed to judge whether a certain post is ‘positive/benign’, implying that the individual is doing fine, or ‘negative/threat’, implying that the individual is vulnerable and likely to relapse soon.

The contributions of this paper are twofold. First, it introduces the Supervised Word Embeddings for Sentiment Analysis (SWESA) algorithm (Section 3), an iterative algorithm that minimizes a cost function over both a classifier and word embeddings under a unit norm constraint on the word vectors. SWESA uses document labels when learning word embeddings; this overcomes the problem of small training sets and allows meaningful word embeddings to be learned. In contrast, state of the art algorithms like word2vec use large amounts of training data and learn word embeddings in an unsupervised fashion.

Second, word embeddings learned via SWESA are polarity aware, as demonstrated via extensive experiments on standard data sets such as IMDB, Yelp and Amazon (Section 4). For example, ‘Awful/Good’ is the antonym pair returned by SWESA, as opposed to ‘Awful/Could’ obtained via word2vec. Such polarity aware word embeddings are well suited to word antonym tasks. In addition, SWESA yields a significant improvement over state-of-the-art word embeddings when used in a sentiment analysis framework.

Section 2 presents related work and Section 5 concludes this work.

2 Related Work

This work is related to two important areas in NLP, each with a vast amount of related literature. Owing to space constraints, this section briefly discusses major contributions from both areas.

Word vector representations: The earliest vector representations of words were via Vector Space Models (VSMs) Turney et al. (2010). A popular example of the VSM is Latent Semantic Indexing (LSI) Deerwester et al. (1990), which learns word embeddings from a matrix of co-occurrence counts such as term frequency-inverse document frequency (tf-idf). Variants of LSI employ different measures of co-occurrence, such as the square root of word counts Rohde et al. (2006), logarithms Dumais (2004), etc. The more recent state of the art consists of neural network based language models that use the weights of the neural network as the internal representation of a word. Neural network models have a rich history, with initial contributions by Rumelhart et al. (1988). Successful modern incarnations of neural network models led to the word2vec algorithm Mikolov et al. (2013a), which uses energy-based techniques, and GloVe, which uses matrix factorization techniques Pennington et al. (2014). The main idea behind word2vec is to learn vector representations of words that maximize the probability of word tuples occurring contiguously in the corpus while minimizing the probability of random tuples. Furthermore, the word2vec paper posits a probabilistic model based on the sum of dot products between a word and the nearby words. This model has successfully produced efficient word vector embeddings that exhibit linear properties desirable for applications such as word analogy tasks.

Latent variable probabilistic models Blei et al. (2003); Blei (2012) and extensions have also been used for word embeddings. All of the above methods learn word embeddings in an unsupervised fashion. However, using labeled data can often help with learning sentiment-aware word embeddings more appropriate to the corpus at hand. Such word embeddings can be used in sentiment analysis tasks.

Sentiment Analysis: Maas et al. (2011) propose a probabilistic model that captures semantic similarities among words across documents. The model leverages document label information to improve word vectors so that they better capture the sentiment of the contexts in which the words occur. The probabilistic model used by Maas et al. (2011) is similar to that of Latent Dirichlet Allocation (LDA) Blei et al. (2003), in which each document is modeled as a mixture of latent topics; in Maas et al. (2011), word probabilities in a document are modeled directly given a topic.

A supervised neural network based model has been proposed by Tang et al. (2014) to classify Twitter data. The proposed algorithm learns sentiment specific word vectors from tweets, making use of emoticons in the text, instead of annotated sentiment labels, to guide the sentiment of words. The Recursive Neural Tensor Network (RNTN) proposed by Socher et al. (2013) classifies the sentiment of text of varying length. To learn sentiment from long text, this model exploits compositionality in text by converting input text into the Sentiment Treebank format with annotated sentiment labels. The Sentiment Treebank is based on a data set introduced by Pang and Lee (2005). This model performs particularly well on longer texts by exploiting compositionality, as opposed to a regular bag-of-features approach.

Notation: Throughout this paper word vectors are denoted w_j ∈ R^d, for j = 1, …, |V|, where |V| indicates the size of the vocabulary. The matrix of word vectors is W ∈ R^{d×|V|}, where the j-th column of W is w_j. The classifier to be learned is represented by θ ∈ R^d, the weights of the word vectors in document i are contained in the vector a_i ∈ R^{|V|}, the label of the i-th document is indicated by y_i ∈ {−1, +1}, and document i is represented as d_i = W a_i. Let A be the matrix whose columns are the weight vectors a_i and let y be the vector containing the document labels.

3 Supervised Word Vectors for Sentiment Analysis

Given a collection of documents with binary sentiment labels, the aim is to learn a classifier that, when given a new, previously unseen document, can accurately estimate the sentiment of the document. There could be class imbalance in the training data, and the algorithm should explicitly account for it. This problem is approached by introducing a new algorithm called SWESA, which simultaneously learns word vector embeddings and a classifier by making use of document polarity/sentiment labels. The representation of documents within SWESA is motivated by the fact that in short texts like “I am sad” and “I am happy”, the polarity of the sentence hinges on the words “sad” and “happy”. As a result, by learning polarity aware word embeddings, good vector representations for documents can be achieved: in the above example, the distance between the vector representations of the two sentences would capture the dissimilarity in their sentiment while at the same time reflecting the similarity in their sentence structure.

Text documents in this framework are represented as a weighted linear combination of the words in a given vocabulary. The weights can be either the term frequencies (tf) of words within each document or term frequency-inverse document frequency (tf-idf) values. The weights provided as input to SWESA for the experiments described in Section 4 are term frequencies. This weighting scheme is chosen to mimic the notion of local context used in the word2vec family of algorithms; global co-occurrence information can be leveraged by using tf-idf to weight words in documents. Such an approach is not entirely unheard of in sentiment analysis tasks, where word embeddings are used as features for a classification algorithm Labutov and Lipson (2013).
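As a concrete illustration of this weighting scheme, the sketch below (plain NumPy; the function names and toy vocabulary are illustrative, not from the paper) builds a term-frequency weight vector a for a tokenized document and forms the document embedding as the weighted linear combination W a of unit-norm word vectors:

```python
import numpy as np

def term_frequencies(tokens, vocab):
    """Raw term-frequency weight vector a for one tokenized document."""
    index = {w: j for j, w in enumerate(vocab)}
    a = np.zeros(len(vocab))
    for tok in tokens:
        if tok in index:
            a[index[tok]] += 1.0
    return a

# toy vocabulary and a random d x |V| matrix of unit-norm word vectors
vocab = ["happy", "sad", "am", "i"]
rng = np.random.default_rng(0)
W = rng.normal(size=(3, len(vocab)))
W /= np.linalg.norm(W, axis=0, keepdims=True)   # unit-norm columns, as in SWESA

a = term_frequencies(["i", "am", "happy"], vocab)
doc_vec = W @ a                                 # document embedding d = W a
```

Replacing the raw counts with tf-idf values would incorporate the global co-occurrence information mentioned above without changing the rest of the pipeline.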

SWESA aims to find vector representations for words, and by extension for text documents, such that applying a nonlinear transformation to the product θ^T W a_i yields a binary label indicating the polarity of the document. Mathematically it is assumed that

P(y_i | d_i) = f(y_i θ^T W a_i)    (1)

for some function f: R → [0, 1]. In order to solve for θ and W, a regularized negative log-likelihood minimization problem under the model (1) is solved, with objective function

L(θ, W) = − Σ_{i=1}^{n} c_{y_i} log f(y_i θ^T W a_i) + (λ/2) ‖θ‖²_2.    (2)

This optimization problem can now be written as

min_{θ, W}  L(θ, W)    (3)
s.t.  ‖w_j‖_2 = 1,  j = 1, …, |V|.

The vector a_i is a vector of weights, corresponding to the different words, for document i. As mentioned previously, for testing SWESA the term frequencies of the different words in a document are used in a_i. λ is the regularization parameter for the classifier θ, c_+ is the cost associated with misclassifying a document from the positive class and c_- is the cost associated with misclassifying a document from the negative class. Following the heuristic suggested by Lin et al. (2002), c_+ and c_- are chosen inversely proportional to n_+ and n_-, where n_+ is the number of positive documents in the corpus and n_- is the number of negative documents. This scheme is particularly useful when dealing with data sets with imbalanced classes. When using a balanced data set, c_+ = c_-. Sentiment in a given document is captured by the document label y_i, which in this framework is a binary label capturing sentiments such as ‘positive/negative’ or ‘threatening/benign’ depending on the data set.
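The class-cost heuristic can be sketched as follows. This is a minimal illustration: only the inverse proportionality to class counts follows Lin et al. (2002); the specific normalization n/2 (which yields c_+ = c_- = 1 on balanced data) is an assumption for the sketch.

```python
def class_costs(labels):
    """Misclassification costs inversely proportional to class counts,
    in the spirit of Lin et al. (2002). The n/2 normalization is an
    illustrative assumption, not taken from the paper."""
    n_pos = sum(1 for y in labels if y == 1)
    n_neg = sum(1 for y in labels if y == -1)
    n = n_pos + n_neg
    return n / (2.0 * n_pos), n / (2.0 * n_neg)
```

On a balanced corpus this returns equal costs; on an imbalanced corpus the minority class receives the larger misclassification cost.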

The unit norm constraint in the optimization problem shown in (3) is enforced on the word embeddings to discourage degenerate solutions of W; in the absence of this constraint, the optimization typically drives the word vectors toward zero. Note that this optimization problem is bi-convex, but it is not jointly convex in the optimization variables. Algorithm 1 shows the alternating minimization procedure used to solve the optimization problem in (3): it initializes the word embedding matrix with W_0 and then alternates between minimizing the objective function w.r.t. the weight vector θ and w.r.t. the word embeddings W.

0:  A, λ, c_+, c_-, d, W_0, Labels: y, Iterations: T
1:  Initialize W ← W_0.
2:  for t = 1, …, T do
3:     Solve θ_t = argmin_θ L(θ, W_{t−1}).
4:     Solve W_t = argmin_{W: ‖w_j‖_2 = 1} L(θ_t, W).
5:  end for
6:  Return θ_T, W_T
Algorithm 1 Supervised Word Embeddings for Sentiment Analysis (SWESA)
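Algorithm 1 can be sketched end-to-end in NumPy. The sketch below is illustrative and not the authors' implementation: plain full-batch gradient steps stand in for the two inner solvers, all hyperparameters are made-up defaults, and the logistic model f(z) = 1/(1 + e^{-z}) discussed in Section 3.1 is assumed.

```python
import numpy as np

def sigmoid(z):
    """Logistic function f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def swesa(A, y, W0, lam=0.1, c_pos=1.0, c_neg=1.0,
          outer_iters=20, inner_iters=50, lr=0.1):
    """Alternating-minimization sketch of Algorithm 1.

    A  : |V| x n matrix whose i-th column is the weight vector a_i
    y  : document labels in {-1, +1}
    W0 : d x |V| initial word-embedding matrix
    """
    W = W0 / np.linalg.norm(W0, axis=0, keepdims=True)  # unit-norm columns
    theta = np.zeros(W.shape[0])
    costs = np.where(y == 1, c_pos, c_neg)              # c_{y_i} per document
    for _ in range(outer_iters):
        # theta-step: gradient descent on the regularized neg. log-likelihood
        for _ in range(inner_iters):
            margins = y * (theta @ (W @ A))
            resid = costs * (1.0 - sigmoid(margins)) * y
            grad_theta = -(W @ A) @ resid + lam * theta
            theta -= lr * grad_theta / len(y)
        # W-step: gradient step on W, then project columns onto the unit sphere
        for _ in range(inner_iters):
            margins = y * (theta @ (W @ A))
            resid = costs * (1.0 - sigmoid(margins)) * y
            W += lr * np.outer(theta, A @ resid) / len(y)
            W /= np.linalg.norm(W, axis=0, keepdims=True)
    return theta, W

# toy corpus: positive documents use word 0, negative documents use word 1
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
y = np.array([1, 1, -1, -1])
theta, W = swesa(A, y, np.eye(2))
preds = np.sign(theta @ (W @ A))
```

On the toy corpus the learned classifier separates the two classes, and the columns of W remain on the unit sphere after every update, as the constraint in (3) requires.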

3.1 Logistic regression model

The optimization problem in (2) assumes a certain probability model and minimizes the negative log-likelihood under norm constraints. While the specific goal of the user might dictate an appropriate choice of probabilistic model, for a large class of classification tasks such as sentiment analysis the logistic regression model is widely used. In this section it is assumed that the probability model of interest is the logistic model, f(z) = 1/(1 + e^{−z}). Under this assumption the minimization problem in Step 3 of Algorithm 1 is a standard logistic regression problem (a bias term can be trivially introduced in the logistic regression model). Many specialized solvers have been devised for this problem; in this implementation of SWESA, a standard off-the-shelf solver available in the scikit-learn package in Python is used. In order to solve the optimization problem in line 4 of Algorithm 1, projected stochastic gradient descent (SGD) with suffix averaging Rakhlin et al. (2011) is used. In suffix averaging the last few iterates obtained during stochastic gradient descent are averaged. Suffix averaging reduces the noise in the iterates and has been shown to achieve nearly optimal rates of convergence for the minimization of strongly convex functions. A fixed suffix parameter is used for the experiments in Section 4.

Gradient updates for W given θ are of the form

W_{k+1} = Π( W_k + η c_{y_i} (1 − f(y_i θ^T W_k a_i)) y_i θ a_i^T ),    (4)

where η is the step size, i indexes the document sampled at iteration k, and Π projects each column of its argument onto the unit sphere.

Algorithm 2 implements projected SGD (with stochastic gradients instead of full gradients) for solving the optimization problem in step 4 of Algorithm 1.

0:  θ, W_0, A, Labels: y, Iterations: N, step size: η, and suffix parameter: s.
1:  Randomly shuffle the dataset.
2:  for k = 1, …, N do
3:     Set c = c_+ if y_i = +1, c = c_- if y_i = −1.
4:     W ← W_{k−1} + η c (1 − f(y_i θ^T W_{k−1} a_i)) y_i θ a_i^T
5:     Normalize each column of W to unit norm.
6:     W_k ← W
7:  end for
8:  Return the suffix average (1/s) Σ_{k=N−s+1}^{N} W_k
Algorithm 2 Stochastic Gradient Descent for W
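The projection-plus-suffix-averaging pattern of Algorithm 2 can be sketched generically as follows. This is illustrative code: `grad_fn` is a hypothetical stand-in for the stochastic gradient of the SWESA objective w.r.t. W, and the toy gradient used in the demo simply pulls a single column toward a target unit vector.

```python
import numpy as np

def projected_sgd_suffix(grad_fn, W0, n_iters=200, step=0.05, suffix=0.5):
    """Projected SGD with suffix averaging (Rakhlin et al., 2011):
    gradient step, renormalize columns to unit norm, then return the
    average of the last `suffix` fraction of iterates."""
    W = W0.copy()
    kept = []
    start = int(n_iters * (1.0 - suffix))
    for k in range(n_iters):
        W = W - step * grad_fn(W)                         # (stochastic) gradient step
        W = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm projection
        if k >= start:
            kept.append(W)          # W is rebound each loop, so this is safe
    return sum(kept) / len(kept)

# toy gradient pulling the single column toward the unit vector V
V = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
W_bar = projected_sgd_suffix(lambda W: W - V, np.array([[1.0], [0.0]]))
```

Averaging only the suffix of the iterate sequence discards the noisy early iterates, which is exactly the noise-reduction property cited above.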

3.2 Initialization of W_0

Two different initialization procedures are used to obtain W_0. The first method uses the Latent Semantic Analysis Dumais (2004) procedure to form the matrix of word vectors from the given corpus of text documents. The second method uses the word2vec algorithm to form the word vector matrix from the corpus.
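The LSA-style initialization can be sketched via a truncated SVD of the term-document matrix. This is a minimal sketch; the exact scaling used by the authors is an assumption (here the left singular vectors are scaled by their singular values before the unit-norm projection required by SWESA).

```python
import numpy as np

def lsa_init(A, d):
    """LSA-style W0: rank-d truncated SVD of the |V| x n term-frequency
    matrix A; scaled left singular vectors give the word embeddings."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    W0 = (U[:, :d] * s[:d]).T                              # d x |V|
    return W0 / np.linalg.norm(W0, axis=0, keepdims=True)  # unit-norm columns

# toy 4-word x 3-document term-frequency matrix
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])
W0 = lsa_init(A, 2)
```

The result already satisfies the unit norm constraint of (3), so Algorithm 1 can start from it directly.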

3.3 Dimensionality of Word Vectors

In most previous literature on learning word embeddings the choice of the embedding dimension d is ad hoc and usually fixed to some small number. In this paper, it is suggested that the spectrum of the weight matrix A be used to determine d. Typically, d is required to be large enough to capture the intricacies in the data but at the same time small enough to avoid overfitting. In order to find the best d, the effective rank of the matrix A is calculated. The effective rank Ganti et al. (2015) of a matrix A is defined as the smallest k such that the best rank-k approximation A_k of the matrix A satisfies

‖A − A_k‖_F ≤ ε ‖A‖_F.    (5)

Here ‖·‖_F indicates the Frobenius norm. This notion of effective rank has the intuitive meaning that the energy in the trailing singular values of the matrix is small relative to the entire spectrum. To demonstrate that such choices of d are good, a simple synthetic experiment is performed. SWESA is run on a synthetic data set of 400 text documents split into 5 pairs of training and testing data sets. A polarized vocabulary of 40 words is built, comprising 15 positive, 15 negative and 10 neutral words. A text document is assigned a negative label if at least 70% of the words in the document are negative; similarly, a text document is labeled positive if at least 70% of the words in the document are positive. This synthetic data set is unbalanced, with 10% positive documents and the rest negative.

Figure 1: This figure shows the effective rank criterion of (5) versus k on the left, and the average precision versus the dimension of the learned word vectors on the right.

The matrix A here is a matrix of term frequencies. Since the data set is relatively noise free, a small value of ε is chosen. As can be seen from Figure 1, the effective rank at this choice of ε is small, and at the corresponding choice of d the average precision is high. This demonstrates that the above definition of effective rank provides a good mechanism for picking good values of d.
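The effective-rank computation in (5) reduces to a scan over squared singular values, since ‖A − A_k‖²_F equals the sum of the squared singular values beyond the k-th. A minimal sketch:

```python
import numpy as np

def effective_rank(A, eps=0.05):
    """Smallest k with ||A - A_k||_F <= eps * ||A||_F (Eq. 5), using the
    identity ||A - A_k||_F^2 = sum of squared singular values beyond k."""
    s = np.linalg.svd(A, compute_uv=False)
    total = float(np.sum(s ** 2))
    tail = total
    for k, sv in enumerate(s, start=1):
        tail -= sv ** 2
        if tail <= (eps ** 2) * total:
            return k
    return len(s)

# a matrix of exact rank 2: its effective rank is 2 for small eps
A = (np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0])
     + np.outer([0.0, 1.0, 0.0], [1.0, 1.0, 0.0]))
k = effective_rank(A, eps=0.05)
```

Larger ε values, appropriate for noisier real data sets, yield smaller effective ranks.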

3.4 Convergence of SWESA and comparison to other algorithms

At a high level, SWESA can be seen as a variation of the supervised dictionary learning (SDL) problem. In SDL Mairal et al. (2009), given labeled data (x_i, y_i) where each x_i lies in a p-dimensional space, the goal is to learn a dictionary D of size p × m such that each x_i ≈ D α_i, where α_i is a sparse encoding of x_i w.r.t. the dictionary D. Further, the label y_i is generated by a linear classifier acting on the code α_i. The learning problem is to estimate the dictionary, the code of each data point and the classifier. SWESA can be roughly mapped to SDL by considering a dictionary of size d × |V|, where each column corresponds to a word embedding.

However, there are three main differences between SDL and SWESA. (i) In SDL the input is a labeled dataset where each data point is already represented as a vector. This allows a definition of reconstruction error that is used in algorithms designed for SDL. In contrast, SWESA has labeled unstructured data without a direct vector representation, and the aim is to learn vector representations for such data. As a result the notion of reconstruction error used in SDL does not apply to SWESA, and hence the optimization formulation is significantly different from the one used in SDL. (ii) In SDL a sparse encoding of each data point is to be learned, whereas in SWESA this encoding is considered known and is proportional to the number of times a word appears in the document. (iii) Finally, in SDL the classifier is a high-dimensional vector that acts on the latent codes. For SDL and other problems such as matrix completion Jain et al. (2013), convergence properties of alternating minimization have been studied. While the current analysis techniques might not apply to SWESA due to the above mentioned differences, we conjecture that similar ideas might be useful for a convergence analysis of SWESA.

Standard methods like Naive Bayes use one-hot encodings for words and hence fail to capture semantic relationships between words. In contrast, SWESA learns word embeddings that capture polarity. Neural network models learn complicated functions of the data, which makes them a poor algorithmic tool in the presence of limited data.

4 Experimental Evaluation and Results

To examine its effectiveness, SWESA is compared against the following baselines:

  1. Naive Bayes classifier:

    The classic Naive Bayes classifier for sentiment classification, based on Bag-of-Words features and implemented in the NLTK toolkit in Python, is used.

  2. Recursive Neural Tensor Network (RNTN): The RNTN proposed by Socher et al. (2013) learns compositionality from text of varying length and performs classification in a supervised fashion with fine grained sentiment labels. Since SWESA is aimed at binary classification, RNTN is also used in a binary classification framework. RNTN has been shown to perform better than the previously proposed Recursive Autoencoder (RAE) of Socher et al. (2011), hence SWESA is not compared against RAE.

  3. Two-Step (TS): This baseline is introduced to test the effectiveness of unsupervised embedding algorithms like LSA and word2vec as feature extractors for document sentiment classification. Two-Step performs sentiment analysis in two steps: (i) learn unigram word embeddings in an unsupervised fashion and use them to obtain document embeddings via a weighted linear combination; (ii) use the obtained document embeddings to learn a logistic regression classifier for sentiment analysis.

SWESA is compared against RNTN and not against Sentiment-Specific Word Embeddings (SSWE), a competing neural network model developed by Tang et al. (2014), for three main reasons: i) the SSWE algorithm was developed specifically for sentiment analysis of Twitter data and uses emoticons in the tweets as sentiment labels, whereas in the data sets considered here emoticons are usually absent; moreover, the structure and language characteristics of Twitter data are unlike the datasets of interest in this work, making SSWE unsuitable Blitzer et al. (2007); ii) the RNTN algorithm can handle texts of varying length, while SSWE is limited to tweets, which are at most 140 characters long; iii) well developed, readily usable code is available for RNTN but not for SSWE.

Experimental Set Up:

SWESA is tested against the baselines on four data sets, some of which are balanced and some of which are unbalanced. Each data set is split into 10 train-test pairs. In the case of the unbalanced data set the ratio of classes is held consistent across training and test pairs. The hyperparameter λ is tuned on the training data via cross validation. Similarly, T, i.e. the number of iterations of SWESA until convergence, is determined by running the experiment on the training data for a range of values of T; the value of T beyond which there is no significant change, i.e. the difference between consecutive values of the objective function becomes negligible, is selected. Since real data sets are noisier than the synthetic data set used in Section 3.3, a larger value of ε is selected. Average Area Under the Curve (AUC) and precision scores over all 10 test sets are reported. Precision (pr) is calculated as the ratio of the number of true positives (tp) to the number of true positives plus false positives (fp), i.e. pr = tp/(tp + fp). AUC is obtained by applying the trapezoidal rule to the ROC curve. All data sets used in Section 4.1 are tokenized and non-textual characters are removed. Since the data sets are small, unlike in Le and Mikolov (2014) all words tokenized from the data sets are retained in the vocabulary. word2vec is trained using hyperparameters similar to the default values in Le and Mikolov (2014). Similarly, the default hyperparameters reported by Socher et al. (2013) are used for training the RNTN.
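The two evaluation metrics can be sketched directly from their definitions above (precision as tp/(tp + fp); AUC via the trapezoidal rule on ROC points):

```python
def precision(tp, fp):
    """pr = tp / (tp + fp)."""
    return tp / (tp + fp)

def auc_trapezoid(fpr, tpr):
    """Area under the ROC curve via the trapezoidal rule; the points
    (fpr[i], tpr[i]) must be sorted by increasing false-positive rate."""
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2.0
    return area
```

For example, a perfect classifier traces the ROC points (0, 0), (0, 1), (1, 1) and obtains an AUC of 1.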

4.1 Results on sentiment analysis task.

Figures 1(a) and 1(b) show the average precision and average AUC scores, respectively, of all baselines and SWESA on four data sets. (AUC scores are not available for RNTN since it is not possible to determine prediction probabilities from this model.) The three balanced data sets (Yelp, Amazon and IMDB) consist of 1000 reviews each, of food, products and movies respectively, with each review labeled ‘Positive’ or ‘Negative’; these data sets are available for download from the UCI repository M.Lichman (2013). The CHESS data set consists of 2500 documents obtained from a mobile phone based intervention treatment that provides services for recovery maintenance and relapse prediction for alcohol addicts Gustafson et al. (2014). This is an unbalanced data set in which documents suggestive of relapse in a user are far outnumbered by messages from users discussing their sobriety. Each message is labeled ‘threat’, suggesting a relapse risk, or ‘not threat’, indicating well-being. This data set is proprietary to the study conducted by Gustafson et al. (2014).

(a) This figure shows the average precision scores obtained by baselines and SWESA on all four data sets. Each error bar represents the average precision score obtained by running all algorithms on 10 testing sets.
(b) This figure shows the average AUC scores obtained by baselines and SWESA on all four data sets. Each error bar represents the average AUC score obtained by running all algorithms on 10 testing sets.

Both the neural network based baselines, RNTN (sentiment classification) and the word2vec (word embeddings) based Two-Step baseline, perform weakly compared to SWESA and the other baselines. This observation is consistent with the behavior of neural network based algorithms on small data sets. Despite being pre-trained on the Wikipedia corpus, the word2vec derived embeddings used in Two-Step fail to perform as well as SWESA on all four data sets, achieving a maximum precision of 0.7109 on the Amazon data set. On the same data set the two different initializations of SWESA achieve precision scores of 0.8036 and 0.8031. The failure of Two-Step with word2vec can be attributed to the lack of supervision during training and to the disparity between training and test data.

SWESA learns meaningful embeddings from text, as opposed to methods like Naive Bayes where word frequencies are used to obtain one-hot encodings for documents; hence embeddings learned via SWESA are better suited for sentiment analysis. This can be seen from the average precision and AUC of 0.7254 and 0.6116 achieved by NB on the Amazon data set, as opposed to the average precision and AUC of 0.8031 and 0.8754 achieved by SWESA with the LSA initialization. As seen from Figures 1(a) and 1(b), this behavior is consistent across the other balanced data sets and the CHESS data set with imbalanced classes. To highlight the qualitative performance of SWESA, cosine similarities between document representations from SWESA and from Two-Step are evaluated. The top three reviews from Two-Step and SWESA most similar to the sample review

“First off the reception sucks, I have never had more than 2 bars, ever.” are:

  1. SWESA

    • “The worst phone I’ve ever had…. Only had it for a few months.”

    • “I recently had problems where I could not stay connected for more than 10 minutes before being disconnected.”

    • “Then I exchanged for the same phone, even that had the same problem.”

  2. Two-Step

    • “But it does get better reception and clarity than any phone I’ve had before.”

    • “none of the new ones have ever quite worked properly ”

    • “In the span of an hour, I had two people exclaim “Whoa - is that the new phone on TV?!?””

A similar analysis is performed, holds consistently across the other three data sets, and is available in the supplemental material. This shows that SWESA propagates document level polarity onto word embeddings, which helps in sentiment analysis.

Failure of pre-trained RNTN: Neural network based RNTNs work well when trained on large data sets. Socher et al. (2013) train RNTN on the Pang and Lee data set Pang and Lee (2005) of 10k movie reviews. Table 1 shows the average precision obtained by pre-trained RNTN on all data sets. Note that the difference in average precision scores between pre-trained RNTN and SWESA is considerably small given that pre-trained RNTN is trained on a dataset many times the size of the training data for SWESA. This observation is best illustrated on the Amazon dataset, where the average precision of pre-trained RNTN (Table 1) is close to that of SWESA reported above.

Data set	Average Precision	STD
Amazon	0.8284	0.0067
IMDB	0.8388	0.0070
Yelp	0.8331	0.0111
Table 1: This table shows the average precision (with standard deviations) obtained by pre-trained RNTN on three balanced data sets.

However, pre-trained RNTN does particularly poorly on the CHESS data set. While it is known that differences in language structure and vocabulary between training and test data introduce some error Blitzer et al. (2007), pre-trained RNTN fails to classify most messages in the CHESS data set. It also does a poor job of accounting for class imbalance in the CHESS data set, because of which precision scores on the messages that do get classified are extremely low.

Polarity of word embeddings. The objective of SWESA is to perform effective sentiment analysis by learning embeddings from text documents with sentiment labels; as a consequence, word level polarity is preserved in the vector space. That is, given the words ‘Good’, ‘Fair’ and ‘Awful’, the antonym pair ‘Good/Awful’ is determined by calculating cosine similarities between the corresponding word vectors. Figure 2 shows a small sample of word embeddings learned on the Amazon data set by SWESA and word2vec. The cosine similarity (angle) between the most dissimilar words is calculated and, owing to the unit norm assumption on word embeddings, words are depicted as points on the unit circle. From Figure 2 it is evident that a supervised algorithm like SWESA projects document level polarity onto word level embeddings, while an unsupervised algorithm like word2vec, which learns embeddings by virtue of word co-occurrences, fails to embed polarity. It is important to note that SWESA learns word polarities by using document polarities, and these word polarities are useful for antonym tasks. Unlike classical antonym tasks, where examples of known antonym pairs are provided, in this setup no such pairs are provided, and yet SWESA does a good job of discovering antonym pairs. For example, the most dissimilar word to ‘Excellent’ is ‘Poor’ when learned via SWESA, as opposed to ‘Work’ when learned via word2vec. Thus word antonym pairs can be obtained by calculating cosine similarities. These examples illustrate that SWESA captures sentiment polarity at the word embedding level despite limited data.
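The antonym-candidate query described above amounts to a minimum-cosine-similarity search over the unit-norm columns of W. A minimal sketch (the toy vocabulary and embeddings are illustrative, not learned values):

```python
import numpy as np

def most_dissimilar(word, W, vocab):
    """Antonym candidate: the vocabulary word whose embedding has the
    lowest cosine similarity with `word`. Columns of W are unit norm,
    so cosine similarity reduces to a dot product."""
    j = vocab.index(word)
    sims = W.T @ W[:, j]       # cosine similarity with every word
    sims[j] = np.inf           # exclude the query word itself
    return vocab[int(np.argmin(sims))]

# toy unit-norm embeddings: 'good' and 'awful' point in opposite directions
vocab = ["good", "awful", "fair"]
W = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
antonym = most_dissimilar("good", W, vocab)
```

With polarity aware embeddings of the kind SWESA learns, this query surfaces sentiment opposites rather than merely rare co-occurrences.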

Figure 2: This figure depicts word embeddings on a unit circle. Most dissimilar word pairs are plotted based on the cosine angle between the respective word embeddings learned via SWESA and word2vec.

5 Conclusions and Future work

This paper introduces SWESA, a novel iterative algorithm that simultaneously learns polarity aware word embeddings and a classifier to perform sentiment analysis in a supervised learning framework. SWESA overcomes the limitations that small data sets pose for neural network based learning algorithms. The unit norm assumption on word embeddings within SWESA preserves structural properties of the kind typically obtained via neural network based embedding algorithms. As future work, it is proposed that the geometric interpretation of semantic relationships between word embeddings be used to mine additional semantic relationships between words and concepts in data.
