Compositional Distributional Semantics with Long Short Term Memory

03/09/2015 ∙ by Phong Le, et al. ∙ University of Amsterdam

We propose an extension of the recursive neural network that makes use of a variant of the long short-term memory (LSTM) architecture. The extension allows information low in a parse tree to be stored in a memory register (the `memory cell') and used much later, higher up in the parse tree. This provides a solution to the vanishing gradient problem and allows the network to capture long-range dependencies. Experimental results show that our composition outperformed the traditional neural-network composition on the Stanford Sentiment Treebank.


1 Introduction

Moving from lexical to compositional semantics in vector-based semantics requires answers to two difficult questions: (i) what is the nature of the composition functions (given that the lambda calculus for variable binding is no longer applicable), and (ii) how do we learn the parameters of those functions (if they have any) from data? A number of classes of functions have been proposed in answer to the first question, including simple linear functions like vector addition [Mitchell and Lapata2009], non-linear functions like those defined by multi-layer neural networks [Socher et al.2010], and vector-matrix multiplication and tensor linear mapping [Baroni et al.2013]. The matrix- and tensor-based functions have the advantage of allowing a relatively straightforward comparison with formal semantics, but the fact that multi-layer neural networks with non-linear activation functions like sigmoid can approximate any continuous function [Cybenko1989] already makes them an attractive choice.

In trying to answer the second question, the advantages of approaches based on neural network architectures, such as the recursive neural network (RNN) model [Socher et al.2013b] and the convolutional neural network model [Kalchbrenner et al.2014], are even clearer. Models in this paradigm can take advantage of general learning procedures based on back-propagation, and, with the rise of `deep learning', of a variety of efficient algorithms and tricks to further improve training.

Since the first success of the RNN model [Socher et al.2011b] in constituent parsing, two classes of extensions have been proposed. One class enhances its compositionality by using tensor products [Socher et al.2013b] or by concatenating RNNs horizontally to make a deeper net [Irsoy and Cardie2014]. The other extends its topology in order to fulfill a wider range of tasks, as in [Le and Zuidema2014a] for dependency parsing and [Paulus et al.2014] for context-dependent sentiment analysis.

Our proposal in this paper is an extension of the RNN model to improve compositionality. Our motivation is that, as with training recurrent neural networks, training RNNs on deep trees can suffer from the vanishing gradient problem [Hochreiter et al.2001], i.e., errors propagated back to the leaf nodes shrink exponentially. In addition, information sent from a leaf node to the root can be obscured if the path between them is long, which raises the problem of how to capture long-range dependencies. We therefore borrow the long short-term memory (LSTM) architecture [Hochreiter and Schmidhuber1997] from recurrent neural network research to tackle these two problems. The main idea is to allow information low in a parse tree to be stored in a memory cell and used much later, higher up in the parse tree, by recursively accumulating the children's (gated) memories into the parent's memory cell in a bottom-up manner. In this way, errors propagated back through structure do not vanish, and information from leaf nodes is still (loosely) preserved and can be used directly at any higher node in the hierarchy. We then apply this composition to sentiment analysis. Experimental results show that the new composition works better than the traditional neural-network-based composition.

The outline of the rest of the paper is as follows. We first, in Section 2, give a brief background on neural networks, including the multi-layer neural network, recursive neural network, recurrent neural network, and LSTM. We then propose the LSTM for recursive neural networks in Section 3, and its application to sentiment analysis in Section 4. Section 5 shows our experiments.

2 Background

2.1 Multi-layer Neural Network

Figure 1: Multi-layer neural network (left) and recursive neural network (right). Bias vectors are omitted for simplicity.

Figure 2: Activation functions: sigmoid, tanh, and softsign.

In a multi-layer neural network (MLN), neurons are organized in layers (see Figure 1-left). A neuron in layer l receives signals from neurons in layer l-1 and transmits its output to neurons in layer l+1. (This is a simplified definition; in practice, any lower layer can connect to layer l.) The computation is given by

y^(l) = g(W^(l) y^(l-1) + b^(l))

where the real vector y^(l) contains the activations of the neurons in layer l; W^(l) is the matrix of weights of connections from layer l-1 to layer l; b^(l) is the vector of biases of the neurons in layer l; and g is an activation function, e.g. sigmoid, tanh, or softsign (see Figure 2).

For classification tasks, we put a softmax layer on top of the network, and compute the probability of assigning a class c to an input x by

Pr(c | x) = softmax(c, u) = e^{u_c} / Σ_{c' ∈ C} e^{u_{c'}}     (1)

where u = W y^(top) + b; C is the set of all possible classes; and W, b are a weight matrix and a bias vector.

Training an MLN means minimizing an objective function J(θ), where θ is the parameter set (for classification, J(θ) is often a negative log likelihood). Thanks to the back-propagation algorithm [Rumelhart et al.1988], the gradient ∂J/∂θ is computed efficiently; the gradient descent method is thus used to minimize J(θ).
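As an illustration, the layer computation and softmax classifier above can be sketched in plain Python; the network shape and all weight values below are arbitrary toy choices, not the ones used in the paper:

```python
import math

def matvec(W, x):
    # matrix-vector product; W is a list of rows
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sigmoid_vec(v):
    return [1.0 / (1.0 + math.exp(-u)) for u in v]

def layer(W, b, x, g=sigmoid_vec):
    # y = g(W x + b): the computation of one layer of the MLN
    return g([wx + bi for wx, bi in zip(matvec(W, x), b)])

def softmax(v):
    m = max(v)                          # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in v]
    z = sum(exps)
    return [e / z for e in exps]

# toy network: 3 inputs -> 2 hidden units -> 2 classes
W1, b1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]
W2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]

hidden = layer(W1, b1, [1.0, 2.0, 3.0])
probs = softmax([wx + bi for wx, bi in zip(matvec(W2, hidden), b2)])
```

The softmax output is a proper probability distribution over the classes, which is what Equation 1 requires.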

2.2 Recursive Neural Network

A recursive neural network (RNN) [Goller and Küchler1996] is an MLN where, given a tree structure, the same weight matrices are applied recursively at each inner node in a bottom-up manner. In order to see how an RNN works, consider the following example. Assume that there is a constituent with parse tree (p2 (p1 x y) z) (Figure 1-right), and that x, y, z are the vectorial representations of the three words x, y and z, respectively. We use a neural network which consists of a weight matrix W1 for left children and a weight matrix W2 for right children to compute the vector for a parent node in a bottom-up manner. Thus, we compute p1 by

p1 = g(W1 x + W2 y + b)     (2)

where b is a bias vector and g is an activation function. Having computed p1, we can then move one level up in the hierarchy and compute p2:

p2 = g(W1 p1 + W2 z + b)     (3)

This process is continued until we reach the root node.
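This bottom-up composition can be sketched in a few lines of plain Python; the 2-D word vectors and weight values are arbitrary toy choices for illustration:

```python
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def compose(W1, W2, b, left, right):
    # parent = g(W1*left + W2*right + b): the RNN composition function, g = tanh
    pre = [l + r + bi for l, r, bi in zip(matvec(W1, left), matvec(W2, right), b)]
    return [math.tanh(u) for u in pre]

# toy 2-D representations for the words x, y, z
x, y, z = [0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]
W1 = [[0.5, 0.0], [0.0, 0.5]]   # weights for left children
W2 = [[0.2, 0.1], [0.1, 0.2]]   # weights for right children
b = [0.0, 0.0]

p1 = compose(W1, W2, b, x, y)   # combine x and y, as in Equation 2
p2 = compose(W1, W2, b, p1, z)  # combine p1 and z at the root, as in Equation 3
```

Note that the same W1, W2 and b are reused at every inner node, which is what makes the network recursive.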

As with an MLN, training an RNN uses the gradient descent method to minimize an objective function J(θ). The gradient ∂J/∂θ is computed efficiently thanks to the back-propagation through structure algorithm [Goller and Küchler1996].

The RNN model and its extensions have been employed successfully to solve a wide range of problems: from parsing (constituent parsing [Socher et al.2013a], dependency parsing [Le and Zuidema2014a]) to classification (e.g. sentiment analysis [Socher et al.2013b, Irsoy and Cardie2014], paraphrase detection [Socher et al.2011a], semantic role labelling [Le and Zuidema2014b]).

2.3 Recurrent Networks and Long Short-Term Memory

Figure 3: Simple recurrent neural network (left) and long short-term memory (right). Bias vectors are omitted for simplicity.

A neural network is recurrent if it has at least one directed ring in its structure. In the natural language processing field, the simple recurrent neural network (SRN) proposed by Elman [Elman1990] (see Figure 3-left) and its extensions are used to tackle sequence-related problems, such as machine translation [Sutskever et al.2014] and language modelling [Mikolov et al.2010].

In an SRN, an input x_t is fed to the network at each time t. The hidden layer, whose activation immediately before x_t comes in is h_{t-1}, plays the role of a memory store capturing the whole history (x_1, ..., x_{t-1}). When x_t comes in, the hidden layer updates its activation by

h_t = g(W_x x_t + W_h h_{t-1} + b)

where W_x, W_h are weight matrices, b is a bias vector, and g is an activation function.
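The SRN update can be sketched as follows (toy 2-D dimensions and arbitrary illustrative weights, with tanh as the activation):

```python
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def srn_step(Wx, Wh, b, x_t, h_prev):
    # h_t = g(Wx x_t + Wh h_{t-1} + b)
    pre = [a + c + bi for a, c, bi in zip(matvec(Wx, x_t), matvec(Wh, h_prev), b)]
    return [math.tanh(u) for u in pre]

Wx = [[0.4, -0.3], [0.1, 0.7]]
Wh = [[0.5, 0.0], [0.0, 0.5]]
b = [0.0, 0.0]

h = [0.0, 0.0]                       # initial hidden state
for x_t in [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]:
    h = srn_step(Wx, Wh, b, x_t, h)  # the hidden layer carries the history forward
```

The same step function is applied at every time step; the only state carried across steps is the hidden activation h.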

This network model can thus, in theory, be used to estimate probabilities conditioned on long histories, and computing gradients is efficient thanks to the back-propagation through time algorithm [Werbos1990]. In practice, however, training recurrent neural networks with the gradient descent method is challenging because the gradients ∂J_t/∂θ (where J_t is the objective function at time t) vanish quickly after a few back-propagation steps [Hochreiter et al.2001]. In addition, it is difficult to capture long-range dependencies, i.e. cases in which the output at time t depends on inputs that occurred very long ago. One solution, proposed by Hochreiter and Schmidhuber [Hochreiter and Schmidhuber1997] and enhanced by Gers [Gers2001], is long short-term memory (LSTM).

Long Short-Term Memory

The main idea of the LSTM architecture is to maintain a memory of all inputs the hidden layer received over time, by adding up all (gated) inputs to the hidden layer through time into a memory cell. In this way, errors propagated back through time do not vanish, and even inputs received a very long time ago are still (approximately) preserved and can play a role in computing the output of the network (see the illustration in [Graves2012, Chapter 4]).

An LSTM cell (see Figure 3-right) consists of a memory cell c_t, an input gate i_t, a forget gate f_t, and an output gate o_t. The computations occurring in this cell are given below:

i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)
f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g(W_{xc} x_t + W_{hc} h_{t-1} + b_c)
o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o)
h_t = o_t ⊙ g(c_t)

where σ is the sigmoid function; i_t, f_t, o_t are the outputs (i.e. activations) of the corresponding gates; c_t is the state of the memory cell; ⊙ denotes the element-wise multiplication operator; and the W's and b's are weight matrices and bias vectors.

Because the sigmoid function has output range (0, 1) (see Figure 2), the activations of these gates can be seen as normalized weights. Intuitively, the network can therefore learn to use the input gate to decide when to memorize information, and similarly learn to use the output gate to decide when to access that memory. The forget gate, finally, is used to reset the memory.
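A minimal plain-Python sketch of one LSTM cell update follows. The gating scheme here is the standard one (input, forget, and output gates with memory-cell connections into the gates); the toy 1-D dimensions and all weight values are illustrative only:

```python
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sig(v): return [1.0 / (1.0 + math.exp(-u)) for u in v]
def tanh_v(v): return [math.tanh(u) for u in v]
def add(*vs): return [sum(t) for t in zip(*vs)]
def mul(a, b): return [u * w for u, w in zip(a, b)]

def lstm_step(p, x_t, h_prev, c_prev):
    # p is a dict holding the weight matrices and bias vectors
    i = sig(add(matvec(p['Wxi'], x_t), matvec(p['Whi'], h_prev),
                matvec(p['Wci'], c_prev), p['bi']))   # input gate
    f = sig(add(matvec(p['Wxf'], x_t), matvec(p['Whf'], h_prev),
                matvec(p['Wcf'], c_prev), p['bf']))   # forget gate
    z = tanh_v(add(matvec(p['Wxc'], x_t), matvec(p['Whc'], h_prev), p['bc']))
    c = add(mul(f, c_prev), mul(i, z))                # memory accumulates gated input
    o = sig(add(matvec(p['Wxo'], x_t), matvec(p['Who'], h_prev),
                matvec(p['Wco'], c), p['bo']))        # output gate
    h = mul(o, tanh_v(c))                             # gated output
    return h, c

# toy 1-D cell: every weight matrix is 1x1
p = {k: [[0.5]] for k in ['Wxi', 'Whi', 'Wci', 'Wxf', 'Whf', 'Wcf',
                          'Wxc', 'Whc', 'Wxo', 'Who', 'Wco']}
p.update({'bi': [0.0], 'bf': [0.0], 'bc': [0.0], 'bo': [0.0]})

h, c = [0.0], [0.0]
for x_t in ([1.0], [1.0], [1.0]):
    h, c = lstm_step(p, x_t, h, c)
```

Because the memory cell c is updated additively rather than by repeated squashing, information from early inputs survives many steps.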

3 Long Short-Term Memory in RNNs

Figure 4: Long short-term memory for recursive neural network.

In this section, we propose an extension of the LSTM for the RNN model (see Figure 4). A key feature of the RNN is to hierarchically combine information from two children to compute the parent vector; the idea in this section is to extend the LSTM such that not only the output from each of the children is used, but also the contents of their memory cells. This way, the network has the option to store information when processing constituents low in the parse tree, and make it available later on when it is processing constituents high in the parse tree.

For simplicity (extending our LSTM to n-ary trees is trivial), we assume that the parent node p has two children, whose outputs are x and y and whose memory cell states are c_x and c_y. The LSTM at p thus has two input gates i_1, i_2 and two forget gates f_1, f_2, one of each for each child. The computations occurring in this LSTM are:

i_j = σ(W_{i_j x} x + W_{i_j y} y + W_{i_j c} c_j + b_{i_j})    for j = 1, 2
f_j = σ(W_{f_j x} x + W_{f_j y} y + W_{f_j c} c_j + b_{f_j})    for j = 1, 2
c_p = f_1 ⊙ c_x + f_2 ⊙ c_y + g(W_{cx}(i_1 ⊙ x) + W_{cy}(i_2 ⊙ y) + b_c)
o = σ(W_{ox} x + W_{oy} y + W_{oc} c_p + b_o)
x_p = o ⊙ g(c_p)

where c_1 = c_x and c_2 = c_y; x_p and c_p are the output and the state of the memory cell at node p; i_1, i_2, f_1, f_2, o are the activations of the corresponding gates; the W's and b's are weight matrices and bias vectors; and g is an activation function.

Intuitively, the input gate i_j lets the LSTM at the parent node decide how important the output of the j-th child is. If it is important, the input gate will have an activation close to 1. Moreover, the LSTM controls, using the forget gate f_j, the degree to which information from the memory of the j-th child should be added to its own memory.

Using one input gate and one forget gate per child makes the LSTM flexible in storing memory and computing composition. For instance, in a complex sentence containing a main clause and a dependent clause, it could be beneficial if only information about the main clause is passed on to higher levels. This can be achieved by having low values for the input gate and the forget gate of the child node that covers the dependent clause, and high values for the gates corresponding to the child node covering (a part of) the main clause. More interestingly, this LSTM can even allow a child to contribute to the composition by activating the corresponding input gate, while ignoring the child's memory by deactivating the corresponding forget gate. This happens when the information given by the child is only temporarily important.
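The per-child gating described above can be sketched as follows. This is one plausible rendering of the scheme (two input gates, two forget gates, additive memory, one output gate); the matrix names, toy 1-D dimensions, and weight values are illustrative assumptions, not the paper's trained parameters:

```python
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sig(v): return [1.0 / (1.0 + math.exp(-u)) for u in v]
def tanh_v(v): return [math.tanh(u) for u in v]
def add(*vs): return [sum(t) for t in zip(*vs)]
def mul(a, b): return [u * w for u, w in zip(a, b)]

def lstm_compose(p, x, cx, y, cy):
    # one input gate and one forget gate per child
    i1 = sig(add(matvec(p['Wi1x'], x), matvec(p['Wi1y'], y), matvec(p['Wi1c'], cx), p['bi1']))
    i2 = sig(add(matvec(p['Wi2x'], x), matvec(p['Wi2y'], y), matvec(p['Wi2c'], cy), p['bi2']))
    f1 = sig(add(matvec(p['Wf1x'], x), matvec(p['Wf1y'], y), matvec(p['Wf1c'], cx), p['bf1']))
    f2 = sig(add(matvec(p['Wf2x'], x), matvec(p['Wf2y'], y), matvec(p['Wf2c'], cy), p['bf2']))
    # memory: the children's gated memories plus the children's gated outputs
    cp = add(mul(f1, cx), mul(f2, cy),
             tanh_v(add(matvec(p['Wcx'], mul(i1, x)),
                        matvec(p['Wcy'], mul(i2, y)), p['bc'])))
    o = sig(add(matvec(p['Wox'], x), matvec(p['Woy'], y), matvec(p['Woc'], cp), p['bo']))
    xp = mul(o, tanh_v(cp))
    return xp, cp

# toy 1-D example; leaf memories are 0 (see Section 4)
p = {k: [[0.5]] for k in ['Wi1x', 'Wi1y', 'Wi1c', 'Wi2x', 'Wi2y', 'Wi2c',
                          'Wf1x', 'Wf1y', 'Wf1c', 'Wf2x', 'Wf2y', 'Wf2c',
                          'Wcx', 'Wcy', 'Wox', 'Woy', 'Woc']}
p.update({'bi1': [0.0], 'bi2': [0.0], 'bf1': [0.0], 'bf2': [0.0],
          'bc': [0.0], 'bo': [0.0]})

x, y, z = [0.1], [0.3], [-0.2]
p1, c1 = lstm_compose(p, x, [0.0], y, [0.0])   # combine x and y
p2, c2 = lstm_compose(p, p1, c1, z, [0.0])     # combine p1 and z at the root
```

Setting f1 near 0 and i1 near 1 for a child reproduces the "use the output but discard the memory" behaviour described above.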

4 LSTM-RNN Model for Sentiment Analysis

(The LSTM architecture had already been applied to the sentiment analysis task, for instance in the model proposed at http://deeplearning.net/tutorial/lstm.html. Independently from and concurrently with our work, [Tai et al.2015] and [Zhu et al.2015] have developed very similar models applying LSTM to RNNs.)

Figure 5: The RNN model (left) and LSTM-RNN model (right) for sentiment analysis.

In this section, we introduce a model using the proposed LSTM for sentiment analysis. Our model, named LSTM-RNN, is an extension of the traditional RNN model (see Section 2.2) in which the traditional composition functions in Equations 2 and 3 are replaced by our proposed LSTM (see Figure 5). On top of each node covering a phrase or word, if its sentiment class (e.g. positive, negative, or neutral) is available, we put a softmax layer (see Equation 1) to compute the probability of assigning that class to it.

The vector representations of words (i.e. word embeddings) can be initialized randomly or pre-trained. The memory cell of any leaf node, i.e. any word, is set to 0.

Similarly to [Irsoy and Cardie2014], we `untie' leaf nodes and inner nodes: we use one set of weight matrices for leaf nodes and another set for inner nodes. Hence, letting d and D respectively be the dimensions of the word embeddings (leaf nodes) and of the vector representations of phrases (inner nodes), all weight matrices from a leaf node to an inner node have size D × d, and all weight matrices from an inner node to another inner node have size D × D.

Training

Training this model means minimizing the following objective function, which is the cross-entropy over the training set T plus an L2-norm regularization term:

J(θ) = −(1/|T|) Σ_{(p,c) ∈ T} log Pr(c | p) + (λ/2) ||θ||²

where θ is the parameter set, c is the sentiment class of phrase p, p is the vector representation at the node covering the phrase, Pr(c | p) is computed by the softmax function, and λ is the regularization parameter. As when training an RNN, we use the mini-batch gradient descent method to minimize J(θ), where the gradient ∂J/∂θ is computed efficiently thanks to back-propagation through structure [Goller and Küchler1996]. We use the AdaGrad method [Duchi et al.2011] to automatically update the learning rate for each parameter.
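The AdaGrad update accumulates each parameter's squared gradients and divides the global learning rate by their square root, so frequently-updated parameters get smaller steps. A sketch (the learning rate here is an arbitrary illustrative value, not the tuned one):

```python
import math

def adagrad_update(theta, grad, hist, eta=0.1, eps=1e-8):
    # each parameter k gets its own effective learning rate:
    # eta / sqrt(sum of its past squared gradients)
    for k in range(len(theta)):
        hist[k] += grad[k] ** 2
        theta[k] -= eta * grad[k] / (math.sqrt(hist[k]) + eps)
    return theta, hist

theta = [0.5, -0.3]
hist = [0.0, 0.0]
# two toy gradient steps; a repeated large gradient is damped over time
theta, hist = adagrad_update(theta, [1.0, 0.1], hist)
theta, hist = adagrad_update(theta, [1.0, 0.1], hist)
```

In the full model, theta would range over all weight matrices and bias vectors, with the gradients supplied by back-propagation through structure.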

4.1 Complexity

We analyse the complexities of the RNN and LSTM-RNN models in the forward phase, i.e. computing the vector representations for inner nodes and the classification probabilities. The complexities in the backward phase, i.e. computing the gradients ∂J/∂θ, can be analysed similarly.

The complexities of the two models are dominated by the matrix-vector multiplications that are carried out. Since the number of sentiment classes is very small (5 or 2 in our experiments) compared to d and D, we only consider the matrix-vector multiplications used for computing vector representations at the inner nodes.

For a sentence consisting of n words, assuming that its parse tree is binarized without any unary branches (as in the data set we use in our experiments), there are n−1 inner nodes, n links from leaf nodes to inner nodes, and n−2 links from inner nodes to other inner nodes. Since each link corresponds to one matrix-vector multiplication, the complexity of the RNN in the forward phase is thus approximately

n × D × d + (n−2) × D × D

The LSTM-RNN performs 17 matrix-vector multiplications per composition (three for each of the five gates, plus two for the memory-cell candidate) instead of 2, so its complexity is approximately 17/2 times that of the RNN: about 8.5 times higher.
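The link counts above can be turned into a small sanity check. The count of 17 matrix-vector products per LSTM composition (versus 2 for the plain RNN) is an assumption consistent with the ~8.5× figure, derived from five gates at three products each plus two for the candidate:

```python
def rnn_products(n):
    # one matrix-vector product per link in the binarized parse tree
    leaf_links = n          # leaf -> inner
    inner_links = n - 2     # inner -> inner
    return leaf_links + inner_links

def lstm_rnn_products(n, per_composition=17):
    # assumed count: 3 products for each of the 5 gates + 2 for the candidate,
    # at each of the n - 1 inner nodes (the plain RNN uses 2 per inner node)
    return (n - 1) * per_composition

n = 19  # roughly the average sentence length in the treebank
ratio = lstm_rnn_products(n) / rnn_products(n)
```

Since rnn_products(n) = 2(n−1), the ratio is 17/2 = 8.5 regardless of sentence length.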

In our experiments, this difference is not a problem because training and evaluating the LSTM-RNN model is very fast: it took us, on a single core of a modern computer, about 10 minutes to train the model on 8544 sentences, and about 2 seconds to evaluate it on 2210 sentences.

5 Experiments

5.1 Dataset

We used the Stanford Sentiment Treebank (http://nlp.stanford.edu/sentiment/treebank.html) [Socher et al.2013b], which consists of 5-way fine-grained sentiment labels (very negative, negative, neutral, positive, very positive) for 215,154 phrases of 11,855 sentences. The standard split is also given: 8544 sentences for training, 1101 for development, and 2210 for testing. The average sentence length is 19.1.

In addition, the treebank also supports binary sentiment (positive, negative) classification by removing neutral labels, leading to: 6920 sentences for training, 872 for development, and 1821 for testing.

The evaluation metric is accuracy, given by 100 × #correct / #total.
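In code, this metric is simply:

```python
def accuracy(predicted, gold):
    # percentage of items whose predicted sentiment class matches the gold label
    correct = sum(1 for p, g in zip(predicted, gold) if p == g)
    return 100.0 * correct / len(gold)

acc = accuracy([1, 0, 2, 2], [1, 0, 1, 2])  # 3 of 4 correct -> 75.0
```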

5.2 LSTM-RNN vs. RNN

Setting

We initialized the word vectors with the 100-D GloVe word embeddings (http://nlp.stanford.edu/projects/GloVe/) [Pennington et al.2014], which were trained on a 6B-word corpus. The initial values of a weight matrix were uniformly sampled from a symmetric interval around zero whose width depends on the total number of input units.

For each model (RNN and LSTM-RNN), we tested three activation functions: sigmoid, tanh, and softsign, leading to six sub-models. Tuning those sub-models on the development set, we chose the dimension of the vector representations at inner nodes, the learning rate, the regularization parameter, and a mini-batch size of 5.

On each task, we ran each sub-model 10 times. Each time, we trained the sub-model for 20 epochs and selected the network achieving the highest accuracy on the development set.

Figure 6: Boxplots of accuracies of 10 runs of RNN and LSTM-RNN on the test set in the fine-grained classification task. (LSTM stands for LSTM-RNN.)
Figure 7: Boxplot of accuracies of 10 runs of RNN and LSTM-RNN on the test set in the binary classification task. (LSTM stands for LSTM-RNN.)

Results

Figures 6 and 7 show the statistics of the accuracies of the final networks on the test set in the fine-grained classification task and the binary classification task, respectively.

It can be seen that LSTM-RNN outperformed RNN when using the tanh or softsign activation functions. With the sigmoid activation function, the difference is less clear, but LSTM-RNN appears to have performed slightly better. Tanh-LSTM-RNN and softsign-LSTM-RNN achieved the highest median accuracies (48.1 in the fine-grained classification task and 86.4 in the binary classification task, respectively).

With the RNN model, it is surprising to see that the sigmoid function performed well, comparably with the other two functions in the fine-grained task, and even better than the softsign function in the binary task, given that it has not often been chosen in recent work. The softsign function, which was shown to work better than tanh for deep networks [Glorot and Bengio2010], however, did not yield improvements in this experiment.

With the LSTM-RNN model, the tanh function, in general, worked best whereas the sigmoid function was the worst. This result agrees with the common choice for this activation function for the LSTM architecture in recurrent network research [Gers2001, Sutskever et al.2014].

5.3 Comparison with Other Models

We compare LSTM-RNN (using tanh) from the previous experiment against existing models: Naive Bayes with bag-of-bigram features (BiNB), the recursive neural tensor network (RNTN) [Socher et al.2013b], the convolutional neural network (CNN) [Kim2014], the dynamic convolutional neural network (DCNN) [Kalchbrenner et al.2014], paragraph vectors (PV) [Le and Mikolov2014], and the deep RNN (DRNN) [Irsoy and Cardie2014].

Among them, BiNB is the only one that is not a neural-net model. RNTN and DRNN are two extensions of the RNN. Whereas RNTN, which keeps the structure of the RNN, uses both matrix-vector multiplication and a tensor product for composition, DRNN makes the net deeper by concatenating more than one RNN horizontally. CNN, DCNN and PV do not rely on syntactic trees. CNN uses a convolutional layer and a max-pooling layer to handle sequences of different lengths. DCNN is hierarchical in the sense that it stacks more than one convolutional layer, with k-max pooling layers in between. In PV, a sentence (or document) is represented as an input vector used to predict which words appear in it.

Table 1 (above the dashed line) shows the accuracies of those models. The accuracies of LSTM-RNN were taken from the network achieving the highest performance out of 10 runs on the development set. The accuracies of the other models are copied from the corresponding papers. LSTM-RNN clearly performed worse than DCNN, PV, and DRNN in both tasks, and worse than CNN in the binary task.

Model Fine-grained Binary
BiNB 41.9 83.1
RNTN 45.7 85.4
CNN 48.0 88.1
DCNN 48.5 86.8
PV 48.7 87.8
DRNN 49.8 86.6
with GloVe-100D
LSTM-RNN 48.0 86.2
with GloVe-300D
LSTM-RNN 49.9 88.0
Table 1: Accuracies of the (tanh) LSTM-RNN compared with other models.

5.4 Toward State-of-the-art with Better Word Embeddings

We focus on DRNN, which is the most similar to LSTM-RNN among the four models CNN, DCNN, PV and DRNN. In fact, from the results reported in [Irsoy and Cardie2014, Table 1a], LSTM-RNN performed on par with their 1-layer DRNN using dropout, a technique that randomly removes some neurons during training. ([Irsoy and Cardie2014] used the 300-D word2vec word embeddings trained on a 100B-word corpus, whereas we used the 100-D GloVe word embeddings trained on a 6B-word corpus. From the fact that they achieved an accuracy of 46.1 with an RNN in the fine-grained task and 85.3 in the binary task, while our implementation of the RNN performed worse (see Figures 6 and 7), we conclude that the 100-D GloVe word embeddings are not more suitable than the 300-D word2vec word embeddings.) Dropout is a powerful technique for training neural networks, not only because it acts as a strong regularization method preventing neurons from co-adapting, but also because it can be seen as an efficient way of training an ensemble of a large number of weight-sharing neural networks [Srivastava et al.2014]. Thanks to dropout, [Irsoy and Cardie2014] boosted the accuracy of a 3-layer DRNN from 46.06 to 49.5 in the fine-grained task.

In the second experiment, we tried to boost the accuracy of the LSTM-RNN model. Inspired by [Irsoy and Cardie2014], we tried using dropout and better word embeddings. Dropout, however, did not work with the LSTM. The reason might be that dropout corrupted its memory, thus making training more difficult. Better word embeddings did pay off, however. We used 300-D GloVe word embeddings trained on an 840B-word corpus. Tuning on the development set, we chose the same values for the hyper-parameters as in the first experiment, except for the learning rate. We again ran the model 10 times and selected the networks achieving the highest accuracies on the development set. Table 1 (below the dashed line) shows the results. Using the 300-D GloVe word embeddings was very helpful: LSTM-RNN performed on par with DRNN in the fine-grained task, and with CNN in the binary task. Therefore, taking both tasks into account, LSTM-RNN with the 300-D GloVe word embeddings outperformed all other models.

6 Discussion and Conclusion

We proposed a new composition method for the recursive neural network (RNN) model by extending the long short-term memory (LSTM) architecture which is widely used in recurrent neural network research.

The question is why LSTM-RNN performed better than the traditional RNN. Based on the fact that the LSTM for RNNs should work very similarly to the LSTM for recurrent neural networks, we borrow the argument given in [Bengio et al.2013, Section 3.2] to answer this question. Bengio et al. explain that LSTM units behave like low-pass filters, "hence they can be used to focus certain units on different frequency regions of the data". This suggests that the LSTM plays the role of a lossy compressor, keeping global information by focusing on low-frequency regions and removing noise by ignoring high-frequency regions. Composition in this case could thus be seen as compression, as in the recursive auto-encoder (RAE) [Socher et al.2011a]. Because pre-training an RNN as an RAE can boost overall performance [Socher et al.2011a, Socher et al.2011c], seeing the LSTM as a compressor might explain why LSTM-RNN worked better than RNN without pre-training.

Comparing LSTM-RNN against DRNN [Irsoy and Cardie2014] gives us a hint about how to improve our model. In our experiments, LSTM-RNN without the 300-D GloVe word embeddings performed worse than DRNN, while DRNN gained a significant improvement thanks to dropout. Finding a dropout-like method that does not corrupt the LSTM memory might boost overall performance significantly, and will be a topic for our future work.

Acknowledgments

We thank three anonymous reviewers for helpful comments.

References

  • [Baroni et al.2013] Marco Baroni, Raffaella Bernardi, and Roberto Zamparelli. 2013. Frege in space: A program for compositional distributional semantics. In A. Zaenen, B. Webber, and M. Palmer, editors, Linguistic Issues in Language Technologies. CSLI Publications, Stanford, CA.
  • [Bengio et al.2013] Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. 2013. Advances in optimizing recurrent networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8624–8628. IEEE.
  • [Cybenko1989] George Cybenko. 1989. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314.
  • [Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, pages 2121–2159.
  • [Elman1990] Jeffrey L. Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211.
  • [Gers2001] Felix Gers. 2001. Long short-term memory in recurrent neural networks. Unpublished PhD dissertation, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
  • [Glorot and Bengio2010] Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256.
  • [Goller and Küchler1996] Christoph Goller and Andreas Küchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In International Conference on Neural Networks, pages 347–352. IEEE.
  • [Graves2012] Alex Graves. 2012. Supervised sequence labelling with recurrent neural networks, volume 385. Springer.
  • [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
  • [Hochreiter et al.2001] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In Kremer and Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
  • [Irsoy and Cardie2014] Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems, pages 2096–2104.
  • [Kalchbrenner et al.2014] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655–665, Baltimore, Maryland, June. Association for Computational Linguistics.
  • [Kim2014] Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar, October. Association for Computational Linguistics.
  • [Le and Mikolov2014] Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188–1196.
  • [Le and Zuidema2014a] Phong Le and Willem Zuidema. 2014a. The inside-outside recursive neural network model for dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
  • [Le and Zuidema2014b] Phong Le and Willem Zuidema. 2014b. Inside-outside semantics: A framework for neural models of semantic composition. In NIPS 2014 Workshop on Deep Learning and Representation Learning.
  • [Mikolov et al.2010] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048.
  • [Mitchell and Lapata2009] Jeff Mitchell and Mirella Lapata. 2009. Language models based on semantic composition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 430–439.
  • [Paulus et al.2014] Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global belief recursive neural networks. In Advances in Neural Information Processing Systems, pages 2888–2896.
  • [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12.
  • [Rumelhart et al.1988] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back-propagating errors. Cognitive modeling, 5.
  • [Socher et al.2010] Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop.
  • [Socher et al.2011a] Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011a. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. Advances in Neural Information Processing Systems, 24:801–809.
  • [Socher et al.2011b] Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. 2011b. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 26th International Conference on Machine Learning, volume 2.
  • [Socher et al.2011c] Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011c. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161.
  • [Socher et al.2013a] Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 455–465.
  • [Socher et al.2013b] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings EMNLP.
  • [Srivastava et al.2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958.
  • [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
  • [Tai et al.2015] Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.
  • [Werbos1990] Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560.
  • [Zhu et al.2015] Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over tree structures. arXiv preprint arXiv:1503.04881.