Along with the success of representation learning (e.g., word2vec [Mikolov et al.2013]), sentence embedding, which maps sentences into dense real-valued vectors that represent their semantics, has received much attention. It plays a critical role in many applications such as sentiment analysis [Socher et al.2013], question answering [Wang and Nyberg2015], and entailment recognition [Bowman et al.2015].
There are three predominant approaches to constructing sentence embeddings. (1) Recurrent neural networks (RNNs) encode sentences word by word in sequential order [Dai and Le2015, Hill, Cho, and Korhonen2016]. (2) Convolutional neural networks (CNNs) compose sentence semantics from local features extracted by convolution and pooling [Blunsom, Grefenstette, and Kalchbrenner2014, Hu et al.2014]. However, neither of these two approaches can well encode the linguistic composition of natural language. (3) The last approach, on which this paper focuses, exploits tree-structured recursive neural networks (TreeRNNs) [Socher et al.2011, Socher et al.2013] to embed a sentence along its parsing tree. Tree-structured Long Short-Term Memory (Tree-LSTM) [Tai, Socher, and Manning2015, Zhu, Sobihani, and Guo2015] is one of the most renowned variants of TreeRNNs and has been shown to be effective in learning task-specific sentence embeddings [Bowman et al.2016].
Tree-LSTM models are motivated by the intuition that human languages exhibit complicated hierarchical structures which contain rich semantics. Latent tree models [Yogatama et al.2016, Maillard, Clark, and Yogatama2017, Choi, Yoo, and goo Lee2017, Williams, Drozdov, and Bowman2017] can learn the optimal hierarchical structure, which may vary from task to task, without explicit structure annotations. The training signals for both parsing and embedding sentences come from certain downstream tasks. Existing models place all words equally in leaves and build the tree structure and the sentence embedding by composing adjacent node pairs bottom-up (e.g., Figure 1b). This mechanism prevents the sentence embedding from focusing on the most informative words, resulting in a performance limitation on certain tasks [Shi et al.2018].
To address this issue, we propose the Attentive Recursive Tree model (AR-Tree) for sentence embedding, a novel framework that incorporates a task-specific attention mechanism [dos Santos et al.2016] into latent tree structure learning. AR-Tree represents a sentence as a binary tree that contains one word in each leaf and non-leaf node, similar to a dependency parsing tree [Nivre2003], except that AR-Tree does not depend on manual rules. To utilize the sequential information, we require that the tree's in-order traversal preserve the word sequence, so that we can easily recover the original word order and obtain the context of a word from its subtrees. As shown in Figure 1a, the key advantage of an AR-Tree is that task-important words are placed at nodes near the root and are naturally emphasized in the tree-based embedding. This is attributed to our proposed top-down attention-first parsing strategy, inspired by easy-first parsing [Goldberg and Elhadad2010]. Specifically, we introduce a trainable scoring function to measure the attention each word in a sentence deserves with respect to a task. We greedily select the word with the highest score (e.g., interesting) as the root node and then recursively parse the two remaining subsequences (e.g., The movie is and to me.) to obtain the two children of the node. After the tree construction, we embed the sentence using a modified Tree-LSTM unit [Tai, Socher, and Manning2015, Zhu, Sobihani, and Guo2015] in a bottom-up manner, i.e., the resultant embedding is obtained at the root node and is then applied in a downstream application. As the Tree-LSTM computes node vectors incrementally from the leaf nodes to the root node, our model naturally pays more attention to shallower words, i.e., task-informative words, while retaining the advantages of recursive semantic composition [Socher et al.2013, Zhu, Sobihani, and Guo2015].
Training AR-Tree is challenging due to the non-differentiability caused by the dynamic decision-making procedure. To this end, we develop a novel end-to-end training strategy based on the REINFORCE algorithm [Williams1992]. To make REINFORCE work for structure inference, we equip it with a weighted reward that is sensitive to the tree structure and a macro normalization strategy for the policy gradients.
We evaluate our model on three benchmark tasks: textual entailment, sentiment classification, and author profiling. We show that AR-Tree outperforms previous Tree-LSTM models and is comparable to other state-of-the-art sentence embedding models. Further qualitative analyses demonstrate that AR-Tree learns reasonable task-specific attention structures.
To sum up, the contributions of our work are as follows:
We propose Attentive Recursive Tree (AR-Tree), a Tree-LSTM based sentence embedding model, which can parse the latent tree structure dynamically and emphasize informative words inherently.
We design a novel REINFORCE algorithm for the training of discrete tree parsing.
We demonstrate that AR-Tree outperforms previous Tree-LSTM models and is comparable to other state-of-the-art sentence embedding models in three benchmarks.
2 Related Work
Latent Tree-Based Sentence Embedding. [Bowman et al.2016] build trees and compose semantics via a generic shift-reduce parser, whose training relies on ground-truth parsing trees. In this paper, we are interested in latent trees that dynamically parse a sentence without syntax supervision. Combining latent tree learning with TreeRNNs has been shown to be an effective approach for sentence embedding, as it jointly optimizes the sentence composition and a task-specific objective. For example, [Yogatama et al.2016] use reinforcement learning to train a shift-reduce parser without any ground-truth trees.
[Maillard, Clark, and Yogatama2017] use a CYK chart parser [Cocke1970, Younger1967, Kasami1965] instead of the shift-reduce parser and make it fully differentiable with the help of a softmax annealing technique. However, their model suffers from both time and space issues, as the chart parser has high time and space complexity. [Choi, Yoo, and goo Lee2017] propose an easy-first parsing strategy, which scores each adjacent node pair using a query vector and greedily combines the best pair into one parent node at each step. They use the Straight-Through Gumbel-Softmax estimator [Jang, Gu, and Poole2016] to compute parent embeddings in a hard categorical gating way and to enable end-to-end training. [Williams, Drozdov, and Bowman2017] compare the above-mentioned models on several datasets and demonstrate that [Choi, Yoo, and goo Lee2017] achieve the best performance.
Attention-Based Sentence Embedding. Attention-based methods can be divided into two categories: inter-attention [dos Santos et al.2016, Munkhdalai and Yu2017b], which requires a pair of sentences to attend to each other, and intra-attention [Arora, Liang, and Ma2016, Lin et al.2017], which requires no extra input beyond the sentence itself; the latter is thus more flexible than the former. [Kim et al.2017] incorporate structural distributions into attention networks using graphical models instead of recursive trees. Note that existing latent tree-based models treat all input words equally as leaf nodes and ignore the fact that different words make varying degrees of contribution to the sentence semantics, which is nevertheless the fundamental motivation of the attention mechanism. To the best of our knowledge, AR-Tree is the first model that generates attentive tree structures and allows TreeRNNs to focus on more informative words for sentence embedding.
3 Attentive Recursive Tree
We represent an input sentence of $n$ words as $(x_1, \dots, x_n)$, where $x_i \in \mathbb{R}^{d}$ is a $d$-dimensional word embedding vector. For each sentence, we build an Attentive Recursive Tree (AR-Tree) $T$, whose root and nodes are denoted by $T.\mathrm{root}$ and $t$, respectively. Each node $t$ contains one word, denoted as $t.\mathrm{index}$ ($t.\mathrm{index}=i$ means the $i$-th word of the input sentence), and has two children, denoted by $t.\mathrm{left}$ and $t.\mathrm{right}$ ($\mathrm{null}$ for missing cases). Following previous work [Choi, Yoo, and goo Lee2017], we discuss binary trees in this paper and leave the n-ary case for future work. To keep the important sequential information, we guarantee that the in-order traversal of $T$ corresponds to the original word sequence (i.e., all nodes in $t$'s left subtree must contain an index less than $t.\mathrm{index}$, and all nodes in its right subtree an index greater than $t.\mathrm{index}$). The most outstanding property of AR-Tree is that words with more task-specific information are closer to the root.
To achieve this property, we devise a scoring function to measure the importance of words and recursively select the word with the maximum score in a top-down manner. To obtain the sentence embedding, we apply a modified Tree-LSTM to embed the nodes bottom-up, i.e., from leaves to root. The resultant sentence embedding is fed into downstream tasks.
3.1 Top-Down AR-Tree Construction
We feed the input sentence into a bidirectional LSTM and obtain a context-aware hidden vector $h_i$ for each word:
$$\overrightarrow{h}_i, \overrightarrow{c}_i = \overrightarrow{\mathrm{LSTM}}\left(x_i, \overrightarrow{h}_{i-1}, \overrightarrow{c}_{i-1}\right), \qquad \overleftarrow{h}_i, \overleftarrow{c}_i = \overleftarrow{\mathrm{LSTM}}\left(x_i, \overleftarrow{h}_{i+1}, \overleftarrow{c}_{i+1}\right), \qquad h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i],$$
where $h$ and $c$ denote the hidden states and the cell states, respectively. We utilize $h_i$ for scoring. Based on these context-aware word embeddings, we design a trainable scoring function to reflect the importance of each word:
$$\mathrm{Score}(x_i) = \mathrm{MLP}(h_i; \theta),$$
where MLP can be any multi-layer perceptron parameterized by $\theta$. In particular, we use a 2-layer MLP with 128 hidden units and ReLU activation. Traditional tf-idf is a simple and intuitive measure of the importance of words; however, it is not designed for specific tasks, so we use it only as a baseline.
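As a concrete illustration, the scoring step can be sketched as follows. This is a minimal NumPy sketch under assumptions: the parameter names (`W1`, `b1`, `w2`, `b2`) are illustrative, and the context vectors `h` are assumed to come from the bidirectional LSTM described above; the paper only fixes the MLP shape (2 layers, 128 hidden units, ReLU).

```python
import numpy as np

def score_words(h, W1, b1, w2, b2):
    """Score each context-aware word vector h[i] with a 2-layer MLP.

    A sketch only; parameter names are illustrative.
      h  : (n, d) context-aware embeddings (e.g., BiLSTM hidden states)
      W1 : (d, 128), b1 : (128,)  -- hidden layer, ReLU activation
      w2 : (128,),   b2 : scalar  -- output layer, one score per word
    """
    hidden = np.maximum(h @ W1 + b1, 0.0)   # ReLU
    return hidden @ w2 + b2                  # shape (n,): one score per word

# tiny usage example with random parameters
rng = np.random.default_rng(0)
n, d = 5, 8
h = rng.normal(size=(n, d))
W1, b1 = rng.normal(size=(d, 128)), np.zeros(128)
w2, b2 = rng.normal(size=128), 0.0
scores = score_words(h, W1, b1, w2, b2)
```

At inference time, the word with the maximum score becomes the root of the (sub)tree.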
We use a recursive top-down attention-first strategy to construct the AR-Tree. Given an input sentence and the scores of all its words, we select the word with the maximum score as the root and recursively process the two remaining subsequences (before and after the selected word) to obtain its two children. Algorithm 1 gives the procedure for constructing the AR-Tree of a word subsequence; we obtain the whole sentence's AR-Tree by calling it on the full sequence. In the parsed AR-Tree, each node is the most informative within its rooted subtree. Note that we do not use any extra information during the construction, so AR-Tree is generic for any sentence embedding task.
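The greedy top-down construction just described can be sketched as a short recursion (a hedged re-implementation of the idea, not the paper's exact Algorithm 1; the `Node` structure is illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    word: int                      # index of the word held by this node
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def build_ar_tree(scores, i, j):
    """Recursively build an AR-Tree over words i..j (inclusive).

    The word with the maximum score becomes the root; the subsequences
    before and after it become its children, so the in-order traversal
    recovers the original word order.
    """
    if i > j:
        return None
    k = max(range(i, j + 1), key=lambda t: scores[t])
    return Node(word=k,
                left=build_ar_tree(scores, i, k - 1),
                right=build_ar_tree(scores, k + 1, j))

def inorder(node):
    if node is None:
        return []
    return inorder(node.left) + [node.word] + inorder(node.right)

# e.g., a 6-word sentence whose 4th word (index 3) scores highest
scores = [0.1, 0.5, 0.2, 0.9, 0.3, 0.4]
root = build_ar_tree(scores, 0, len(scores) - 1)
```

Note how the in-order traversal of the result equals the original word sequence, as required in Section 3.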
3.2 Bottom-Up Tree-LSTM Embedding
After the AR-Tree construction, we use Tree-LSTM [Tai, Socher, and Manning2015, Zhu, Sobihani, and Guo2015], which introduces cell state into TreeRNNs to achieve better information flow, as the composition function to compute parent representation from its children and corresponding word in a bottom-up manner (i.e., Figure 2). Because the original word sequence is kept in the in-order traversal of the AR-Tree, Tree-LSTM units can utilize both the sequential and the structural information to compose semantics.
The complete Tree-LSTM composition function in our model is as follows:
$$\begin{bmatrix} i \\ f_l \\ f_r \\ f_s \\ o \\ g \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \left( W \begin{bmatrix} h_l \\ h_r \\ h_s \end{bmatrix} + b \right),$$
$$c = f_l \odot c_l + f_r \odot c_r + f_s \odot c_s + i \odot g, \qquad h = o \odot \tanh(c),$$
where $(h_l, c_l)$, $(h_r, c_r)$, and $(h_s, c_s)$ come from the left child, the right child, and the bidirectional LSTM, respectively. For those nodes missing some inputs, such as leaf nodes or nodes with only one child, we fill the missing inputs with zeros.
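One composition step can be sketched in NumPy as below. This is a sketch under assumptions: it follows the standard binary Tree-LSTM of Zhu et al. 2015 extended with a separate forget gate for the node's own word state from the bidirectional LSTM; the paper's modified unit may wire its gates differently. Missing inputs (leaves, single-child nodes) are passed in as zeros, as the text describes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_compose(hl, cl, hr, cr, hs, cs, W, b):
    """One binary Tree-LSTM composition step with a word input (sketch).

    hl/cl, hr/cr: left/right child hidden and cell states (zeros when
    the child is missing); hs/cs: the node's own word representation
    from the bidirectional LSTM. W: (6d, 3d), b: (6d,).
    """
    d = hl.shape[0]
    a = W @ np.concatenate([hl, hr, hs]) + b  # (6d,) pre-activations
    i  = sigmoid(a[0*d:1*d])                  # input gate
    fl = sigmoid(a[1*d:2*d])                  # left-child forget gate
    fr = sigmoid(a[2*d:3*d])                  # right-child forget gate
    fs = sigmoid(a[3*d:4*d])                  # word-state forget gate
    o  = sigmoid(a[4*d:5*d])                  # output gate
    g  = np.tanh(a[5*d:6*d])                  # candidate update
    c = fl * cl + fr * cr + fs * cs + i * g
    h = o * np.tanh(c)
    return h, c

# usage: a leaf node has no children, so its child states are zeros
rng = np.random.default_rng(0)
d = 4
W, b = rng.normal(size=(6 * d, 3 * d)), np.zeros(6 * d)
zeros = np.zeros(d)
hs, cs = rng.normal(size=d), rng.normal(size=d)
h, c = tree_lstm_compose(zeros, zeros, zeros, zeros, hs, cs, W, b)
```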
Finally, we use the hidden state $h$ of the root as the embedding of the sentence and feed it into downstream tasks. The sentence embedding naturally focuses on the informative words, as they are closer to the root and their semantics is emphasized.
4 End-to-end Training Using REINFORCE
Our overall training loss combines the loss of the downstream task $\mathcal{L}_{\mathrm{task}}$ (e.g., the cross-entropy loss for classification tasks), the tree construction loss $\mathcal{L}_{\mathrm{tree}}$ (discussed soon), and an L2 regularization term on all trainable parameters $\theta$:
$$\mathcal{L} = \mathcal{L}_{\mathrm{task}} + \alpha \mathcal{L}_{\mathrm{tree}} + \beta \lVert \theta \rVert_2^2,$$
where $\alpha$ and $\beta$ are trade-off hyperparameters.
We train the model only according to the downstream task and do not incorporate any structure supervision or pre-trained parser. This leads to non-differentiability, as the AR-Tree construction is a discrete decision-making process: the scoring function cannot be learned in an end-to-end manner when optimizing the overall loss. Inspired by [Yogatama et al.2016], we employ reinforcement learning, whose objective corresponds to the tree construction loss, to train the scoring function.
We consider the construction of an AR-Tree as a recursive decision-making process in which each action selects a word for a node. For node $t$, we define the state as its corresponding subsequence $(x_i, \dots, x_j)$, where $i$ and $j$ respectively denote the beginning and ending positions. The action space is $\{i, \dots, j\}$, i.e., the indices of the candidate words. We feed the scores of the candidate words into a softmax layer as our policy network $\pi$, which outputs a probability distribution over the action space:
$$\pi(k \mid i, j) = \frac{\exp\left(\mathrm{Score}(x_k)\right)}{\sum_{k'=i}^{j} \exp\left(\mathrm{Score}(x_{k'})\right)},$$
where $i \le k \le j$. Different from Algorithm 1, which is greedy and deterministic, at training time we construct AR-Trees randomly according to $\pi$, to explore more structures and bring greater gain in the long run. After an action $k$ is sampled from $\pi$, the sequence is split into $(x_i, \dots, x_{k-1})$ and $(x_{k+1}, \dots, x_j)$, which are used as the states of the two children.
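The stochastic construction used at training time can be sketched as follows (a hedged sketch: the nested-tuple tree representation is illustrative, not the paper's data structure):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - np.max(s))
    return e / e.sum()

def sample_tree(scores, i, j, rng):
    """Sample an AR-Tree at training time: instead of greedily taking
    the argmax as in Algorithm 1, draw the split word from the softmax
    policy over candidate scores, then recurse on the two subsequences.
    Returns nested (word_index, left, right) tuples."""
    if i > j:
        return None
    p = softmax(np.asarray(scores[i:j + 1], dtype=float))
    k = i + rng.choice(j - i + 1, p=p)       # sampled action for state (i, j)
    return (k,
            sample_tree(scores, i, k - 1, rng),
            sample_tree(scores, k + 1, j, rng))

def inorder(t):
    return [] if t is None else inorder(t[1]) + [t[0]] + inorder(t[2])

rng = np.random.default_rng(0)
tree = sample_tree([0.1, 0.5, 0.2, 0.9, 0.3, 0.4], 0, 5, rng)
```

Whatever split words are sampled, the in-order traversal of the result still recovers the original word order, which is the invariant the greedy construction also maintains.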
As for the reward $r$, we consider the performance on a downstream task. For simplicity, we discuss a classification task in this paper and leave further exploration to future work. After a whole tree is recursively sampled from $\pi$, we obtain the sentence embedding following Section 3.2, feed it into the downstream classifier, and obtain a predicted label. The sampled tree is considered good if the prediction is correct, and bad if it is wrong. A simple rewarding strategy is to give a reward of $+1$ to every action in a good tree and $-1$ to every action in a bad tree. However, we consider that an early decision over a longer sequence has a greater impact on the tree structure (e.g., the selection of the root is more important than that of the leaves). We therefore multiply the reward value by a weight that grows with the length of the action's state sequence, so that earlier decisions in a good tree receive larger positive rewards, and earlier decisions in a bad tree larger negative rewards.
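The reward assignment can be sketched as below. One loud assumption: the exact weighting factor is elided in this copy of the text, so the sketch scales the base reward ($+1$ for a correct prediction, $-1$ otherwise) by the length of the state sequence the decision was made over, which matches the stated intuition that earlier decisions on longer spans matter more, but may not be the paper's exact factor.

```python
def node_rewards(tree, i, j, correct, rewards):
    """Assign a structure-sensitive reward to every sampled action.

    ASSUMPTION: the weight here is the span length (j - i + 1); the
    paper's exact factor is not recoverable from this copy of the text.
    `tree` uses nested (word_index, left, right) tuples over span i..j.
    """
    if tree is None:
        return
    k, left, right = tree
    base = 1.0 if correct else -1.0
    rewards[k] = base * (j - i + 1)          # assumed weight: span length
    node_rewards(left, i, k - 1, correct, rewards)
    node_rewards(right, k + 1, j, correct, rewards)

# usage on a 6-word example tree rooted at word 3
tree = (3, (1, (0, None, None), (2, None, None)),
           (5, (4, None, None), None))
rewards = {}
node_rewards(tree, 0, 5, correct=True, rewards=rewards)
```

Under this assumed weighting, the root decision (made over the full 6-word span) earns the largest reward, while leaves earn the smallest.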
We use REINFORCE [Williams1992], a widely-used policy gradient method in reinforcement learning, to learn the parameters $\theta$ of the policy network. The goal of the learning algorithm is to maximize the expected long-term reward:
$$J(\theta) = \mathbb{E}_{\pi}\left[ r(s, a) \right].$$
Following [Sutton et al.2000], the gradient w.r.t. the parameters of the policy network can be derived as:
$$\nabla_{\theta} J(\theta) = \mathbb{E}_{\pi}\left[ r(s, a)\, \nabla_{\theta} \log \pi(a \mid s) \right].$$
It is prohibitively expensive to calculate this expectation exactly by iterating over all possible trees. Following [Yu et al.2017], we apply Monte Carlo search to estimate it. Specifically, we sample $M$ trees for each sentence, each containing $n$ nodes, and estimate the gradient by averaging the contributions of all these nodes (micro average):
$$\nabla_{\theta} J(\theta) \approx \frac{1}{Mn} \sum_{m=1}^{M} \sum_{t=1}^{n} r(s_t^m, a_t^m)\, \nabla_{\theta} \log \pi(a_t^m \mid s_t^m). \tag{8}$$
However, we observed that frequent words (e.g., the, is) were assigned high scores when we used Formula 8 to train the policy. We believe the reason is that the scoring function of each node takes a single word's embedding as input, so if rewards are averaged among all tree nodes, frequent words contribute far more to its training gradient, which is harmful to the score estimation of low-frequency words.
To eliminate the influence of word frequency, we integrate input sentences into a mini-batch, sample $M$ trees for each of them, and normalize the gradient at the word level (macro average) rather than the node level:
$$\nabla_{\theta} J(\theta) \approx \frac{1}{|\mathcal{W}|} \sum_{w \in \mathcal{W}} \frac{1}{|\mathcal{N}_w|} \sum_{t \in \mathcal{N}_w} r(s_t, a_t)\, \nabla_{\theta} \log \pi(a_t \mid s_t),$$
where $\mathcal{W}$ denotes all words of the mini-batch and $\mathcal{N}_w$ denotes all nodes whose selected word is $w$ across all sampled trees. Figure 3 gives an example.
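The difference between the two normalizations can be seen in a toy sketch (scalar "gradients" stand in for the per-node REINFORCE terms; in practice these would be gradient vectors):

```python
import numpy as np

def micro_average(grads):
    """Node-level average (Formula 8's normalization): every sampled
    node contributes equally, so frequent words dominate."""
    return float(np.mean(grads))

def macro_average(word_ids, grads):
    """Word-level (macro) average: per-node contributions are first
    averaged within each distinct word, then across words, so frequent
    words no longer dominate the update."""
    buckets = {}
    for w, g in zip(word_ids, grads):
        buckets.setdefault(w, []).append(g)
    return float(np.mean([np.mean(gs) for gs in buckets.values()]))

# "the" is selected by three nodes across the batch, "cat" by one
word_ids = ["the", "the", "the", "cat"]
grads = [1.0, 1.0, 1.0, 5.0]
```

Here the micro average is pulled toward the frequent word's contribution, while the macro average weights "the" and "cat" equally.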
Table 1: Experimental settings. Dropout: dropout probability. Bn: whether using batch normalization.

| SNLI-100D | 100 | 100 | 200 | 0.1 | 128 | 2 | Adam [Kingma and Ba2014] |
| Model | # params. | Acc. (%) |
| --- | --- | --- |
| 100D Latent Syntax Tree-LSTM [Yogatama et al.2016] | 500k | 80.5 |
| 100D CYK Tree-LSTM [Maillard, Clark, and Yogatama2017] | 231k | 81.6 |
| 100D Gumbel Tree-LSTM [Choi, Yoo, and goo Lee2017] | 262k | 82.6 |
| 100D Tf-idf Tree-LSTM (Ours) | 343k | 82.3 |
| 100D AR-Tree (Ours) | 356k | 82.8 |
| 300D SPINN [Bowman et al.2016] | 3.7m | 83.2 |
| 300D NSE [Munkhdalai and Yu2017a] | 6.3m | 84.8 |
| 300D NTI-SLSTM-LSTM [Munkhdalai and Yu2017b] | 4.0m | 83.4 |
| 300D Gumbel Tree-LSTM [Choi, Yoo, and goo Lee2017] | 2.9m | 85.0 |
| 300D Self-Attentive [Lin et al.2017] | 4.1m | 84.4 |
| 300D Tf-idf Tree-LSTM (Ours) | 3.5m | 84.5 |
| 300D AR-Tree (Ours) | 3.6m | 85.5 |
| 600D Gated-Attention BiLSTM [Chen et al.2017] | 11.6m | 85.5 |
| 300D Decomposable attention [Parikh et al.2016] | 582k | 86.8 |
| 300D NTI-SLSTM-LSTM global attention [Munkhdalai and Yu2017b] | 3.2m | 87.3 |
| 300D Structured Attention [Kim et al.2017] | 2.4m | 86.8 |

Table 2: Test accuracy and the number of parameters (excluding word embeddings) on the SNLI dataset. The top two sections list results of Tree-LSTM and other baseline models grouped by dimension. The bottom section contains state-of-the-art inter-attention models on SNLI.
We evaluate the proposed AR-Tree on three tasks: natural language inference, sentence sentiment analysis, and author profiling.
We set the trade-off hyperparameters in Eq. 4 to the same values in all experiments. For fair comparison, we followed the experimental settings of [Choi, Yoo, and goo Lee2017] on language inference and sentence sentiment analysis. For the author profiling task, whose dataset is provided by [Lin et al.2017], we obtained and followed their settings by contacting the authors. We considered their model, which is self-attentive but has no tree structure, as a baseline to show the effect of latent trees. We also conducted a Tf-idf Tree-LSTM experiment, which replaces the scoring function with tf-idf values while retaining all other settings, as one of our baselines. For all experiments, we saved the model that performed best on the validation set as our final model and evaluated it on the test set. The implementation is made publicly available at https://github.com/shijx12/AR-Tree.
5.1 Natural Language Inference
Natural language inference is the task of predicting the semantic relationship between two sentences, a premise and a hypothesis. We evaluated our model on the Stanford Natural Language Inference corpus (SNLI; [Bowman et al.2015]), where the goal is to predict whether two sentences are in an entailment, contradiction, or neutral relationship. SNLI consists of 549,367/9,842/9,824 premise-hypothesis pairs for the train/validation/test sets, respectively.
Following [Bowman et al.2016, Mou et al.2016], we ran AR-Tree separately on the two input sentences to obtain their embeddings. We then constructed a feature vector for the pair and fed it into a neural network classifier, i.e., a multi-layer perceptron (MLP) with one hidden layer with ReLU activation, followed by a softmax layer.
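The pairing equation itself is elided in this copy of the text; as an assumption, the sketch below follows the heuristic matching of Mou et al. 2016 cited above: concatenating the two sentence embeddings with their element-wise difference and element-wise product (some variants use the absolute difference instead).

```python
import numpy as np

def pair_features(s1, s2):
    """Premise/hypothesis pair features (ASSUMED form, following the
    heuristic matching of Mou et al. 2016): [s1; s2; s1 - s2; s1 * s2].
    """
    return np.concatenate([s1, s2, s1 - s2, s1 * s2])

s1, s2 = np.array([1.0, 2.0]), np.array([3.0, 5.0])
feat = pair_features(s1, s2)
```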
We conducted SNLI experiments under two settings, 100D and 300D. In both experiments, we initialized the word embedding matrix with GloVe pre-trained vectors [Pennington, Socher, and Manning2014], added dropout [Srivastava et al.2014] after the word embedding layer, and added batch normalization layers [Ioffe and Szegedy2015] followed by dropout to the input and the output of the MLP. Details can be found in Table 1. Training on an NVIDIA GTX 1080 Ti takes about 30 hours, slower than Gumbel Tree-LSTM [Choi, Yoo, and goo Lee2017] because our tree construction is implemented for every single sentence instead of the whole batch.
| Model | SST-2 (%) | SST-5 (%) |
| --- | --- | --- |
| LSTM [Tai, Socher, and Manning2015] | 84.9 | 46.4 |
| Bidirectional LSTM [Tai, Socher, and Manning2015] | 87.5 | 49.1 |
| RNTN [Socher et al.2013] | 85.4 | 45.7 |
| DMN [Kumar et al.2016] | 88.6 | 52.1 |
| NSE [Munkhdalai and Yu2017a] | 89.7 | 52.8 |
| BCN+Char+CoVe [McCann et al.2017] | 90.3 | 53.7 |
| byte-mLSTM [Radford, Jozefowicz, and Sutskever2017] | 91.8 | 52.9 |
| Constituency Tree-LSTM [Tai, Socher, and Manning2015] | 88.0 | 51.0 |
| Latent Syntax Tree-LSTM [Yogatama et al.2016] | 86.5 | - |
| NTI-SLSTM-LSTM [Munkhdalai and Yu2017b] | 89.3 | 53.1 |
| Gumbel Tree-LSTM [Choi, Yoo, and goo Lee2017] | 90.1 | 52.5 |
| Tf-idf Tree-LSTM (Ours) | 88.9 | 51.3 |

Table 3: Test accuracy on the SST-2 and SST-5 subtasks.
Table 2 summarizes the results. We can see that our 100D and 300D models perform best among the Tree-LSTM models. State-of-the-art inter-attention models get the highest performance on SNLI, because they incorporate inter-information between the sentence pairs to boost the performance. However, inter-attention is limited for paired inputs and lacks flexibility. Our 300D model outperforms self-attentive [Lin et al.2017], the state-of-the-art intra-attention model, by 1.1%, demonstrating its effectiveness.
5.2 Sentiment Analysis
We used Stanford Sentiment Treebank (SST) [Socher et al.2013] to evaluate the performance of our model. The sentences in SST dataset are parsed into binary trees with the Stanford parser, and each subtree corresponding to a phrase is annotated with a sentiment score. It includes two subtasks: SST-5, classifying each phrase into 5 classes, and SST-2, preserving only 2 classes.
Following [Choi, Yoo, and goo Lee2017], we used all phrases for training but only the entire sentences for evaluation. We used an MLP with one hidden layer as the classifier. For both SST-2 and SST-5, we initialized the word embeddings with GloVe 300D pre-trained vectors, and added dropout to the word embedding layer and to the input and the output of the MLP. Table 1 lists the parameter details.
Table 3 shows the results of the SST experiments. Our model on SST-2 outperforms all Tree-LSTM models and all other state-of-the-art models except byte-mLSTM [Radford, Jozefowicz, and Sutskever2017], a byte-level language model trained on a very large corpus. [McCann et al.2017] obtain the highest performance on SST-5 with the help of pre-training and character n-gram embeddings. Without character-level information, our model still achieves comparable results on SST-5.
5.3 Author Profiling
The Author Profiling dataset consists of Twitter tweets with annotations of the age and gender of each tweet's author. Following [Lin et al.2017], we used the English tweets as input to predict the age range of the author, with 5 classes: 18-24, 25-34, 35-49, 50-64, and 65+. The age prediction dataset consists of 68,485/4,000/4,000 tweets for the train/validation/test sets.
We applied GloVe and dropout as in the SST experiments. Table 1 describes the detailed settings, which are the same as in [Lin et al.2017]'s published implementation except for the optimizer (they used SGD, but we found Adam converged better).
| Model | Acc. (%) |
| --- | --- |
| BiLSTM+MaxPooling [Lin et al.2017] | 77.40 |
| CNN+MaxPooling [Lin et al.2017] | 78.15 |
| Gumbel Tree-LSTM [Choi, Yoo, and goo Lee2017] | 80.23 |
| Self-Attentive [Lin et al.2017] | 80.45 |
| Tf-idf Tree-LSTM (Ours) | 80.20 |

Table 4: Test accuracy of the age prediction experiments.
Results of the age prediction experiments are shown in Table 4. Our model outperforms all baseline models. Compared to the self-attentive model under the same experimental settings, AR-Tree obtains higher performance, indicating that latent structures are helpful for sentence understanding.
6 Qualitative Analysis
We conducted experiments to inspect the structures of the learned trees. We selected 2 sentences from the test set of each of the three datasets and show their attentive trees in Figure 4.
The left column is a sentence pair from SNLI with the relationship contradiction. Figures 4a and 4b both focus first on the predicate chased, and then on its subject and object, respectively. The middle column is from SST-2, the sentiment analysis dataset. Figures 4c and 4d both focus on emotional adjectives such as embarrassing, amusing, and enjoyable. The right column is from the age prediction dataset, which predicts the author's age from the tweet. Figure 4e attends to @Safety_1st, a baby products brand, indicating that the author is probably a young parent. Figure 4f focuses on lecturers, which suggests that the author is likely a college student.
Furthermore, we applied parsers trained on the different tasks to the same sentence and show the results in Figure 5. The parser trained on SNLI focuses on partially (Figure 5a): as SNLI is an inference dataset, it pays more attention to words that may differ between two sentences and thus signal a contradiction relationship (e.g., partially vs. totally). The parser of SST-2, the sentiment classification task, focuses on sentiment words (Figure 5b), as we expected. In the parse for age prediction, academic and papers are emphasized (Figure 5c), because they are more likely to be discussed by college students and are thus more informative for the age prediction task than other words.
Our model is able to attend to task-specific critical words and learn interpretable structures, which is beneficial to sentence understanding.
7 Conclusions and Future Work
We propose the Attentive Recursive Tree (AR-Tree), a novel yet generic latent Tree-LSTM sentence embedding model that learns task-specific structural embeddings guided by word attention. Results on three different datasets demonstrate that AR-Tree learns reasonable attentive tree structures and outperforms previous Tree-LSTM models.
Moving forward, we plan to design a batch-mode tree construction algorithm, e.g., asynchronous parallel recursive tree construction, to fully exploit distributed and parallel computing power. We may then be able to learn an AR-Forest to embed paragraphs.
Acknowledgments
The work is supported by National Key Research and Development Program of China (2017YFB1002101), NSFC key project (U1736204, 61533018), and THUNUS NExT Co-Lab.
- [Arora, Liang, and Ma2016] Arora, S.; Liang, Y.; and Ma, T. 2016. A simple but tough-to-beat baseline for sentence embeddings.
- [Blunsom, Grefenstette, and Kalchbrenner2014] Blunsom, P.; Grefenstette, E.; and Kalchbrenner, N. 2014. A convolutional neural network for modelling sentences. In ACL.
- [Bowman et al.2015] Bowman, S. R.; Angeli, G.; Potts, C.; and Manning, C. D. 2015. A large annotated corpus for learning natural language inference. In EMNLP.
- [Bowman et al.2016] Bowman, S. R.; Gauthier, J.; Rastogi, A.; Gupta, R.; Manning, C. D.; and Potts, C. 2016. A fast unified model for parsing and sentence understanding. In ACL.
- [Chen et al.2017] Chen, Q.; Zhu, X.; Ling, Z.-H.; Wei, S.; Jiang, H.; and Inkpen, D. 2017. Recurrent neural network-based sentence encoder with gated attention for natural language inference. In RepEval.
- [Choi, Yoo, and goo Lee2017] Choi, J.; Yoo, K. M.; and goo Lee, S. 2017. Learning to compose task-specific tree structures. In AAAI.
- [Cocke1970] Cocke, J. 1970. Programming languages and their compilers: Preliminary notes.
- [Dai and Le2015] Dai, A. M., and Le, Q. V. 2015. Semi-supervised sequence learning. In NIPS.
- [dos Santos et al.2016] dos Santos, C. N.; Tan, M.; Xiang, B.; and Zhou, B. 2016. Attentive pooling networks. CoRR, abs/1602.03609.
- [Goldberg and Elhadad2010] Goldberg, Y., and Elhadad, M. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In NAACL.
- [Hill, Cho, and Korhonen2016] Hill, F.; Cho, K.; and Korhonen, A. 2016. Learning distributed representations of sentences from unlabelled data. In NAACL.
- [Hu et al.2014] Hu, B.; Lu, Z.; Li, H.; and Chen, Q. 2014. Convolutional neural network architectures for matching natural language sentences. In NIPS.
- [Ioffe and Szegedy2015] Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML.
- [Jang, Gu, and Poole2016] Jang, E.; Gu, S.; and Poole, B. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.
- [Kasami1965] Kasami, T. 1965. An efficient recognition and syntax analysis algorithm for context-free languages. Technical report.
- [Kim et al.2017] Kim, Y.; Denton, C.; Hoang, L.; and Rush, A. M. 2017. Structured attention networks. arXiv preprint arXiv:1702.00887.
- [Kingma and Ba2014] Kingma, D., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- [Kumar et al.2016] Kumar, A.; Irsoy, O.; Ondruska, P.; Iyyer, M.; Bradbury, J.; Gulrajani, I.; Zhong, V.; Paulus, R.; and Socher, R. 2016. Ask me anything: Dynamic memory networks for natural language processing. In ICML.
- [Lin et al.2017] Lin, Z.; Feng, M.; Santos, C. N. d.; Yu, M.; Xiang, B.; Zhou, B.; and Bengio, Y. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130.
- [Maillard, Clark, and Yogatama2017] Maillard, J.; Clark, S.; and Yogatama, D. 2017. Jointly learning sentence embeddings and syntax with unsupervised tree-lstms. arXiv preprint arXiv:1705.09189.
- [McCann et al.2017] McCann, B.; Bradbury, J.; Xiong, C.; and Socher, R. 2017. Learned in translation: Contextualized word vectors. In NIPS.
- [Mikolov et al.2013] Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In NIPS.
- [Mou et al.2016] Mou, L.; Men, R.; Li, G.; Xu, Y.; Zhang, L.; Yan, R.; and Jin, Z. 2016. Natural language inference by tree-based convolution and heuristic matching. In ACL.
- [Munkhdalai and Yu2017a] Munkhdalai, T., and Yu, H. 2017a. Neural semantic encoders. In ACL.
- [Munkhdalai and Yu2017b] Munkhdalai, T., and Yu, H. 2017b. Neural tree indexers for text understanding. In ACL.
- [Nivre2003] Nivre, J. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT).
- [Parikh et al.2016] Parikh, A.; Täckström, O.; Das, D.; and Uszkoreit, J. 2016. A decomposable attention model for natural language inference. In EMNLP.
- [Pennington, Socher, and Manning2014] Pennington, J.; Socher, R.; and Manning, C. 2014. Glove: Global vectors for word representation. In EMNLP.
- [Radford, Jozefowicz, and Sutskever2017] Radford, A.; Jozefowicz, R.; and Sutskever, I. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.
- [Shi et al.2018] Shi, H.; Zhou, H.; Chen, J.; and Li, L. 2018. On tree-based neural sentence modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4631–4641.
- [Socher et al.2011] Socher, R.; Pennington, J.; Huang, E. H.; Ng, A. Y.; and Manning, C. D. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP.
- [Socher et al.2013] Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C. D.; Ng, A.; and Potts, C. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.
- [Srivastava et al.2014] Srivastava, N.; Hinton, G. E.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR.
- [Sutton et al.2000] Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS.
- [Tai, Socher, and Manning2015] Tai, K. S.; Socher, R.; and Manning, C. D. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL.
- [Wang and Nyberg2015] Wang, D., and Nyberg, E. 2015. A long short-term memory model for answer sentence selection in question answering. In ACL.
- [Williams, Drozdov, and Bowman2017] Williams, A.; Drozdov, A.; and Bowman, S. R. 2017. Learning to parse from a semantic objective: It works. Is it syntax? arXiv preprint arXiv:1709.01121.
- [Williams1992] Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning.
- [Yogatama et al.2016] Yogatama, D.; Blunsom, P.; Dyer, C.; Grefenstette, E.; and Ling, W. 2016. Learning to compose words into sentences with reinforcement learning. arXiv preprint arXiv:1611.09100.
- [Younger1967] Younger, D. H. 1967. Recognition and parsing of context-free languages in time n^3. Information and Control.
- [Yu et al.2017] Yu, L.; Zhang, W.; Wang, J.; and Yu, Y. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI.
- [Zeiler2012] Zeiler, M. D. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
- [Zhu, Sobihani, and Guo2015] Zhu, X.; Sobihani, P.; and Guo, H. 2015. Long short-term memory over recursive structures. In ICML.