1 Introduction & related work
State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question, raised in many different forms, is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they instead store massive numbers of exemplars and make only ‘local’ generalizations (Pinker and Prince, 1988; Fodor and Pylyshyn, 1988; Marcus, 1998; Lake et al., 2017; Lake and Baroni, 2017; Zhang et al., 2016; Krueger et al., 2017; Marcus, 2018). This question has led to (sometimes heated) debates between deep learning enthusiasts, who are convinced neural networks can do almost anything, and skeptics, who are convinced some types of generalization are fundamentally beyond reach for deep learning systems and point out that crucial tests distinguishing generalization from memorization have not been applied.
In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously ‘connectionist’: trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks (Eliasmith, 2013) or neuromorphic hardware (Merolla et al., 2014). We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics.
The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization.
Data-driven models have proven successful in various entailment recognition tasks (Baroni et al., 2012; Socher et al., 2012; Rocktäschel et al., 2014; Bowman et al., 2015b; Rocktäschel et al., 2015). The data sets used in research on this topic tend to be either fully formal, focusing on logic instead of natural language (Evans et al., 2018; Allamanis et al., 2016), or fully natural, as is the case for manually annotated data sets of English sentence pairs such as SICK (Marelli et al., 2014) or SNLI (Bowman et al., 2015a). Moreover, entailment recognition models are often endowed with functionality reflecting pre-established linguistic or semantic regularities of the data (Bankova et al., 2016; Serafini and Garcez, 2016; Sadrzadeh et al., 2018). Recently, Shen et al. (2018) showed that recurrent networks can learn to recognize logical inference relations if they are extended with a bias towards modelling hierarchical structures.
In this research we do not approach entailment as something fully natural or fully formal, but as a semantic phenomenon that can be recognized in language but that is produced by logic. This perspective was also taken by Bowman et al. (2015b), who used a natural logic calculus to infer the entailment relations between pairs of sentences in an artificial language. As opposed to Bowman et al., we do not use natural logic, which is incomplete and not provably sound, but classical first-order logic (FOL). Furthermore, Bowman et al. used recursive neural networks, shaped according to the syntactic structure of the input sentences, whereas we focus on recurrent networks that receive no linguistic information, and that have no explicit bias to accommodate syntactic hierarchies.
2 Task definition & data generation
The data generation process is inspired by Bowman et al. (2015b): an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL.
Let L be the artificial language. Its vocabulary consists of four word classes: quantifiers, nouns, (transitive) verbs and adverbs. Lexical meanings of nouns and verbs are captured by a taxonomy of terms, as visualized in the Venn diagrams of Figure 1. The quantifier class contains all and some; the adverb class contains not and the empty string. Sentences in L can be generated according to the phrase structure grammar of Table 1.
S → NP VP
NP → Det NP | Adv N | N
Det → Adv Quant | Quant
VP → VP NP | Adv V | V
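Concretely, a sentence consists of two quantified noun phrases around a transitive verb, with an optional negation in front of each quantifier, noun and verb. The following sketch enumerates such sentences; the two-word lexicon per class is a hypothetical stand-in for illustration, and the exact set of negation positions is an assumption based on the grammar above.

```python
import itertools

# Toy lexicon; the actual word lists of the language are larger.
QUANT = ["all", "some"]
NOUN = ["Germans", "Romans"]
VERB = ["love", "hate"]
ADV = ["not", ""]  # 'not' or the empty string

def sentences():
    """Enumerate sentences of the form Det N' V' Det N', with an
    optional 'not' before each quantifier, noun and verb."""
    for a1, q1, a2, n1, a3, v, a4, q2, a5, n2 in itertools.product(
            ADV, QUANT, ADV, NOUN, ADV, VERB, ADV, QUANT, ADV, NOUN):
        words = [a1, q1, a2, n1, a3, v, a4, q2, a5, n2]
        yield " ".join(w for w in words if w)

all_sents = list(sentences())
```

With this toy lexicon the generator yields 1,024 distinct sentences of 5 to 10 words, depending on the number of negations; the full vocabulary of the language yields the much larger space of sentence pairs reported below.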
Following MacCartney and Manning (2009) and Bowman et al. (2015b), seven different entailment relations are distinguished, as defined in Table 2. The relations are defined with respect to pairs of sets, but they are also applied to pairs of sentences, which can be interpreted as the sets of possible worlds where they hold true.
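The set-theoretic side of these definitions can be sketched as follows. The relation symbols and the exact conditions are assumptions based on MacCartney and Manning's formulation (equivalence, forward and reverse entailment, negation, alternation, cover, independence), not a verbatim copy of Table 2.

```python
def entailment_relation(x, y, universe):
    """Return the entailment relation between sets x and y,
    relative to a finite universe of discourse."""
    x, y, universe = set(x), set(y), set(universe)
    disjoint = not (x & y)
    exhaustive = (x | y) == universe
    if x == y:
        return "="   # equivalence
    if x < y:
        return "<"   # forward entailment (strict subset)
    if x > y:
        return ">"   # reverse entailment (strict superset)
    if disjoint and exhaustive:
        return "^"   # negation (disjoint and jointly exhaustive)
    if disjoint:
        return "|"   # alternation (disjoint, not exhaustive)
    if exhaustive:
        return "v"   # cover (overlapping, jointly exhaustive)
    return "#"       # independence (none of the above)
```

Applied to sentences, the sets are the sets of possible worlds in which each sentence holds true, as noted above.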
We generate random pairs of sentences according to the grammar of L (e.g., ‘all Germans love all Romans’ and ‘some Europeans like some Italians’). We then annotate these pairs with one of the seven logical relations using the combination of an automated theorem prover for FOL with equality, Prover9, and a model builder, Mace4 (McCune, 2010). If Prover9 does not manage to find a proof in time, Mace4 takes over. To find the correct entailment relation between a sentence pair, we provide the theorem prover with the FOL translations of the two sentences and of the relevant lexical entailment relations (the axioms).¹ The FOL representations of the axioms are derived according to the mapping in Table 3.

¹ The speed of Prover9 and Mace4 rapidly decreases as the number of axioms grows, so it is essential to keep the set of constraints considered per derivation as limited as possible. Hence, before computing the relation between two sentences, the collection of axioms is filtered so as to retain the minimal set of formulas that could possibly be used in the proof or refutation of this particular entailment. This is done by dismissing all axioms containing predicates that do not occur in either of the two sentences. As the first term in the FOL representation of a sentence is always a noun and the second term is always a verb, only those axioms are used that relate the noun predicates or the verb predicates of the two sentences to each other; no axioms combining noun and verb terms exist, so this is generally the case. Additionally, not only identical but also equivalent axioms are eliminated: if an axiom is already included, any logically equivalent variant of it is redundant and is not added as well.
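The axiom filtering described in the footnote can be sketched as follows. The representation of axioms as a mapping from formula strings to the predicates they mention is hypothetical, chosen for illustration; the actual pipeline operates on Prover9 input.

```python
def filter_axioms(axioms, s1_preds, s2_preds):
    """Keep only axioms whose predicates all occur in one of the two
    sentences; an axiom mentioning any other predicate cannot take
    part in the proof and would only slow the prover down.
    `axioms` maps a formula string to the set of predicates it uses."""
    relevant = set(s1_preds) | set(s2_preds)
    return {f: preds for f, preds in axioms.items() if preds <= relevant}
```

For instance, when comparing ‘all Romans love ...’ with ‘some Italians like ...’, an axiom relating ‘Germans’ to ‘Europeans’ is dropped before the prover is invoked.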
The grammar of L allows for the expression of approximately 40 million unique sentence pairs. In the default training set, we use slightly fewer than 30,000 of these pairs. The test set contains approximately 5,000 pairs. Sentences occur at most once in the data. A small sample is shown in Figure 2. Appendix A.1 shows the distribution of the seven entailment relations in the train and test sets.
3 Learning models
Our main model is a recurrent network, sketched in Figure 3. It is a so-called ‘Siamese’ network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to that of Bowman et al.’s recursive networks: it consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of the two sentence vectors as input. The number of recurrent cells equals the number of words, so it differs per sentence.
Our set-up resembles the Siamese architecture for learning sentence similarity of Mueller and Thyagarajan (2016) and the LSTM classifier described in Bowman et al. (2015a). In the diagram, the dashed box indicates the location of an arbitrary recurrent unit. We consider SRNs (Elman, 1990), GRUs (Cho et al., 2014) and LSTMs (Hochreiter and Schmidhuber, 1997).
The dimensionalities of the hidden units, word embeddings and comparison layer are 128, 25 and 75, respectively. All recurrent networks have a single hidden layer. Prior to training, all hidden units are initialized as zero vectors. Network parameters are initialized by sampling from a uniform distribution; word embeddings by sampling from a normal distribution. Weights of the recurrent units are drawn from the uniform distribution U(−1/√k, 1/√k) with k the hidden layer size; in our case k = 128, so the lower bound is −1/√128 and the upper bound 1/√128. Non-recurrent parameters, belonging to the linear comparison and classification layers, are initialized according to the same scheme with k the number of input units: the comparison layer according to U(−1/√50, 1/√50), because its input is the concatenation of two 25-dimensional sentence vectors, and the initial classification layer weights according to U(−1/√75, 1/√75), because the comparison layer outputs are 75-dimensional. AdaDelta (Zeiler, 2012) is used as optimizer. No dropout is applied.
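A minimal sketch of this initialization scheme, assuming the fan-in-based uniform bounds described above (the layer shapes follow the stated dimensionalities; the seven output units correspond to the seven entailment relations):

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_init(shape, fan_in):
    """Draw a weight matrix from U(-1/sqrt(k), 1/sqrt(k)),
    where k is the number of input units of the layer."""
    bound = 1.0 / np.sqrt(fan_in)
    return rng.uniform(-bound, bound, size=shape)

# Comparison layer: input is two concatenated 25-dim sentence vectors.
W_cmp = uniform_init((75, 50), fan_in=50)
# Classification layer: input is the 75-dim comparison output,
# output is one score per entailment relation.
W_cls = uniform_init((7, 75), fan_in=75)
```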
We consider three baselines used in earlier work by Bowman et al. (2015b): the recursive (tree-shaped) neural network (tRNN) and the recursive neural tensor network (tRNTN), which process the sentences according to their syntactic structure, and a simple bag-of-words model, implemented as a summing neural network based on an unweighted vector mixture model (sumNN).
4 Results
Training and testing accuracies after 50 training epochs, averaged over five different model runs, are shown in Table 4. All recurrent models outperform the summing baseline. Even the simplest recurrent network, the SRN, achieves higher training and testing accuracy scores than the tree-shaped matrix model. The GRU and LSTM even beat the tensor model. The LSTM obtains slightly lower scores than the GRU, which is unexpected given its more complex design; perhaps the current challenge does not require separate forget and input gates. For more insight into the types of errors made by the best-performing (GRU-based) model, we refer to the confusion matrices in Appendix A.2.
Table 4: Training and testing accuracy scores on the FOL inference task. Mean and standard deviation over five runs.
The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences; they can also outperform the tree-shaped models on this task, even though they use none of the symbolic structure that seemed to explain the success of their recursive predecessors. The recurrent classifiers must have learned to apply their own strategies, which we investigate in the remainder of this paper.
5 Zero-shot, compositional generalization
Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents, and is commonly understood as one of the fundamental aspects of human learning and reasoning (Chomsky, 1957; Montague, 1970). It has often been claimed that neural networks operate on a merely associative basis, lacking the compositional capacities to develop systematicity without an abundance of training data (see e.g. Fodor and Pylyshyn, 1988; Marcus, 1998; Calvo and Symons, 2014). Recurrent models in particular have recently been regarded quite sceptically in this respect, following the negative results established by Lake et al. (2017) and Lake and Baroni (2017). Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data, whereas human learning is robust to such differences thanks to compositionality.
In this section, we report more positive results on compositional reasoning by our Siamese networks. We focus on zero-shot generalization: correct classification of examples of a type that has not been observed before. Provided that atomic constituents and production rules are understood, compositionality does not require observing abundantly many instances of a semantic category. We consider in turn the set-ups required to demonstrate zero-shot generalization to unseen lengths and to sentences composed of novel words.
5.1 Unseen lengths
We test whether our recurrent models are capable of generalizing to unseen lengths. Neural models are often considered incapable of such generalization, allegedly because they are limited to the training space (Marcus, 2003; Kaiser and Sutskever, 2015; Reed and De Freitas, 2015; Evans and Grefenstette, 2018). We want to test whether this is the case for the recurrent models studied in this paper. The language L licenses a heavily constrained set of grammatical configurations, but it does allow sentence length to vary according to the number of included negations. A perfectly compositional model should be able to interpret statements containing any number of negations, on condition that it has seen an instantiation at least once at each position where negation is allowed.
In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table 5.
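The length-based split can be sketched as follows. The helper and its policy of discarding mixed-length pairs are assumptions for illustration; the labels in the usage example are placeholders, not annotations from the actual data set.

```python
def split_by_length(pairs, train_lengths=(5, 7, 8), test_lengths=(6, 9)):
    """Partition (sentence, sentence, label) triples by sentence
    length in words. A pair is assigned to a split only if both
    sentences have a length from that split, so train and test
    lengths never mix; other pairs are discarded."""
    train, test = [], []
    for s1, s2, label in pairs:
        l1, l2 = len(s1.split()), len(s2.split())
        if l1 in train_lengths and l2 in train_lengths:
            train.append((s1, s2, label))
        elif l1 in test_lengths and l2 in test_lengths:
            test.append((s1, s2, label))
    return train, test
```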
All recurrent models obtain (near-)perfect training accuracy scores. What happens on the test set is more interesting: the GRU and LSTM generalize from lengths 5, 7 and 8 to the unseen lengths 6 and 9 very well, while the SRN faces serious difficulties. Training on lengths 5-7 and testing on lengths 8-9, by contrast, yields low test scores for all models. The gates of the GRU and LSTM appear to play a crucial role, because the results show that the SRN does not have this capacity at all.
5.2 Unseen words
In the next experiment, we assess whether our GRU-based model, which performed best in the preceding experiments, is capable of zero-shot generalization to sentences with novel words. The current set-up cannot deal with unknown words, so instead of randomly initializing an embedding matrix that is updated during training, we use pretrained, 50-dimensional GloVe embeddings (Pennington et al., 2014) that are kept constant. Using GloVe embeddings, the GRU model obtains a mean training accuracy of 100.0% and a testing accuracy of 95.9% (averaged over five runs). The best-performing model (with 100.0% training and 97.1% testing accuracy) is used in the following zero-shot experiments.
One of the most basic relations on the level of lexical semantics is synonymy, which holds between words with equivalent meanings. In the language L, a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it. If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word, this is a first piece of evidence for its zero-shot generalization skills. We test this for several pairs of synonymous words. The best-performing GRU is first evaluated on the fragment of the test data containing the original word, and subsequently on that same fragment after replacing the original word with its synonym. The pairs of words, the cosine distances between their GloVe embeddings and the obtained results are listed in Table 6.
For the first three examples in Table 6, substitution decreases testing accuracy by only a few percentage points. Apparently, the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs. This does not prove that the model has distilled essential information about synonymy from the GloVe embeddings: it could also be that the embeddings of the replacement words are geometrically so similar to the originals that the same results arise out of algebraic necessity. However, this suspicion is inconsistent with the result of changing ‘hate’ into ‘detest’. The cosine distance between these words is 0.56, so according to this measure their vectors are more similar than those representing ‘love’ and ‘adore’ (which have a cosine distance of 0.57). Nonetheless, replacing ‘hate’ with ‘detest’ confuses the model, whereas replacing ‘love’ with ‘adore’ only decreases testing accuracy by 4.5 percentage points. This illustrates that the robustness of the GRU in this respect is not a matter of simple vector similarity. In those cases where substitution with synonyms does not confuse the model, it must have recognized a non-trivial property of the new word embedding that licenses particular inferences.
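For reference, the cosine distances reported here are presumably the standard one-minus-cosine-similarity measure between embedding vectors; a minimal sketch:

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance 1 - cos(u, v) between two embedding vectors:
    0 for parallel vectors, 1 for orthogonal, 2 for opposite."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
```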
In our next experiment, we replace a word not with a synonym, but with a word that has the same semantics in the context of the artificial language L. That is, we consider pairs of words that can be substituted for each other without affecting the entailment relation between any pair of sentences in which they feature. We call such terms ‘ontological twins’. Technically, if O is an ontology, then terms v and w in O are ontological twins if and only if, for every lexical entailment relation R and every other term u in O, R(v, u) holds exactly when R(w, u) holds. This trivially applies to self-identical terms and synonyms, but in the strictly defined hierarchy of L it is also the case for pairs of terms that maintain the same lexical entailment relations to all other terms in the taxonomy.
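Under the set-theoretic view of the taxonomy, this definition can be checked mechanically. The sketch below represents each term by its extension in a toy universe; the helper `relation` follows the set-theoretic definitions of Section 2, and the toy taxonomy is an illustrative assumption.

```python
def relation(x, y, universe):
    """Lexical entailment relation between the extensions x and y."""
    disjoint, exhaustive = not (x & y), (x | y) == universe
    if x == y: return "="
    if x < y: return "<"
    if x > y: return ">"
    if disjoint and exhaustive: return "^"
    if disjoint: return "|"
    if exhaustive: return "v"
    return "#"

def ontological_twins(v, w, taxonomy, universe):
    """True iff v and w stand in the same lexical entailment relation
    to every other term of the taxonomy, so that swapping them can
    never change a sentence-level entailment label."""
    return all(relation(taxonomy[v], taxonomy[u], universe)
               == relation(taxonomy[w], taxonomy[u], universe)
               for u in taxonomy if u not in (v, w))
```

In a toy taxonomy where ‘Romans’ and ‘Venetians’ are disjoint subsets of ‘Italians’, the two demonyms come out as twins, while ‘Romans’ and ‘Italians’ do not.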
Examples of ontological twins in the taxonomy of nouns are ‘Romans’ and ‘Venetians’. This can easily be verified in the Venn diagram of Figure 1(a) by replacing ‘Romans’ with ‘Venetians’ and observing that the same hierarchy applies. The same holds for e.g. ‘Germans’ and ‘Polish’, or for ‘children’ and ‘students’. For several such word-twin pairs, the GRU is evaluated on the fragment of the test data containing the original word, and on that same fragment after replacing the original word with its ontological twin. Results are shown in Table 7.
The examples in Table 7 suggest that the best-performing GRU is largely robust to substitution with ontological twins. Replacing ‘Romans’ with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data. As before, there appears to be no correlation with vector similarity, because the cosine distances between the different twin pairs vary much more than the corresponding accuracy scores. ‘Germans’ can be changed into ‘Polish’ without significant deterioration, but substitution with ‘Dutch’ greatly decreases testing accuracy. The situation is even worse for ‘Spanish’. Again, cosine similarity provides no explanation: ‘Spanish’ is still closer to ‘Germans’ than ‘Neapolitans’ is to ‘Romans’. Rather, the accuracy appears to be negatively correlated with the geographical distance between the national demonyms. After replacing ‘children’ with ‘students’, ‘women’ or ‘linguists’, testing scores are still decent.
So far, we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task. The examples have illustrated that the model tends to do this quite well. In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned. Formally, the replacement can be regarded as a function f that maps words to substitutes, where not all items have to be replaced. For an ontology O, the function f must be such that for any terms v and w in O and any lexical entailment relation R, R(v, w) holds exactly when R(f(v), f(w)) holds. The result of applying f can be called an ‘alternative hierarchy’.
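This structure-preservation condition is straightforward to verify when the pairwise relations are known. In the sketch below, the relation tables and the replacement map are illustrative assumptions; the function checks only the condition stated above.

```python
def is_alternative_hierarchy(f, relations, new_relations):
    """`relations` maps an (old term, old term) pair to its lexical
    entailment relation, `new_relations` does the same for the new
    vocabulary, and f maps old terms to their substitutes. The
    replacement is an alternative hierarchy iff every pair of terms
    keeps its relation after substitution."""
    return all(new_relations[(f[v], f[w])] == rel
               for (v, w), rel in relations.items())
```

For example, mapping ‘Romans’ to ‘Parisians’ and ‘Italians’ to ‘French’ preserves the subset relation between the city and the country demonym, so the check succeeds.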
An example of an alternative hierarchy is the result of the replacement function that maps ‘Romans’ to ‘Parisians’ and ‘Italians’ to ‘French’. Performing this substitution in the Venn diagram of Figure 1(a) shows that the taxonomy remains structurally intact. The best-performing GRU is evaluated on the fragment of the test data containing ‘Romans’ or ‘Italians’, and subsequently on the same fragment after implementing the replacement and providing the model with the GloVe embeddings of the unseen words. The replacement function is then incrementally extended, up to a final replacement that substitutes all nouns in the ontology. The results are shown in Table 8.
The results are positive: the GRU obtains 86.7% accuracy even after the final replacement, which substitutes the entire ontology so that no previously encountered nouns are present in the test set anymore, although the sentences remain thematically somewhat similar to the originals. Testing scores are above 87% for the intermediate substitutions. This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higher-level entailment relations. Even if all nouns are replaced by alternatives whose embeddings have not been witnessed or optimized beforehand, the model obtains a high testing accuracy. This establishes obvious compositional capacities, because familiarity with structure, plus information about lexical semantics in the form of word embeddings, is enough for the model to accommodate configurations of unseen words.
What happens when we consider ontologies that have the same structure, but are thematically very different from the original? Three such alternative hierarchies are considered, each of which relocates the noun ontology to a totally different domain of discourse, as indicated by their names. Table 9 specifies the functions and their effect.
Testing accuracy decreases drastically, which indicates that the model is sensitive to the change of topic. Variation among the scores obtained after the three transformations is limited. Although they are much lower than before, the scores are still far above chance level for a seven-class problem. This suggests that the model is not at a complete loss when faced with the alternative noun hierarchies. Possibly, including a few relevant instances during training could already improve the results.
6 Discussion & Conclusions
We established that our Siamese recurrent networks (with SRN, GRU or LSTM cells) are able to recognize logical entailment relations without any a priori cues about the syntax or semantics of the input expressions. Indeed, some of the recurrent set-ups even outperform tree-shaped networks, whose topology is specifically designed to deal with such tasks. This indicates that recurrent networks can develop representations that adequately process a formal language with a nontrivial hierarchical structure. The formal language we defined does not exploit the full expressive power of first-order predicate logic; nevertheless, by using standard FOL, a standard theorem prover, and a set-up where the training set covers only a tiny fraction of the space of possible logical expressions, our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks.
The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization. In particular, the results of the zero-shot generalization experiments with novel lengths and novel words cannot be explained by a ‘memorize-and-interpolate’ account, i.e. an account of the working of deep neural networks on which all they do is store enormous training sets and generalize only locally. These results are relevant pieces of evidence in the decades-long debate on whether or not connectionist networks are fundamentally able to learn compositional solutions. Although we do not have the illusion that our work will put this debate to an end, we hope that it will help bring deep learning enthusiasts and skeptics a small step closer together.
- Allamanis et al. (2016) Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, and Charles Sutton. 2016. Learning continuous semantic representations of symbolic expressions. arXiv preprint arXiv:1611.01423.
- Bankova et al. (2016) Desislava Bankova, Bob Coecke, Martha Lewis, and Daniel Marsden. 2016. Graded entailment for compositional distributional semantics. arXiv preprint arXiv:1601.04908.
- Baroni et al. (2012) Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23–32. Association for Computational Linguistics.
- Bowman et al. (2015a) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015a. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
- Bowman et al. (2015b) Samuel R Bowman, Christopher Potts, and Christopher D Manning. 2015b. Recursive neural networks can learn logical semantics. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), page 12–21. Association for Computational Linguistics.
- Calvo and Symons (2014) Paco Calvo and John Symons. 2014. The Architecture of Cognition: Rethinking Fodor and Pylyshyn’s Systematicity Challenge. MIT Press.
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Chomsky (1957) Noam Chomsky. 1957. Syntactic structures. Mouton, Berlin.
- Eliasmith (2013) Chris Eliasmith. 2013. How to build a brain: A neural architecture for biological cognition. Oxford University Press.
- Elman (1990) Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211.
- Evans and Grefenstette (2018) Richard Evans and Edward Grefenstette. 2018. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1–64.
- Evans et al. (2018) Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. 2018. Can neural networks understand logical entailment? arXiv preprint arXiv:1802.08535.
- Fodor and Pylyshyn (1988) Jerry A Fodor and Zenon W Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
- Kaiser and Sutskever (2015) Łukasz Kaiser and Ilya Sutskever. 2015. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228.
- Krueger et al. (2017) David Krueger, Nicolas Ballas, Stanislaw Jastrzebski, Devansh Arpit, Maxinder S Kanwal, Tegan Maharaj, Emmanuel Bengio, Asja Fischer, and Aaron Courville. 2017. Deep nets don’t learn via memorization.
- Lake and Baroni (2017) Brenden M Lake and Marco Baroni. 2017. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. arXiv preprint arXiv:1711.00350.
- Lake et al. (2017) Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. 2017. Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
- MacCartney and Manning (2009) Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computational semantics, pages 140–156. Association for Computational Linguistics.
- Marcus (2018) Gary Marcus. 2018. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631.
- Marcus (1998) Gary F Marcus. 1998. Rethinking eliminative connectionism. Cognitive psychology, 37(3):243–282.
- Marcus (2003) Gary F Marcus. 2003. The algebraic mind: Integrating connectionism and cognitive science. MIT press.
- Marelli et al. (2014) Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In SemEval@ COLING, pages 1–8.
- McCune (2010) William McCune. 2010. Prover9 and Mace4.
- Merolla et al. (2014) Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. 2014. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673.
- Montague (1970) Richard Montague. 1970. Universal grammar. Theoria, 36(3):373–398.
- Mueller and Thyagarajan (2016) Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2786–2792.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
- Pinker and Prince (1988) Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2):73–193.
- Reed and De Freitas (2015) Scott Reed and Nando De Freitas. 2015. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279.
- Rocktäschel et al. (2014) Tim Rocktäschel, Matko Bošnjak, Sameer Singh, and Sebastian Riedel. 2014. Low-dimensional embeddings of logic. In Proceedings of the ACL 2014 Workshop on Semantic Parsing, pages 45–49.
- Rocktäschel et al. (2015) Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiskỳ, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664.
- Sadrzadeh et al. (2018) Mehrnoosh Sadrzadeh, Dimitri Kartsaklis, and Esma Balkır. 2018. Sentence entailment in compositional distributional semantics. Annals of Mathematics and Artificial Intelligence, pages 1–30.
- Serafini and Garcez (2016) Luciano Serafini and Artur d’Avila Garcez. 2016. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. arXiv preprint arXiv:1606.04422.
- Shen et al. (2018) Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2018. Ordered neurons: Integrating tree structures into recurrent neural networks.
- Socher et al. (2012) Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics.
- Zeiler (2012) Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. CoRR.
- Zhang et al. (2016) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2016. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.