The natural language inference task, also known as recognizing textual entailment (RTE), is to infer the relation between a pair of sentences (e.g., premise and hypothesis).
This task is challenging, since it requires a model to fully understand sentence meaning, i.e., lexical and compositional semantics. For instance, the following example from the MultiNLI dataset Williams et al. (2017) illustrates the need for a form of multi-step synthesis of information between the premise: “If you need this book, it is probably too late unless you are about to take an SAT or GRE.” and the hypothesis: “It’s never too late, unless you’re about to take a test.” To predict the correct relation between these two sentences, the model needs to first infer that “SAT or GRE” is a “test”, and then pick the correct relation, e.g., contradiction.
This kind of iterative process can be viewed as a form of multi-step inference. To the best of our knowledge, all prior work on NLI uses single-step inference. Inspired by the recent success of multi-step inference on Machine Reading Comprehension (MRC) Hill et al. (2016); Dhingra et al. (2016); Sordoni et al. (2016); Kumar et al. (2015); Liu et al. (2018); Shen et al. (2017); Xu et al. (2018), we explore multi-step inference strategies for NLI. Rather than directly predicting the result given the inputs, the model maintains a state and iteratively refines its predictions. We show that our model outperforms single-step inference and further achieves the state of the art on the SNLI, MultiNLI, SciTail, and Quora Question Pairs datasets.
2 Multi-step inference with SAN
The natural language inference task as defined here involves a premise P of m words and a hypothesis H of n words, and aims to find a logical relationship R between P and H, where R is one of the labels in a closed set: entailment, neutral and contradiction. The goal is to learn a model f(P, H) → R.
In a single-step inference architecture, the model directly predicts R given P and H as input. In our multi-step inference architecture, we additionally incorporate a recurrent state s_t; the model makes multiple passes through P and H, iteratively refining the state s_t, before finally generating the output at step T, where T is an a priori chosen limit on the number of inference steps.
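The contrast with single-step prediction can be sketched as follows. This is a toy numpy illustration of the iterative refinement loop only, not the paper's implementation: the state update is a simplified tanh cell standing in for a GRU, and the attention and classifier weights are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d, m, T, n_labels = 8, 5, 5, 3
M_p = rng.standard_normal((m, d))          # toy premise "memory" (m tokens as rows)
W = rng.standard_normal((d, d)) * 0.1      # shared attention/state weights (stand-in)
W_cls = rng.standard_normal((n_labels, 2 * d)) * 0.1  # per-step classifier (stand-in)

s = M_p.mean(axis=0)                       # initial state (stand-in for the summary)
step_probs = []
for t in range(T):
    beta = softmax(M_p @ W @ s)            # attend over the memory with the state
    x = beta @ M_p                         # attended input x_t
    s = np.tanh(W @ s + x)                 # refine the state (simplified GRU)
    step_probs.append(softmax(W_cls @ np.concatenate([s, x])))

final = np.mean(step_probs, axis=0)        # average the T per-step predictions
```

A single-step model would emit only the first element of `step_probs`; the multi-step model instead averages all T refinements.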
Compared to the original SAN for MRC, the SAN for NLI simplifies the bottom layers and omits the self-attention layers (since the premise and hypothesis are short). We also modify the answer module from predicting a text span to predicting an NLI classification label. Overall, the model contains four different layers: 1) the lexicon encoding layer computes word representations; 2) the contextual encoding layer modifies these representations in context; 3) the memory generation layer gathers all information from the premise and hypothesis and forms a “working memory” for the final answer module; 4) the final answer module, a type of multi-step network, predicts the relation between the premise and hypothesis.
Lexicon Encoding Layer. First we concatenate word embeddings and character embeddings to handle out-of-vocabulary words (we omit POS tagging and named entity features for simplicity). Following Liu et al. (2018), we use two separate two-layer position-wise feedforward networks Vaswani et al. (2017) to obtain the final lexicon embeddings, E^p ∈ R^{d×m} and E^h ∈ R^{d×n}, for the tokens in P and H, respectively. Here, d is the hidden size.
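As a concrete sketch of this layer, the numpy snippet below (toy sizes and random weights; the char-CNN that would produce the character embeddings is not shown) concatenates word and character embeddings and applies a two-layer position-wise feedforward network per token:

```python
import numpy as np

rng = np.random.default_rng(1)

def position_wise_ffn(x, W1, b1, W2, b2):
    # two-layer feed-forward network applied independently at each position
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

m, d = 6, 300                                 # m tokens, hidden size d
word_emb = rng.standard_normal((m, 300))      # e.g. GloVe vectors
char_emb = rng.standard_normal((m, 300))      # e.g. from a char-CNN (not shown)
x = np.concatenate([word_emb, char_emb], axis=-1)   # (m, 600) lexicon input

W1 = rng.standard_normal((600, d)) * 0.05
W2 = rng.standard_normal((d, d)) * 0.05
E_p = position_wise_ffn(x, W1, np.zeros(d), W2, np.zeros(d))  # (m, d) per-token output
```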
Contextual Encoding Layer. Two stacked BiLSTM layers are used on top of the lexicon encoding layer to encode the context of each word in both P and H. Because the layers are bidirectional, their outputs have double the hidden size; we use a maxout layer Goodfellow et al. (2013) on the BiLSTM outputs to shrink them back to the original hidden size. By concatenating the outputs of the two BiLSTM layers, we obtain C^p ∈ R^{2d×m} and C^h ∈ R^{2d×n} as representations of P and H, respectively.
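The maxout shrinking step is easy to make concrete: given the concatenated forward/backward BiLSTM output of width 2d, take the element-wise max of the two halves. A minimal sketch:

```python
import numpy as np

def maxout(h, k=2):
    """h: (seq_len, k*d) activations; returns (seq_len, d) by taking the
    element-wise max over the k concatenated pieces (here: fwd/bwd halves)."""
    seq_len, kd = h.shape
    d = kd // k
    return h.reshape(seq_len, k, d).max(axis=1)

h = np.array([[1.0, 2.0, 3.0, 4.0]])   # forward half [1, 2], backward half [3, 4]
out = maxout(h)                         # -> [[3., 4.]]
```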
Memory Layer. We construct our working memory via an attention mechanism. First, dot-product attention is adopted, as in Vaswani et al. (2017), to measure the similarity between the tokens in P and H. Instead of using a scalar to normalize the scores as in Vaswani et al. (2017), we use a layer projection to transform the contextual information of both P and H:

A = dropout(f_attn(Ĉ^p, Ĉ^h)) ∈ R^{m×n},

where A is an attention matrix, and dropout is applied for smoothing. Note that Ĉ^p and Ĉ^h are transformed from C^p and C^h by a one-layer neural network, respectively. Next, we gather all the information on the premise and hypothesis by: U^p = [C^p; C^h A'] and U^h = [C^h; C^p A]. The semicolon ; indicates vector/matrix concatenation; A' is the transpose of A. Last, the working memory of the premise and hypothesis is generated by applying a BiLSTM to all the information gathered: M^p = BiLSTM([U^p; C^p]) and M^h = BiLSTM([U^h; C^h]).
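A minimal numpy sketch of this attention-and-gather step (random weights; tokens stored as rows rather than columns, with dropout and the final BiLSTM omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

m, n, d2, k = 5, 7, 16, 8           # premise/hyp lengths, context size 2d, projection size
C_p = rng.standard_normal((m, d2))  # contextual premise representation (tokens as rows)
C_h = rng.standard_normal((n, d2))  # contextual hypothesis representation
W = rng.standard_normal((d2, k)) * 0.1

# one-layer ReLU projection of both sides, then dot-product similarity
score = np.maximum(C_p @ W, 0) @ np.maximum(C_h @ W, 0).T   # (m, n)
A = softmax(score, axis=1)          # premise-to-hypothesis attention weights
B = softmax(score.T, axis=1)        # hypothesis-to-premise attention weights

# gather cross-sentence information (concatenation along the feature axis)
U_p = np.concatenate([C_p, A @ C_h], axis=-1)   # (m, 2*d2)
U_h = np.concatenate([C_h, B @ C_p], axis=-1)   # (n, 2*d2)
# the working memories are then BiLSTMs over [U; C] (not shown)
```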
Answer module. Formally, our answer module computes over T memory steps and outputs the relation label. At the beginning, the initial state s_0 is a summary of the hypothesis memory M^h: s_0 = Σ_j α_j M^h_j, where α_j = exp(w · M^h_j) / Σ_{j'} exp(w · M^h_{j'}). At time step t ∈ {1, 2, ..., T−1}, the state is defined by s_t = GRU(s_{t−1}, x_t). Here, x_t is computed from the previous state s_{t−1} and the premise memory M^p: x_t = Σ_j β_j M^p_j, where β_j = softmax(s_{t−1} W M^p)_j. Following Mou et al. (2015), a one-layer classifier is used to determine the relation at each step t: P^r_t = softmax(Θ [s_t; x_t; |s_t − x_t|; s_t · x_t]).
At last, we utilize all of the T outputs by averaging the scores: P^r = avg([P^r_0, P^r_1, ..., P^r_{T−1}]). Here P^r is the final probability distribution over all the relations. During training, we apply stochastic prediction dropout before the above averaging operation. During decoding, we average all outputs to improve robustness.
This stochastic prediction dropout is similar in motivation to the dropout introduced by Srivastava et al. (2014). The difference is that theirs is dropout at the intermediate node-level, whereas ours is dropout at the final layer-level. Dropout at the node-level prevents correlation between features. Dropout at the final layer level, where randomness is introduced to the averaging of predictions, prevents our model from relying exclusively on a particular step to generate correct output.
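Stochastic prediction dropout can be sketched as masking whole step predictions before averaging. A toy numpy version (the function name and the keep-at-least-one-step guard are ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)

def average_predictions(step_probs, drop_rate=0.0, training=False, rng=rng):
    """Average T per-step distributions. During training, entire steps are
    randomly dropped before averaging; at decoding time all steps are kept."""
    step_probs = np.asarray(step_probs)            # (T, num_classes)
    if training and drop_rate > 0:
        keep = rng.random(len(step_probs)) >= drop_rate
        if not keep.any():                         # always keep at least one step
            keep[rng.integers(len(step_probs))] = True
        step_probs = step_probs[keep]
    return step_probs.mean(axis=0)

T, C = 5, 3
probs = rng.dirichlet(np.ones(C), size=T)          # toy per-step distributions
p_train = average_predictions(probs, drop_rate=0.2, training=True)
p_eval = average_predictions(probs)                # decoding: average all T steps
```

Because each step emits a proper distribution, the average (over any kept subset) remains a proper distribution, so no renormalization is needed.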
3 Experiments

3.1 Datasets

Here, we evaluate our model in terms of accuracy on four benchmark datasets. SNLI Bowman et al. (2015) contains 570k human-annotated sentence pairs, in which the premises are drawn from captions of the Flickr30k corpus and the hypotheses are manually annotated. MultiNLI Williams et al. (2017) contains 433k sentence pairs, collected similarly to SNLI; however, its premises are collected from a broad range of genres of American English. The test and development sets are further divided into in-domain (matched) and cross-domain (mismatched) sets. The Quora Question Pairs dataset Wang et al. (2017) is proposed for paraphrase identification. It contains 400k question pairs, and each pair is annotated with a binary value indicating whether the two questions are paraphrases of each other. The SciTail dataset Khot et al. (2018) is created from a science question answering (SciQ) dataset. It contains 1,834 questions with 10,101 entailment examples and 16,925 neutral examples. Note that it only contains two types of labels, so it is a binary classification task.
3.2 Implementation details
The spaCy tool (https://spacy.io) is used to tokenize all the datasets, and PyTorch is used to implement our models. We fix the word embeddings to the 300-dimensional GloVe word vectors Pennington et al. (2014). For the character encoding, we use a concatenation of multi-filter Convolutional Neural Nets (the maximum length of a word is limited to 20 characters, and the character embedding size is set to 20), so the lexicon embeddings are 600-dimensional. The embedding for out-of-vocabulary words is zeroed. The hidden size of the LSTMs in the contextual encoding layer and the memory generation layer is set to 128; thus, the input size of the output layer is 1024 (128 * 2 * 4), as in Eq. 2. The projection size in the attention layer is set to 256. To speed up training, we use weight normalization Salimans and Kingma (2016). The dropout rate is 0.2, and the dropout mask is fixed through time steps Gal and Ghahramani (2016) in the LSTMs. The mini-batch size is set to 32. Our optimizer is Adamax Kingma and Ba (2014); its learning rate is initialized to 0.002 and halved after every 10 epochs.
One main question we would like to address is whether multi-step inference helps on NLI. We fix the lower layers and only compare different architectures for the output layer:
SAN: The multi-step inference model. We use 5 steps, with prediction dropout applied, in all experiments.
Table 1 shows that our multi-step model consistently outperforms the single-step model on the dev sets of all four datasets in terms of accuracy. For example, on the SciTail dataset, SAN outperforms the single-step model by +3.89 (89.35 vs 85.46).
We compare our results with the state of the art in Table 2. Our model achieves the best performance on the SciTail and Quora Question tasks. For instance, SAN obtains 89.4 (vs 89.1) and 88.4 (vs 88.3) on the Quora Question and SciTail test sets, respectively, setting a new state of the art. On the SNLI and MultiNLI datasets, ESIM+ELMo Peters et al. (2018), GPT Radford et al. (2018) and BERT Devlin et al. (2018) use a large amount of external knowledge or large-scale pretrained contextual embeddings. However, SAN is still competitive with these models, and on the SciTail dataset SAN even outperforms GPT. Due to space limitations, we only list the two top models (see the leaderboards for more information: https://www.kaggle.com/c/multinli-matched-open-evaluation, https://www.kaggle.com/c/multinli-mismatched-open-evaluation, https://nlp.stanford.edu/projects/snli, http://data.allenai.org/scitail/leaderboard/).
We further utilize BERT as a feature extractor (we run BERT, the base model, to extract embeddings of both premise and hypothesis and then feed them to the answer modules for a fair comparison) and use the SAN answer module on top of it. Compared with the single-step baseline, the proposed model obtains a +2.8 improvement on the SciTail test set (94.0 vs 91.2) and a +2.1 improvement on the SciTail dev set (96.1 vs 93.9). This shows the generality of the proposed model, which can be easily adapted to other models (due to the high computational cost and space limitations, we omit results using BERT on the SNLI/MultiNLI/Quora Question datasets).
Table 2: Comparison with the state of the art (test set accuracy %).

|MultiNLI Dataset (Accuracy%: matched / mismatched)|
|DIIN Gong et al. (2017)||78.8||77.8|
|BERT Devlin et al. (2018)||86.7||85.9|

|SNLI Dataset (Accuracy%)|
|GPT Radford et al. (2018)||89.9|

|Quora Question Dataset (Accuracy%)|
|Tomar et al. (2017)||88.4|
|Gong et al. (2017)||89.1|

|SciTail Dataset (Accuracy%)|
|Khot et al. (2018)||77.3|
|GPT Radford et al. (2018)||88.3|
|Word overlap||63%||86%||76%||92%|
Analysis: How many steps does it need? We search over the number of steps T from 1 to 10. We observe that results improve as T increases (e.g., 86.7); the model achieves its best result (89.4) on the SciTail dev set at T = 5, after which performance begins to degrade. Thus, we set T = 5 in all our experiments.
We also looked into the internals of our answer module by dumping the predictions of each step (the maximum number of steps is set to 5). Here is an example (its ID is 144185n) with premise “And he said, What’s going on?” and hypothesis “I told him to mind his own business.” from the MultiNLI dev set. Our model produces 5 labels in total (contradiction, neutral, neutral, neutral, and neutral), one at each step, and makes the final decision by voting: neutral. Surprisingly, we found that the human annotators also gave 5 differing labels: contradiction, neutral, neutral, neutral, neutral. This shows the robustness of our model, which benefits from collective wisdom across steps.
Finally, we analyze our model on the annotated subset (https://www.nyu.edu/projects/bowman/multinli/multinli_1.0_annotations.zip) of the development set of MultiNLI. It contains 1,000 examples, each tagged by the categories shown in Table 3. Our model outperforms the best system in RepEval 2017 Chen et al. (2017) in most cases, except on the “Conditional” and “Tense Difference” categories. We also find that SAN works extremely well on the “Active/Passive” and “Paraphrase” categories. Compared with Chen’s model, the biggest improvement of SAN (50% vs 77% and 58% vs 85% on the Matched and Mismatched settings, respectively) is on the “Antonym” category. In particular, on the most challenging “Long Sentence” and “Quantity/Time” categories, SAN’s results are substantially better than those of previous systems. This demonstrates the robustness of multi-step inference.
We explored the use of multi-step inference in natural language inference by proposing a stochastic answer network (SAN). Rather than directly predicting the result (e.g., a relation such as entailment or not) given the input premise P and hypothesis H, SAN maintains a state s_t, which it iteratively refines over multiple passes on P and H in order to make a prediction. Our state-of-the-art results on four benchmarks (SNLI, MultiNLI, SciTail, Quora Question Pairs) show the effectiveness of this multi-step inference architecture. In the future, we would like to incorporate pretrained contextual embeddings, e.g., ELMo Peters et al. (2018) and GPT Radford et al. (2018), into our model, as well as multi-task learning Liu et al. (2019).
- Bowman et al. (2015) Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642.
- Chen et al. (2017) Q. Chen, X. Zhu, Z.-H. Ling, D. Inkpen, and S. Wei. 2017. Natural Language Inference with External Knowledge. ArXiv e-prints.
- Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- Dhingra et al. (2016) Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549.
- Gal and Ghahramani (2016) Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1019–1027.
- Gong et al. (2017) Y. Gong, H. Luo, and J. Zhang. 2017. Natural Language Inference over Interaction Space. ArXiv e-prints.
- Goodfellow et al. (2013) Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2013. Maxout networks. arXiv preprint arXiv:1302.4389.
- Hill et al. (2016) Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations. ICLR.
- Khot et al. (2018) Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI.
- Kingma and Ba (2014) Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Kumar et al. (2015) Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285.
- Liu et al. (2019) Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504.
- Liu et al. (2018) Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for machine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
- Mou et al. (2015) Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2015. Natural language inference by tree-based convolution and heuristic matching. arXiv preprint arXiv:1512.08422.
- Nangia et al. (2017) N. Nangia, A. Williams, A. Lazaridou, and S. R. Bowman. 2017. The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference with Sentence Representations. ArXiv e-prints.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
- Peters et al. (2018) Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
- Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
- Salimans and Kingma (2016) Tim Salimans and Diederik P Kingma. 2016. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901–909.
- Shen et al. (2017) Yelong Shen, Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2017. An empirical analysis of multiple-turn reasoning strategies in reading comprehension tasks. arXiv preprint arXiv:1711.03230.
- Sordoni et al. (2016) Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958.
- Tomar et al. (2017) Gaurav Singh Tomar, Thyago Duque, Oscar Täckström, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural paraphrase identification of questions with noisy pretraining. arXiv preprint arXiv:1704.04565.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
- Wang et al. (2017) Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814.
- Williams et al. (2017) A. Williams, N. Nangia, and S. R. Bowman. 2017. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. ArXiv e-prints.
- Xu et al. (2018) Yichong Xu, Xiaodong Liu, Yelong Shen, Jingjing Liu, and Jianfeng Gao. 2018. Multi-task learning for machine reading comprehension. arXiv preprint arXiv:1809.06963.