Natural Language Inference (NLI) is an important task for natural language understanding MacCartney and Manning (2009). It involves discerning whether a natural language sentence can reasonably be inferred from an originating sentence. To this end, several datasets have been collected to evaluate a system’s ability to detect such relationships between sentences Marelli et al. (2014); Young et al. (2014); Bowman et al. (2015); Williams et al. (2017). These datasets evaluate models on the task of Recognizing Textual Entailment (RTE). Bowman et al. (2015) introduced the Stanford Natural Language Inference (SNLI) dataset, much larger than its predecessors, boasting 570K examples crowdsourced under specific constraints. Since its introduction, there have been numerous proposals for models to perform this task Chen et al. (2017); Gong et al. (2017). Later, a dataset for RTE over a broader set of domains, MultiNLI, was introduced in Williams et al. (2017).
Recently, though, biases stemming from human predispositions when generating questions for images were found in the Visual Question Answering (VQA) dataset Antol et al. (2015). For example, one can attain 68% accuracy by answering “yes” to all binary questions in VQA Zhang et al. (2016). This is not only a problem during evaluation: it also results in statistical learning algorithms picking up superficial correlations in the training set, if such biases exist there as well.
Do the SNLI and MultiNLI datasets contain the same types of human biases? If they do, do current state-of-the-art models for RTE rely too heavily on them, and are there ways to modify the datasets to correct for this? In this paper, we set out to analyse SNLI and MultiNLI, specifically looking for signs of similar biases introduced through the data collection mechanism. We also propose a simple heuristic to correct for correlations in superficial aspects of the data, hoping to stir discussion and inspire future work in this direction.
2 Related Work
In the SNLI dataset Bowman et al. (2015), Amazon Mechanical Turk was used to crowdsource data collection. In each task, a worker was presented with a premise and asked to write three hypotheses: a contradictory, an entailing and a neutral sentence. The premises were obtained from the Flickr30k corpus Young et al. (2014), which contains 160K captions. Additionally, a validation step ensured that four other workers agreed that each written sentence corresponded to its label. Similarly, the VQA dataset Antol et al. (2015) also crowdsourced questions via Amazon Mechanical Turk: workers were asked to provide questions about an image that they believed a “smart robot” would have trouble answering. However, Zhang et al. (2016) revealed problems with the VQA dataset related to biases in the questions, including, as discussed in the introduction, a bias toward affirmative answers to yes/no questions. Zhang et al. (2016) address the affirmation bias by using crowdsourced clipart to generate a dataset where every question has two complementary scenes with opposite answers, effectively “debiasing” the dataset. Goyal et al. (2017) have a similar goal, but instead of generating synthetic images, they identify for each question another real image that results in a different answer, again relying on additional crowdsourcing. Another way to sidestep the problem of biased training and test sets is to incorporate debiasing directly into the model; for example, Agrawal et al. (2017) adapted the model architecture explicitly to avoid learning the data bias.
Gururangan et al. (2018) and Poliak et al. (2018) independently discovered such biases in the dataset. Gururangan et al. (2018) categorized the test set into different levels of difficulty to help evaluate model performance, and Poliak et al. (2018) emphasized that statistical irregularities in the hypothesis alone allow a model to perform NLI without actual pairwise reasoning. In our work, we reproduce the hypothesis-only results on SNLI, and also analyse the dependence on the hypothesis of a model trained for the RTE task. We also perform a bigram analysis on the training and test sets, and propose a simple way to prune the training set based on the bigram distribution.
3.1 Classification on Hypothesis Only
In an effort to probe the bias within SNLI and MultiNLI, we attempt to train a textual entailment classifier to predict the contradictory, entailing and neutral labels from only the hypothesis. Intuitively, this should result in almost equal probabilities for each class (assuming balanced classes): without a premise for comparison, above-chance performance should not be possible. However, a simple RNN classifier (which we refer to as the hypothesis-only model) achieves 64% accuracy on the test set, nearly two times higher than chance. (The same test was not carried out for the premise because, by construction, each premise is paired with an approximately balanced triplet of labels, so there should be little or no bias of this type for the premise.) Poliak et al. (2018) further investigate this issue with a more comprehensive study over a wider range of corpora. This suggests that correlations exist in the training set that can be exploited at test time. We discuss the implications of this further in Section 5.
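To make the protocol concrete, the following is a minimal sketch of a hypothesis-only classifier. It is an illustration only: our model is an RNN, whereas this sketch stands in a toy unigram Naive Bayes, and the training examples below are invented for the demonstration.

```python
from collections import Counter, defaultdict
import math

# Toy illustration of the hypothesis-only protocol: the classifier sees
# only the hypothesis string, never the premise. The examples are invented.
train = [
    ("nobody is singing", "contradiction"),
    ("nobody is sleeping", "contradiction"),
    ("a tall human standing", "neutral"),
    ("a sad human standing", "neutral"),
    ("some humans walking", "entailment"),
    ("some humans eating", "entailment"),
]

def fit(data, alpha=1.0):
    """Per-class unigram counts, to be used with Laplace smoothing."""
    counts = defaultdict(Counter)
    vocab = set()
    for hyp, label in data:
        toks = hyp.split()
        counts[label].update(toks)
        vocab.update(toks)
    return counts, vocab, alpha

def predict(model, hyp):
    """Pick the class with the highest smoothed unigram log-likelihood."""
    counts, vocab, alpha = model
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values()) + alpha * len(vocab)
        lp = sum(math.log((c[t] + alpha) / total) for t in hyp.split())
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
```

Any accuracy of such a premise-blind model above the 1/3 chance level can only come from correlations between the hypothesis and its label.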
MultiNLI contains multiple genres of data (Fiction, Telephone, Travel, etc.), and its development set is split in two: the matched development set consists of examples from 5 genres that are also seen in the training set, while the mismatched development set contains examples from unseen genres. Running the same experiment on MultiNLI, the same hypothesis-only classifier achieves 51% accuracy on the mismatched set. This may be because the MultiNLI dataset has fewer superficial correlations that the classifier is able to exploit.
3.2 Testing Hypothesis Dependence for NLI Models
Since one of the motivations for the NLI task was the learning of sentence representations, we also trained an LSTM sentence-embedding encoder. The idea was to compare the performance of a model that uses a fixed-length sentence embedding against one that models interactions between the hidden states of an RNN (ESIM and DIIN fall into this category). Because sentence-embedding models do not force ‘interaction’ between the two inputs, we believe they may be more prone to learning these superficial correlations.
The experiment tests sentence-embedding models for their reliance on the hypothesis for classification. During testing, we shuffle the premises so that they do not correspond to the right hypotheses. The sentence-embedding models that we trained achieved 70% accuracy when trained on the full dataset, while under the shuffled-premise test they achieved an accuracy of 50%. In comparison, the ESIM model achieved 40.5% accuracy in this setting. This suggests that the models still use some of the correlations found in the hypothesis; otherwise this experiment should result in a 33% accuracy. The results hint that a sentence-embedding model has a stronger reliance on the hypothesis and, therefore, on the biases in the dataset.
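The shuffled-premise test can be sketched as follows; `predict_fn` stands in for any trained model, and the data layout is our assumption. Rotating the premise list by one position guarantees that no example keeps its original premise:

```python
def shuffled_premise_eval(pairs, predict_fn):
    """Evaluate predict_fn(premise, hypothesis) after breaking the pairing:
    each hypothesis is matched with the premise one position away, so no
    example keeps its original premise (for more than one example)."""
    premises = [p for p, _, _ in pairs]
    rotated = premises[1:] + premises[:1]  # guaranteed mismatch for n > 1
    correct = 0
    for (_, hyp, label), prem in zip(pairs, rotated):
        if predict_fn(prem, hyp) == label:
            correct += 1
    return correct / len(pairs)
```

A model that truly needs the premise should fall to roughly chance (33%) under this test; the degree to which accuracy stays above chance measures how much the model leans on the hypothesis alone.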
We analyze the most informative bigrams in the SNLI training set. Specifically, we count the bigrams in each class and, for each bigram that occurs more than a threshold number of times, compute its distribution over classes, applying Laplace smoothing to the counts before normalizing by the totals. We then rank the bigrams in order of increasing entropy. The distributions with the least entropy are shown in Figure 1 for SNLI and Figure 2 for MultiNLI. These are then compared to the bigrams’ proportions in the test set, in order to get an idea of the frequency of their occurrences in both partitions.
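This bigram analysis can be sketched as follows. The count threshold and smoothing value below are illustrative placeholders, since the exact values are not fixed here:

```python
from collections import Counter, defaultdict
import math

def informative_bigrams(hyps_with_labels, classes, min_count=10, alpha=1.0):
    """Rank bigrams by the entropy of their smoothed class distribution.
    Lower entropy = more predictive of a single class. min_count and
    alpha are illustrative, not the values used in the analysis."""
    per_class = defaultdict(Counter)   # class -> bigram counts
    total = Counter()
    for hyp, label in hyps_with_labels:
        toks = hyp.split()
        for bg in zip(toks, toks[1:]):
            per_class[label][bg] += 1
            total[bg] += 1
    ranked = []
    for bg, n in total.items():
        if n <= min_count:
            continue  # skip rare bigrams
        smoothed = [per_class[c][bg] + alpha for c in classes]
        z = sum(smoothed)
        probs = [s / z for s in smoothed]
        h = -sum(p * math.log(p) for p in probs)
        ranked.append((h, bg, probs))
    ranked.sort()
    return ranked  # lowest-entropy (most informative) first
```

Sorting by increasing entropy surfaces bigrams such as nobody is, whose class distribution is concentrated almost entirely on contradiction.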
In the test set, the bigrams’ ratios across classes appear relatively similar to those in the training set, but because the test set is roughly 50 times smaller than the training set, and because of smoothing, the distributions are more uniform. For SNLI, we find that the informative bigrams make up the long tail of the bigram distribution, yet many of them are predictive of the labels. MultiNLI also has many low-frequency bigrams that are preferentially predictive of contradiction; these tend to correspond to negative notions (e.g. never, no, nothing). In comparison, the highest-information bigram in SNLI, nobody is, predicts contradiction at 222:1 odds, while for MultiNLI, and never predicts contradiction at 8:1.
Table 3: SNLI examples containing highly informative bigrams.

Contradiction   P: Black man in a nice suite that matches the rest of the choir he’s singing with near a piano.
                H: nobody is singing
Neutral         P: An excited, smiling woman stands at a red railing as she holds a boombox to one side.
                H: A tall human stanindg.
Entailment      P: A group of people are walking across the street.
                H: some humans walking
Picking examples that contain these bigrams from SNLI, we can understand why they were repeatedly used to generate hypotheses for those classes (Table 3). The most informative bigram, nobody is/has was often used when the premise describes someone performing a task. The turker simply has to substitute “nobody” into the sentence in order to make the sentence a contradiction. The bigram tall human was used to inject an additional detail in the sentence, while at the same time being less detailed about the person in question, resulting in a neutral hypothesis. To create an entailment sentence, using some humans resulted in a sentence that could be entailed from the premise, but removed details about what type of human it was. We also notice that there are fewer bigrams that are preferential to entailments, in both SNLI and MultiNLI. One simple reason for this is that one just needs to remove details from the premise, instead of adding extra information, in order to generate an entailed sentence. Thus, it is relatively easy to construct entailed sentences without incurring significant bias.
4 Correcting SNLI via dataset pruning
If we know that the probabilities over classes should be almost equal given only the hypothesis, then ideally each hypothesis feature should be paired with every class an equal number of times. In an attempt to reduce the bias of SNLI, we prune the training set using features of the hypothesis. Pruning the dataset to balance feature occurrence should produce a distribution shift between the training and test sets; if the model has learned to do logical inference, the bias remaining in the test set should make relatively little difference.
4.1 Greedy Pruning
In our approach to re-balancing the training dataset, we rely on iteratively retraining a simple classifier. Since we know that bigrams in the hypothesis are predictive of the labels, we use bigrams as features for a Naive Bayes classifier.
Every time we remove an instance from the dataset, the set of most informative features may change, because the frequencies of the other bigrams present in that instance are affected. If we removed instances without accounting for this shift, a new set of instances would become the most informative. To deal with this, the classifier should be retrained at every iteration of the pruning. Naive Bayes makes this cheap: the classifier can be retrained to optimality by simply subtracting the removed instance’s counts.
Using the predictions of the classifier on the training set, we score the instances in the dataset by their cross-entropy. We then remove the instance with the lowest cross-entropy and update the classifier accordingly.
Our goal is to ensure that the distribution of classes for each bigram is balanced. However, since each instance contains several bigrams, and we want to remove as few instances as possible (to maximize diversity), we score each instance by how predictive its bag of bigrams is as a whole.
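A minimal sketch of the greedy pruning loop follows, assuming instances arrive as (bigram-list, label) pairs; the `BigramNB` helper and the uniform class prior are our assumptions for illustration, not the exact implementation:

```python
from collections import Counter
import math

class BigramNB:
    """Multinomial Naive Bayes over hypothesis bigrams, supporting cheap
    count subtraction so the 'retrain after each removal' step is O(|x|)."""
    def __init__(self, data, classes, alpha=1.0):
        self.classes, self.alpha = classes, alpha
        self.counts = {c: Counter() for c in classes}
        self.totals = Counter()
        self.vocab = set()
        for bigrams, label in data:
            self.counts[label].update(bigrams)
            self.totals[label] += len(bigrams)
            self.vocab.update(bigrams)

    def log_prob(self, bigrams, label):
        # Smoothed log-likelihood of the bag of bigrams under one class.
        z = self.totals[label] + self.alpha * len(self.vocab)
        return sum(math.log((self.counts[label][b] + self.alpha) / z)
                   for b in bigrams)

    def cross_entropy(self, bigrams, label):
        # -log P(label | bigrams) under a uniform class prior,
        # computed with a stable log-sum-exp.
        lps = [self.log_prob(bigrams, c) for c in self.classes]
        m = max(lps)
        logz = m + math.log(sum(math.exp(l - m) for l in lps))
        return -(self.log_prob(bigrams, label) - logz)

    def subtract(self, bigrams, label):
        # "Retrain" after removing an instance by subtracting its counts.
        self.counts[label].subtract(bigrams)
        self.totals[label] -= len(bigrams)

def greedy_prune(data, classes, n_remove):
    """Repeatedly remove the instance whose label is most confidently
    predicted from its bigrams alone (lowest cross-entropy)."""
    nb = BigramNB(data, classes)
    keep = list(data)
    for _ in range(n_remove):
        i = min(range(len(keep)),
                key=lambda j: nb.cross_entropy(keep[j][0], keep[j][1]))
        bigrams, label = keep.pop(i)
        nb.subtract(bigrams, label)
    return keep
```

Because subtracting counts restores the exact maximum-likelihood model for the reduced dataset, no gradient-style retraining is needed between removals.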
Algorithm 1 lists the pseudo-code of the method. Figure 3 shows the most informative bigrams on the pruned version of the dataset. As compared to the uncorrected SNLI, the top 8 most informative bigrams are less predictive of the class label.
We perform the RTE task using our hypothesis-only model on hypotheses alone, and the ESIM model on the hypothesis–premise pairs. The ESIM model was used in this analysis because it achieves state-of-the-art results and because of the ease of working with its code-base.
To measure how pruning the training set affects the classification task, we compare training on the pruned dataset against training on the full, original dataset, and against a uniformly randomly pruned dataset as a control to calibrate the effect of a smaller training set on generalisation. We refer to these as the Original and Random strategies respectively, and to our proposed strategy as Greedy. The results are presented in Table 4.
Interestingly, with the Random strategy, the model performs the same on the RTE task. However, the hypothesis-only classifier trained on it achieves lower accuracy; it is possible that enough of the label-predictive bigrams were removed that the classifier is less able to exploit them. More surprisingly, our removal method, while resulting in a 3% drop on the test set, also results in lower accuracy on the training set. We believe this is because the pruned training set is a much harder dataset to train on, with fewer statistical correlations between hypothesis and label. Also, higher performance on the hypothesis alone correlates with higher performance on hypothesis and premise together. This indicates that the reported performance of state-of-the-art models is overestimated, since the class label should be marginally independent of any single sentence alone.
5 Discussion & Conclusion
The NLI datasets were created to train models that learn to perform RTE, with the intention of learning good semantic representations for the task. In this paper, we presented the biases in the data, and showed how they are similar in both the training and test sets. Most statistical learning algorithms will exploit available superficial correlations, and are then evaluated on a test set that is similarly biased. This results in a score that may not be representative of how well the field is advancing towards true RTE performance. There are two key takeaways we would like to emphasise:
Train / test split with different distributions for proper benchmarks
If the partition is made such that the distributions between train and test are different, any unwanted correlations between the hypotheses and labels in the training set cannot be exploited during testing. This effectively prevents the information about the test set from ‘leaking’ into the training data. What this means is that in order to have a score that reflects the state of the art in the task, we should have differently biased train and test sets.
Conditional independence of the label and hypothesis
Without the premise, the label should be conditionally independent of the hypothesis, and a model that performs RTE should manifest this behaviour. One way to achieve this is to ensure that the dataset reflects the true dependence of the textual entailment labels on the relationship between premise and hypothesis, not on a set of marginal features of the hypothesis. Alternative methods are possible, including losses that enforce conditional independence in the model.
In this paper, we proposed a simple method based on bigrams. By pruning the training set while keeping the test set the same, we attempt both to change the distribution between the train and test partitions and to reduce marginal features of the hypothesis, so that the learning algorithm does not exploit the superficial correlations. These properties should be kept in mind when training a model on a dataset, or when assessing collected data that is being curated for a dataset.
From our analysis, we believe MultiNLI to have fewer issues with bias compared to SNLI. If SNLI is still preferred, some preprocessing should be performed in order to account for the problems we mentioned. We hope that the issues we have raised will help researchers to better diagnose and analyse their results.
- Agrawal et al. (2017) Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2017. Don’t just assume; look and answer: Overcoming priors for visual question answering. arXiv preprint arXiv:1712.00377.
- Antol et al. (2015) Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433.
- Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
- Chen et al. (2017) Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1657–1668.
- Gong et al. (2017) Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natural language inference over interaction space. arXiv preprint arXiv:1709.04348.
- Goyal et al. (2017) Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, volume 1, page 9.
- Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324.
- MacCartney and Manning (2009) Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computational semantics, pages 140–156. Association for Computational Linguistics.
- Marelli et al. (2014) Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of compositional distributional semantic models. In LREC, pages 216–223.
- Poliak et al. (2018) Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. arXiv preprint arXiv:1805.01042.
- Williams et al. (2017) Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
- Young et al. (2014) Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78.
- Zhang et al. (2016) Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pages 5014–5022. IEEE.