Investigating Biases in Textual Entailment Datasets

06/23/2019 · by Shawn Tan, et al.

The ability to understand logical relationships between sentences is an important task in language understanding. To aid in progress for this task, researchers have collected datasets for machine learning and evaluation of current systems. However, like in the crowdsourced Visual Question Answering (VQA) task, some biases in the data inevitably occur. In our experiments, we find that performing classification on just the hypotheses of the SNLI dataset yields an accuracy of 64%. We carry out a similar analysis on the MultiNLI dataset, discuss its implications, and propose a simple method to reduce the biases in the datasets.


1 Introduction

Natural Language Inference (NLI) is an important task for natural language understanding MacCartney and Manning (2009). It involves discerning whether a natural language sentence (the hypothesis) can reasonably be inferred from an originating sentence (the premise). To this end, several datasets have been collected for the evaluation of a system’s ability to detect such relationships between sentences Marelli et al. (2014); Young et al. (2014); Bowman et al. (2015); Williams et al. (2017). These datasets evaluate models for the task of Recognizing Textual Entailment (RTE), and Bowman et al. (2015) introduced the Stanford Natural Language Inference (SNLI) dataset, a much larger dataset than its predecessors, boasting over 500K examples crowdsourced under specific constraints. Since its introduction, there have been numerous proposals for models to perform this task Chen et al. (2017); Gong et al. (2017). Later, the MultiNLI dataset, covering a broader set of domains, was introduced in Williams et al. (2017).

Recently, though, in the Visual Question Answering (VQA) dataset Antol et al. (2015), biases due to human predispositions when generating related questions for images were found. As an example, one can attain a 68% accuracy when answering “yes” to all binary questions in VQA Zhang et al. (2016). This is not only a problem during evaluation, but also results in statistical learning algorithms picking up superficial correlations in the training set, if such biases exist there as well.

Do the SNLI and MultiNLI datasets contain the same type of human biases? If they do, do current state-of-the-art models for RTE rely too heavily on them, and are there ways to modify the datasets to correct for this? In this paper, we set out to analyse SNLI and MultiNLI, specifically looking for signs of similar biases introduced through the data collection mechanism. We also propose a simple heuristic that tries to correct for correlations in superficial aspects of the data, hoping to stir discussion and inspire future work in this direction.

2 Related Work

In the SNLI dataset Bowman et al. (2015), Amazon Mechanical Turk was used to crowdsource data collection. In each task, a worker was presented with a premise and asked to write three hypotheses: a contradictory, an entailing and a neutral sentence. The premises were obtained from the Flickr30k corpus Young et al. (2014), which contains 160K captions. Additionally, there was a validation step to ensure that four other workers agreed that the written sentence corresponded to the label. Similarly, the VQA dataset Antol et al. (2015) also crowdsourced questions from Amazon Mechanical Turk: workers were asked to provide questions, given an image, that they believed a “smart robot” would have trouble answering. However, Zhang et al. (2016) revealed problems with the VQA dataset related to biases in the questions, including, as discussed in the introduction, a bias toward affirmative answers to yes/no questions. Zhang et al. (2016) suggest a solution to the affirmation bias by using crowdsourced clipart to generate a dataset where every question has two complementary scenes with opposite answers, effectively “debiasing” the dataset. Goyal et al. (2017) has a similar goal, but instead of generating synthetic images, it attempts to identify another image that results in a different answer; this effort again relied on additional crowdsourcing. Another way to sidestep the problem of biased training and test sets is to incorporate debiasing directly into the model. For example, Agrawal et al. (2017) explicitly adapted the design of the model architecture to avoid learning the data bias.

Gururangan et al. (2018) and Poliak et al. (2018) independently discovered such biases in the dataset. Gururangan et al. (2018) categorized the test set into different levels of difficulty to help evaluate model performance, and Poliak et al. (2018) emphasized that the statistical irregularities in the hypothesis alone allow a model to perform NLI without actual pairwise reasoning. In our work, we reproduce the hypothesis-only results on SNLI, and also analyse the dependence on the hypothesis of a model trained for the RTE task. We also perform a bigram analysis on the training and test sets, and propose a simple way to prune the training set based on the bigram distribution.

3 Analysis

3.1 Classification on Hypothesis Only

Dataset     Hypothesis-only accuracy
SNLI        64%
MultiNLI    51%
Table 1: Results from using only the hypothesis for classification.

In an effort to probe the bias within SNLI and MultiNLI, we attempt to train a textual entailment classifier to predict the contradictory, entailing and neutral labels from only the hypothesis. Intuitively, this should result in almost equal probabilities for each class (assuming balanced classes): without a premise for comparison, above-chance performance should not be possible. However, a simple RNN classifier (which we refer to as the hypothesis-only model) achieves a 64% accuracy on the test set, nearly two times higher than a baseline chance prediction. (The same test was not carried out for the premise because there are approximately balanced triplets of labels for each premise; by construction, there should be little or no bias of this type for the premise.) Poliak et al. (2018) further investigate this issue with a more comprehensive study over a wider range of corpora. This suggests that there are correlations in the training set that can be exploited at test time. We will further discuss the implications of this in Section 5.
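As a rough illustration of how such a probe can be set up, the sketch below trains a simple bag-of-words/bigram classifier on hypotheses only. It is not the RNN classifier used in the paper; the file paths and the SNLI JSONL field names (sentence2, gold_label) are assumptions about a locally downloaded copy of the dataset.

```python
# Hypothesis-only probe: a minimal sketch using a bag-of-words logistic
# regression as a stand-in for the paper's RNN classifier.
# Assumes SNLI is available locally as JSONL files with "sentence2"
# (hypothesis) and "gold_label" fields; adjust paths/fields to your copy.
import json

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def load_hypotheses(path):
    hypotheses, labels = [], []
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            if ex["gold_label"] == "-":   # skip examples without annotator consensus
                continue
            hypotheses.append(ex["sentence2"])
            labels.append(ex["gold_label"])
    return hypotheses, labels


train_x, train_y = load_hypotheses("snli_1.0_train.jsonl")
test_x, test_y = load_hypotheses("snli_1.0_test.jsonl")

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_x, train_y)
print("hypothesis-only accuracy:", clf.score(test_x, test_y))
```

Any accuracy well above chance from such a classifier, which never sees a premise, indicates label-predictive artifacts in the hypotheses.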

MultiNLI contains multiple genres of data (Fiction, Telephone, Travel, etc.), and its development set is split into two: the matched development set consists of examples from 5 genres that are also seen in the training set, while the mismatched development set contains examples from unseen genres. Running the same experiment on MultiNLI, the same hypothesis-only classifier achieves a 51% accuracy on the mismatched set. This may be because the MultiNLI dataset has fewer superficial correlations that the classifier is able to exploit.

3.2 Testing Hypothesis Dependence for NLI Models


Model Type    RTE     Permuted
ESIM          88%     40.5%
LSTM          70%     50%
Table 2: Results from permuting premises at test time. LSTM refers to the sentence-embedding method that uses an LSTM cell as the encoder.

As one of the motivations for the NLI task was the learning of sentence representations, we also trained an LSTM sentence-embedding encoder. The idea was to compare the performance of a model that uses a fixed-length sentence embedding against one that models interactions between the hidden states of an RNN (ESIM and DIIN fit into this category). Because sentence-embedding models do not force ‘interaction’ between the two inputs, we believe they may be more prone to learning these superficial correlations.

The experiment tests sentence-embedding models for their reliance on the hypothesis for classification. During testing, we shuffle the premises so that they do not correspond to the right hypotheses. The sentence-embedding model that we trained achieved 70% accuracy when trained on the full dataset, while under the shuffled-premise test it achieved an accuracy of 50%. In comparison, the ESIM model achieved a 40.5% accuracy in this setting. This suggests that the models still use some of the correlations found in the hypothesis; otherwise this experiment should result in a 33% accuracy. The results hint that a sentence-embedding model has a stronger reliance on the hypothesis and, therefore, on the biases in the dataset.
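A minimal sketch of this shuffled-premise evaluation is shown below. The function predict_label(premise, hypothesis) is a hypothetical stand-in for whatever inference call a trained NLI model exposes; only the premise permutation itself is the point.

```python
# Shuffled-premise test (sketch): evaluate a trained NLI model on pairs
# whose premises have been permuted so they no longer match the hypotheses.
# `predict_label(premise, hypothesis)` is a hypothetical interface standing
# in for the trained model's prediction function.
import random


def shuffled_premise_accuracy(premises, hypotheses, labels, predict_label, seed=0):
    rng = random.Random(seed)
    shuffled = premises[:]
    rng.shuffle(shuffled)               # break the premise-hypothesis pairing
    correct = sum(
        predict_label(p, h) == y
        for p, h, y in zip(shuffled, hypotheses, labels)
    )
    return correct / len(labels)        # ~33% expected if the model truly needs the premise
```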

3.3 Bigrams

Figure 1: The top most informative bigrams in the SNLI dataset. Red represents the proportion of contradiction labels, blue neutral, and green entailment. Numbers on the bars represent the proportion of the bigram in the dataset (a bar labeled 0.5 means that bigram constitutes half of that partition of the dataset).
Figure 2: The top most informative bigrams in the MultiNLI dataset. The color legend is identical to Fig. 1

We analyze the most informative bigrams in the SNLI training set. Specifically, we count the occurrences of each bigram under each class and, for every bigram that occurs more than a threshold number of times, estimate the distribution over labels conditioned on that bigram, applying Laplace smoothing to the counts before normalizing by the total counts. We then rank the bigrams in order of increasing entropy of this label distribution. The distributions with the least entropy are shown in Figure 1 for SNLI and Figure 2 for MultiNLI. These are then compared to the proportions seen in the test set, in order to get an idea of the frequency of their occurrences in both partitions.
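The sketch below implements this ranking. The frequency threshold MIN_COUNT and the smoothing value ALPHA are assumptions, since the section does not state the exact values used.

```python
# Bigram informativeness (sketch). Counts label occurrences per hypothesis
# bigram, Laplace-smooths, normalizes, and ranks bigrams by the entropy of
# the resulting label distribution. MIN_COUNT and ALPHA are assumed values.
import math
from collections import Counter, defaultdict

LABELS = ["contradiction", "neutral", "entailment"]
MIN_COUNT = 50   # assumed frequency threshold
ALPHA = 1.0      # assumed Laplace smoothing


def bigrams(sentence):
    toks = sentence.lower().split()
    return zip(toks, toks[1:])


def informative_bigrams(hypotheses, labels, top_k=10):
    counts = defaultdict(Counter)                 # bigram -> label counts
    for hyp, lab in zip(hypotheses, labels):
        for bg in set(bigrams(hyp)):
            counts[bg][lab] += 1

    ranked = []
    for bg, c in counts.items():
        total = sum(c.values())
        if total <= MIN_COUNT:
            continue
        probs = [(c[l] + ALPHA) / (total + ALPHA * len(LABELS)) for l in LABELS]
        entropy = -sum(p * math.log(p) for p in probs)
        ranked.append((entropy, bg, probs))
    ranked.sort()                                 # lowest entropy = most label-predictive
    return ranked[:top_k]
```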

In the test set, their ratios across classes appear to be relatively similar to the training set. But because the test set is much smaller than the training set (50 times smaller), and coupled with the smoothing, the distributions are more uniform. For SNLI, we find that the informative bigrams make up the long tail of the bigram distribution, but many of them are predictive of the labels. MultiNLI also has many low-frequency bigrams that are preferentially predictive of contradiction. These bigrams tend to correspond to negative notions (e.g. never, no, nothing). In comparison, the highest-information bigram in SNLI, nobody is, predicts contradiction at odds of 222:1, while for MultiNLI, and never predicts contradiction at 8:1.

Contradiction
  P: Black man in a nice suite that matches the rest of the choir he’s singing with near a piano.
  H: nobody is singing
Neutral
  P: An excited, smiling woman stands at a red railing as she holds a boombox to one side.
  H: A tall human stanindg.
Entailment
  P: A group of people are walking across the street.
  H: some humans walking
Table 3: Examples of top bigram occurrences for each label in SNLI.

Picking examples that contain these bigrams from SNLI, we can understand why they were repeatedly used to generate hypotheses for those classes (Table 3). The most informative bigram, nobody is/has, was often used when the premise describes someone performing a task: the turker simply has to substitute “nobody” into the sentence in order to make it a contradiction. The bigram tall human was used to inject an additional detail into the sentence while at the same time being less specific about the person in question, resulting in a neutral hypothesis. To create an entailment, using some humans resulted in a sentence that could be entailed from the premise but removed details about what type of human it was. We also notice that, in both SNLI and MultiNLI, fewer bigrams are preferential to entailments. One simple reason for this is that one just needs to remove details from the premise, instead of adding extra information, in order to generate an entailed sentence. Thus, it is relatively easy to construct entailed sentences without incurring significant bias.

4 Correcting SNLI via dataset pruning

If we know that the probabilities of all the classes should be almost equal given only the hypothesis, then ideally each feature of the hypothesis should have an equal number of pairings with every class. In an attempt to reduce the bias of SNLI, we prune the training dataset using the features of the hypothesis. Pruning the dataset to balance the feature occurrences should result in a distribution shift between the train and test set. If the model has learned to do logical inference, the bias in the test set should make relatively little difference.

4.1 Greedy Pruning

1 function PruneDataset(D, p)
Input : D, the original dataset
Input : p, the proportion of the dataset to prune
Output : D′, the pruned dataset
2 Train a Naive Bayes classifier C on bigram features of the hypotheses in D;
3 D′ ← D;
4 for i ← 1 to ⌊p · |D|⌋ do
5       x∗ ← the instance in D′ with the lowest cross-entropy of its label under C;
6       D′ ← D′ \ {x∗};
7       Update C by subtracting the bigram counts of x∗;
8 end for
9 return D′;
Algorithm 1 The classifier greedy removal algorithm.

In our approach to re-balancing the training dataset, we rely on iteratively retraining a simple classifier. Since we know that bigrams in the hypothesis are predictive of the labels, we use bigrams as features for a Naive Bayes classifier.

Every time we remove an instance from the dataset, the most informative features may change (the frequencies of other bigrams present in that instance are affected). If we removed data instances without taking this shift into account, a new set of instances would become the most informative. To deal with this, the classifier should be retrained for every iteration of the pruning. The reason Naive Bayes was used for pruning is that it is easy to retrain to optimality, given the original dataset, by simply subtracting the counts.

Using the predictions of the classifier on the training set, we score the instances in the dataset by their cross-entropy. We then remove the instance with the lowest cross-entropy and update the classifier accordingly.

Our goal is to ensure that the distribution of classes for each bigram is balanced. However, since each instance contains several bigrams, and we want to remove as few instances as possible (to maximize diversity), we score each instance with how predictive the bag of bigrams are together. A Naive Bayes model was chosen because it was easy to update the classifier at every iteration by subtracting bigram counts from the model.
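The sketch below illustrates this loop under assumptions: ALPHA is an assumed smoothing value, bigrams come from whitespace-tokenized hypotheses, and the straightforward re-scoring of every instance at each step is written for clarity rather than speed (it is quadratic; a priority queue with periodic re-scoring would be faster in practice).

```python
# Greedy pruning (sketch): score every instance by the cross-entropy of its
# gold label under a bigram Naive Bayes model over hypotheses, repeatedly
# remove the most predictable (lowest cross-entropy) instance, and update
# the model by subtracting that instance's counts.
import math
from collections import Counter

ALPHA = 1.0  # assumed Laplace smoothing value


def extract_bigrams(sentence):
    toks = sentence.lower().split()
    return list(zip(toks, toks[1:]))


def greedy_prune(dataset, proportion):
    """dataset: list of (hypothesis, label) pairs; returns a pruned copy."""
    data = list(dataset)
    labels = sorted({lab for _, lab in data})

    # Fit the Naive Bayes counts once on the full dataset.
    class_counts = Counter(lab for _, lab in data)
    bigram_counts = {lab: Counter() for lab in labels}
    totals = Counter()                       # total bigram tokens per label
    vocab = set()
    for hyp, lab in data:
        bgs = extract_bigrams(hyp)
        bigram_counts[lab].update(bgs)
        totals[lab] += len(bgs)
        vocab.update(bgs)

    def label_cross_entropy(hyp, gold):
        # -log p(gold | hypothesis bigrams) under the current NB counts
        bgs = extract_bigrams(hyp)
        log_joint = {}
        for lab in labels:
            lp = math.log(class_counts[lab] + ALPHA)
            for bg in bgs:
                lp += math.log((bigram_counts[lab][bg] + ALPHA)
                               / (totals[lab] + ALPHA * len(vocab)))
            log_joint[lab] = lp
        m = max(log_joint.values())
        log_z = m + math.log(sum(math.exp(v - m) for v in log_joint.values()))
        return -(log_joint[gold] - log_z)

    for _ in range(int(proportion * len(dataset))):
        # Most predictable instance = lowest cross-entropy of its gold label.
        idx = min(range(len(data)),
                  key=lambda i: label_cross_entropy(data[i][0], data[i][1]))
        hyp, lab = data.pop(idx)
        # "Retrain" by subtracting this instance's counts from the model.
        bgs = extract_bigrams(hyp)
        bigram_counts[lab].subtract(bgs)
        totals[lab] -= len(bgs)
        class_counts[lab] -= 1

    return data
```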

Figure 3: The top most informative bigrams in the Pruned SNLI dataset.

Algorithm 1 lists the pseudo-code of the method. Figure 3 shows the most informative bigrams on the pruned version of the dataset. As compared to the uncorrected SNLI, the top 8 most informative bigrams are less predictive of the class label.

Method      Hypothesis-only    Train    Test
Original    64%                93%      88%
Random      59%                94%      87%
Greedy      56%                81%      85%
Table 4: Results from training on the RTE task. Hypothesis-only uses only the hypothesis for classification. Train and Test are the results from training the ESIM model (Chen et al., 2017) on the various datasets. Random refers to the dataset from which we uniformly remove 20% of the instances, and Greedy refers to using our greedy pruning method to remove 20% of the instances from the dataset.

We perform the RTE task using our hypothesis-only model on hypotheses alone, and the ESIM model on the hypothesis-premise pairs. The ESIM model was used in this analysis because it is one of the models with state-of-the-art results, and because of the ease of working with its code-base.

To measure how the pruning of the training set affects the classification task, we compare training on the pruned dataset against training on the full, original dataset, and against a uniformly randomly pruned dataset, which serves as a control to calibrate the effect of a smaller training set size on generalisation. We refer to these as the Original and Random strategies respectively, and to the strategy we propose as Greedy. The results are presented in Table 4.
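For reference, the Random control can be produced with nothing more than uniform sampling; the sketch below drops a given proportion of instances at random so that any difference from Greedy pruning can be separated from the effect of simply having less training data. The function name and signature are illustrative.

```python
# Random-pruning control (sketch): uniformly drop a given proportion of the
# training instances, matching the amount removed by the Greedy strategy.
import random


def random_prune(dataset, proportion, seed=0):
    rng = random.Random(seed)
    keep = int(len(dataset) * (1 - proportion))
    return rng.sample(list(dataset), keep)
```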

Interestingly, using the Random strategy, the model performs essentially the same on the RTE task. However, the hypothesis-only classifier trained on it achieves a lower accuracy. It is possible that enough of the label-predictive bigrams were removed that the classifier is less able to exploit them for classification. More surprisingly, our removal method, while resulting in a 3% drop on the test set, also results in a lower accuracy on the training set. We believe this is because the pruned training set is a much harder dataset to train on, with fewer statistical correlations between hypothesis and label. Also, higher performance on the hypothesis alone correlates with higher performance on both hypothesis and premise. This indicates that the reported performance of the state-of-the-art models is overestimated, since the class label should be marginally independent of any single sentence alone.

5 Discussion & Conclusion

The NLI datasets were created in order to train models that learn to perform RTE, with the intention of learning good semantic representations for the task. In this paper, we examined the biases present in the data, and how they are similar in both the training and test sets. Most statistical learning algorithms will exploit available superficial correlations, and will then be evaluated on a test set that is similarly biased. This results in a score that may not be representative of how well the field is advancing towards true RTE performance. There are two key takeaways we would like to emphasise:

Train / test split with different distributions for proper benchmarks

If the partition is made such that the distributions between train and test are different, any unwanted correlations between the hypotheses and labels in the training set cannot be exploited during testing. This effectively prevents the information about the test set from ‘leaking’ into the training data. What this means is that in order to have a score that reflects the state of the art in the task, we should have differently biased train and test sets.

Conditional independence of the label and hypothesis

Without the premise, the label should be conditionally independent of the hypothesis, and a model that performs RTE should manifest this behaviour. One way to achieve this is to ensure that the dataset reflects the true dependence of the textual entailment labels on the relationship between premise and hypothesis, not on a set of marginal features of the hypothesis. Alternative methods are possible, including losses that enforce conditional independence in the model.

In this paper, we proposed a simple method based on bigrams. By pruning the training set and keeping the test set the same, we attempt to change the distributions of the train and test partitions and to reduce the marginal features of the hypothesis, so that the learning algorithm does not exploit the superficial correlations. These properties should be kept in mind when training a model on a dataset, or when assessing collected data that is being curated for a dataset.

From our analysis, we believe MultiNLI to have fewer issues with bias compared to SNLI. If SNLI is still preferred, some preprocessing should be performed in order to account for the problems we mentioned. We hope that the issues we have raised will help researchers to better diagnose and analyse their results.

References