In this work, we focus on investigating and reducing biases in the task of Natural Language Inference (NLI), where the goal of the model is to classify the relation between a pair of sentences into three categories: entailment, neutral, and contradiction. With the release of large-scale standard datasets Bowman et al. (2015); Williams et al. (2018), significant progress has been made on this task, and recent state-of-the-art neural models have reached competitive performance even compared to humans. However, a number of papers Gururangan et al. (2018); Poliak et al. (2018); Nie et al. (2019); Naik et al. (2018) have shown that despite their high accuracy on these datasets, these models are far from mastering the true nature of natural language inference. Instead of understanding the sentences in the correct semantic way, these models tend to exploit shortcuts or annotation artifacts in the dataset, overfitting to these datasets by predicting the label via simple patterns. However, most shortcuts are only valid within the datasets and fail to hold for general natural language. Hence, these models fail to generalize to other datasets for the same task Talman and Chatzikyriakidis (2019), perform badly on challenge analysis datasets Glockner et al. (2018); McCoy et al. (2019); Wang et al. (2019b), and are fooled by adversarial attacks Naik et al. (2018).
One major cause of this problem is the existence of dataset biases. Since most NLP datasets are collected and processed by crowdworkers, bias can be introduced at every step of data collection. For example, when writing contradiction pairs, workers are likely to use negation words such as ‘not’, and when creating entailment pairs, workers are likely to keep most of the words of the premise sentence. This results in ‘annotation artifacts’ in the dataset Gururangan et al. (2018). In reality, almost every dataset contains countless such diverse biases. In this paper, we focus on the Multi-Genre Natural Language Inference (MNLI) dataset Williams et al. (2018) in English, and on two specific kinds of dataset bias:
Contradiction Word Bias (CWB): If the hypothesis sentence contains some specific words (such as negation words) that are always used by the crowd-workers to generate contradiction pairs, then the sentence pair is very likely to be contradiction.
Word Overlapping Bias (WOB): If the premise sentence and the hypothesis sentence have a high word-overlap, then the sentence pair is very likely to be entailment.
These two types of biases are selected as the focus of our experiments because: (1) a significant number of samples in the dataset exhibit them; (2) they are conceptually easy to understand and relatively easy to evaluate. In our experiments, we not only use existing evaluation datasets from Naik et al. (2018), but also extract balanced evaluation datasets from the original data to evaluate these two biases. Although we only focus on these two kinds of dataset biases throughout our experiments, our methods are not specifically designed for them and should be able to reduce other similar lexical biases simultaneously.
Using these two example lexical biases, our paper discusses the following three questions:
Is lexical bias a problem that can be solved by only balancing the dataset?
Can the lexical bias problem be solved using existing ideas from the gender bias problem?
What are some promising new modeling directions towards reducing lexical biases?
As responses to these three questions, we conduct three lines of experiments. First, we address Q1 by studying whether and how the bias can be reduced by debiasing the dataset. For this, we add new training data that does not follow the bias pattern; this data can come from two sources, either the original training set or manually generated synthetic data. We show that both methods can slightly reduce the model’s bias, but even after adding a large amount of additional data, the model still cannot be completely bias-free. Another critical problem with these data augmentation/enhancement based debiasing methods is that we need to know the specific behaviour of the biases before making the corresponding changes to the dataset, whereas in reality, models are always faced with new training datasets containing unknown and inseparable biases. Hence, the answer to Q1 is mostly negative for simple data-level approaches, and we also need to focus on designing direct model-debiasing methods.
Therefore, we turn our focus to directly debiasing the model (Q2 and Q3). The first method is to debias the model at the lower level, i.e., by directly debiasing the embeddings so that they do not show strong biases toward any specific label. This is one of the most prevalent methods for reducing gender biases, so through the examination of this idea, we aim to compare lexical bias problems to gender bias problems and highlight its uniqueness (hence answering Q2). Finally, we debias the model at the higher level, i.e., by designing another bag-of-words (BoW) sub-model to capture the biased representation, and then preventing the primary model from using the highly-biased lexical features by forcing orthogonality between the main model and the BoW model (via HEX projection Wang et al. (2019a)). In our experiments, we show that debiasing the prediction part of the model at higher levels using BoW-orthogonality is more effective towards reducing lexical biases than debiasing the model’s low-level components (embeddings). This approach can significantly robustify the model while maintaining its overall performance, hence providing a response to Q3. We also present qualitative visualizations using LIME-analysis for the important features before and after applying the BoW-orthogonality projection.
2 Related Work
Problems with NLI Models and Datasets. Despite the seemingly impressive improvements on NLI tasks, a number of recent papers revealed different problems with these models. Gururangan et al. (2018) showed that annotation artifacts in the datasets are exploited by neural models to achieve high accuracy without understanding the sentences. Poliak et al. (2018) showed a similar phenomenon with models achieving good performance while taking only one sentence (the hypothesis) as input. Nie et al. (2019) showed that NLI models achieve high accuracy via word/phrase-level matching instead of learning compositionality. Naik et al. (2018) constructed bias-revealing datasets by modifying the development set of MNLI. In our evaluation, besides using the datasets from Naik et al. (2018), we also extract new datasets from the original MNLI dataset to maintain the consistency of the input text distribution.
Adversarial Removal Methods. Adversarial removal techniques are used to control the content of learned representations. They were first used for unsupervised domain adaptation in Ganin and Lempitsky (2015). Xie et al. (2017) later generalized this approach to control the specific information learned by the representation, and Li et al. (2018) used a similar approach to learn privacy-preserving representations. However, Elazar and Goldberg (2018) showed that such adversarial approaches fail to completely remove demographic information. Minervini and Riedel (2018) generate adversarial examples and regularize models based on first-order logic rules. Belinkov et al. (2019a, b) showed that adversarial removal methods can be effective for the hypothesis-only NLI bias; our focus is on two different lexical biases and our results are complementary to theirs. (We tried a similar approach via gradient reversal w.r.t. the BoW sub-model in preliminary experiments and observed less effectiveness than HEX-projection, which hints that different types of biases can lead to different behaviors.) Recently, Wang et al. (2019a) proposed HEX projection to force orthogonality between the target model and a superficial model to improve domain generalization in image classification. Here, to make the model less lexically biased, we apply HEX projection with specially-designed NLP model architectures to regularize the representations in our models. Even more recently, Clark et al. (2019) and He et al. (2019) proposed to robustify the task model with the help of an additional simple model, using ensembling to encourage cooperation between the two models. In contrast, our main motivation is to compare the advantages/limitations of dataset vs. embedding vs. classifier debiasing methods (against two different types of problematic lexical biases in NLI), and our classifier debiasing method forces the task model to capture orthogonal information via HEX projection.
Removing Gender Bias in NLP Models. There is also a line of work in NLP on analyzing and reducing gender bias in NLP models. Bolukbasi et al. (2016); Caliskan et al. (2017); Zhao et al. (2018a) studied the bias problem in word embeddings. Zhao et al. (2017) reduced gender bias in visual recognition using corpus-level constraints. Zhao et al. (2018b) discussed the gender bias problem in co-reference resolution. These problems are related to our work, but lexical biases are more complex. Multiple inseparable lexical dataset biases can influence one single example and the same word can have different lexical biases in different contexts. Later in our experiments, we show that these two problems behave differently and we present the need for different solutions.
3 Data-Level Debiasing
Models naturally learn the biases from the datasets they are trained on. Therefore, as mentioned in Q1 in Sec. 1, one may first wonder whether lexical bias can be completely removed by fixing the source of the bias, i.e., the dataset. While collecting large-scale datasets Bowman et al. (2015); Williams et al. (2018) already takes much time and effort, collecting bias-free datasets is even more time-consuming and hard to control. Therefore, here we focus on obtaining additional data from currently available resources, and conduct experiments with two sources of data. The first is ‘data enhancement’: repeating samples from the original training data. The second is ‘data augmentation’: manually creating synthetic data, following the construction of existing synthetic bias-revealing datasets to create new training samples so that the targeted biases can be reduced.
Data Enhancement by Repeating Training Data. For most kinds of biases, there still exists a small portion of samples that do not follow the bias. Therefore, we reduce biases in the dataset by repeating this portion of samples. For CWB, we select non-contradiction samples containing contradiction words (see Sec. 5.1 for details) in the hypothesis sentence but not in the premise sentence. For WOB, we select non-entailment samples with the highest word overlap (measured by the Jaccard distance of words Hamers et al. (1989)). Next, since the number of these unbiased samples may not be large enough, we repeatedly add the selected samples to make the training set more balanced. The results from adding 500 to 50,000 new samples are shown in Sec. 6.1.
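The selection-and-repetition step can be sketched as follows (a minimal illustration with assumed field names and a hypothetical overlap threshold; the paper's exact selection criteria are described in Sec. 5.1 and the Appendix):

```python
# Sketch of the data-enhancement step: oversample training pairs that
# violate the two bias patterns. Field names and the threshold are
# illustrative, not the paper's exact configuration.

def jaccard(a_tokens, b_tokens):
    """Jaccard similarity between the token sets of two sentences."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

CONTRADICTION_WORDS = {"not", "no", "any", "never", "anything"}

def select_counter_bias(samples, overlap_threshold=0.8):
    """Pick samples that break CWB or WOB so they can be repeated."""
    selected = []
    for s in samples:  # s: dict with token lists 'premise'/'hypothesis' and 'label'
        prem, hyp, label = s["premise"], s["hypothesis"], s["label"]
        # CWB counter-examples: contradiction word in the hypothesis only,
        # but the gold label is NOT contradiction.
        if (CONTRADICTION_WORDS & set(hyp)) and not (CONTRADICTION_WORDS & set(prem)) \
                and label != "contradiction":
            selected.append(s)
        # WOB counter-examples: high word overlap but NOT entailment.
        elif jaccard(prem, hyp) >= overlap_threshold and label != "entailment":
            selected.append(s)
    return selected

def enhance(train_set, n_extra):
    """Repeat counter-bias samples until n_extra copies are added."""
    pool = select_counter_bias(train_set)
    extra = [pool[i % len(pool)] for i in range(n_extra)] if pool else []
    return train_set + extra
```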
Data Augmentation by Adding Synthetic Data. Researchers have used synthetic rules to generate harder or perturbed samples to fool models. Here, besides using such datasets as evaluation sets, we also add these samples back into the training set, similar to adversarial training Jia and Liang (2017); Wang et al. (2019c); Niu and Bansal (2018), where adversarial examples are added back to the training set so that the resulting model is more robust to similar adversarial attacks. In our experiments, we follow Naik et al. (2018) and append meaningless sentences at the end of the hypothesis sentence, as in Table 1, to create additional new samples (the detailed construction of these samples is given in the Appendix). By learning from these augmented datasets, the model should become more robust to certain types of perturbations/biases of the data.
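A minimal sketch of this augmentation step (the appended tautologies here are illustrative; the exact phrases and construction follow Naik et al. (2018) and are detailed in the Appendix):

```python
# Sketch of synthetic data augmentation in the style of the NLI stress
# tests: append a meaningless tautology to the hypothesis while keeping
# the gold label unchanged. Phrases below are assumptions for illustration.
import random

TAUTOLOGY = "and true is true"            # word-overlap style perturbation
NEGATION_TAUT = "and false is not true"   # negation style perturbation

def augment(sample, phrase):
    """Create a new sample with a meaningless phrase appended."""
    new = dict(sample)
    new["hypothesis"] = sample["hypothesis"].rstrip(" .") + " , " + phrase + " ."
    return new

def synthetic_augmentation(train_set, n_extra, phrase=NEGATION_TAUT):
    """Add n_extra perturbed copies of randomly chosen training samples."""
    rng = random.Random(0)  # fixed seed for reproducibility
    return train_set + [augment(rng.choice(train_set), phrase) for _ in range(n_extra)]
```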
In Sec. 6.1, our experiments show that while this approach leads to less biased models, it cannot make the model completely bias-free. Another disadvantage of these data enhancement/augmentation approaches is that we need to know all the specific kinds of biases in advance. For instance, in order to reduce the CWB for ‘not’, one needs to carefully balance the samples containing ‘not’ in the training set. However, many other words exhibit similar biases (e.g., the model tends to predict neutral when it sees ‘also’), and it is impractical to identify and debias the dataset w.r.t. every type of bias. Therefore, besides fixing the dataset, we should also focus on directly debiasing models against lexical biases.
4 Model-Level Debiasing
Model-level debiasing methods have the advantage that there is no need to know the specific bias type in advance. Here we propose two different methods. The first method focuses on debiasing the content of word/sentence embeddings, where we aim to remove strong bias in the embeddings towards any of the labels so that there will be fewer shortcuts for models to exploit. The second method builds a separate shallow bag-of-words (BoW) sub-model and projects the primary model’s representation onto the subspace orthogonal to this BoW sub-model via the HEX projection algorithm Wang et al. (2019a). Our proposed methods can be applied to a wide range of baseline model architectures. In addition, none of our methods is bias-type specific, so the results on CWB and WOB should generalize to other similar lexical biases.
We use sentence-embedding based models as our baseline since they are more controllable, and because the interaction between the two sentences only appears in the top classifier, which makes it easier to compare the effects of different regularizations. (Another popular choice of NLI model architecture is cross-attention based models Chen et al. (2017); Devlin et al. (2019). In our current work, we choose to only apply our BoW sub-model approach to sentence-embedding based models since our approach directly regularizes the representation vector learned by the main model, and hence it is most suitable for models with a single vector containing rich information. Cross-attention based models, on the other hand, do most of the inference through cross-attention and do not learn such a single vector, making it hard to regularize the model effectively in a similar way. Investigation of similar HEX regularization methods for cross-attention models is future work.)
Our baseline structure can be divided into three stages. The first stage embeds the words into word embeddings. The second stage computes a representation for each sentence, using three layers of BiLSTM; we also add residual and skip-connections as in Nie et al. (2019), which we find leads to better performance. For the final stage, our baseline follows Mou et al. (2016); Conneau et al. (2017) and concatenates the two sentence embeddings $s_a$ and $s_b$, their difference, and their element-wise product:

$m = [s_a; s_b; s_a - s_b; s_a \odot s_b]$ (1)
The resulting vector is passed through a multi-layer perceptron (MLP) to get the final classification result. (Our baseline models achieve performance close to the best sentence-embedding based/cross-attention based models reported on the NLI stress tests Naik et al. (2018) and are hence good starting points for this bias/debias analysis.)
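As an illustration, the sentence-pair matching layer can be written in a few lines (a numpy sketch; whether the difference term is signed or absolute varies across implementations of this layer):

```python
import numpy as np

# Sketch of the baseline's sentence-pair matching layer (Mou et al., 2016;
# Conneau et al., 2017): concatenate the two sentence embeddings, their
# difference, and their element-wise product. Some implementations use the
# absolute difference |a - b| instead of the signed a - b.

def match_features(a, b):
    """a, b: (d,) sentence embeddings -> (4d,) feature vector."""
    return np.concatenate([a, b, a - b, a * b])
```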
Next, we will describe two different methods to directly debias the model.
4.2 Debiasing Embeddings
Word embeddings are an important component in all neural NLP models. They contain the most basic semantics of words. Recent studies have shown that removing gender bias from word embeddings can lead to less biased models Zhao et al. (2018a). In our work, as we discussed in Q2 in Sec. 1, we explore whether similar ideas can be applied to reducing lexical dataset biases.
For a large number of lexical dataset biases (e.g., CWB), the model tends to predict the label based only on the existence of certain words. Hence, one natural conjecture is that the word embeddings carry a strong bias towards some labels. Since the label bias is not an attribute of the word itself but is introduced by the model above, in order to remove such label bias from the embeddings at training time, we differ from Zhao et al. (2018a) and use the gradient-reversal trick Ganin and Lempitsky (2015); Xie et al. (2017).
The architecture of this approach is illustrated in Figure 1. We denote the word embeddings of the two input sequences as $a_1, \dots, a_{l_a}$ and $b_1, \dots, b_{l_b}$, where $a$ denotes the premise sentence and $b$ denotes the hypothesis sentence. In order to apply the reverse-gradient trick Ganin and Lempitsky (2015) to the embeddings, we add a small embedding-debias network (the left blue box in Figure 1) for each of the embeddings in our model. The embedding-debias network is a simple MLP. Since the rest of the sentence context may also contribute to the bias, the debias network takes both $a_i$ and the sentence embedding $s_b$ of $b$ (and vice versa for debiasing $b_j$) as the input and predicts the label $y$. Therefore, the total loss of this method is:

$L = L_M(\theta_f, \theta_c) + \lambda L_D(\theta_f, \theta_d), \quad L_D = \sum_{i=1}^{l_a} \ell(a_i, s_b; \theta_d) + \sum_{j=1}^{l_b} \ell(b_j, s_a; \theta_d)$

where $\ell$ is the classification loss of the debias network on a single embedding.
Here, $\lambda$ is the multitask coefficient, and $l_a$ and $l_b$ are the lengths of the two input sentences. $L_M$ is the standard classification loss of the main model and $L_D$ is the sum of all the classification losses of the debias network. $\theta_f$ denotes the parameters of the embeddings and sentence encoder of the main model, $\theta_c$ denotes the parameters of the top classifier of the main model, and $\theta_d$ denotes the parameters of the embedding-debias network. In order to find the optimal parameters, we follow Ganin and Lempitsky (2015) and reverse the gradient of $L_D$ w.r.t. $\theta_f$.
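The effect of the reversal can be illustrated with a toy one-embedding example (a pure-Python sketch; the actual debias network is an MLP trained jointly with the main model, and the reversal is usually implemented inside the autograd framework):

```python
import math

# Toy illustration (not the paper's code) of the gradient-reversal trick:
# a debias "network" with fixed weights w tries to predict the label from
# an embedding e, while the reversed gradient pushes e to make that
# prediction HARDER, i.e., to ascend the debias loss instead of descending it.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def debias_loss(e, w):
    """Cross-entropy of a toy debias classifier predicting label 1."""
    score = sum(wi * ei for wi, ei in zip(w, e))
    return -math.log(sigmoid(score))

def reversed_update(e, w, lr=0.5, lam=1.0):
    """One SGD step on the embedding with the REVERSED gradient."""
    score = sum(wi * ei for wi, ei in zip(w, e))
    coef = -(1.0 - sigmoid(score))          # d(debias_loss)/d(score)
    # '+' instead of '-': ascend the debias loss (gradient reversal)
    return [ei + lr * lam * coef * wi for ei, wi in zip(e, w)]
```

After one reversed step, the label becomes less predictable from the embedding, which is exactly the intended debiasing pressure.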
Besides this approach, we also tried two variants that change the input of the debias network: emb_basic, which takes only the single word embedding as input, and ind_sent, which takes only one sentence embedding as input. The results of our embedding-debias methods are shown in Sec. 6.2.
4.3 Bag-of-Words Sub-Model Orthogonality
While debiasing the embeddings can robustify models against certain biases, it may not be effective for all lexical biases. Some lexical biases exist at the deeper compositionality level (e.g., WOB), while debiasing the embeddings regularizes only the most basic semantic units, not how these units are composed by the model. In addition, removing the label biases may also hurt the useful semantics contained in the embeddings, leading to significant performance drops. A better approach is to leave the embeddings intact but regularize how the classifier uses these features. We observe that models exploiting dataset biases in the training set (e.g., CWB and WOB) tend to use very simple and superficial features to make their predictions: they tend to ignore the order of words, fail to learn compositionality, and lack a deep semantic understanding of the sentences. Therefore, we aim to robustify the model by discouraging it from relying on such simple and superficial features. With this motivation, we train a bag-of-words (BoW) model that captures only superficial word patterns, without any word-order/compositionality information. Then we use HEX projection Wang et al. (2019a) to project the representation of the original primary model onto the space orthogonal to the representation of the BoW model.
BoW Model. For the BoW sub-model, we first get the embeddings of all the words. Then, in order to capture more co-occurrence information between words, we add a multi-head self-attention layer like the one used in Vaswani et al. (2017) (but without position embeddings), as we empirically find that this improves performance. Finally, we use mean-pooling over all the output vectors $v_1, \dots, v_l$ to get the BoW sentence embedding: $s = \frac{1}{l} \sum_{i=1}^{l} v_i$. To get a single representation for the sentence pair, we use the same concatenation layer as in Eqn 1 and pass the vector through an additional MLP to get the BoW representation $g$.
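A minimal numpy sketch of this sub-model (single-head attention for brevity; names and shapes are our assumptions). Because there are no position embeddings and pooling is a mean, the output is invariant to word order, which is exactly the "superficial" behavior the sub-model should capture:

```python
import numpy as np

# Illustrative sketch of the BoW sub-model: one self-attention layer
# WITHOUT position embeddings, followed by mean pooling, so the sentence
# embedding does not depend on word order.

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bow_sentence_embedding(E, Wq, Wk, Wv):
    """E: (seq_len, d) word embeddings -> (d,) sentence embedding."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # no position info
    ctx = attn @ V                                  # contextualized words
    return ctx.mean(axis=0)                         # order-invariant pooling
```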
HEX Projection. Next, in order to encourage the primary model to learn features that are not learnable by the BoW model, we use the HEX projection layer from Wang et al. (2019a), which was originally proposed to improve the domain generalization performance of computer vision models; here we combine HEX with the BoW sub-model to robustify NLI models. With the addition of the BoW sub-model, we obtain two representations of the sentence pair: the main model's representation $h$ and the BoW representation $g$. In order to let the final prediction use high-level features that are, to some extent, independent of the shallow and highly-biased BoW features, the HEX projection layer projects these two representations into orthogonal spaces.
The inputs of the HEX projection layer are the BoW model output $g$ and the corresponding output of the main model $h$. We use $f(\cdot\,; \theta_h)$ to denote the final classification network parameterized by $\theta_h$. Next, by zero-masking one of the two inputs, the HEX projection layer can receive three different inputs and calculate three different vector outputs:

$F_A = f([h; g]; \theta_h), \quad F_P = f([h; \mathbf{0}]; \theta_h), \quad F_G = f([\mathbf{0}; g]; \theta_h)$
To ensure that the overall model learns different features than the BoW model, we project the joint output $F_A$ onto the orthogonal space of $F_G$ to get $F_L$:

$F_L = \left( I - F_G (F_G^\top F_G)^{-1} F_G^\top \right) F_A$
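The projection itself is a one-liner in numpy (an illustration assuming the standard HEX formulation of Wang et al. (2019a), not the authors' implementation):

```python
import numpy as np

# Sketch of the HEX projection step: given a batch of joint logits F_A and
# BoW-only logits F_G (both shape (batch, classes)), project F_A onto the
# space orthogonal to the columns of F_G.

def hex_project(F_A, F_G):
    """F_L = (I - F_G (F_G^T F_G)^{-1} F_G^T) F_A."""
    # pinv instead of inv for numerical safety if F_G^T F_G is singular
    P = F_G @ np.linalg.pinv(F_G.T @ F_G) @ F_G.T
    return (np.eye(F_G.shape[0]) - P) @ F_A
```

The defining property is that the result carries no component along the BoW representation, i.e., $F_G^\top F_L = 0$.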
The output $F_L$ learns good representations of both sentences but lies in the space orthogonal to the output $F_G$ obtained from the BoW sub-model's input, thus not over-emphasizing word-pattern information. This vector goes through the softmax layer to calculate the probabilities for each label. Finally, we follow the original paper Wang et al. (2019a) to minimize a weighted combination of the losses for $F_L$ and $F_G$, and use $F_L$ for testing. In Sec. 6.2, we show that by adding the BoW sub-model orthogonality, the model becomes more robust against CWB and WOB while maintaining competitive overall accuracy. Hence, as a response to Q3 in Sec. 1, our results indicate that debiasing models at the upper level, with regularization on the compositionality, is a more promising direction against lexical biases.
5 Experimental Setup
We evaluate our models using both off-the-shelf testing datasets as well as new datasets extracted from the original MNLI dataset. We use the word overlap and the negation sets from the NLI stress tests dataset Naik et al. (2018). These two evaluation sets from the NLI stress tests modified the original MNLI development set by appending some meaningless phrases (examples shown in Table 1). If the model has certain biases, then the model will be fooled by such perturbations and make the wrong classification.
In addition, we also extract samples from the original MNLI development dataset to get bias testing sets with exactly the same data distribution. We first select samples that follow the bias pattern from the matched development set. For CWB, we use ‘not’, ‘no’, ‘any’, ‘never’, and ‘anything’ as five example contradiction words. To make this testing set balanced for labels (contradiction vs. non-contradiction for CWB and entailment vs. non-entailment for WOB), we move some samples with the same pattern from the training set to this testing set. (While this makes our model’s performance incomparable to other literature, we train all the models in our experiments in this same setting to ensure the fairness of our analysis comparisons; all our experiments use the same val/test sets.) Later we refer to this dataset as Bal.
Since the negation dataset from NLI stress tests dataset only considers the word ‘not’, it fails to evaluate the bias for other contradiction words. We augment this dataset by creating new samples for other contradiction words. We denote the original NLI stress tests dataset as Stress and this augmented one as Stress*. Please refer to the Appendix for a detailed description of how we chose the example contradiction words and created our test sets.
Throughout our experiments, we select the best model during training on the MNLI mismatched development dataset and we tune all the hyper-parameters on the NLI Stress mismatch datasets. All the other datasets are only used as test sets and we only report results on these test sets. We use the MNLI matched development dataset to evaluate the overall performance of the model.
Overall accuracy is widely used as the only metric for NLI. However, models can achieve very high accuracy by exploiting the bias patterns. Hence, in order to test how the model performs when it cannot exploit the bias pattern, we focus on the model’s accuracy on the harder parts of the data (Acc_hr) where the bias pattern is wrong. (One may wonder whether biases can also be evaluated simply via generalization performance. However, good generalization to current datasets, e.g., SNLI Bowman et al. (2015), MNLI Williams et al. (2018), SICK Marelli et al. (2014), is different from being bias-free: as shown in Gururangan et al. (2018), similar annotation artifacts can appear in multiple datasets, so by overfitting to common lexical biases across multiple datasets, biased models might still reach high generalization accuracy.) For the balanced testing sets, this hard subset contains the samples with a ‘non-contradiction’ label for CWB and those with a ‘non-entailment’ label for WOB. For the NLI stress tests dataset, it contains the samples with a ‘non-contradiction’ label for the CWB set and the samples with an ‘entailment’ label for the WOB set. (Another metric on NLI-stress is the portion of model predictions on the hard data that are correct both before and after adding the extra words; we empirically verified that this metric shows the same trends as Acc_hr.) Ideally, an unbiased model should both have competitive overall performance and perform almost equally well on these harder parts of the data. Hence, we focus on maintaining the accuracy on the whole dataset while improving the Acc_hr metric. All training details and hyper-parameter settings are presented in the Appendix.
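Both metrics are straightforward to compute; a minimal sketch (label names are illustrative):

```python
# Sketch of the evaluation metrics: overall accuracy and Acc_hr, the
# accuracy restricted to the "hard" subset where following the bias
# pattern would give the wrong answer.

def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def acc_hr(preds, golds, biased_label):
    """Accuracy on samples whose gold label differs from the label the
    bias pattern would predict (e.g. 'contradiction' for CWB)."""
    hard = [(p, g) for p, g in zip(preds, golds) if g != biased_label]
    return accuracy(*zip(*hard)) if hard else float("nan")
```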
The performance of BoW sub-model orthogonality on CWB and WOB; means and standard deviations are computed over five random runs.
6.1 Data-Level Debiasing Results
We first show our baseline’s performance on CWB in the first row of Table 2. Since we observe similar trends for CWB and WOB, we leave the results for WOB to the Appendix. On every dataset, there is a significant gap between Acc and Acc_hr, showing that the baseline exhibits both a strong CWB and a strong WOB. For the data augmentation/enhancement experiments, we report results after adding 500/20,000/50,000 additional samples: the 500 case demonstrates the effect of adding a small amount of data, and the 20,000 and 50,000 cases demonstrate the limitation of this method. (Adding a large amount of data, e.g., 50,000 samples, can change the label distribution, but we have experimented with different amounts between 500 and 50,000 and the reported trend always holds.) The results are again shown in Table 2. We use “+origin” to denote the results of data enhancement using the original dataset and “+synthetic” to denote the results of data augmentation with new synthetic data similar to the NLI stress tests. (We run all experiments 5 times and report the mean.)
With a small amount of additional data (500), regardless of its source, the performance on the balanced testing set remains very close to the baseline. However, the performance on the NLI stress tests improves significantly after the model sees 500 synthetic samples generated in the same way: the gap between the overall accuracy and Acc_hr on the NLI stress tests is reduced to less than 5%, which means the model can learn how the synthetic data is generated from only 500 samples. Next, we compare the performance after adding 20,000 and 50,000 additional samples to probe the limit of the improvement from adding data. With this amount of additional original data, Acc_hr on the balanced dataset improves and the model is less biased. However, adding 20,000/50,000 synthetic samples does not always lead to an improvement on the balanced dataset. This suggests that the generation rules of the NLI stress tests dataset are too simple, so training on these adversarial samples alone is not a good way to robustify the model; more natural and diverse synthetic data may be more helpful.
There is still a significant gap between overall accuracy and Acc_hr even after adding 50,000 samples. Moreover, the effect of the last 30,000 samples is very small, indicating a clear limitation of this method. Thus, simple data augmentation/enhancement using only currently available resources is insufficient to fully debias the model. In addition, one has to carefully select which data to add for each different bias, so we also need to design inherently more robust models.
6.2 Model-Level Debiasing Results
Debiasing Embeddings (Lower-Level Model Debiasing). We compare three variants of debiasing embeddings in Table 3. Empirically, we observe that training the whole model with the debias network from a pre-trained baseline significantly improves the stability of the results, so we run all experiments from one baseline with average performance for fair comparison. The multitask coefficient $\lambda$ controls the trade-off between high accuracy and low bias; here we report the results for a value of $\lambda$ that we find to be a good balance point. From both tables, none of the methods achieves a significant improvement on the Acc_hr metrics. The best results come from the emb_basic approach, but even this method achieves only a small improvement on the Acc_hr metric for CWB, does worse on WOB, and incurs a comparable loss in overall Acc. We do not observe significantly larger improvements with smaller or larger $\lambda$. We also tried other techniques to further stabilize training (e.g., freezing the main model during training, using different optimization algorithms), but observed no significant improvement.
Therefore, while removing bias from the embeddings is effective for reducing gender bias (e.g., removing the male bias from the word ‘doctor’ to make the embedding gender-neutral), it does not help in debiasing these lexical biases: directly removing information from the embeddings only slightly debiases the model while also hurting overall performance. This difference in results highlights the difference between the gender bias and lexical bias problems. As shown in these experiments, lexical biases cannot be effectively reduced at the embedding level. We argue that this is because a majority of lexical biases appear at the compositionality level. For example, for WOB, a biased model will predict “entailment” relying entirely on the overlapping word embeddings on both sides. Even if we make the embeddings completely unbiased, as long as the upper model learns to directly compare the overlap of embeddings on both sides, a strong WOB will remain in the model. Hence, in order to robustify models against lexical biases, we need methods that regularize the upper, interaction part of the model.
BoW Sub-Model Orthogonality (Higher-Level Model Debiasing). Results for adding the BoW sub-model are shown in Table 4. We also show that the improvement trend holds regardless of minor hyper-parameter changes in the model (e.g., the number of layers). On both CWB and WOB, the model shows a large improvement in Acc_hr on both the Bal and stress-test datasets.
We achieve close or higher Acc on all the bias testing sets, and the overall Acc is only 1.4%/1.3% lower than the baseline, showing that adding BoW sub-model orthogonality only slightly hurts the model. In conclusion, this approach significantly robustifies the model against CWB and WOB while maintaining competitive overall performance. Compared to the debiasing-embeddings results, we can see that instead of regularizing the content of the word embeddings, regularizing the model’s compositionality at the upper interaction level is a more promising direction for debiasing lexical biases. We also tried combining this method with the data-level debiasing approach above, but obtained no further improvement. In addition, we tried some initial simple ensembles of two differently initialized BoW sub-models, so that we can potentially regularize against a more diverse set of lexical biases: during training, the main model is paired with each BoW sub-model through its own HEX layer, and the output logits are averaged to obtain the final logits. This ensemble also significantly outperforms the baseline and is higher than the single BoW sub-model on WOB Stress, but equal or worse in the other cases. We leave the exploration of better ensembling methods to future work.
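As a rough illustration of the mechanism, the HEX-style orthogonality can be sketched as projecting the main model's features onto the orthogonal complement of the BoW sub-model's feature space. The NumPy snippet below is a minimal sketch under our own naming, not the released implementation of Wang et al. (2019a):

```python
import numpy as np

# Minimal sketch (not the authors' exact code) of a HEX-style projection:
# main-model features are projected onto the orthogonal complement of the
# column space of the BoW sub-model's features, so the final logits cannot
# reuse whatever the BoW model already captures.

def hex_project(f_main: np.ndarray, f_bow: np.ndarray) -> np.ndarray:
    """Remove from f_main the component lying in span of f_bow's columns."""
    # Projection matrix onto the BoW feature subspace (pinv for stability).
    p_bow = f_bow @ np.linalg.pinv(f_bow.T @ f_bow) @ f_bow.T
    return (np.eye(f_main.shape[0]) - p_bow) @ f_main

rng = np.random.default_rng(0)
f_bow = rng.normal(size=(8, 2))   # BoW sub-model features (batch of 8, dim 2)
f_main = rng.normal(size=(8, 4))  # main-model features

f_proj = hex_project(f_main, f_bow)
# The projected features are orthogonal to every BoW feature column:
print(np.abs(f_bow.T @ f_proj).max() < 1e-8)  # -> True
```

In the actual model the projected features feed a final classification layer, so gradients push the main model away from whatever the BoW sub-model can already represent.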
6.3 Qualitative Feature Analysis
We use LIME Ribeiro et al. (2016) to qualitatively visualize how the orthogonal projection w.r.t. the BoW sub-model changes the features used by the model. We select one example from the CWB Bal dataset to see how applying the BoW model with HEX corrects previous mistakes. From Fig. 3, we can see that before applying the BoW sub-model (the upper part of the figure), the model predicts the contradiction label almost solely based on the existence of the word “no” in the hypothesis. However, after applying our BoW sub-model with HEX projection, our model gives higher importance to other useful features (e.g., the match of the two “bad” tokens, and the match of important past-tense temporal words such as “passed” and “longer” in the premise-hypothesis pair), even though “no” still has high influence towards the contradiction label. Another example, from the CWB Stress* dataset, can be seen in the Appendix.
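For readers unfamiliar with this kind of attribution, a much-simplified leave-one-out variant (not the actual LIME algorithm, which fits a local linear model over random perturbations) can be sketched as follows, with a toy negation-triggered scorer standing in for the biased baseline:

```python
# Hypothetical LIME-style probe (a simplified ablation, not the real LIME
# library): estimate each hypothesis token's importance by how much the
# model's contradiction score drops when that token is removed. The toy
# scorer fires on negation words, mimicking the biased baseline model.

NEGATION_WORDS = {"no", "not", "never"}

def contradiction_score(hypothesis_tokens):
    """Toy biased model: score is 1.0 iff any negation word is present."""
    return 1.0 if NEGATION_WORDS & set(hypothesis_tokens) else 0.0

def token_importance(hypothesis: str):
    tokens = hypothesis.lower().split()
    base = contradiction_score(tokens)
    # Importance = score drop when the token is ablated (leave-one-out).
    return {t: base - contradiction_score(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

print(token_importance("there is no food left"))
# 'no' receives importance 1.0; every other token receives 0.0
```

A debiased model would instead spread importance over semantically relevant tokens, which is exactly the change Fig. 3 visualizes.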
7 Conclusion

We study the problem of lexical dataset biases using WOB and CWB as two examples. We first showed that lexical dataset biases cannot be solved by simple dataset changes, motivating the importance of directly designing model-level changes to solve this problem. For model-level changes, we first showed the ineffectiveness of embedding-debiasing approaches, highlighting how the lexical bias problem differs from the gender bias problem. Next, we robustified the model by forcing orthogonality between a BoW sub-model and the main model and demonstrated its effectiveness through several experiments. Since none of our methods is bias-type specific, we believe these results can also generalize to other similar lexical biases. Finally, we would like to point out that our methods and results are not meant to belittle the importance of collecting clean/unbiased data. We strongly believe in the importance of unbiased data for model design and evaluation. However, some biases are inherent and inevitable in the natural distribution of the task (e.g., for NLI, it is natural that sentence pairs with high overlap are most likely entailment pairs). Therefore, our work stresses that it is also very important to encourage the development of models that are unlikely to exploit these inevitable biases/shortcuts in the dataset. Neither model-level debiasing nor data-level debiasing alone is the conclusive solution for this problem. Joint efforts are needed to promote unbiased models that learn true semantics, and we hope our paper can encourage more work in this important direction.
Acknowledgments

We thank Snigdha Chaturvedi, Shashank Srivastava, and the reviewers for their helpful comments. This work was supported by DARPA YFA17-D17AP00022, NSF-CAREER Award 1846185, and ONR Grant N00014-18-1-2871. The views in this article are those of the authors, not of the funding agency.
References

- Don’t take the premise for granted: mitigating artifacts in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 877–891. Cited by: §2.
- On adversarial removal of hypothesis-only bias in natural language inference. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), Minneapolis, Minnesota, pp. 256–262. Cited by: §2.
- Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In NeurIPS, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 4349–4357. Cited by: §2.
- A large annotated corpus for learning natural language inference. In EMNLP, pp. 632–642. Cited by: §1, §3, footnote 6.
- Semantics derived automatically from language corpora contain human-like biases. Science 356 (6334), pp. 183–186. Cited by: §2.
- Enhanced lstm for natural language inference. In ACL, Vol. 1, pp. 1657–1668. Cited by: footnote 3.
- Don’t take the easy way out: ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4060–4073. Cited by: §2.
- Supervised learning of universal sentence representations from natural language inference data. In EMNLP, pp. 670–680. Cited by: §4.1.
- BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Cited by: Appendix A, Appendix D, footnote 3.
- Adversarial removal of demographic attributes from text data. In EMNLP, pp. 11–21. Cited by: §2.
- Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180–1189. Cited by: §2, §4.2, §4.2.
- Breaking nli systems with sentences that require simple lexical inferences. In ACL, Vol. 2, pp. 650–655. Cited by: §1.
- Annotation artifacts in natural language inference data. In NAACL-HLT, Vol. 2, pp. 107–112. Cited by: §1, §1, §2, footnote 6.
- Similarity measures in scientometric research: the Jaccard index versus Salton’s cosine formula. Information Processing & Management 25 (3), pp. 315–318. Cited by: §B.1, §B.2, §3.
- Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pp. 132–142. Cited by: §2.
- Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: Appendix A.
- Adversarial examples for evaluating reading comprehension systems. In EMNLP, pp. 2021–2031. Cited by: §3.
- Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Cited by: Appendix A.
- Towards robust and privacy-preserving text representations. In ACL, Vol. 2, pp. 25–30. Cited by: §2.
- Semeval-2014 task 1: evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In SemEval, pp. 1–8. Cited by: footnote 6.
- Right for the wrong reasons: diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007. Cited by: §1.
- Adversarially regularising neural nli models to integrate logical background knowledge. CoNLL 2018, pp. 65. Cited by: §2.
- Natural language inference by tree-based convolution and heuristic matching. In ACL, Vol. 2, pp. 130–136. Cited by: §4.1.
- Stress test evaluation for natural language inference. In COLING, pp. 2340–2353. Cited by: Appendix C, Table 1, §1, §1, §2, §3, §5.1, footnote 4.
- Analyzing compositionality-sensitivity of nli models. In AAAI, Vol. 33, pp. 6867–6874. Cited by: §1, §2, §4.1.
- Adversarial over-sensitivity and over-stability strategies for dialogue models. In CoNLL, pp. 486–496. Cited by: §3.
- Glove: global vectors for word representation. In EMNLP, pp. 1532–1543. Cited by: Appendix A.
- Hypothesis only baselines in natural language inference. In *SEM, pp. 180–191. Cited by: §1, §2.
- “Why should I trust you?”: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Cited by: §6.3.
- Dropout: a simple way to prevent neural networks from overfitting. JMLR 15 (1), pp. 1929–1958. Cited by: Appendix A.
- Testing the generalization power of neural network models across nli benchmarks. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 85–94. Cited by: §1.
- Attention is all you need. In NeurIPS, pp. 5998–6008. Cited by: §4.3.
- Learning robust representations by projecting superficial statistics out. In ICLR. Cited by: Appendix A, §1, §2, §4.3, §4.3, §4.3, §4.
- What if we simply swap the two text fragments? A straightforward yet effective way to test the robustness of methods to confounding signals in nature language inference tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 7136–7143. Cited by: §1.
- Balanced datasets are not enough: estimating and mitigating gender bias in deep image representations. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5310–5319. Cited by: §3.
- A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, Vol. 1, pp. 1112–1122. Cited by: §B.1, §1, §1, §3, footnote 6.
- Controllable invariance through adversarial feature learning. In NeurIPS, pp. 585–596. Cited by: §2, §4.2.
- Men also like shopping: reducing gender bias amplification using corpus-level constraints. In EMNLP, Cited by: §2.
- Gender bias in coreference resolution: evaluation and debiasing methods. In NAACL-HLT, Vol. 2. Cited by: §2, §4.2, §4.2.
- Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4847–4853. Cited by: §2.
Appendix A Training Details
For all our models except BERT Devlin et al. (2019), we use pre-trained 300-dimensional GloVe Pennington et al. (2014) word embeddings to initialize the embedding layers. The hidden dimension of the LSTM Hochreiter and Schmidhuber (1997) is 300. We use Adam Kingma and Ba (2015) as the optimizer with an initial learning rate of 0.0004. We apply dropout Srivastava et al. (2014) with a rate of 0.4 to regularize our model. For the model with HEX projection, we apply all the tricks from the original paper Wang et al. (2019a) (column-wise normalizing the input features in every batch, and fine-tuning from a trained model with the bottom layer fixed) to stabilize the training. In our experiments, we set the multi-task coefficients of the two losses to 1.0 and 0.3, respectively.
Appendix B Detailed Description of the Extraction of Balanced Testing Sets
B.1 Extraction of the Contradiction-Word-Bias Testing Set
For evaluating the contradiction-word bias (CWB), we look for words that both have a strong bias towards the ‘contradiction’ label and appear in a significant number of training samples. We first select ‘no’, ‘any’, ‘never’, and ‘anything’, the four most frequent words for which over 50% of the training samples containing them are labeled as ‘contradiction’. Since most of the analysis papers also study the bias of ‘not’, we include ‘not’ as a contradiction word as well. However, in the training set of MNLI Williams et al. (2018), only 45.3% of the samples containing ‘not’ are labeled ‘contradiction’, so the bias of ‘not’ is not as strong as that of the other words.
Next, in order to create a balanced dataset for these selected contradiction words, we first select the samples containing these words from the matched development set. To make the samples more difficult and better test the model’s bias, we only select samples where the hypothesis contains the contradiction word while the premise contains no negation word (so that the contradiction word was generated by the annotator instead of copied from the premise). Since the bias of ‘not’ is not uniformly strong, for ‘not’ we only select samples that both contain the word and have a small Jaccard distance Hamers et al. (1989) between the sentence pair, where we empirically find the bias to be stronger.
After selecting these samples, we obtain a testing set with most samples labeled as contradiction, i.e., a severely unbalanced label distribution. To balance it, we randomly sample examples from the training set using the same criterion (a contradiction word in the hypothesis but no negation word in the premise) and add them to the testing set. The resulting dataset contains 1100 samples, of which 550 are labeled as contradiction and the other 550 carry non-contradiction labels. Since the domain of the training set differs from that of the mismatched validation set, we only extract a balanced test set based on the matched validation set.
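The selection criterion above can be sketched as a simple filter; the function and word lists below are our own illustration (in particular, the negation-word list is an assumption, not the exact list used for the released dataset):

```python
# Rough sketch of the CWB test-set selection criterion: keep pairs whose
# hypothesis contains a contradiction word while the premise contains no
# negation word. Names and the negation list are illustrative assumptions.

CONTRADICTION_WORDS = {"no", "any", "never", "anything", "not"}
NEGATION_WORDS = {"no", "not", "never", "n't", "none", "nothing"}  # assumed

def is_cwb_candidate(premise: str, hypothesis: str) -> bool:
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return bool(h & CONTRADICTION_WORDS) and not (p & NEGATION_WORDS)

pairs = [
    ("a man eats lunch", "the man never eats", "contradiction"),
    ("there is no dog", "there is not a dog", "entailment"),  # premise negated
    ("kids play outside", "kids are inside", "neutral"),      # no contradiction word
]
selected = [x for x in pairs if is_cwb_candidate(x[0], x[1])]
print(len(selected))  # -> 1 (only the first pair passes the filter)
```

Balancing then amounts to topping up the minority labels with training-set samples that pass the same filter.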
Table 5: Phrases appended to the hypothesis for each contradiction word.

| Contradiction word | Appended phrase |
| no | and false is no true |
| any | and any true is true |
| never | and false is never true |
| anything | and anything true is true |
| not | and false is not true |
B.2 Extraction of the Word-Overlapping-Bias Testing Set
We first sort the samples in the MNLI matched validation set by Jaccard distance Hamers et al. (1989) and choose the samples with the smallest distance (highest overlap). To match the size of the contradiction-word-bias testing set, we select the top 550 samples with the entailment label and the top 550 samples with non-entailment labels, obtaining a dataset with high word overlap but a balanced label distribution.
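This per-label sorting step can be sketched as follows; the function names are illustrative, not taken from the original implementation:

```python
# Sketch of the WOB selection: sort pairs by Jaccard distance and take the
# highest-overlap (smallest-distance) examples per label class.

def jaccard_distance(premise: str, hypothesis: str) -> float:
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return 1.0 - len(p & h) / len(p | h)

def top_overlap(pairs, label, k):
    """k samples with the given label and the smallest Jaccard distance."""
    subset = [x for x in pairs if x[2] == label]
    return sorted(subset, key=lambda x: jaccard_distance(x[0], x[1]))[:k]

pairs = [
    ("a dog runs fast", "a dog runs", "entailment"),
    ("a dog runs fast", "a cat sleeps", "entailment"),
    ("the sky is blue", "the sky is not blue", "contradiction"),
]
best = top_overlap(pairs, "entailment", 1)
print(best[0][1])  # -> "a dog runs" (highest word overlap)
```

In our setting `k` is 550 for the entailment class and 550 for the pooled non-entailment classes.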
Appendix C Construction of Synthetic Data
We follow the construction rule of the NLI stress tests Naik et al. (2018) to generate synthetic data for the training set: we append meaningless phrases at the end of the hypothesis sentence and keep the original label unchanged. For CWB, we focus on 5 different contradiction words: ‘no’, ‘any’, ‘never’, ‘anything’, and ‘not’. For each sentence pair, we therefore create five new pairs by appending five different phrases, one for evaluating the bias of each contradiction word. The appended phrases are listed in Table 5. For WOB, we also follow Naik et al. (2018) and append ‘and true is true’ to every hypothesis sentence, creating one new pair per sample.
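The augmentation step can be sketched as below; the helper names are ours, and the phrase table mirrors Table 5:

```python
# Sketch of the stress-style synthetic augmentation, following Naik et al.
# (2018): append a label-preserving tautology to each hypothesis.

CWB_PHRASES = {
    "no": "and false is no true",
    "any": "and any true is true",
    "never": "and false is never true",
    "anything": "and anything true is true",
    "not": "and false is not true",
}
WOB_PHRASE = "and true is true"

def augment_cwb(premise, hypothesis, label):
    """Create one new pair per contradiction word; labels are unchanged."""
    return [(premise, f"{hypothesis} {phrase}", label)
            for phrase in CWB_PHRASES.values()]

new_pairs = augment_cwb("a man sleeps", "a person rests", "entailment")
print(len(new_pairs))  # -> 5
```

The WOB case is the degenerate version with the single phrase `WOB_PHRASE`.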
Appendix D Data Augmentation/Enhancement Results for BERT
The data augmentation/enhancement results for BERT-base Devlin et al. (2019) are shown in Table 7 and Table 8 (we run all experiments 5 times and report the mean). As shown in Table 7, BERT shows a significant gap between Acc and Acc_hr on both CWB datasets, indicating a clear CWB bias. As for WOB, the gap between Acc and Acc_hr on Bal is much smaller; however, the performance on Stress is very poor. Therefore, we conjecture that even though BERT achieves a high score on the WOB Bal dataset, it is simply overfitting the dataset in a different way, i.e., there is still significant WOB bias in BERT. In conclusion, in our experiments, BERT still shows significant CWB and WOB.
Similar to our main data augmentation/enhancement results, we find that after adding 500 additional synthetic samples, BERT quickly learns their pattern, but adding more synthetic data does not improve performance on the Bal dataset. Nor do we see any significant improvement when adding additional original samples: in all the +origin experiments, BERT performs similarly. Again, this shows the limitation of the data augmentation/enhancement approach, especially when starting from a stronger baseline such as BERT.
Appendix E More Qualitative Feature Analysis
In Fig. 4, we show how feature importance changes before/after adding the BoW sub-model for a CWB Stress* example (we chose a borderline example where the shift of the prediction distribution to the correct label is not extreme). Before adding the BoW sub-model orthogonality projection, the extra misleading words (both “and” and “not”) mislead the model into predicting the wrong contradiction label; after adding the BoW sub-model, our model assigns higher weights to useful features such as “have”, “before”, etc.