Natural language inference (NLI), also known as recognizing textual entailment, is a widely studied task which aims to infer the relationship (e.g., entailment, contradiction, neutral) between two fragments of text, known as the premise and the hypothesis [9, 10]. NLI models are usually required to determine whether a hypothesis is true (entailment) or false (contradiction) given the premise, or whether its truth value cannot be inferred (neutral). A proper NLI decision should rely on both the premise and the hypothesis. However, some recent studies [19, 36, 40] have shown that a trained model can identify the true label by looking only at the hypothesis, without observing the premise. This phenomenon is referred to as annotation artifacts, statistical irregularities, or partial-input heuristics. In this paper we use the term hypothesis-only bias to refer to it.
Such hypothesis-only bias originates from the human annotation process of data collection. During the collection of many large-scale NLI datasets such as SNLI and MultiNLI, human annotators are asked to write new sentences (hypotheses) based on a given premise and a specified label among entailment, contradiction and neutral. Some of the human-elicited hypotheses contain patterns that spuriously correlate with specific labels. For example, 85.2% of the hypothesis sentences containing the phrase video games were labeled as contradiction; the appearance of video games in a hypothesis can thus be seen as a strong artificial indicator of the label contradiction.
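The "video games" statistic above is just a conditional label probability estimated by counting. A minimal sketch (function and data names are ours, not from the paper):

```python
from collections import Counter

def phrase_label_probability(instances, phrase):
    """Estimate P(label | phrase appears in hypothesis) by counting.
    `instances` is a list of (hypothesis_string, label) pairs."""
    hits = Counter(label for hyp, label in instances if phrase in hyp)
    total = sum(hits.values())
    return {lab: c / total for lab, c in hits.items()}
```

Running this over a full training set for every candidate phrase is what surfaces spuriously correlated patterns.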
To get a deeper understanding of the specific bias captured by NLI models during training, we extract explicit surface patterns from the training sets of SNLI and MultiNLI, and show that a model can easily reach decent classification accuracy by merely looking at these patterns. We then derive hard (adversarial) and easy subsets from the original test sets, based on the indications of the artificial patterns in the hypotheses: the gold labels of the easy subsets are consistent with such indications, while those of the hard subsets go against them. The model performance gap between the easy and hard subsets shows to what extent a model can mitigate the hypothesis-only bias.
| Dataset | Multi-word Patterns (%) | | | Unigram Patterns (%) | | |
|---|---|---|---|---|---|---|
| SNLI | in this picture 96.4 | tall human 99.7 | Nobody # # . 99.8 | outdoors 78.8 | vacation 91.0 | Nobody 99.7 |
| | A human 96.4 | A sad 95.6 | dog # sleeping 97.5 | sport 75.1 | winning 89.9 | No 95.8 |
| | A # # outdoors . 95.9 | A # human 94.1 | There # no 96.2 | instrument 74.4 | favorite 88.7 | cats 93.4 |
| | A # # outside . 89.8 | the first 88.6 | in # bed 94.2 | animal 68.5 | date 87.4 | naked 88.7 |
| | is near # # . 87.6 | on # way 87.0 | at home 93.5 | moving 67.8 | brothers 85.6 | tv 88.4 |
| MultiNLI | It # possible 71.7 | , said the 93.6 | There are no 92.4 | Several 54.7 | addition 69.6 | None 85.4 |
| | There # a # # the 70.8 | They wanted to 81.4 | does not # any 91.9 | Yes 54.4 | also 68.6 | refused 80.5 |
| | There is an 68.8 | the most popular 78.7 | no # on 91.5 | various 53.7 | locals 65.7 | never 79.0 |
| | are two 67.0 | addition to 78.4 | are # any 90.1 | 53.1 | battle 63.3 | perfectly 77.3 |
| | There # some 65.9 | because he was 77.8 | are never 89.9 | According 53.1 | dangerous 63.2 | Nobody 77.1 |
Table 1: Top 3 artificial patterns sorted by the pattern-label conditional probability (Sec 2.1). The listed patterns appear in at least 500/200 instances of the SNLI/MultiNLI training sets; the numbers 500/200 here are chosen only for better visualization. '#' is the placeholder for an arbitrary token. The underlined artificial pattern serves as the example in Sec 2.1.
After analyzing some competitive NLI models, including both non-pretrained models like InferSent, DAM and ESIM, and popular pretrained models like BERT, XLNet and RoBERTa, we find that the hypothesis-only bias makes NLI models vulnerable to the adversarial (hard) instances which go against such bias (accuracy around 60% for InferSent), while these models achieve much higher accuracy (around 95% for InferSent) on the easy instances. This is evidence that NLI models might be over-estimated, as they benefit considerably from the hints of artificial patterns.
A straightforward remedy is to eliminate these human artifacts during the annotation process, for example by encouraging human annotators to use more diverse expressions, or by applying dataset adversarial filtering and multi-round annotation. However, the annotation process would then inevitably become more time-consuming and expensive.
To this end, this paper explores two ways, based on the derived artificial patterns, to alleviate the hypothesis-only bias in the training process: down-sampling and adversarial debiasing. We hope they will serve as competitive baselines for other NLI debiasing methods. Down-sampling reduces the hypothesis-only bias in NLI training sets by removing those instances whose correct labels may easily be revealed by artificial patterns. Furthermore, we explore adversarial debiasing methods [1, 2] for sentence vector-based NLI models [45, 25, 42, 27]. The experiments show that the guidance from the derived artificial patterns can be helpful to the success of sentence-level NLI debiasing.
In this section, we identify artificial patterns in the hypothesis sentences which highly correlate with specific labels in the training sets, and then derive hard and easy subsets from the original test sets based on them.
2.1 Artificial Pattern Collection
‘Pattern’ in this work refers to a (possibly non-consecutive) word segment in a hypothesis sentence. We try to identify the ‘artificial patterns’ which spuriously correlate with a specific label due to certain human artifacts.
We use H(l, d, p) to represent a set of artificial patterns, where l and d denote the max length of a pattern and the max distance between two consecutive words in a pattern, respectively. For an artificial pattern h in H, there exists a specific label y for which the conditional probability P(y|h) >= p. For example, for the underlined pattern ‘A # # outdoors .’ in Table 1, the length of this pattern is 3, and the distance between the consecutive words ‘A’ and ‘outdoors’ is 2. Its conditional probability with the label entailment is 95.9%. Notably, all the recognized artificial patterns in our paper appear in at least 50 instances of the training sets to avoid misrecognition (Footnote 1: if a pattern appears in only one training instance, its P(y|h) trivially equals 1 for the label of that instance).
In the rest of the paper, unless otherwise specified, we set l = 3 and d = 3 (Footnote 2: we also tried larger l and d, e.g., 4 or 5, but did not observe considerable changes in the artificial patterns; e.g., 95.4% of the patterns in H(5,5,0.5) are covered by H(3,3,0.5)). By doing so, we only tune the hyper-parameter p in H(3,3,p) to decide between a rather strict (smaller p) or mild (bigger p) strategy while deriving the artificial patterns.
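Under the H(l, d, p) notation, pattern harvesting reduces to enumerating gappy word segments and counting pattern-label co-occurrences. A minimal sketch of this procedure (our own simplified reading; function names and the exact gap-distance convention are assumptions):

```python
from collections import Counter, defaultdict
from itertools import combinations

def enumerate_patterns(tokens, max_len=3, max_dist=3):
    """Enumerate (possibly non-consecutive) patterns of up to max_len words;
    consecutive pattern words may be separated by at most max_dist tokens,
    with gaps rendered as '#' placeholders."""
    out = set()
    for length in range(1, max_len + 1):
        for idx in combinations(range(len(tokens)), length):
            gaps = [b - a - 1 for a, b in zip(idx, idx[1:])]
            if any(g > max_dist for g in gaps):
                continue
            parts = [tokens[idx[0]]]
            for g, i in zip(gaps, idx[1:]):
                parts.extend(['#'] * g)   # placeholder per skipped token
                parts.append(tokens[i])
            out.add(' '.join(parts))
    return out

def harvest(instances, p=0.5, min_count=50, max_len=3, max_dist=3):
    """instances: list of (hypothesis_tokens, label). Returns
    {pattern: (majority_label, conditional_probability)} for patterns
    occurring in >= min_count instances with P(label | pattern) >= p."""
    label_counts = defaultdict(Counter)
    for tokens, label in instances:
        for pat in enumerate_patterns(tokens, max_len, max_dist):
            label_counts[pat][label] += 1
    artificial = {}
    for pat, cnt in label_counts.items():
        total = sum(cnt.values())
        label, top = cnt.most_common(1)[0]
        if total >= min_count and top / total >= p:
            artificial[pat] = (label, top / total)
    return artificial
```

The 50-instance floor mirrors the paper's misrecognition safeguard; H(3,3,0.5) corresponds to `harvest(instances)` with the defaults above.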
2.2 Analysis of Hypothesis-only Bias
Previous work trained a sentence-based hypothesis-only classifier which achieves decent accuracy. In contrast, we show in Table 2 that a classifier which merely uses the artificial patterns as features achieves performance comparable to a fastText classifier. Table 2 also shows that the classifier based on multi-word patterns with the default l and d (see Footnote 2; H(3,3,0.5)) achieves much higher accuracy than one based on only unigram patterns (H(1,1,0.5)).
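As a rough illustration of such a pattern-feature classifier, one could simply predict the label indicated by the strongest matched pattern. A unigram-only sketch (not the paper's actual classifier; names and the fallback rule are our assumptions):

```python
def pattern_classifier(tokens, artificial, default="entailment"):
    """Hypothesis-only classification sketch using unigram artificial
    patterns: `artificial` maps word -> (label, conditional_probability),
    e.g. as harvested from a training set. Predict the label of the
    highest-probability matched pattern; fall back to `default` otherwise."""
    matches = [artificial[w] for w in set(tokens) if w in artificial]
    if not matches:
        return default
    return max(matches, key=lambda lp: lp[1])[0]
```

The real feature-based classifier also uses multi-word patterns, which Table 2 shows to be considerably stronger.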
We also compare the test accuracies on the easy and hard sets (Sec 2.3) of the baseline models (I-9, D-9, E-9) in Tables 5 and 6. Empirically we find that the NLI models achieve very high accuracy on the easy sets while performing poorly on the hard sets. We observe the same tendency in the models trained on the randomly down-sampled training sets (e.g., I-1, I-3, I-5, I-7). This shows that NLI models fit the artificial patterns in the training set very well, which makes them fragile to the adversarial examples (hard set) which go against these patterns. Thus we assume that the artificial patterns contribute to the hypothesis-only bias.
2.3 Hard and Easy Subsets
Some instances contain artificial patterns that are strong indicators of specific labels. We treat the test instances which are consistent with such indications as ‘easy’ ones and those which go against them as ‘hard’ ones.
For the easy subsets, the labels indicated by all the artificial patterns in the hypothesis must be consistent with the gold label. We show an easy instance below: the artificial patterns ‘The dogs are # on’ and ‘bed .’ (‘bed’ being the last word of the sentence) are strong indicators of the correct classification.
Premise: Two cats playing on the bed together .
Hypothesis: The dogs are playing on the bed .
Gold Label: Contradiction
Artificial patterns: (bed ., Contradiction, 83.2%); (The dogs are # on, Contradiction, 82.9%)
(a) An easy instance
Premise: A bare-chested man fitting his head and arm into a toilet seat ring while spectators watch in a city .
Hypothesis: A gentleman with no chest hair , wrangles his way through a toilet seat .
Gold Label: Entailment
Artificial patterns: (no, Contradiction, 82.7%); (his way, Neutral, 82.4%)
(b) A hard instance
For the hard subsets, on the other hand, the indications of the artificial patterns must all differ from the gold label. We also show a hard instance above: in this situation, the artificial patterns ‘no’ (Footnote 3: ‘no’ is different from ‘No’ in Table 1, as the latter indicates that the word appears at the beginning of the sentence) and ‘his way’ may misguide NLI models to wrong answers.
Notably, we do not put instances with conflicting indications (e.g., an instance with two artificial patterns, one of which agrees with the gold label while the other does not) into the easy or hard subsets, so as to build more challenging adversarial examples.
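The easy/hard split described in this subsection can be sketched as follows, simplified to unigram patterns (the paper also uses multi-word patterns; all names here are assumptions):

```python
def split_easy_hard(test_set, artificial):
    """Derive easy/hard subsets (Sec 2.3 sketch, unigram patterns only).
    `artificial` maps word -> (label, probability). An instance is 'easy'
    if every matched pattern indicates its gold label, 'hard' if every
    matched pattern indicates a different label; instances with
    conflicting or no indications are assigned to neither subset."""
    easy, hard = [], []
    for tokens, gold in test_set:
        indicated = [artificial[w][0] for w in tokens if w in artificial]
        if not indicated:
            continue
        if all(lab == gold for lab in indicated):
            easy.append((tokens, gold))
        elif all(lab != gold for lab in indicated):
            hard.append((tokens, gold))
    return easy, hard
```

The two `all(...)` branches implement the consistency and opposition conditions; falling through both implements the conflicting-indication exclusion.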
The sizes of the hard and easy sets depend on how we harvest the artificial patterns, i.e., on p in H(3,3,p) (Sec 2.1 and Footnote 2). For the sake of simplicity, we use a fixed p for SNLI and a smaller one for MultiNLI (Footnote 4: MultiNLI's pattern-label conditional probabilities are generally smaller than SNLI's, as shown in Table 1, so we use a smaller p to ensure a sufficient size of the derived subsets) as the thresholds to derive the easy and hard subsets in the following experiments, since a relatively bigger p selects the instances which largely accord with the artificial patterns and are thus eligible to serve as adversarial examples.
The sizes of the easy and hard sets in the SNLI test set, MultiNLI-matched dev set and MultiNLI-mismatched dev set are 327/1760, 410/1032 and 371/1085 respectively (Footnote 5: the datasets used in this paper can be found at https://tyliupku.github.io/publications/). An ideally unbiased NLI model should perform similarly on the easy and hard sets; we should not see a huge gap between its accuracies on the two subsets.
We set up both pretrained and non-pretrained model baselines for the proposed adversarial datasets. We rerun their publicly available codebases with the default hyper-parameter and optimizer settings, including InferSent (https://github.com/facebookresearch/InferSent), DAM (https://github.com/harvardnlp/decomp-attn), ESIM (https://github.com/coetaur0/ESIM), and BERT (uncased base), XLNet (cased base) and RoBERTa (base) (https://github.com/huggingface/transformers). For BERT, XLNet and RoBERTa, we concatenate the premise and hypothesis sentences with the [SEP] token as the input. For the output classifier, we use a linear mapping to transform the vector at the position of the [CLS] token in the last layer of these pretrained models into a softmax-normalized 3-element vector representing the score of each label. We report the test accuracies on the easy and hard subsets as well as the UW+CMU hard subsets, which are derived from a hypothesis-only classifier. From Table 4, we can tell that the proposed hard sets are more challenging than the UW+CMU hard subsets.
3 Exploring Debiasing Methods
(a) Models trained on SNLI
(b) Models trained on MultiNLI
3.1 Down-sampling Baselines
(a) InferSent trained on SNLI
(b) DAM trained on SNLI
(c) ESIM trained on SNLI
(a) InferSent trained on MultiNLI
(b) DAM trained on MultiNLI
(c) ESIM trained on MultiNLI
Sec 2.2 verifies that the artificial patterns enable accurate hypothesis-only classification, which motivates us to remove such patterns from the training sets by down-sampling. Specifically, we down-sample the training sets of SNLI and MultiNLI and retrain three prevailing NLI models: InferSent, DAM and ESIM.
3.1.1 Downsampling Details
We down-sample the training sets by removing the biased instances (‘debias’ mode), i.e., those that contain artificial patterns.
Choosing the down-sampling threshold p: the threshold p is exactly the one defined in Sec 2.1. We consider a training instance biased even if it contains only one artificial pattern. When adopting a smaller p, we harvest more artificial patterns as described in Sec 2.1; accordingly, more training instances are treated as biased and then filtered out. In a word, a smaller p represents a stricter down-sampling strategy in terms of filtering the artificial patterns. p = 0.5 serves as the lower bound because the highest pattern-label conditional probability (P(y|h) in Sec 2.1) for premises, which do not exhibit the same bias as hypotheses, is less than 0.5 in both the SNLI and MultiNLI training sets.
Ruling out the effects of training size: the model performance might be highly correlated with the size of the training set. To rule out the effects of training size as much as possible, we set up randomly down-sampled training sets (‘rand’ mode) with the same size as the corresponding ‘debias’ mode under each p, for a fair comparison.
Keeping the label distribution balanced: after removing the biased instances from the training set under a given p (‘debias’ mode), suppose we obtain n1 >= n2 >= n3 instances for the 3 pre-defined NLI labels in the down-sampled training set. We then down-sample the subsets with n1 and n2 instances to n3 instances and obtain a dataset with 3*n3 instances. For the corresponding ‘rand’ mode, we sample n3 instances for each pre-defined label from the original training set.
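The 'debias' filtering plus label balancing described above can be sketched as follows (unigram-pattern simplification; all names are our assumptions):

```python
import random
from collections import defaultdict

def debias_downsample(train, artificial, seed=0):
    """'Debias'-mode down-sampling sketch: remove every instance whose
    hypothesis contains at least one artificial pattern (unigram
    simplification), then balance the label distribution by sampling each
    label down to the smallest remaining class size n3. `train` is a list
    of (hypothesis_tokens, label); `artificial` maps word -> (label, prob)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for tokens, label in train:
        if not any(w in artificial for w in tokens):  # keep unbiased only
            by_label[label].append((tokens, label))
    n3 = min(len(v) for v in by_label.values())       # smallest class size
    kept = []
    for label in sorted(by_label):                    # deterministic order
        kept.extend(rng.sample(by_label[label], n3))
    return kept
```

The 'rand' counterpart would instead sample the same n3 instances per label from the full, unfiltered training set.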
Convincing scores from multiple runs: to mitigate the randomness of random down-sampling and model initialization, for the ‘rand’ mode in Tables 5 and 6 we first randomly down-sample the training set (with the label distribution balanced) five times for each p, obtaining five randomly down-sampled training sets per p. Then, for each down-sampled training set, we run three independent experiments with random model initialization under the same experimental settings. So each score in the ‘rand’ mode of Tables 5 and 6 comes from 15 independent runs. The scores in the ‘debias’ mode of Tables 5 and 6 come from 5 independent runs with random model initialization.
From Tables 5 and 6, we observe that: 1) The NLI models fit the bias patterns in the hypotheses very well even on the small-scale randomly down-sampled training sets (I-1, D-1 and E-1), which account for only 4.0% of the original training set (SNLI), as the performance gaps between the easy and hard subsets in these settings remain huge (around 40% for SNLI in Table 5).
2) Under the same p, the proposed ‘debias’ down-sampling not only outperforms its ‘rand’ counterpart on the hard subsets, but also greatly reduces the performance gap between the easy and hard sets.
3) The gains on the hard sets of MultiNLI are smaller than those on SNLI, as MultiNLI is less biased in terms of the pattern-label conditional probability (Table 1); down-sampling achieves larger gains on more biased datasets. On SNLI, the ‘debias’ down-sampling even outperforms the baseline models (I-8 vs I-9, D-8 vs D-9, E-8 vs E-9), which is impressive given that the training size of I-8, D-8 and E-8 is only 67.5% of that of the baseline models.
Concerns have been raised about down-sampling (DS) methods: 1) Will removing the artificial patterns create new artifacts? (E.g., removing the word ‘no’, a strong indicator of contradiction, may leave the remaining dataset with this word mostly appearing in the neutral or entailment classes, thus creating a new artifact.) 2) Will DS methods prevent the models from learning specific inference phenomena (e.g., that ‘animal’ is a hypernym of ‘dog’)? First of all, different from prior work which only considered unigram patterns, our artificial patterns are mostly multi-word patterns rather than unigrams, as the former usually have larger conditional probabilities, as shown in Table 1. Our intention is to use the multi-word patterns to capture the specific ways of expression (human artifacts) of the human annotators, rather than single words. For the first concern, instead of filtering the unigram ‘no’, we prefer removing multi-word patterns which contain ‘no’, such as ‘There are no’ or ‘no # on’ for MultiNLI, as shown in Table 1. For the hypernym mentioned in the second concern, as we prefer filtering multi-word patterns like ‘The dogs are # on’, we would not deliberately filter the unigram ‘dog’ unless adopting a very aggressive DS strategy in both SNLI and MultiNLI.
3.2 Adversarial Debiasing
Since the hypothesis-only bias comes solely from the hypothesis sentence, we wonder whether it is possible to get rid of these biases by debiasing the hypothesis sentence vector. More specifically, we focus on the ‘sentence vector-based models’ category as defined on SNLI's web page (https://nlp.stanford.edu/projects/snli/) (Footnote 10: it would be more challenging to manipulate the gradients in non-sentence vector-based models, e.g., models which contain interactions between the hypothesis and premise sentence encoders; we leave this to future work). Notably, the idea of debiasing NLI via adversarial training has been proposed before [1, 2]. We hereby briefly introduce how we implement our adversarial training and how we incorporate an instance reweighting method in this framework.
In the following experiments, we use the full training sets without any down-sampling. We use the InferSent 
(biLSTM with max pooling) model as the benchmark sentence encoder.
3.2.1 Adversarial Debiasing Framework
As shown in Fig 1, given the outputs of the hypothesis and premise encoders, we predict the NLI label using a classifier over both sentence vectors. In addition, we train a hypothesis-only discriminator that tries to predict the correct label solely from the hypothesis sentence representation, i.e., by modeling the label distribution conditioned on the hypothesis vector alone. We formulate the training process as an adversarial min-max game: we train the discriminator to predict the label using only the hypothesis sentence vector, and we train the sentence encoders and the classifier to fool the discriminator without hurting inference ability. A hyper-parameter in Eq 1 controls the degree of debiasing.
We train the encoders, discriminator and classifier in Eq 1 together, using a gradient reversal layer as shown in Fig 1. We negate the gradients from the discriminator (red arrow in Fig 1) to push the hypothesis encoder in the opposite direction while updating its parameters. The gradient reversal layer makes the min-max game in Eq 1 easier to optimize [43, 6] than training the two adversarial components alternately as in Generative Adversarial Nets (GANs). We update the model parameters by mini-batch gradient descent.
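A gradient reversal layer is simply an identity map whose backward pass flips the sign of the incoming gradient. A framework-free sketch of that behavior (the scaling factor `lam` stands in for the debiasing-strength hyper-parameter; the class and method names are our assumptions):

```python
class GradientReversal:
    """Minimal gradient reversal layer sketch (after Ganin & Lempitsky):
    identity in the forward pass, so the discriminator sees the hypothesis
    vector unchanged; in the backward pass the incoming gradient is negated
    and scaled by lam, pushing the hypothesis encoder *away* from features
    the hypothesis-only discriminator finds useful."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, h):
        return h                                   # identity

    def backward(self, grad_from_discriminator):
        return [-self.lam * g for g in grad_from_discriminator]
```

In an autodiff framework this would be a custom function whose backward hook performs the negation, placed between the hypothesis encoder and the discriminator.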
3.2.2 Guidance from Artificial Patterns
The artificial patterns turn out to be useful guidance for both the discriminator and the classifier, as they indicate whether an instance is biased or not. We thus reweight the instances in the training set based on a division into ‘biased’ and ‘unbiased’ training subsets.
Guidance for the Discriminator: during the adversarial process, we optimize the discriminator by maximizing the log likelihood as in Eq 2. We find that increasing the weights of the biased instances in the training set greatly helps the adversarial debiasing model: in this way, the discriminator can learn more from the biased instances and better fit the hypothesis-only bias, and the whole adversarial debiasing process benefits from a stronger hypothesis-only discriminator. Formally, we replace the negative log likelihood loss function in Eq 2 with a weighted loss (Eq 5), which sums the per-instance negative log likelihoods over the whole training corpus (the union of the biased and unbiased subsets) and multiplies the terms of the biased instances by a hyper-parameter alpha. The division into biased and unbiased training subsets depends on the debiasing threshold p (just like the down-sampling threshold in Tables 5 and 6); alpha reflects the extra attention paid to the biased instances by the hypothesis-only discriminator.
Guidance for the Classifier: similar to the reweighting in Eq 5, we also apply the reweighting strategy to the parameter learning of the inference classifier in Eq 3. We hope the classifier captures the concrete semantics of NLI instead of over-fitting the artificial patterns in the hypotheses. Thus we increase the weights of the unbiased training subset in the loss function of Eq 3, yielding Eq 6, where a hyper-parameter beta controls the attention the model pays to the unbiased instances.
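The two reweighted losses can be sketched as plain weighted negative log-likelihoods. Eq 5 and Eq 6 are not reproduced in this extract, so the weights, symbol names (`alpha`, `beta`) and function names below are our assumptions:

```python
import math

def discriminator_loss(log_p_gold, biased_mask, alpha=2.0):
    """Weighted NLL sketch of the Eq 5 idea: biased instances (those
    containing artificial patterns) get weight alpha >= 1, so the
    hypothesis-only discriminator fits the bias more closely.
    `log_p_gold` holds log-probabilities of the gold labels."""
    total = sum((alpha if b else 1.0) * -lp
                for lp, b in zip(log_p_gold, biased_mask))
    return total / len(log_p_gold)

def classifier_loss(log_p_gold, biased_mask, beta=2.0):
    """Weighted NLL sketch of the Eq 6 idea: unbiased instances get weight
    beta >= 1, steering the NLI classifier toward instances whose labels
    are not revealed by artificial patterns."""
    total = sum((1.0 if b else beta) * -lp
                for lp, b in zip(log_p_gold, biased_mask))
    return total / len(log_p_gold)
```

Setting alpha = beta = 1 recovers the unweighted losses, so the reweighting is a strict generalization of the vanilla adversarial objective.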
3.2.3 Training Details
Apart from the weighted loss functions guided by the artificial patterns, we also investigate the following two techniques in the adversarial training process.
Multiple discriminators: the min-max game in Eq 1 could benefit from stronger discriminators, so we stack multiple discriminators to enhance the hypothesis-only classification ability. In our experiments, we tune the number of discriminators and use the best-performing configuration.
Dynamic reweighting: for the reweighting hyper-parameters in Eq 5 and Eq 6, we find it useful to adjust their values dynamically during training, from the initial values set before training to updated values after each training iteration. Additionally, we set up a hyper-parameter threshold to control the gap between the model accuracies on the easy and hard subsets of the dev set.
In the update rule, a scaling hyper-parameter is set to 0.5 for models trained on both datasets, and we set the gap threshold to 0.15 and 0.10 for SNLI and MultiNLI respectively. Notably, although we update the hyper-parameters of Eq 5 and Eq 6 dynamically across iterations based on the dev-set gap, we still select the model with the best performance on the dev sets as the best model in each run.
Parameter settings: we use grid search to find the best settings of the reweighting hyper-parameters in Eq 5 and Eq 6 and the debiasing hyper-parameter in Eq 1. We also vary p, the threshold used to split the biased and unbiased subsets in Eq 5 and 6: specifically, we treat the instances which contain artificial patterns in H(3,3,p) (Sec 2.1, Footnote 2) as the biased subset and the remaining instances as the unbiased subset. For the results in Table 7, we use the best grid-searched settings for SNLI and MultiNLI respectively, with the same p as the threshold for separating the biased and unbiased subsets in Eq 5 and 6 on both datasets. For a fair comparison, we do not tune any hyper-parameter of the InferSent encoder, the learning rate or the optimizer settings. The results of ‘dInferSent’ and its variants in Table 7 come from 5 independent runs with random initialization.
(a) InferSent trained on SNLI
(b) InferSent trained on MultiNLI
From Table 7, we observe that although the performance gap between the easy and hard subsets is reduced to some extent by the vanilla dInferSent models on both SNLI and MultiNLI, the model still does not meet our expectation of lowering the gap between the hard and easy sets. We assume this is because the adversarial discriminator in Fig 1 somewhat impedes the inference ability of the NLI models, as it may disturb the hypothesis sentence encoder especially when the sentences do not contain hypothesis-only bias. The explicit guidance (‘+Guidance’) from the artificial patterns alleviates this issue on both datasets: the discriminator pays more attention to the potentially biased instances and thus has a smaller influence on the hard instances during training. These models achieve higher accuracies on the hard subsets than the baseline models on both datasets. The ‘reweight’ trick in Sec 3.2.3 further reduces the performance gap between the easy and hard sets, as it dynamically adjusts the debiasing strategy (i.e., the weights of the training instances in Eq 5 and 6).
4 Related Work
The bias in data annotation exists in many tasks, e.g., lexical inference, visual question answering and the ROC story cloze task. NLI models have been shown to be sensitive to compositional features of premises and hypotheses and to data permutations [39, 41], and to be vulnerable to adversarial examples [21, 30, 16] and crafted stress tests [15, 32]. Hypotheses in SNLI have been shown to carry evidence of gender, racial and religious stereotypes, among others. Prior work has analysed the behaviour of NLI models and the factors that make them more robust, discussed how to use partial-input baselines (hypothesis-only classifiers in NLI) in future dataset creation, and used ensemble-based methods to mitigate known biases. The InferSent model, which serves as an important baseline in this paper, has been found to achieve superb performance on SNLI via word-level heuristics.
The difficulties of natural language inference models were first revealed with bag-of-words models. Different from the artificial patterns used in this paper, other artifact evidence includes sentence occurrence, syntactic heuristics between hypotheses and premises, and black-box clues derived from neural models [19, 36, 20].
5 Conclusion
In this study, we show that the hypothesis-only bias in trained NLI models mainly comes from unevenly distributed surface patterns, which can be used to identify hard and easy instances for a more convincing re-evaluation of currently overestimated NLI models. Attempts to mitigate the bias are meaningful, as such bias makes NLI models fragile to adversarial examples. We mitigate this bias by removing the artificial patterns from the training sets, with experiments showing that proper down-sampling is a feasible way to alleviate the bias. We also show that adversarial debiasing guided by the harvested artificial patterns is a feasible way to mitigate the hypothesis-only bias for sentence vector-based NLI models.
We would like to thank the anonymous reviewers for their valuable suggestions. This work is supported by the National Science Foundation of China under Grant No. 61751201, No. 61772040, No. 61876004. The corresponding authors of this paper are Baobao Chang and Zhifang Sui.
6 Bibliographical References
-  (2019) Don’t take the premise for granted: mitigating artifacts in natural language inference. arXiv preprint arXiv:1907.04380. Cited by: §1, §3.2.
-  (2018) Mitigating bias in natural language inference using adversarial learning. Cited by: §1, §3.2.
-  (2017) Pay attention to the ending: strong neural baselines for the ROC story cloze task. In ACL, External Links: Cited by: §4.
-  (2017) Neural natural language inference models enhanced with external knowledge. arXiv preprint arXiv:1711.04289. Cited by: footnote 10.
-  (2017) Enhanced LSTM for natural language inference. In ACL, External Links: Cited by: §1.
-  (2018) Adversarial deep averaging networks for cross-lingual sentiment classification. TACL 6, pp. 557–570. Cited by: §3.2.1, §4.
-  (2019) Don’t take the easy way out: ensemble based methods for avoiding known dataset biases. arXiv preprint arXiv:1909.03683. Cited by: §4.
-  (2017) Supervised learning of universal sentence representations from natural language inference data. In EMNLP, pp. 670–680. External Links: Cited by: §1, §3.2.
-  (2006) The pascal recognising textual entailment challenge. pp. 177–190. Cited by: §1.
-  (2013) Recognizing textual entailment: models and applications. Synthesis Lectures on Human Language Technologies. External Links: Cited by: §1.
-  (2018) Evaluating compositionality in sentence embeddings. arXiv preprint arXiv:1802.04302. Cited by: §4.
-  (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1.
-  (2019) Misleading failures of partial-input baselines. arXiv preprint arXiv:1905.05778. Cited by: §1, §4.
-  (2016) Domain-adversarial training of neural networks. JMLR 17 (1), pp. 2096–2030. Cited by: §3.2.1.
-  (2018) Stress-testing neural models of natural language inference with multiply-quantified sentences. arXiv preprint arXiv:1810.13033. Cited by: §4.
-  (2018) Breaking NLI systems with sentences that require simple lexical inferences. In ACL, pp. 650–655. External Links: Cited by: §4.
-  (2014) Generative adversarial nets. In NIPS, pp. 2672–2680. Cited by: §3.2.1, §4.
-  (2017) Making the V in VQA matter: elevating the role of image understanding in visual question answering. In CVPR, Cited by: §4.
-  (2018) Annotation artifacts in natural language inference data. In NAACL, pp. 107–112. External Links: Cited by: §1, §2.2, §2.4, §3.1.2, §4.
-  (2019) Unlearn dataset bias in natural language inference by fitting the residual. arXiv preprint arXiv:1908.10763. Cited by: §4.
-  (2018) Adversarial example generation with syntactically controlled paraphrase networks. In NAACL, pp. 1875–1885. External Links: Cited by: §4.
-  (2018) Unsupervised adversarial invariance. In NIPS, pp. 5097–5107. Cited by: §4.
-  (2016) Fasttext. zip: compressing text classification models. arXiv preprint arXiv:1612.03651. Cited by: §2.2.
-  (2015) Do supervised distributional methods really learn lexical inference relations?. In NAACL, pp. 970–976. Cited by: §4.
-  (2017) A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. Cited by: §1.
-  (2019) Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1.
-  (2018) Leveraging gloss knowledge in neural word sense disambiguation by hierarchical co-attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1402–1411. Cited by: §1.
-  (2009) Natural language inference. Citeseer. Cited by: §4.
-  (2019) Right for the wrong reasons: diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007. Cited by: §4.
-  (2018) Adversarially regularising neural NLI models to integrate logical background knowledge. In CoNLL, pp. 65–74. External Links: Cited by: §4.
-  (2018) Evading the adversary in invariant representation. arXiv preprint arXiv:1805.09458. Cited by: §4.
-  (2018) Stress test evaluation for natural language inference. In COLING, pp. 2340–2353. External Links: Cited by: §4.
-  (2019) Analyzing compositionality-sensitivity of NLI models. AAAI. Cited by: §4.
-  (2019) Adversarial nli: a new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599. Cited by: §1.
-  (2016) A decomposable attention model for natural language inference. In EMNLP, pp. 2249–2255. External Links: Cited by: §1.
-  (2018) Hypothesis only baselines in natural language inference. In *SEM@NAACL-HLT, pp. 180–191. External Links: Cited by: §1, §2.2, §4.
-  (2017) Social bias in elicited natural language inferences. In EthNLP@EACL, pp. 74–79. External Links: Cited by: §4.
-  (2018) Behavior analysis of nli models: uncovering the influence of three factors on robustness. In EMNLP, Vol. 1, pp. 1975–1985. Cited by: §4.
-  (2018) When data permutations are pathological: the case of neural natural language inference. In EMNLP, pp. 4935–4939. Cited by: §4.
-  (2018) Performance impact caused by hidden bias of training data for recognizing textual entailment. In LREC, Cited by: §1.
-  (2018) What if we simply swap the two text fragments? a straightforward yet effective way to test the robustness of methods to confounding signals in nature language inference tasks. arXiv preprint arXiv:1809.02719. Cited by: §4.
-  (2018) Phrase-level self-attention networks for universal sentence encoding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3729–3738. Cited by: §1.
-  (2017) Controllable invariance through adversarial feature learning. In NIPS, Cited by: §3.2.1, §4.
-  (2019) XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1.
-  (2016) Hierarchical attention networks for document classification. In NAACL 2016, pp. 1480–1489. Cited by: §1.
-  (2018) Swag: a large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. Cited by: §1.
-  (2019) Selection bias explorations and debias methods for natural language sentence matching datasets. arXiv preprint arXiv:1905.06221. Cited by: §4.
-  (2017) Aspect-augmented adversarial networks for domain adaptation. TACL 5, pp. 515–528. Cited by: §4.
7 Language Resource References