The use of deep learning has made it possible to solve several problems in natural language processing (NLP), even surpassing human performance on some tasks. However, previous research has shown that neural networks are powerful enough to memorize the training data, which limits their ability to generalize or to truly understand the tasks they deal with. This has motivated recent studies that propose more demanding evaluation scenarios for neural models on various natural language understanding (NLU) tasks.
One way to test NLP models is through adversarial tests, which intentionally perturb the input sentence to confuse a model into making wrong predictions. This methodology has shown that models are still weak [2, 10, 22, 6]. Other researchers have also shown that language models can “falsely” solve a task; that is, they may exploit dataset failures or artifacts in the input sentences in order to guess the answer [7, 1, 13]. These evaluations, also known as “stress tests”, have been performed on classic models based on recurrent neural networks (RNNs). However, transformer-based models such as RoBERTa, XLNet and BERT, which are state-of-the-art for NLU tasks, have not been systematically evaluated under severe stress conditions. Only BERT has been tested with objectives similar to ours [9, 11, 18], but not as systematically as here, nor in the same scenarios.
In this work, we focus on three language models based on the state-of-the-art transformer architecture (RoBERTa, XLNet and BERT), with the aim of carrying out a stress test evaluation on two NLU tasks. The first is Natural Language Inference (NLI), also known as recognizing textual entailment (RTE), which consists of finding the semantic relation between a premise sentence and an associated hypothesis, classifying it as entailment, contradiction or neutral. The second is question answering (QA), also known as machine reading comprehension (MRC), which consists of predicting the answer to a question given a paragraph.
The evaluation of the NLI task was performed using the MultiNLI dataset, following the methodology of naik18coling. For the QA task we used the SQuAD dataset and adversarial techniques introduced by jia-liang-2017-adversarial. We also developed a new adversarial dataset for SQuAD, using techniques inspired by belinkov2018synthetic (we release this dataset at https://github.com/caspillaga/noisy-squad).
All test procedures use adversarial examples to probe the strength of the models, by distracting or confusing them, or by testing their competence.
Experiments show that all models are affected by the stress tests, but the adversaries have a smaller impact on transformer-based models than on previous models based on RNNs. This behavior could be explained by the large number of parameters and the extensive pre-training of these models. Nevertheless, in this work we not only measure the impact of various adversarial or noisy conditions on performance, but also reveal that in some cases the state-of-the-art models behave in strange and unexpected ways.
We provide detailed quantitative analysis on all the performed tests, and in some cases we report representative examples via inspection of the attention matrices that these models produce during inference when tested under adversarial test scenarios.
2 Transformer for Natural Language Understanding
The Transformer is a deep learning architecture originally proposed to improve the performance of neural machine translation. The main idea behind this model is multi-head self-attention: the ability to attend to different parts and aspects of the input sequence to compute a contextual representation of it, at increasing levels of abstraction (layers). This architecture overcomes the long-term dependency problems that are common in Recurrent Neural Network (RNN) models, while also being highly parallelizable.
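The core of this mechanism can be sketched in a few lines. The following is a minimal single-head illustration with random toy matrices (real models use learned projections, masking, and many heads per layer):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention head: weight each value by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_q, seq_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, attn = scaled_dot_product_attention(Q, K, V)
assert out.shape == (3, 4)
assert np.allclose(attn.sum(axis=-1), 1.0)             # each row is a distribution
```

The rows of `attn` are the attention matrices we later inspect qualitatively when analyzing model behavior under adversarial inputs.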
Early works such as GPT and BERT proposed variants of the Transformer architecture for language modeling. These works show that the representations learned on large-scale language modeling datasets are effective for downstream sentence-level tasks (e.g. NLI) and token-level tasks (e.g. QA) via fine-tuning. However, unlike for RNNs, no systematic evaluation of robustness and failure modes has been performed for these kinds of models (especially the most recent variants) in previous works.
In this work, we evaluate three state-of-the-art models in their large versions: BERT, the first model to introduce bidirectional representations in the transformer encoder and masked language modeling; XLNet, which proposed permutation-based language modeling to avoid corrupting the input with masks; and RoBERTa, which can be seen as an optimization of BERT that includes additional pre-training and hyperparameter improvements.
We use the HuggingFace Transformers Python library, which includes pre-trained models, to fine-tune each model into a classifier for the NLI task and a span predictor for the QA task. We used the hyperparameters specified in the original paper of each model, achieving accuracies close to the ones reported for each task.
Additionally, we include pre-transformer baselines as a reference for comparison. These models are based on the LSTM architecture and are task-dependent. However, our analysis and discussion focus mainly on the experiments with transformer-based models.
3 NLI Task Description
The MultiNLI corpus is a crowd-sourced collection of 433k sentence pairs, annotated with textual entailment information and drawn from a broad range of genres. In this task, given a premise, the model has to determine whether a hypothesis is true (entailment), false (contradiction), or undetermined (neutral).
As a baseline to evaluate stress test performance on this task, we chose the winner of the RepEval 2017 Shared Task, which proposed a model of stacked BiLSTMs with residual connections. We also used the baseline proposed in the dataset’s original paper, which consists of a standard BiLSTM.
4 QA Task Description
SQuAD, the Stanford Question Answering Dataset, is a widely used question answering benchmark that consists of a collection of English Wikipedia paragraphs with more than 100k associated question-answer pairs generated via crowdsourcing. The task is designed so that the answer to each question is literally contained in the corresponding paragraph, and the goal is to predict the answer text span within that passage. We use SQuAD v1.1 instead of SQuAD v2.0 to allow comparability with previous work.
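Extractive QA models of this kind typically score every candidate span with a start score plus an end score and return the best valid pair. A minimal decoding sketch (the token list and scores below are hypothetical toy values, not actual model outputs):

```python
def best_span(start_scores, end_scores, max_len=30):
    """Pick (i, j) with i <= j and j - i < max_len maximizing
    start_scores[i] + end_scores[j]."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best_score, best = s + end_scores[j], (i, j)
    return best

# Toy passage tokens and per-token scores favoring the span "50".
tokens = ["Denver", "Broncos", "won", "Super", "Bowl", "50"]
start = [0.1, 0.0, 0.0, 0.0, 0.0, 2.0]
end   = [0.0, 0.0, 0.0, 0.0, 0.0, 2.5]
i, j = best_span(start, end)
assert tokens[i:j + 1] == ["50"]
```

Adversarial sentences and noise attack exactly this step: they shift the start/end scores toward decoy tokens.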
We use BiDAF and Match-LSTM as baselines to compare stress test results against transformer-based models. BiDAF consists of embedding, attention and modeling layers built on a BiLSTM, which output a vector with information about the context and the query, followed by an output layer with probabilities indicating where the answer starts and ends in the context text. Match-LSTM, in turn, is an architecture that remembers important word-level matching results to better predict the answers.
5.1 NLI Task Evaluation
Our experiments on the MultiNLI dataset closely follow the procedure of naik18coling, who conducted a stress test evaluation of several models from the RepEval 2017 Shared Task. Below we describe each test set used in this work (we use the sets provided by the authors, available at abhilasharavichander.github.io/NLI_StressTest, to avoid discrepancies in the procedure). Table 1 shows some examples; for further details on the construction of the sets, we refer the reader to naik18coling.
|Test|Premise|Hypothesis|
|Word Overlap|Then he ran.|He ran like an athlete and true is true.|
|Length Mismatch|Then he ran and true is true (×5).|He ran like an athlete.|
|Negation|Then he ran.|He ran like an athlete and false is not true.|
|Spelling Error|Then he ran.|He ran like an athleet.|
|Antonymy|The Joint Venture had justified itself by failure.|The Joint Venture had justified itself by success.|
|Numerical Reasoning|Adam spent 1/6 of his lifetime in adolescence.|Adam spent less than 1/6 of his lifetime in adolescence.|
5.1.1 Distraction Test
The distraction test explores the robustness of a model when text with a trivially true value is appended to the input.
One way to evaluate this is by decreasing the lexical similarity between premise and hypothesis. On the one hand, the word overlap set adds a tautology (“and true is true”) at the end of each hypothesis sentence. On the other hand, the length mismatch set adds the same tautology five times to each premise.
We can also evaluate this through the inclusion of strong negations. The negation set is quite similar to the previous ones, but in this case the tautology added to the hypothesis includes negation words (“and false is not true”).
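The three distraction sets amount to simple string transformations. A minimal sketch (the function names are ours, and the example sentences follow Table 1):

```python
def word_overlap(premise, hypothesis):
    """Word Overlap set: append a tautology to the hypothesis."""
    return premise, hypothesis.rstrip(".") + " and true is true."

def negation(premise, hypothesis):
    """Negation set: the appended tautology contains negation words."""
    return premise, hypothesis.rstrip(".") + " and false is not true."

def length_mismatch(premise, hypothesis, n=5):
    """Length Mismatch set: append the tautology n times to the premise."""
    return premise.rstrip(".") + " and true is true" * n + ".", hypothesis

p, h = "Then he ran.", "He ran like an athlete."
assert word_overlap(p, h)[1] == "He ran like an athlete and true is true."
assert length_mismatch(p, h)[0].count("true is true") == 5
```

The labels of the original pairs are kept unchanged, since a tautology cannot alter the entailment relation.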
5.1.2 Noise Test
This test verifies the models’ strength against noisy data, in the form of spelling errors. It applies one of two perturbations to a randomly selected word of the hypothesis: swapping two adjacent characters within the word, or substituting a character with one adjacent to it on the English keyboard. Note that only one perturbation is performed on the entire sentence.
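A minimal sketch of how such perturbations can be generated (the keyboard-neighbor map below covers only a few letters for illustration; the actual test sets use the full English layout):

```python
import random

# Illustrative (incomplete) map of keyboard-adjacent characters.
NEIGHBORS = {"a": "qwsz", "e": "wrsd", "t": "rygf", "o": "iplk", "n": "bhjm"}

def swap_perturb(word, rng):
    """Swap one random pair of adjacent characters."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def keyboard_perturb(word, rng):
    """Replace one character with a keyboard-adjacent one."""
    idxs = [i for i, c in enumerate(word) if c.lower() in NEIGHBORS]
    if not idxs:
        return word
    i = rng.choice(idxs)
    return word[:i] + rng.choice(NEIGHBORS[word[i].lower()]) + word[i + 1:]

def noisy_hypothesis(hypothesis, perturb, seed=0):
    """Perturb exactly one randomly chosen word of the hypothesis."""
    rng = random.Random(seed)
    words = hypothesis.split()
    i = rng.randrange(len(words))
    words[i] = perturb(words[i], rng)
    return " ".join(words)
```

For example, `swap_perturb` can turn “athlete” into “athleet”-style misspellings, matching the Spelling Error examples in Table 1.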
5.1.3 Competence Test
The competence test consists of two evaluation sets to measure the reasoning ability of the models.
The first set measures the understanding of antonymy relationships: it includes sentence pairs that result in contradiction simply because an adjective or noun is replaced by an antonym. The second set measures the numerical reasoning ability of a model: it includes statements of simple algebraic problems, together with their solutions, as premises. The entailed, contradictory and neutral hypotheses were generated using heuristic rules.
5.2 NLI Task Results
Table 2 shows the results of the performed tests. It can be seen that all models decrease in accuracy in all evaluations. However, transformer-based models show more robustness in some tests. The results of the models on each stress test are analyzed in the following sections.
5.2.1 Models Performance on Distraction Test
Figure 1 shows a bar graph of the “matched” partition of the evaluation sets on the different types of distraction tests. As mentioned in a previous section, the distraction tests allow us to check the robustness in two different ways.
On the one hand, introducing negation words drops the models’ performance below 60% accuracy, close to the baselines. We compared the model predictions on the negation test versus the development set and found that BERT and XLNet obtained 93% and 91% E-N error (entailment predicted as neutral), respectively. In contrast, RoBERTa obtained 85% N-E error (neutral predicted as entailment). This could be due to the introduction of the extra negation words (“false” and “not”).
On the other hand, the decrease of lexical similarity by word overlap and length mismatch evaluation shows:
In the first case (word overlap set), the transformer-based models reach around 60% accuracy, approximately 20% less than on the development set. We found behavior similar to the previous set (negation), with BERT and XLNet obtaining 83% and 61% E-N error, respectively. It also stands out that RoBERTa reached 89% N-E error.
In the second case (length mismatch), the models performed better than expected, reaching almost the same accuracy as on the development set. We hypothesize that this is because the length mismatch set modifies the premise sentence instead of the hypothesis (as the negation and word overlap sets do), which suggests that the model pays more attention to the hypothesis when answering.
To verify the results on the length mismatch set, we extended the evaluation by adding the tautology “and true is true” to the hypothesis or to the premise n times, for increasing values of n. Figure 2 shows the performance of XLNet in these tests; we observed similar behavior in the other models. We noticed that adding the distractions to the premise sentence does not affect the model’s performance. However, when we add the tautology a single time to the hypothesis sentence (which is equivalent to the word overlap test), the performance drops by about 20%; but the more repetitions we add, the more the accuracy increases, almost reaching the performance obtained on the development set. We also inspected the attention weights and did not identify anomalous behavior.
The unexpected result in accuracy indicates that the lexical similarity is not a strong enough signal to generate distraction in this type of model; in other words, the model can discern the tautologies. Moreover, the model seems to pay more attention to the hypothesis sentence to perform this task, without discarding the premise. However, the distraction evaluation indicates that these transformer-based models are fragile to adversarial attacks that include strong negation words.
5.2.2 Models Performance on Noise Test
The noise test with the spelling error set shows that transformer-based models perform very well: they lose only 2 to 5 percentage points of accuracy with respect to the development set. The results suggest that the multi-head self-attention mechanism of these models is very effective at recovering the global information of a corrupted sentence.
However, the adversarial attacks in this set modify only one word of the hypothesis, which explains why there is no sudden drop in performance, even for the BiLSTM-based models.
5.2.3 Models Performance on Competence Test
As we expected, transformer-based models work quite well on this evaluation. In the antonymy test, the models exceeded the baselines by approximately 50 percentage points of accuracy. This is probably because transformers were pre-trained on large and diverse corpora, allowing them to adequately represent the majority of dictionary words. XLNet and BERT were trained on BookCorpus and Wikipedia, so we expected better accuracy from RoBERTa, which used additional data. However, XLNet outperformed the others by at least 10 percentage points, suggesting that permutation-based modeling could help capture antonymy relationships better.
Furthermore, the results of the numerical reasoning evaluation show lower performance for all models. In this task, XLNet and RoBERTa have similar accuracy but different behavior. On the one hand, XLNet specialized in the “entailment” class, achieving 90% accuracy in that class. On the other hand, RoBERTa specialized in the “neutral” category, obtaining 89% of correct answers. In both cases, the remaining classes achieved less than 74% accuracy (the models find it hard to distinguish between those classes). These results indicate that transformer-based models trained on the NLI task have serious difficulties with numerical reasoning, and that they adopt different strategies to solve the task.
For both evaluations, we also explored the attention weights via the BertViz library . Appendix B shows a brief analysis of some specific cases on all the mentioned transformer-based models.
5.2.4 Annotation Artifacts Exploitation Test
gururangan-etal-2018-annotation found that the MultiNLI dataset has annotation artifacts. That is, the crowd workers who participated in the creation of the data adopted heuristics to generate hypotheses quickly and easily. For instance, they often used keywords such as “not” or “never” to create negated sentences.
To evaluate whether transformer-based models exploit these artifacts, we tested the models on the development set with the premise sentence removed. In other words, the models are unaware of the premises of the dataset.
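Schematically, this hypothesis-only evaluation just blanks out the premise before the examples are fed to the model. A minimal sketch with a toy example (the field names are illustrative):

```python
def hypothesis_only(dataset):
    """Replace each premise with an empty string so the model can rely
    only on the hypothesis (and hence only on annotation artifacts)."""
    return [{"premise": "", "hypothesis": ex["hypothesis"], "label": ex["label"]}
            for ex in dataset]

dev = [{"premise": "Then he ran.",
        "hypothesis": "He ran like an athlete.",
        "label": "entailment"}]
stripped = hypothesis_only(dev)
assert stripped[0]["premise"] == ""
assert stripped[0]["hypothesis"] == dev[0]["hypothesis"]
```

A model with accuracy well above the majority class on such data is necessarily exploiting artifacts rather than reasoning over sentence pairs.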
Table 3 shows the results of this experiment. Transformer-based models perform similarly to the majority-class baseline (which we use as a proxy for random guessing), denoting an unbiased guess. In contrast, BiLSTM-based models correctly classify a significant proportion of the samples without even looking at the premise, which is undesirable behavior. This result demonstrates that transformer-based models are in fact learning to take into account and relate the two sentences of the NLI task in order to choose the correct answer, which is consistent with the findings of Section 5.2.1.
5.3 QA Task Evaluation
One of our test scenarios was taken from jia-liang-2017-adversarial, which intentionally adds a new adversarial sentence at the end of each SQuAD passage in the development set. These sentences are especially designed (via different strategies) to act as decoys that confuse the model. The other test scenario is inspired by belinkov2018synthetic: although originally proposed for a different task, we replicated the 5 types of noise proposed by the authors and applied them to the development set of SQuAD.
5.3.1 Adversarial Sentence Tests
In jia-liang-2017-adversarial, the authors proposed 4 strategies to create a sentence especially designed to confuse models by pretending to be the correct answer to a specific question, despite being unrelated to it. This adversarial sentence is concatenated to the corresponding paragraph at test time. The 4 strategies are:
AddOneSent: Adjectives and nouns of the question are replaced by antonyms. Named entities and numbers are replaced by their nearest word in GloVe . This modified question is then turned into declarative form (using a set of manually defined rules) and a fake answer of the same type as the original answer is inserted. Finally the sentence is manually checked and fixed via crowdsourcing.
AddSent: Identical to AddOneSent but generating multiple candidate sentences (adversaries) and keeping only the one that induces the biggest error when tested on a specific model.
AddAny: The adversarial sentence is generated by sampling random words and successively replacing them by elements from a sampled set of words each time. Words are selected from this set by using a criterion that tries to minimize the confidence of the model on the correct answer. The 20-word set is sampled from a list of common words plus the words from the question. This process is repeated iteratively 6 times for each adversarial phrase.
AddCommon: Identical to AddAny, but in this case the 20-word set is sampled from the list of common words directly.
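The AddAny/AddCommon search can be sketched as a greedy coordinate descent over the words of the distractor. This is a simplified illustration (the real attack queries the target QA model for its confidence in the correct answer; here `model_confidence` and the toy scoring function are stand-ins):

```python
import random

def add_any(question, common_words, model_confidence,
            n_words=10, iters=6, pool=20, seed=0):
    """Sketch of an AddAny-style greedy search (simplified).

    model_confidence(words) stands in for the target model's confidence
    in the correct answer after appending the candidate sentence."""
    rng = random.Random(seed)
    vocab = common_words + question.split()   # AddCommon would use common_words only
    sent = [rng.choice(vocab) for _ in range(n_words)]
    for _ in range(iters):                    # the original repeats the sweep 6 times
        for i in range(n_words):
            candidates = [rng.choice(vocab) for _ in range(pool)] + [sent[i]]
            # Keep whichever word most lowers the model's confidence.
            sent[i] = min(candidates,
                          key=lambda w: model_confidence(sent[:i] + [w] + sent[i + 1:]))
    return " ".join(sent)

# Toy stand-in confidence function (a real attack runs the QA model here).
toy_confidence = lambda words: len(set(words))
common = ["the", "a", "of", "to", "and"]
adversary = add_any("what color is the sky", common, toy_confidence)
assert len(adversary.split()) == 10
```

Because the search is driven by one specific model's confidence, the resulting distractors tend to be model-specific, which is consistent with the cross-model results reported below.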
5.3.2 Noise Tests
Although originally proposed for a different task, we replicated the 5 types of noise introduced by belinkov2018synthetic. In each experiment, a specific noise type was applied to every word in the passages of SQuAD’s development set. The questions were kept unchanged, and the answers were adapted to preserve consistency with the modified passages. In contrast to the noise tests performed in the NLI setting (Section 5.1.2), the scenario tested here is significantly more aggressive, because it introduces noise into every word of the reference text.
The 5 noise types tested are:
Swap Noise: For each word in the text, one random pair of consecutive characters is swapped.
Middle Random Noise: For each word in the text, all characters are shuffled, except for the first and last ones.
Fully Random Noise: For each word in the text, all characters are shuffled.
Keyboard Typo Noise: For each word in the text, one character is replaced by an adjacent character on a traditional English keyboard.
Natural Noise: Each word in the text is replaced by a version containing a naturally occurring error (when one is available), harvested from corpora of real misspellings and corrections.
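The character-level noise types can be sketched as follows (Keyboard Typo Noise is analogous to the NLI version in Section 5.1.2, and Natural Noise requires corpora of real misspellings, so neither is shown):

```python
import random

def swap_noise(word, rng):
    """Swap one random pair of adjacent characters."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def middle_random_noise(word, rng):
    """Shuffle every character except the first and last ones."""
    if len(word) < 4:
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def fully_random_noise(word, rng):
    """Shuffle all characters of the word."""
    chars = list(word)
    rng.shuffle(chars)
    return "".join(chars)

def apply_noise(text, noise_fn, seed=0):
    """Apply a noise function to every word of a passage (questions stay clean)."""
    rng = random.Random(seed)
    return " ".join(noise_fn(w, rng) for w in text.split())

noisy = apply_noise("The quick brown fox jumps", middle_random_noise)
assert len(noisy.split()) == 5
```

Note that, unlike the NLI noise test, every word of the passage is perturbed, which makes this scenario much more aggressive.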
5.4 QA Task Results
Similarly to what we observed in the NLI experiments, in QA the performance of all models is affected by the stress tests, with transformer-based models being the most robust in all the cases analyzed.
5.4.1 Results on Adversarial Sentence Tests
Figure 5 shows a bar graph that compares the accuracy of the tested models under the different adversarial strategies.
When we analyze the results of the AddOneSent experiments, we notice an accuracy reduction between and for the transformer-based models, and greater than for non-transformer models. In spite of showing greater robustness in comparison with their counterpart, transformer-based models still suffer from a significant impact on performance, which elucidates a clear opportunity for future improvements on these kind of models. The same phenomenon is observed for AddSent adversaries, but more pronounced (as expected, since AddSent tests the worst case for each candidate question). We see accuracy reductions ranging from and for transformer-based models, and greater than for non-transformer models.
We notice that the more powerful a model is on the main task (accuracy on the unmodified SQuAD v1.1 development set), the more robust it also is. This conclusion is encouraging, because other works have asserted that more powerful models could owe their performance to a higher capacity for memorization. These experiments, in contrast, indicate that the models are improving their reading capabilities in a balanced fashion.
Interestingly, the AddAny and AddCommon adversaries show that those strategies are very model-specific: transformer-based models only reduce their accuracy to a small degree when tested against adversaries on which other models failed. These results are interesting because, as reported by jia-liang-2017-adversarial, those adversaries (and especially AddAny) turned out to be very effective at misleading the models they were targeting. This cross-check between different models’ adversaries for AddAny is consistent with the results reported by jia-liang-2017-adversarial, although in the case of transformer-based models the aforementioned behavior is even more pronounced. For AddCommon, on the other hand, these tests were neither reported in previous work nor analyzed by the authors who proposed these adversaries, so this finding is especially relevant.
Further details on the results of every experiment performed can be found in Appendix A. Also in Appendix C we perform a more qualitative analysis of the attention matrices that these models produce during inference.
5.4.2 Results on Noise Tests
As shown in Figure 6, all five types of noise have a significant negative impact on accuracy on all the tested models. The accuracy reduction is more prominent than on Adversarial Sentence tests (Section 5.4.1) due to the aggressiveness of the strategies tested here.
Swap Noise has a significant impact on accuracy (between and ), even though only a single pair of characters per word is altered. Performance is only slightly better than with Middle Random Noise (where all characters are shuffled, except for the first and last ones). We hypothesize that this is because the resulting tokenizations differ significantly from the original ones, and are also very different from those seen during pre-training or fine-tuning, so the model is not prepared to answer accurately.
Note also that, in absolute terms, under Middle Random noise, the model is still able to correctly answer one in four questions, despite the fact that the text is severely transformed (for an example see Figure 4).
Another interesting pattern revealed by these tests is that, for transformer-based models, Keyboard Typo Noise is clearly more difficult to deal with than Swap Noise. This finding is especially interesting because Keyboard Typo Noise corrupts only one character per word, while Swap Noise corrupts two. The result is therefore the opposite of what we expected, and reveals that swapping operations affect these models less than replacement operations. This effect may arise because the tokenized representation of a word with swapped characters is closer to the original one (in the embedding space of each model), or because this kind of noise is more frequent in real misspellings than keyboard typos, so the models were more exposed to it during pre-training. Further study is required to determine which phenomenon dominates, but this analysis is outside the scope of this work.
Similarly to what was reported by belinkov2018synthetic, Natural Noise is significantly easier to overcome than the other 4 noise types, even considering that in the dataset we built for Natural Noise we forcibly replaced every word with a noisy version of it (whenever a real typing error was available). It is natural to think that in real scenarios, misspelled words appear at a much lower rate than in this test, so this result can be seen as a lower-bound estimate of performance on Natural Noise in real scenarios. Comparing the results of the Natural Noise experiments with those of Swap Noise, we hypothesize that the gap in favour of Natural Noise arises because the models observed this type of noise (in real occurrences) during pre-training, and were therefore able to learn useful representations both for well-written words and for versions with common misspellings.
6 Related Work and Discussion
Prior work discusses the importance of evaluation frameworks that characterize model successes and failures. In recent years, several approaches to testing NLP models have been proposed for various tasks, showing that, most of the time, predictions are memorized without really understanding the meaning of the utterances.
Early research demonstrated that NLP models are fragile to input perturbations. Attempts at stress testing machine translation systems showed that adding small perturbations to the input text can profoundly affect the general performance of language models [2, 22, 6]. Along the same lines, the inspiring work of jia-liang-2017-adversarial proposed an evaluation procedure for language models using the SQuAD dataset: they concatenated adversarial sentences at the end of the paragraph containing the answer, and showed that 14 open-source models failed when these changes were introduced.
Other relevant findings reveal that models take advantage of lexical cues in the dataset, allowing them to solve the problem falsely. gururangan-etal-2018-annotation observed that some NLI datasets have annotation artifacts that models efficiently exploit to predict the answer without even seeing the rest of the input. The same problem was found in the Visual Question Answering (VQA) field: agrawal-etal-2016-analyzing analyzed the behavior of three models based on CNNs, LSTMs and attention mechanisms by adding adversaries only to the caption of the image, finding that most of the time the models paid attention to the text and not to the image at inference time.
Although there has been considerable progress in this area, this article differs from previous works by systematically evaluating adversaries, artifacts and various severe stress conditions on state-of-the-art transformer-based language models (BERT and the models that came after it), in order to verify their language comprehension capabilities and generalization power.
7 Conclusions
We conducted a stress test evaluation of transformer-based language models on NLI and QA tasks. In general, our experiments indicate that the stress tests affected the performance of all models but, as expected, more recent models such as XLNet and RoBERTa are more robust, showing a better response to this evaluation.
In the NLI task, we verified that the distraction and noise sets significantly reduce the performance of all models. Concerning the competence test, however, the models perform better, which we attribute to their large-scale pre-training.
Moreover, in the QA task, experiments revealed that all models suffer in performance when tested with adversarial or noisy samples. Despite this, transformer-based models turned out to be more robust than their predecessors. We compared transformer-based models against each other and observed that, while improving on the main task, the models also improved their robustness in a balanced way. We also noticed that some adversaries are model-specific, as they affect one model but not the rest. Specifically, in the noise tests, we observed that the robustness trend also holds, but noticed some unexpected behavior in the relative analysis, as some types of noise affect the models more severely than others, revealing specific weak points shared by all transformer-based models that did not seem evident at first sight.
We consider this evaluation valuable to the community because it exhibits the strengths and weaknesses of state-of-the-art models. We argue that it is vital for models to pass behavioral checks to ensure proper performance in extreme scenarios, where data failures are not accounted for. With this in mind, we see that there is still room for future improvements in transformer-based models.
8 Bibliographical References
- (2016-11) Analyzing the behavior of visual question answering models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 1955–1960.
- (2018) Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations.
- (2001) A neural probabilistic language model. In Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich, and V. Tresp (Eds.), pp. 932–938.
- (2018) Universal transformers. arXiv preprint arXiv:1807.03819.
- (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- (2018-08) On adversarial examples for character-level neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA, pp. 653–663.
- (2018-06) Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 107–112.
- (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
- On the robustness of self-attentive models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 1520–1529.
- (2018-06) Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 1875–1885.
- (2019) Is BERT really robust? A strong baseline for natural language attack on text classification and entailment.
- (2008) Loop summarization using abstract transformers. In International Symposium on Automated Technology for Verification and Analysis, pp. 111–125.
- (2015-May–June) Do supervised distributional methods really learn lexical inference relations? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, pp. 970–976.
- (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
- (2010-05) Mining naturally-occurring corrections and paraphrases from Wikipedia’s revision history. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta.
- (2017-09) The RepEval 2017 shared task: multi-genre natural language inference with sentence representations. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, Copenhagen, Denmark, pp. 1–10.
- (2017-09) Shortcut-stacked sentence encoders for multi-domain inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, Copenhagen, Denmark, pp. 41–45.
- (2019) Probing neural network comprehension of natural language arguments. arXiv preprint arXiv:1907.07355.
- (2014-10) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, pp. 1532–1543.
- (2018) Improving language understanding by generative pre-training.
- (2016-11) SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 2383–2392.
- (2018-07) Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 856–865.
- (2017) CzeSL grammatical error correction dataset (CzeSL-GEC). Note: LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
- (2016) Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.
- (2012) Adversarial evaluation for models of natural language. arXiv preprint arXiv:1207.0245.
- (2018) Tensor2Tensor for neural machine translation. arXiv preprint arXiv:1803.07416.
- (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- (2019) A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714.
- (2016) Machine comprehension using Match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905.
-  (2018) A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122. External Links: Cited by: §1, §3.2.
-  (2013) MERLIN: an online trilingual learner corpus empirically grounding the European Reference Levels in authentic learner data. Note: URL https://www.ukp.tu-darmstadt.de/data/spelling-correction/rwse-datasets External Links: Cited by: 1st item.
-  (2019) HuggingFace’s transformers: state-of-the-art natural language processing. ArXiv abs/1910.03771. Cited by: §2.
-  (2019) XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1, §2.
-  (2012-04) Measuring contextual fitness using error contexts extracted from the Wikipedia revision history. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, Avignon, France, pp. 529–538. External Links: Cited by: 1st item.
-  (2017) Understanding deep learning requires rethinking generalization. External Links: Cited by: §1, §5.4.1.
Appendix A: Detailed Results on SQuAD Tests
In this section, we report the detailed results of all the experiments performed on the adversarial versions of the SQuAD dataset. In all experiments, each model was trained/fine-tuned on the original SQuAD v1.1 training set and tested on each of the generated adversarial datasets. Table 4 shows the results on the adversarial datasets proposed by jia-liang-2017-adversarial, and Table 5 reports the results of the tests using different noise types inspired by belinkov2018synthetic. Overall, all models are affected by these adversarial samples, but we also found that some adversaries are model-specific: they do not affect all models as much as they affect the model they target.
[Tables 4 and 5: results for each model under evaluation on the original SQuAD data (for reference only) and on the Middle Random Noise, Fully Random Noise, and Keyboard Typo Noise adversarial sets.]
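The three noise types above can be sketched as simple character-level perturbations. The following is a minimal illustration; the function names and the (partial) keyboard-neighbor map are ours for demonstration, not the original implementation:

```python
import random

def middle_noise(word, rng):
    """Shuffle a word's interior characters, keeping first and last letters fixed."""
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def full_noise(word, rng):
    """Shuffle all characters of the word."""
    chars = list(word)
    rng.shuffle(chars)
    return "".join(chars)

# Hypothetical (partial) map of QWERTY-adjacent keys for each character.
NEIGHBORS = {
    "a": "qws", "s": "adwe", "d": "sfer", "e": "wrd", "r": "ety",
    "t": "ryf", "o": "ipl", "i": "uok", "n": "bmh", "m": "nk",
}

def keyboard_typo(word, rng, p=0.1):
    """Replace each character, with probability p, by an adjacent key."""
    out = []
    for ch in word:
        if ch in NEIGHBORS and rng.random() < p:
            out.append(rng.choice(NEIGHBORS[ch]))
        else:
            out.append(ch)
    return "".join(out)

rng = random.Random(0)
print(middle_noise("question", rng))  # interior letters scrambled, ends preserved
```

Applying such functions to content words of a SQuAD context or question yields the noisy variants evaluated in Table 5.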
Appendix B: Attention-level Results of NLI Task
For this analysis, we took a representative adversarial example in which a word in the sentence was replaced by its antonym. The model must decide whether the two sentences are in a contradiction, neutral, or entailment relationship. We expect the model to connect the attention between the replaced words in order to predict the correct answer. Consider the following pair of sentences:
I saw that daylight was coming, and heard the people
I saw that daylight was coming, and heard the people waking up.
In this representative example for testing antonyms, we computed the attentions produced by XLNet, RoBERTa, and BERT, and checked the layers and heads in which a clear attention pattern appeared between the word and its antonym, as shown in the Antonymy Evaluation figures. In this particular case, only 2.86% of XLNet's layer-head pairs showed this pattern; for RoBERTa the number was 2.60%, and for BERT 1.56%. For all models, most of the attention was instead paid to separator tokens and, without distinction, to all words of the reference sentence (see the Antonymy Evaluation figure).
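The percentage of layer-head pairs exhibiting a pattern can be computed by thresholding the attention weight between the two token positions of interest. A minimal sketch, assuming the attentions have already been extracted into a `(layers, heads, seq_len, seq_len)` array (the function name and threshold are ours):

```python
import numpy as np

def fraction_with_pattern(attn, i, j, threshold=0.3):
    """Fraction of (layer, head) pairs whose attention weight from token i to
    token j is at least `threshold`.  `attn` has shape
    (layers, heads, seq_len, seq_len), each row summing to 1."""
    layers, heads, _, _ = attn.shape
    hits = (attn[:, :, i, j] >= threshold).sum()
    return hits / (layers * heads)

# Toy example: 2 layers x 2 heads over 4 tokens, uniform attention except one
# head that strongly links token 1 to token 3.
attn = np.full((2, 2, 4, 4), 0.25)
attn[0, 1, 1] = [0.05, 0.05, 0.05, 0.85]
print(fraction_with_pattern(attn, 1, 3))  # -> 0.25
```

In our analysis, `i` and `j` would be the positions of the replaced word and its antonym, and the reported percentages (2.86%, 2.60%, 1.56%) are this fraction over all layer-head pairs of each model.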
Numerical Reasoning Evaluation
For numerical reasoning samples in NLI, the expectation is that the model pays attention to words like "more" or "less" to check whether the numerical references change. Consider the following pair of sentences:
The next day Bob took the test and with this grade, included the new average, was more than 48.
The next day Bob took the test and with this grade, included the new average, was 78.
For this test example, the premise includes "more than 48" and the hypothesis replaces this last part with "78"; nevertheless, all models (XLNet, RoBERTa, and BERT) incorrectly predicted "contradiction". We observed that the expected attention pattern (shown in the Numerical Reasoning Evaluation figures) is very infrequent for all models: it appeared in 5.20% of the cases for XLNet, in only 4.42% for RoBERTa, and in 1.30% for BERT. In the remaining cases, the models focused on sentence separators (as shown in the Numerical Reasoning Evaluation figure).
Appendix C: Attention-level Results of QA Task
QA task attention-level evaluation
For the QA task, we manually inspected failure cases to see how much attention the model paid to the introduced adversaries versus the correct answer. Here we show one representative example of a "what" question:
Question: What company took over Edison Machine works?.
Answer: General Electric.
Adversary: Stark Industries.
In this particular example, with the question ”What company took over Edison Machine works?”, the correct answer was ”General Electric”, and the artificially introduced adversary was ”Stark Industries”, appended at the end of the context of the original sample.
All models fell into the same trap: as shown in the QA attention-level figures, they paid attention to the wrong answer. This pattern appeared in 52% of the layer-head pairs for XLNet, 60% for RoBERTa, and 30% for BERT. When checking each model's certainty in the predicted wrong answer for this example, XLNet assigned it a 43.3% probability, BERT 75.5%, and the most mistaken was RoBERTa, with a 99.9% probability on the wrong answer (consistent with the sharpness of its attention in the corresponding figure). This behavior provides evidence that the three models behave slightly differently, and that higher accuracy on the main task (before adversarial evaluation) is not a direct indicator of greater robustness in every case, but only on average.
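The certainty probabilities above come from the standard extractive-QA formulation, where a span's probability is the product of the softmax probabilities of its start and end positions. A minimal sketch under that assumption (function names are ours):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    z = np.exp(x - x.max())
    return z / z.sum()

def span_certainty(start_logits, end_logits, start, end):
    """Probability the model assigns to the answer span [start, end], as the
    product of start-position and end-position softmax probabilities."""
    return float(softmax(start_logits)[start] * softmax(end_logits)[end])

# Toy logits over 5 context positions: the model is highly confident that the
# answer starts at position 3 and ends at position 4.
start_logits = np.array([0.0, 0.0, 0.0, 6.0, 0.0])
end_logits = np.array([0.0, 0.0, 0.0, 0.0, 6.0])
print(round(span_certainty(start_logits, end_logits, 3, 4), 2))  # -> 0.98
```

A sharply peaked certainty on an adversarial span, as with RoBERTa's 99.9%, indicates the model is not merely distracted but confidently wrong.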