An open problem in Artificial Intelligence is quantifying the extent to which algorithms exhibit intelligent behaviour (Levesque, 2014). In Machine Learning, a standard procedure consists in estimating the generalisation error, i.e. the prediction error over an independent test sample (Hastie et al., 2001). However, machine learning models can succeed simply by recognising patterns that happen to be predictive on instances in the test sample, while ignoring deeper phenomena (Rimell and Clark, 2009; Paperno et al., 2016).
In Natural Language Processing (NLP) and Machine Reading, generating adversarial examples can be very useful for understanding the shortcomings of NLP models (Jia and Liang, 2017; Kannan and Vinyals, 2017) and for regularisation (Minervini et al., 2017).
In this paper, we focus on the problem of generating adversarial examples for Natural Language Inference (NLI) models in order to gain insights about the inner workings of such systems, and regularising them. NLI, also referred to as Recognising Textual Entailment (Fyodorov et al., 2000; Condoravdi et al., 2003; Dagan et al., 2005), is a central problem in language understanding (Katz, 1972; Bos and Markert, 2005; van Benthem, 2008; MacCartney and Manning, 2009), and thus it is especially well suited to serve as a benchmark task for research in machine reading. In NLI, a model is presented with two sentences, a premise p and a hypothesis h, and the goal is to determine whether p semantically entails h.
The problem of acquiring large amounts of labelled data for NLI was addressed with the creation of the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) datasets. In these processes, annotators were presented with a premise p drawn from a corpus, and were required to generate three new sentences (hypotheses) based on p, according to the following criteria: a) Entailment – h is definitely true given p (p entails h); b) Contradiction – h is definitely not true given p (p contradicts h); and c) Neutral – h might be true given p. Given a premise-hypothesis sentence pair (p, h), a NLI model is asked to classify the relationship between p and h – i.e. either entailment, contradiction, or neutral. Solving NLI requires fully capturing the sentence meaning by handling complex linguistic phenomena like lexical entailment, quantification, co-reference, tense, belief, modality, and lexical and syntactic ambiguities (Williams et al., 2017).
In this work, we use adversarial examples for: a) identifying cases where models violate existing background knowledge, expressed in the form of logic rules, and b) training models that are robust to such violations.
The underlying idea in our work is that NLI models should adhere to a set of structural constraints that are intrinsic to the human reasoning process. For instance, contradiction is inherently symmetric: if a sentence s1 contradicts a sentence s2, then s2 contradicts s1 as well. Similarly, entailment is both reflexive and transitive. It is reflexive since a sentence s is always entailed by (i.e. is true given) s itself. It is also transitive, since if s1 is entailed by s2, and s2 is entailed by s3, then s1 is entailed by s3 as well.
Example 1 (Inconsistency).
Consider three sentences s1, s2 and s3, each describing a situation, such as: a) "The girl plays", b) "The girl plays with a ball", and c) "The girl plays with a red ball". Note that if s1 is entailed by s2, and s2 is entailed by s3, then s1 is also entailed by s3. If a NLI model detects that s3 entails s2, and s2 entails s1, but s3 does not entail s1, we know that it is making an error (since its results are inconsistent), even though we may not be aware of the sentences s1, s2, and s3 and the true semantic relationships holding between them. ∎
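Such consistency checks can be carried out against any NLI classifier treated as a black box, with no need for gold labels. A minimal sketch in Python, where the `predict` interface and the label strings are illustrative assumptions rather than the actual models discussed later:

```python
def find_violations(predict, sentences):
    """Detect violations of reflexivity, symmetry, and transitivity for a
    black-box NLI classifier. predict(premise, hypothesis) is assumed to
    return one of "entailment", "contradiction", "neutral"."""
    violations = []
    for s in sentences:
        # Reflexivity: every sentence should entail itself.
        if predict(s, s) != "entailment":
            violations.append(("reflexivity", s, s))
    for a in sentences:
        for b in sentences:
            # Symmetry: if a contradicts b, then b should contradict a.
            if predict(a, b) == "contradiction" and predict(b, a) != "contradiction":
                violations.append(("symmetry", a, b))
            for c in sentences:
                # Transitivity: if a entails b and b entails c, a should entail c.
                if (predict(a, b) == "entailment" and predict(b, c) == "entailment"
                        and predict(a, c) != "entailment"):
                    violations.append(("transitivity", a, c))
    return violations
```

Note that the check only consumes model predictions, which is what makes it usable without knowing the true relationships between the sentences.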
Our adversarial examples are different from those used in other fields such as computer vision, where they typically consist in small, semantically invariant perturbations that result in drastic changes in the model predictions. In this paper, we propose a method for generating adversarial examples that cause a model to violate pre-existing background knowledge (Section 4), based on reducing the generation problem to a combinatorial optimisation problem. Furthermore, we outline a method for incorporating such background knowledge into models by means of an adversarial training procedure (Section 5).
Our results (Section 8) show that, even though the proposed adversarial training procedure does not sensibly improve accuracy on SNLI and MultiNLI, it yields significant relative improvement in accuracy (up to 79.6%) on adversarial datasets. Furthermore, we show that adversarial examples transfer across models, and that the proposed method allows training significantly more robust NLI models.
2 Neural NLI Models
In NLI, in particular on the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) datasets, neural NLI models – end-to-end differentiable models that can be trained via gradient-based optimisation – proved to be very successful, achieving state-of-the-art results (Rocktäschel et al., 2016; Parikh et al., 2016; Chen et al., 2017).
Let S denote the set of all possible sentences, and let p, h ∈ S denote two input sentences – representing the premise and the hypothesis – of length m and n, respectively.
In neural NLI models, all words in p and h are typically represented by k-dimensional embedding vectors. As such, the sentences p and h can be encoded by sentence embedding matrices P and H, whose columns respectively denote the embeddings of the words in p and h.
Given two sentences p, h ∈ S, the goal of a NLI model is to identify the semantic relation between p and h, which can be either entailment, contradiction, or neutral. For this reason, given an instance, neural NLI models compute the following conditional probability distribution over all three classes:

p(y | p, h) = softmax(score(p, h))_y    (1)

where score is a model-dependent scoring function with parameters θ, and softmax denotes the softmax function.
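As a concrete illustration of Eq. 1, the class distribution can be computed from the unnormalised per-class scores with a numerically stable softmax. A minimal sketch, in which the dictionary-based `score_fn` interface is an assumption made for illustration:

```python
import math

def nli_distribution(score_fn, premise, hypothesis):
    """Turn unnormalised per-class scores into a probability distribution
    via a numerically stable softmax. score_fn stands in for the
    model-dependent scoring function and is assumed to return a dict
    mapping each class label to a real-valued score."""
    scores = score_fn(premise, hypothesis)
    shift = max(scores.values())  # subtract the max for numerical stability
    exps = {label: math.exp(s - shift) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}
```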
Several scoring functions have been proposed in the literature, such as the conditional Bidirectional LSTM (cBiLSTM) (Rocktäschel et al., 2016)
, the Decomposable Attention Model (DAM)(Parikh et al., 2016), and the Enhanced LSTM model (ESIM) (Chen et al., 2017). One desirable quality of the scoring function is that it should be differentiable with respect to the model parameters , which allows the neural NLI model to be trained from data via back-propagation.
Let D = {(p_1, h_1, y_1), …, (p_n, h_n, y_n)} represent a NLI dataset, where (p_i, h_i) denotes the i-th premise-hypothesis sentence pair, and y_i ∈ {1, …, K} their relationship, where K is the number of possible relationships – in the case of NLI, K = 3. The model is trained by minimising a cross-entropy loss on D:

J_D(θ) = − Σ_i log p(y_i | p_i, h_i)    (2)

where p(y_i | p_i, h_i) denotes the probability of class y_i on the instance (p_i, h_i) inferred by the neural NLI model as in Eq. 1.
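The cross-entropy data loss can be sketched in a few lines; the `prob_fn` interface, returning a dict of class probabilities, is again an illustrative assumption:

```python
import math

def cross_entropy(prob_fn, dataset):
    """Cross-entropy data loss: the summed negative log-probability that
    the model assigns to the gold label of each instance. prob_fn(p, h)
    is assumed to return a dict mapping class labels to probabilities."""
    return -sum(math.log(prob_fn(p, h)[y]) for p, h, y in dataset)
```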
In the following, we analyse the behaviour of neural NLI models by means of adversarial examples – inputs to machine learning models designed to cause the model to commit mistakes. In computer vision models, adversarial examples are created by adding a very small amount of noise to the input (Szegedy et al., 2014; Goodfellow et al., 2014): these perturbations do not change the semantics of the images, but they can drastically change the predictions of computer vision models. In our setting, we define an adversary whose goal is finding sets of NLI instances where the model fails to be consistent with available background knowledge, encoded in the form of First-Order Logic (FOL) rules. In the following sections, we define the corresponding optimisation problem, and propose an efficient solution.
3 Background Knowledge
For analysing the behaviour of NLI models, we verify whether they agree with the provided background knowledge, encoded by a set of FOL rules. Note that the three NLI classes – entailment, contradiction, and neutrality – can be seen as binary logic predicates, and we can define FOL formulas for describing the formal relationships that hold between them.
In the following, we denote the predicates associated with entailment, contradiction, and neutrality as ent, con, and neu, respectively. By doing so, we can represent semantic relationships between sentences via logic atoms. For instance, given three sentences s1, s2, s3, we can represent the fact that s1 entails s2 and that s2 contradicts s3 by using the logic atoms ent(s1, s2) and con(s2, s3).
Let {X1, X2, …} be a set of universally quantified variables. We define our background knowledge as a set of FOL rules, each having the following form:

B ⇒ H

where B and H represent the premise (body) and the conclusion (head) of the rule – if B holds, H holds as well. In the following, we consider the rules outlined in Table 1. The first rule enforces the constraint that entailment is reflexive; the second that contradiction should always be symmetric (if s1 contradicts s2, then s2 contradicts s1 as well); the third that entailment is transitive; while the remaining rules describe the formal relationships between the entailment, neutral, and contradiction relations.
In Section 4 we propose a method to automatically generate sets of sentences that violate the rules outlined in Table 1 – effectively generating adversarial examples. Then, in Section 5 we show how we can leverage such adversarial examples by generating them on-the-fly during training and using them for regularising the model parameters, in an adversarial training regime.
4 Generating Adversarial Examples
In this section, we propose a method for efficiently generating adversarial examples for NLI models – i.e. examples that make the model violate the background knowledge outlined in Section 3.
4.1 Inconsistency Loss
We cast the problem of generating adversarial examples as an optimisation problem. In particular, we propose a continuous inconsistency loss that measures the degree to which a set of sentences causes a model to violate a rule.
Example 2 (Inconsistency Loss).
Consider the symmetry-of-contradiction rule in Table 1, i.e. con(X1, X2) ⇒ con(X2, X1). Let s1, s2 be two sentences: this rule is violated if, according to the model, s1 contradicts s2, but s2 does not contradict s1. However, if we just use the final decision made by the neural NLI model, we can only check whether the rule is violated by two given sentences, without any information on the degree of such a violation.
Intuitively, for the rule to be maximally violated, the conditional probability associated with con(s1, s2) should be very high (close to 1), while the one associated with con(s2, s1) should be very low (close to 0). We can measure the extent to which the rule is violated – which we refer to as inconsistency loss – by checking whether the probability of the body of the rule is higher than the probability of its head:

[p(con(s1, s2)) − p(con(s2, s1))]_+    (3)

where S = {X1/s1, X2/s2} is a substitution set that maps the variables X1 and X2 to the sentences s1 and s2, [x]_+ = max(0, x), and p(con(s1, s2)) is the (conditional) probability that s1 contradicts s2 according to the neural NLI model. Note that, in accordance with the logic implication, the inconsistency loss reaches its global minimum when the probability of the body is close to zero – i.e. the premise is false – or when the probabilities of both the body and the head are close to one – i.e. the premise and the conclusion are both true. ∎
Furthermore, let S = {X1/s1, …, Xn/sn} denote a substitution set, i.e. a mapping from variables in the rule to sentences. The inconsistency loss associated with a rule B ⇒ H on the substitution set S can be defined as:

J_I(S) = [p(B; S) − p(H; S)]_+    (4)

where p(B; S) and p(H; S) denote the probability of the body and head of the rule, after replacing the variables in the rule with the corresponding sentences in S. The motivation for the loss in Eq. 4 is that logic implications can be understood as "whenever the body is true, the head has to be true as well". In terms of NLI models, this translates as "the probability of the head should be at least as large as the probability of the body".
For calculating the inconsistency loss in Eq. 4, we need to specify how to calculate the probability of B and H. The probability of a single ground atom is given by querying the neural NLI model, as in Eq. 1. The head contains a single atom, while the body can be a conjunction of multiple atoms. Similarly to Minervini et al. (2017), we use the Gödel t-norm, a continuous generalisation of the conjunction operator in logic (Gupta and Qi, 1991), for computing the probability of the body of a clause:

p(a1 ∧ a2) = min{p(a1), p(a2)}

where a1 and a2 are two clause atoms.
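Putting the hinge-style loss of Eq. 4 and the Gödel t-norm together, the inconsistency loss for a single rule can be sketched in a few lines of Python:

```python
def inconsistency_loss(body_atom_probs, head_prob):
    """Inconsistency loss [p(body) - p(head)]_+ for a rule body => head.
    The body may be a conjunction of several atoms; following the Gödel
    t-norm, its probability is the minimum of the atom probabilities."""
    body_prob = min(body_atom_probs)
    return max(0.0, body_prob - head_prob)
```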
In this work, we cast the problem of generating adversarial examples as an optimisation problem: we search for the substitution set S that maximises the inconsistency loss in Eq. 4, thus (maximally) violating the available background knowledge.
4.2 Constraining via Language Modelling
Maximising the inconsistency loss in Eq. 4 may not be sufficient for generating meaningful adversarial examples: they can lead neural NLI models to violate available background knowledge, but they may not be well-formed and meaningful.
For this reason, in addition to maximising the inconsistency loss, we also constrain the perplexity of generated sentences by using a neural language model (Bengio et al., 2000). In this work, we use an LSTM (Hochreiter and Schmidhuber, 1997) neural language model for generating low-perplexity adversarial examples.
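For reference, perplexity can be computed from the per-token log-probabilities assigned by any language model; the sketch below assumes natural-log probabilities are already available:

```python
import math

def perplexity(token_log_probs):
    """Sentence perplexity from per-token natural-log probabilities, as
    assigned by a language model: the exponential of the mean negative
    log-probability. Lower values indicate more plausible sentences."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```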
4.3 Searching in a Discrete Space
As mentioned earlier in this section, we cast the problem of automatically generating adversarial examples – i.e. examples that cause NLI models to violate available background knowledge – as an optimisation problem. Specifically, we look for substitution sets that jointly: a) maximise the inconsistency loss described in Eq. 4, and b) are composed of sentences with a low perplexity, as defined by the neural language model in Section 4.2.
The search objective can be formalised by the following optimisation problem:

maximise_S J_I(S), subject to log p(S) ≥ τ    (5)

where log p(S) denotes the log-probability of the sentences in the substitution set S under the language model, and τ is a threshold on the perplexity of generated sentences.
For generating low-perplexity adversarial examples, we take inspiration from Guu et al. (2017) and generate the sentences by editing prototypes extracted from a corpus. Specifically, for searching substitution sets whose sentences jointly have a high probability and are highly adversarial, as measured by the inconsistency loss in Eq. 4, we use the following procedure: a) we first sample sentences close to the data manifold (i.e. with a low perplexity), by either sampling from the training set or from the language model; b) we then make small variations to the sentences – analogous to adversarial images, which consist in small perturbations of training examples – so as to optimise the objective in Eq. 5.
When editing prototypes, we consider the following perturbations: a) change one word in one of the input sentences; b) remove one parse sub-tree from one of the input sentences; c) insert one parse sub-tree from one sentence in the corpus in the parse tree of one of the input sentences.
Note that the generation process can easily lead to ungrammatical or implausible sentences; however, these will be likely to have a high perplexity according to the language model (Section 4.2), and thus they will be ruled out by the search algorithm.
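A rough sketch of the perturbation step, operating on flat token lists as a stand-in for the parse-tree edits described above (the operation names and the vocabulary-based sampling are illustrative simplifications, not the paper's implementation):

```python
import random

def perturb(tokens, vocabulary, rng=random):
    """Apply one edit operation to a tokenised sentence: replace a word,
    delete a token, or insert a token drawn from the corpus vocabulary.
    Single tokens stand in for the parse sub-trees used in the paper."""
    op = rng.choice(["replace", "delete", "insert"])
    tokens = list(tokens)
    if op == "replace":
        i = rng.randrange(len(tokens))
        tokens[i] = rng.choice(vocabulary)
    elif op == "delete" and len(tokens) > 1:
        i = rng.randrange(len(tokens))
        del tokens[i]
    else:
        # Insert (also the fallback when deletion would empty the sentence).
        i = rng.randrange(len(tokens) + 1)
        tokens.insert(i, rng.choice(vocabulary))
    return tokens
```

Candidate sentences produced this way would then be filtered by the language model, as described above, so that implausible edits are discarded.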
5 Adversarial Regularisation
We now show how one can use the adversarial examples to regularise the training process. We propose training NLI models by jointly: a) minimising the data loss (Eq. 2), and b) minimising the inconsistency loss (Eq. 4) on a set of generated adversarial examples (substitution sets).
More formally, for training, we jointly minimise the cross-entropy loss defined on the data and the inconsistency loss on a set of generated adversarial examples, resulting in the following optimisation problem:

minimise_θ maximise_S J_D(θ) + λ J_I(S), subject to log p(S) ≥ τ    (6)

where λ is a hyperparameter specifying the trade-off between the data loss (Eq. 2) and the inconsistency loss (Eq. 4), measured on the generated substitution set S.
In Eq. 6, the regularisation term has the task of generating the adversarial substitution sets by maximising the inconsistency loss, while the constraint log p(S) ≥ τ ensures that the perplexity of generated sentences stays below a threshold. For this work, we used the max aggregation function for combining the inconsistency losses of multiple rules; however, other functions can be used as well, such as their sum or mean.
For minimising the regularised loss in Eq. 6, we alternate between two optimisation processes: generating the adversarial examples (Eq. 5) and minimising the regularised loss (Eq. 6). The procedure is outlined in Algorithm 1. On line 6, the algorithm generates a set of adversarial examples, each in the form of a substitution set S. On line 9, it computes the gradient of the adversarially regularised loss – a weighted combination of the data loss in Eq. 2 and the inconsistency loss in Eq. 4. The model parameters are finally updated on line 11 via a gradient descent step.
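The alternating scheme can be sketched as a single training step; every callable below, as well as the `model.gradient` and `model.update` methods, is an illustrative assumption rather than the actual implementation:

```python
def adversarial_training_step(model, batch, generate_adversaries,
                              data_loss, inconsistency_loss, lam, step_size):
    """One iteration of the alternating scheme sketched in Algorithm 1:
    generate adversarial substitution sets for the current model, then
    take a gradient step on the combined loss. lam is the trade-off
    hyperparameter between data loss and inconsistency loss."""
    adversaries = generate_adversaries(model)                  # generation step (line 6)
    loss = data_loss(model, batch) + lam * inconsistency_loss(model, adversaries)
    gradient = model.gradient(loss)                            # gradient of combined loss (line 9)
    model.update(gradient, step_size)                          # gradient descent update (line 11)
    return loss
```

In practice both losses would be differentiable tensors and the gradient step would be delegated to an automatic-differentiation framework; the sketch only mirrors the control flow of the alternation.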
6 Creating Adversarial NLI Datasets
| Premise | Hypothesis |
|---|---|
| A man in a suit walks through a train station. | Two boys ride skateboard. |
| Two boys ride skateboard. | A man in a suit walks through a train station. |
| Two people are surfing in the ocean. | There are people outside. |
| There are people outside. | Two people are surfing in the ocean. |
We crafted a series of datasets for assessing the robustness of the proposed regularisation method to adversarial examples. Starting from the SNLI test set, we selected the instances that maximise the inconsistency loss in Eq. 4 with respect to the rules in Table 1. We refer to each generated dataset by the model used for selecting the sentence pairs and by the number of examples it contains.
For generating each of the datasets, we proceeded as follows. Let D be a NLI dataset (such as SNLI), where each instance (p, h, y) consists of a premise-hypothesis sentence pair (p, h) and the relationship y holding between p and h. For each instance, we consider two substitution sets: S1 = {X1/p, X2/h} and S2 = {X1/h, X2/p}, each corresponding to a mapping from variables to sentences.
We compute the inconsistency score associated with each instance as the maximum of the inconsistency losses (Eq. 4) on S1 and S2. Note that the inconsistency score only depends on the premise p and the hypothesis h, and not on the label y.
After computing the inconsistency scores for all sentence pairs in D using a given model, we select the instances with the highest inconsistency score, create the two instances (p, h, y) and (h, p, y′) for each of them, and add both to the generated dataset. Note that, while y is already known from the dataset D, y′ is unknown. For this reason, we find y′ by manual annotation.
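The dataset-construction procedure can be sketched as follows, using the symmetry-of-contradiction rule as the scoring rule; the `contradiction_prob` interface and the use of `None` for the to-be-annotated label are illustrative assumptions:

```python
def build_adversarial_dataset(dataset, contradiction_prob, k):
    """Select the k instances with the highest symmetry-rule inconsistency
    score and emit both orientations of each pair. contradiction_prob(a, b)
    is assumed to return the model's probability that a contradicts b; the
    label of the swapped pair is left as None, to be filled in manually."""
    def score(instance):
        p, h, _ = instance
        forward = contradiction_prob(p, h)
        backward = contradiction_prob(h, p)
        # An inconsistency in either orientation of the pair counts.
        return max(max(0.0, forward - backward), max(0.0, backward - forward))
    selected = sorted(dataset, key=score, reverse=True)[:k]
    out = []
    for p, h, label in selected:
        out.append((p, h, label))
        out.append((h, p, None))  # label unknown: requires manual annotation
    return out
```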
7 Related Work
Adversarial examples are receiving a considerable attention in NLP; their usage, however, is considerably limited by the fact that semantically invariant input perturbations in NLP are difficult to identify (Buck et al., 2017).
Jia and Liang (2017) analyse the robustness of extractive question answering models on examples obtained by adding adversarially generated distracting text to SQuAD (Rajpurkar et al., 2016) dataset instances. Belinkov and Bisk (2017) notice that character-level Machine Translation models are overly sensitive to random character manipulations, such as typos. Hosseini et al. (2017) show that simple character-level modifications can drastically change the toxicity score of a text. Iyyer et al. (2018) propose using paraphrasing for generating adversarial examples. Our method is fundamentally different in two ways: a) it does not need labelled data for generating adversarial examples – the inconsistency loss can be maximised by just making an NLI model produce inconsistent results, and b) it incorporates adversarial examples during the training process, with the aim of training more robust NLI models.
Adversarial examples are also used for assessing the robustness of computer vision models (Szegedy et al., 2014; Goodfellow et al., 2014; Nguyen et al., 2015), where they are created by adding a small amount of noise to the inputs that does not change the semantics of the images, but drastically changes the model predictions.
8 Evaluation
We trained DAM, ESIM, and cBiLSTM on the SNLI corpus using the hyperparameters provided in the respective papers. The results obtained by these models on the SNLI and MultiNLI validation and test sets are reported in Table 3. In the case of MultiNLI, the validation set was obtained by removing 10,000 instances from the training set (originally composed of 392,702 instances), and the test set consists of the matched validation set.
Background Knowledge Violations.
As a first experiment, we count how often each model violates the rules in Table 1.
In Table 4 we report the number of sentence pairs in the SNLI training set where DAM, ESIM, and cBiLSTM violate each rule. One column reports the number of times the body of the rule holds according to the model; another reports the number of times where the body of the rule holds but the head does not – which is clearly a violation of the available rules.
We can see that, in the case of the reflexivity-of-entailment rule, DAM and ESIM make a relatively low number of violations – namely 0.09% and 1.00%, respectively. In the case of cBiLSTM, however, each sentence in the SNLI training set has a 23.76% chance of not entailing itself – a clear violation of our background knowledge.
With respect to the symmetry of contradiction, we see that none of the models is completely consistent with the available background knowledge. Given a sentence pair from the SNLI training set, if – according to the model – the premise contradicts the hypothesis, a significant number of times the same model also infers that the hypothesis does not contradict the premise. This happens 16.70% of the time with DAM, 9.84% with ESIM, and 46.17% with cBiLSTM, indicating that all considered models are prone to violating the symmetry constraint in their predictions, with ESIM being the most robust.
In Section A.2 we report several examples of such violations in the SNLI training set, selected among those that maximise the inconsistency loss described in Eq. 4. We can notice that the presence of inconsistencies is often correlated with the length of the sentences: the models tend to detect entailment relationships between longer (i.e., possibly more specific) and shorter (i.e., possibly more general) sentences.
8.1 Generation of Adversarial Examples
In the following, we analyse the automatic generation of sets of adversarial examples that make the model violate the existing background knowledge. We search in the space of sentences by applying perturbations to sampled sentence pairs, using a language model for guiding the search process. The generation procedure is described in Section 4.
| # | Generated sentences |
|---|---|
| 1 | A man in uniform is pushing a medical bed. / a man is pushing carrying something. |
| 1 | A dog swims in the water / A dog is swimming outside. |
| 2 | A young man is sledding down a snow covered hill on a green sled. / A man is sledding down to meet his daughter. |
| 3 | A woman sleeps on the ground. A boy and girl play in a pool. / Two kids are happily playing in a swimming pool. |
| 4 | The school is having a special event in order to show the american culture on how other cultures are dealt with in parties. / A school dog is hosting an event. / A boy is drinking out of a water fountain shaped like a woman. |
| 5 | A male is getting a drink of water. / A male man is getting a drink of water. |
The procedure was especially effective in generating adversarial examples – a sample is shown in Table 6. We can notice that, even though DAM and ESIM achieve results close to human-level performance on SNLI, they are likely to fail when faced with linguistic phenomena such as negation, hyponymy, and antonymy. Gururangan et al. (2018) recently showed that NLI datasets tend to suffer from annotation artefacts and limited linguistic variation: this allows NLI models to achieve nearly-human performance by capturing repetitive patterns and idiosyncrasies in a dataset, without effectively capturing textual entailment. This is visible, for instance, in example 5 of Table 6, where the model fails to capture the hyponymy relation between "male" and "man", incorrectly predicting an entailment in place of a neutral relationship. Furthermore, it is clear that models lack commonsense knowledge, such as the relation between "pushing" and "carrying" (example 1), and between being outside and swimming (example 2). Generating such adversarial examples provides us with useful insights on the inner workings of neural NLI models, which can be leveraged for improving the robustness of state-of-the-art models.
8.2 Adversarial Regularisation
We evaluated whether our approach for integrating logical background knowledge via adversarial training (Section 5) is effective at reducing the number of background knowledge violations, without reducing the predictive accuracy of the model. We started with pre-trained DAM, ESIM, and cBiLSTM models, trained using the hyperparameters published in their respective papers.
After training, each model was then fine-tuned for 10 epochs by minimising the adversarially regularised loss function introduced in Eq. 6. Table 3 shows results on the SNLI and MultiNLI development and test sets, while Fig. 1 shows the number of violations for different values of the regularisation weight: regularised models are much more likely to make predictions that are consistent with the available background knowledge.
We can see that, despite the drastic reduction of background knowledge violations, the accuracy improvement on SNLI and MultiNLI may not be significant, supporting the idea that models achieving close-to-human performance on such datasets may be capturing annotation artefacts and idiosyncrasies (Gururangan et al., 2018).
Evaluation on Adversarial Datasets.
We evaluated the proposed approach on 9 adversarial datasets, generated following the procedure described in Section 6 – results are summarised in Table 5. We can see that the proposed adversarial training method significantly increases the accuracy on the adversarial test sets. For instance, prior to regularisation, DAM achieves a very low accuracy on such datasets. By increasing the regularisation weight, we observed sensible accuracy increases, yielding relative accuracy improvements of up to 79.6%.
From Table 5 we can notice that adversarial examples transfer across different models: an unregularised model is likely to perform poorly on adversarial datasets generated by using other models, with ESIM being the most robust to adversarially generated examples.
Furthermore, we can see that regularised models are generally more robust to adversarial examples, even when those were generated using different model architectures. For instance, while cBiLSTM is also vulnerable to adversarial examples generated using DAM and ESIM, its adversarially regularised version is generally more robust to any sort of adversarial examples.
9 Conclusions
In this paper, we investigated the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in NLI. We reduced the problem of identifying such adversarial examples to an optimisation problem, by maximising a continuous relaxation of the violation of such constraints, and by using a language model for generating linguistically-plausible examples. Furthermore, we proposed a method for adversarially regularising neural NLI models for incorporating background knowledge.
Our results showed that the proposed method consistently yields significant increases in predictive accuracy on adversarially-crafted datasets – up to a 79.6% relative improvement – while drastically reducing the number of background knowledge violations. Furthermore, we showed that adversarial examples transfer across model architectures, and that the proposed adversarial training procedure produces generally more robust models. The source code and data for reproducing our results are available online at https://github.com/uclmr/adversarial-nli/.
We are immensely grateful to Jeff Mitchell, Johannes Welbl, Sameer Singh, and the whole UCL Machine Reading group for all useful discussions, inputs, and ideas. This work has been supported by an Allen Distinguished Investigator Award.
- Belinkov and Bisk (2017) Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. CoRR, abs/1711.02173.
- Bengio et al. (2000) Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, pages 932–938. MIT Press.
- van Benthem (2008) Johan van Benthem. 2008. A brief history of natural logic. In M. Chakraborty, B. Löwe, M. Nath Mitra, and S. Sarukki, editors, Logic, Navya-Nyaya and Applications: Homage to Bimal Matilal. College Publications.
- Bos and Markert (2005) Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 628–635. The Association for Computational Linguistics.
- Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 632–642. The Association for Computational Linguistics.
- Buck et al. (2017) Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Andrea Gesmundo, Neil Houlsby, Wojciech Gajewski, and Wei Wang. 2017. Ask the right questions: Active question reformulation with reinforcement learning. CoRR, abs/1705.07830.
- Chen et al. (2017) Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, pages 1657–1668. Association for Computational Linguistics.
- Condoravdi et al. (2003) Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning, pages 38–45.
- Dagan et al. (2005) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, volume 3944 of LNCS, pages 177–190. Springer.
- Fyodorov et al. (2000) Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2000. A natural logic inference system. In Proceedings of the of the 2nd Workshop on Inference in Computational Semantics.
- Goodfellow et al. (2014) Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572.
- Gupta and Qi (1991) M. M. Gupta and J. Qi. 1991. Theory of t-norms and fuzzy inference methods. Fuzzy Sets Syst., 40(3):431–450.
- Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. CoRR, abs/1803.02324.
- Guu et al. (2017) Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2017. Generating sentences by editing prototypes. CoRR, abs/1709.08878.
- Hastie et al. (2001) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2001. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
- Hosseini et al. (2017) Hossein Hosseini, Baicen Xiao, and Radha Poovendran. 2017. Deceiving Google's Cloud Video Intelligence API built for summarizing videos. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops, pages 1305–1309. IEEE Computer Society.
- Iyyer et al. (2018) Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. CoRR, abs/1804.06059.
- Jia and Liang (2017) Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 2011–2021. Association for Computational Linguistics.
- Kannan and Vinyals (2017) Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. CoRR, abs/1701.08198.
- Katz (1972) J.J. Katz. 1972. Semantic theory. Studies in language. Harper & Row.
- Levesque (2014) Hector J. Levesque. 2014. On our best behaviour. Artif. Intell., 212:27–35.
- MacCartney and Manning (2009) Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics, Tilburg, Netherlands.
- Minervini et al. (2017) Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. 2017. Adversarial sets for regularising neural link predictors. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017. AUAI Press.
- Nguyen et al. (2015) Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pages 427–436. IEEE Computer Society.
- Paperno et al. (2016) Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016. The Association for Computer Linguistics.
- Parikh et al. (2016) Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Su et al. (2016), pages 2249–2255.
- Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Su et al. (2016), pages 2383–2392.
- Rimell and Clark (2009) Laura Rimell and Stephen Clark. 2009. Porting a lexicalized-grammar parser to the biomedical domain. Journal of Biomedical Informatics, 42(5):852–865.
- Rocktäschel et al. (2016) Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In International Conference on Learning Representations (ICLR).
- Su et al. (2016) Jian Su et al., editors. 2016. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016. The Association for Computational Linguistics.
- Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
- Williams et al. (2017) Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. CoRR, abs/1704.05426.
Appendix A Supplementary Material
A.1 Accuracy on Adversarial Datasets
In the following, we report the accuracy of the DAM, ESIM, and cBiLSTM models on several adversarial datasets.
A.2 Adversarial Examples
In Table 7, we report inconsistent predictions produced by DAM on the SNLI training set, which violate the rules outlined in Table 1. In Table 8, we report inconsistent predictions yielded by DAM on examples generated using the procedure described in Section 4.3.
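Inconsistencies of this kind can be detected mechanically by checking model predictions against simple logical constraints, such as the symmetry of contradiction (if a premise contradicts a hypothesis, the hypothesis should also contradict the premise). The following is a minimal sketch of such a check; the `predict` function is a hypothetical placeholder standing in for a trained model such as DAM, ESIM, or cBiLSTM.

```python
def predict(premise, hypothesis):
    # Placeholder model: returns one of "entailment", "contradiction",
    # "neutral". A real check would invoke a trained NLI model here.
    return "contradiction" if "not" in hypothesis else "neutral"

def violates_contradiction_symmetry(predict_fn, premise, hypothesis):
    """True when the model labels (premise, hypothesis) as a
    contradiction, but does not label the reversed pair as one."""
    forward = predict_fn(premise, hypothesis)
    backward = predict_fn(hypothesis, premise)
    return forward == "contradiction" and backward != "contradiction"

pairs = [
    ("Man sitting at a computer.", "The man is not outside running."),
]
flagged = [pair for pair in pairs
           if violates_contradiction_symmetry(predict, *pair)]
```

Pairs collected in `flagged` are candidates for the kind of inconsistent behaviour reported in the tables below; the same scheme extends to other rules by swapping in a different constraint predicate.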
Table 7:

| # | Premise | Hypothesis |
|---|---------|------------|
| 1 | A young girl is holding a long thin yellow balloon. | There is a girl watching a balloon |
| 2 | A woman dressed in green is rollerskating outside at an event. | A woman dressed in green is not rollerskating |
| 3 | A young adult male, wearing black pants, a white shirt and a red belt, is practicing martial arts. | A guy playing a video game on his flat screen television. |
| 4 | Man sitting at a computer. | The man is not outside running. |
| 5 | Two young women wearing bikini tops and denim shorts walk along side an orange VW Beetle. | Two young women are not wearing coats and jeans |
| 6 | A woman in a hat sits reading and drinking a coffee. | martial arts demonstration |
Table 8:

| # | Premise | Hypothesis |
|---|---------|------------|
| 1 | | Two adults dogs walk across a street. |
| 2 | A person on skis on a rail at night. | They are fantastic sleeping skiiers |
| 3 | The school is having a special event in order to show the american culture on how other cultures are dealt with in parties. | A school dog is hosting an event. |