Adversarial learning has been a popular topic in deep learning. Attackers generate adversarial examples by perturbing the samples and use these examples to fool deep neural networks (DNNs). From the perspective of defense, adversarial examples are mixed into the training set to improve performance and robustness of the victim models.
However, building an attacker for NLP models (such as a text classifier) is extremely challenging. First, it is difficult to perform gradient-based perturbations, since the sentence space is discrete. Yet gradient information is critical: it points in the steepest direction toward more effective examples. Second, adversarial examples are usually not fluent sentences. Unfluent examples are less effective in attacking, as victim models can easily learn to recognize them; meanwhile, adversarial training on them usually does not perform well (see Figure 1 for detailed analysis).
Current methods cannot properly handle these two problems. Ebrahimi et al. (2018) (HotFlip) propose to perturb a sentence by flipping one of its characters, using the gradient of each perturbation to guide sample selection. But simple character flipping often leads to meaningless words (e.g., "mood" to "mooP"). The genetic attack (Alzantot et al., 2018) is a population-based word-replacement attacker, which aims to generate fluent sentences by filtering out unreasonable ones with a language model. But the fluency of examples generated by the genetic attack is still not satisfactory, and the method is inefficient since the gradient is discarded.
To address the aforementioned problems, we propose the Metropolis-Hastings attack (MHA) algorithm in this short paper. MHA is an adversarial example generator based on Metropolis-Hastings (M-H) sampling (Metropolis et al., 1953; Hastings, 1970; Chib and Greenberg, 1995). M-H sampling is a classical MCMC sampling approach, which has been applied to many NLP tasks, such as natural language generation (Kumagai et al., 2016), constrained sentence generation (Miao et al., 2018) and guided open story generation (Harrison et al., 2017). We propose two variants of MHA, namely a black-box MHA (b-MHA) and a white-box MHA (w-MHA). Specifically, in contrast to previous language generation models using M-H, b-MHA's stationary distribution is equipped with a language model term and an adversarial attacking term. The two terms make the generated adversarial examples fluent and effective. w-MHA further incorporates adversarial gradients into the proposal distribution to speed up the generation of adversarial examples.
Our main contribution is an efficient approach for generating fluent adversarial examples. Experimental results on IMDB (Maas et al., 2011) and SNLI (Bowman et al., 2015) show that, compared with the state-of-the-art genetic model, MHA generates examples faster, achieving higher success rates with far fewer invocations of the victim model. Meanwhile, adversarial samples from MHA are not only more fluent, but also more effective at improving adversarial robustness and classification accuracy through adversarial training.
Generally, adversarial attacks aim to mislead the neural models by feeding adversarial examples with perturbations, while adversarial training aims to improve the models by utilizing the perturbed examples. Adversarial examples fool the model into producing erroneous outputs, such as irrelevant answers in QA systems or wrong labels in text classifiers (Figure 2). Training with such examples may enhance performance and robustness.
Definitions of the terms in this paper are as follows. The victim models are word-level classifiers, which take tokenized sentences as input and output their labels. The attackers generate sentences by perturbing the original ones, in order to mislead the victim model into making mistakes. Adversarial attacks fall into two categories: (a) black-box attacks only allow the attackers access to model outputs, while (b) white-box attacks allow full access to the victim model, including model outputs, gradients and (hyper-)parameters. For adversarial training, the same victim model is trained from scratch on an updated training set with adversarial examples included.
3 Proposed Method: MHA
In this section, we first introduce M-H sampling briefly, and then describe how to apply M-H sampling efficiently to generate adversarial examples for natural language.
3.1 Metropolis-Hastings Sampling
The M-H algorithm is a classical Markov chain Monte Carlo (MCMC) sampling approach. Given the stationary distribution $\pi(x)$ and a transition proposal, M-H is able to generate desirable examples from $\pi(x)$. Specifically, at each iteration, a proposal to jump from $x$ to $x'$ is made based on the proposal distribution $g(x' \mid x)$. The proposal is accepted with a probability given by the acceptance rate:

$$\alpha(x' \mid x) = \min\left\{1,\ \frac{\pi(x')\, g(x \mid x')}{\pi(x)\, g(x' \mid x)}\right\} \quad (1)$$

Once accepted, the algorithm jumps to $x'$. Otherwise, it stays at $x$.
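The generic accept/reject step above can be sketched as follows; the function names and interfaces (`pi`, `propose`, `g`) are illustrative assumptions, not an implementation from the paper:

```python
import random

def mh_step(x, pi, propose, g):
    """One Metropolis-Hastings step (generic sketch).

    x: current state; pi(x): unnormalized stationary density;
    propose(x): draws a candidate x' from g(. | x); g(a, b): density g(a | b).
    """
    x_new = propose(x)
    # Acceptance rate: alpha = min(1, pi(x') g(x|x') / (pi(x) g(x'|x)))
    alpha = min(1.0, (pi(x_new) * g(x, x_new)) / (pi(x) * g(x_new, x)))
    # Accept the jump with probability alpha, otherwise stay at x
    return x_new if random.random() < alpha else x
```

Iterating this step yields a chain whose empirical distribution converges to $\pi$, which is exactly why equipping $\pi$ with attack-specific terms (next subsection) steers the samples toward adversarial examples.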
3.2 Black-Box Attack
In the black-box attack (b-MHA), we expect the examples to meet three requirements: (a) to read fluently; (b) to be able to fool the classifier; (c) to invoke the classifier as few times as possible.
Stationary distribution. To meet these requirements, the stationary distribution is designed as:

$$\pi(x) \propto \mathrm{LM}(x) \cdot C(\tilde{y} \mid x) \quad (2)$$

where $\mathrm{LM}(x)$ is the probability of the sentence $x$ given by a pre-trained language model (LM) and $C(\tilde{y} \mid x)$ is the probability of an erroneous label $\tilde{y}$ given by the victim model. $\mathrm{LM}(x)$ guarantees fluency, while $C(\tilde{y} \mid x)$ is the attack target.
Transition proposal. There are three word-level transition operations: replacement, insertion and deletion. Traversal indexing is applied to select the words on which operations are performed: if MHA selects the $i$-th word ($w_i$) on the $t$-th proposal, then on the $(t+1)$-th proposal the selected word is $w_{i+1}$, wrapping back to $w_1$ after the last word of the sentence.

The transition function for replacement normalizes the stationary probability over the candidate substitutions:

$$T_r(x' \mid x) = \frac{\pi(x')}{\sum_{w \in \mathcal{Q}} \pi(x[w_m \to w])} \quad (3)$$

where $w_m$ is the selected word to be replaced, $x[w_m \to w]$ denotes $x$ with $w_m$ replaced by $w$, and $\mathcal{Q}$ is a pre-selected candidate set, which will be explained later. The insertion operation ($T_i$) consists of two steps: inserting a random word into the selected position and then performing replacement upon it. The deletion operation is rather simple: $T_d(x' \mid x) = 1$ if $x' = x_{-m}$, where $x_{-m}$ is the sentence after deleting the $m$-th word ($w_m$), and $T_d(x' \mid x) = 0$ otherwise.
The proposal distribution is a weighted sum of the transition functions:

$$g(x' \mid x) = p_r T_r(x' \mid x) + p_i T_i(x' \mid x) + p_d T_d(x' \mid x) \quad (4)$$

where $p_r$, $p_i$ and $p_d$ are pre-defined probabilities of the three operations.
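The mixture of operations can be sketched as below; the function and variable names are illustrative assumptions, not the authors' implementation:

```python
import random

def propose_edit(sentence, m, p_r, p_i, p_d, candidates):
    """Sample one of the three word-level operations at position m with
    probabilities (p_r, p_i, p_d), then apply it to the token list.

    sentence: list of tokens; candidates: pre-selected candidate set Q.
    Returns the chosen operation name and the perturbed sentence.
    """
    op = random.choices(["replace", "insert", "delete"],
                        weights=[p_r, p_i, p_d])[0]
    x = list(sentence)
    if op == "replace":
        x[m] = random.choice(candidates)
    elif op == "insert":
        # insert a random candidate; a replacement step would refine it
        x.insert(m, random.choice(candidates))
    else:  # delete the m-th word
        del x[m]
    return op, x
```

Each proposed sentence would then be accepted or rejected by the M-H acceptance rate, so invalid edits are filtered out automatically.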
The pre-selector generates a candidate set for $T_r$ and $T_i$. It chooses the most probable words according to the score $S_b(w \mid x)$ to form the candidate word set $\mathcal{Q}$. $S_b$ is formulated as:

$$S_b(w \mid x) = \overrightarrow{\mathrm{LM}}(w \mid x_{[1:m-1]}) \cdot \overleftarrow{\mathrm{LM}}(w \mid x_{[m+1:n]}) \quad (5)$$

where $x_{[1:m-1]}$ is the prefix of the sentence, $x_{[m+1:n]}$ is the suffix of the sentence, and $\overleftarrow{\mathrm{LM}}$ is a pre-trained backward language model. Without pre-selection, $\mathcal{Q}$ would include all words in the vocabulary, and the classifier would be invoked repeatedly to compute the denominator of Equation 3, which is inefficient.
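A minimal sketch of this pre-selection, assuming the two language models expose a context-to-probability interface (the `forward_lm`/`backward_lm` signatures are assumptions for illustration):

```python
import heapq

def preselect(forward_lm, backward_lm, prefix, suffix, vocab, k=30):
    """Pre-select the k candidate words w maximizing
    S_b(w | x) = p_fwd(w | prefix) * p_bwd(w | suffix),
    so the victim classifier is only invoked on |Q| = k candidates
    instead of on the whole vocabulary.

    forward_lm / backward_lm: map a context (token list) to a
    {word: probability} distribution over next words.
    """
    p_fwd = forward_lm(prefix)    # forward LM:  P(w | x[1:m-1])
    p_bwd = backward_lm(suffix)   # backward LM: P(w | x[m+1:n])
    scores = {w: p_fwd.get(w, 0.0) * p_bwd.get(w, 0.0) for w in vocab}
    return heapq.nlargest(k, scores, key=scores.get)
```

Because both factors come from language models, high-scoring candidates tend to fit the surrounding context, which is what keeps the generated sentences fluent.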
3.3 White-Box Attack
The only difference between the white-box attack (w-MHA) and b-MHA lies in the pre-selector.
Pre-selection. In w-MHA, the gradient is introduced into the pre-selection score. $S_w$ is formulated as:

$$S_w(w \mid x) = S\!\left(\nabla_{e_m} \tilde{L},\; e - e_m\right) \quad (6)$$

where $S(\cdot,\cdot)$ is the cosine similarity function, $\tilde{L}$ is the loss function on the target label $\tilde{y}$, and $e_m$ and $e$ are the embeddings of the current word ($w_m$) and the substitute ($w$). The gradient $\nabla_{e_m} \tilde{L}$ points in the steepest direction, and $e - e_m$ is the actual change direction if $w_m$ is replaced by $w$. The cosine similarity term guides the samples to jump along the direction of the gradient, which raises $\pi(x')$ and the acceptance rate $\alpha$, and eventually makes w-MHA more efficient.
Note that insertion and deletion are excluded in w-MHA, because it is difficult to compute their gradients. Take the insertion operation for instance. One may apply a technique similar to that in b-MHA, by first inserting a random word to form an intermediate sentence $x^*$ and then performing a replacement operation upon it. Computing the gradient at $x^*$ is easy, but it is not the actual gradient of the insertion, since the change from $x$ to $x^*$ is discrete and non-differentiable.
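The gradient-guided score of Equation 6 can be sketched with NumPy as below; the function name and the dictionary-based candidate interface are assumptions for illustration:

```python
import numpy as np

def w_preselect_scores(grad, e_m, candidate_embeddings):
    """Score each substitute w by the cosine similarity between the
    gradient of the target-label loss w.r.t. the current embedding e_m
    and the change direction e_w - e_m (Equation 6 style).

    grad: (d,) gradient vector; e_m: (d,) current word embedding;
    candidate_embeddings: {word: (d,) embedding}.
    """
    scores = {}
    for w, e_w in candidate_embeddings.items():
        delta = e_w - e_m                      # actual change direction
        denom = np.linalg.norm(grad) * np.linalg.norm(delta)
        # cosine similarity; zero-length vectors get score 0
        scores[w] = float(grad @ delta / denom) if denom > 0 else 0.0
    return scores
```

Substitutes whose embedding offset aligns with the loss gradient receive the highest scores, so each accepted replacement tends to increase the attack term of the stationary distribution.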
4 Experiments

Datasets. Following previous works, we validate the performance of the proposed MHA on the IMDB and SNLI datasets. The IMDB dataset includes 25,000 training samples and 25,000 test samples of movie reviews, tagged with sentiment labels (positive or negative). The SNLI dataset contains 55,000 training samples, 10,000 validation samples and 10,000 test samples. Each sample contains a premise, a hypothesis and an inference label (entailment, contradiction or neutral). We adopt a single-layer bi-LSTM and the BiDAF model (Seo et al., 2016), which employs a bidirectional attention flow mechanism to capture relationships between sentence pairs, as the victim models on IMDB and SNLI, respectively.
Baseline Genetic Attacker. We take the state-of-the-art genetic attack model (Alzantot et al., 2018) as our baseline, which uses a gradient-free population-based algorithm. Intuitively, it maintains a population of sentences, and perturbs them by word-level replacement according to the embedding distances without considering the victim model. Then, the intermediate sentences are filtered by the victim classifier and a language model, which leads to the next generation.
Hyper-parameters. As in the work of Miao et al. (2018), MHA is limited to at most 200 proposals, and we pre-select 30 candidates at each iteration. Constraints are included in MHA to forbid any operations on sentiment words (e.g., "great") or negation words (e.g., "not") in the IMDB experiments, using SentiWordNet (Esuli and Sebastiani, 2006; Baccianella et al., 2010). All LSTMs in the victim models have 128 units. The victim models reach 83.1% and 81.1% test accuracy on IMDB and SNLI respectively, which are acceptable results. More detailed hyper-parameter settings are included in the appendix.
Case 1
|Premise: three men are sitting on a beach dressed in orange with refuse carts in front of them.|
|Hypothesis: empty trash cans are sitting on a beach.|
|Genetic: empties trash cans are sitting on a beach.|
|b-MHA: the trash cans are sitting in a beach.|
|w-MHA: the trash cans are sitting on a beach.|

Case 2
|Premise: a man is holding a microphone in front of his mouth.|
|Hypothesis: a male has a device near his mouth.|
|Genetic: a masculine has a device near his mouth.|
|b-MHA: a man has a device near his car.|
|w-MHA: a man has a device near his home.|
4.1 Adversarial Attack
To validate the attacking efficiency, we randomly sample 1,000 and 500 correctly classified examples from the IMDB and SNLI test sets, respectively. Attack success rate and the number of invocations of the victim model are employed to measure efficiency. As shown in Figure 3, the curves of our proposed MHA lie above the genetic baseline, which indicates the efficiency of MHA. By incorporating gradient information into the proposal distribution, w-MHA performs even better than b-MHA, as its curves rise faster. Note that the ladder-shaped curves of the genetic approach are caused by its population-based nature.
We list detailed results in Table 1. Success rates are obtained by invoking the victim model at most 6,000 times. As shown, the gaps in success rates between the models are not large, since all models achieve fairly high success rates. However, as expected, our proposed MHA yields lower perplexity (PPL)¹, which means the examples generated by MHA are more likely to appear in the corpus of the evaluation language model. As the corpus is large and the evaluation language model is strong, this indicates that the examples generated by MHA are more likely to lie in the natural language space, which eventually leads to better fluency.

¹We use the publicly released GPT-2 model (Radford et al., 2019) for PPL evaluation.
Human evaluations are also performed. From the examples that all three approaches successfully attacked, we sample 40 examples on IMDB. Three volunteers are asked to label the generated examples. Examples assigned false labels by the victim classifier but true labels by the volunteers are regarded as actual adversarial examples. The adversarial example ratios of the genetic approach, b-MHA and w-MHA are 98.3%, 99.2% and 96.7% respectively, indicating that almost all generated examples are actual adversarial examples. Volunteers are also asked to rank the generated examples by fluency on SNLI ("1" indicating the most fluent and "3" the least fluent); 20 examples are sampled in the same manner as above. The mean rankings of the genetic approach, b-MHA and w-MHA are 1.93, 1.80 and 2.03 respectively, indicating that b-MHA generates the most fluent samples. Samples generated by w-MHA are less fluent than those of the genetic approach, possibly because the gradient introduced into the pre-selector influences the fluency of the sentence from a human perspective.
Adversarial examples from different models on SNLI are shown in Table 2. The genetic approach may replace verbs with a different tense or nouns with a different plurality, which can cause grammatical mistakes (e.g., Case 1), while MHA employs the language model in its stationary distribution to avoid such mistakes. On the other hand, MHA does not constrain word replacements to have similar meanings, so it may replace entities or verbs with irrelevant words, changing the meaning of the original sentence (e.g., Case 2). More cases are included in the appendix.
|Model|Attack succ (%)|
| |Genetic|b-MHA|w-MHA|
|+ Genetic adv training|93.8|99.6|100.0|
|+ b-MHA adv training|93.0|95.7|99.7|
|+ w-MHA adv training|92.4|97.5|100.0|
|Train #|10K|30K|100K|
|+ Genetic adv training|58.8|66.1|73.6|
|+ w-MHA adv training|60.0|66.9|73.5|
4.2 Adversarial Training
In order to validate whether adversarial training is helpful for improving the adversarial robustness or classification accuracy of the victim model, a new model is trained from scratch after mixing the generated examples into the training set.
To test adversarial robustness, we attack the new models with all methods on IMDB. As shown in Table 3, the new model after genetic adversarial training cannot defend against MHA. On the contrary, adversarial training with b-MHA or w-MHA decreases the success rate of the genetic attack. This shows that the fluent adversarial examples from MHA can be more effective than the unfluent ones from the genetic attack, as assumed in Figure 1.
To test whether the new models achieve accuracy gains after adversarial training, experiments are carried out on different training sizes, which are subsets of SNLI's training set. The number of adversarial examples is fixed to 250 throughout the experiments. The classification accuracies of the new models after adversarial training with different approaches are listed in Table 4. Adversarial training with w-MHA significantly improves the accuracy in all three settings (with p-values less than 0.02). w-MHA outperforms the genetic baseline with 10K and 30K training samples, and gets comparable improvements with 100K training samples. Less training data leads to larger accuracy gains, and MHA performs significantly better than the genetic approach on smaller training sets.
5 Future Work
Currently, MHA returns an example as soon as the label is changed, which may lead to incomplete sentences that are unfluent from a human perspective. Constraints, such as forcing the model to generate an EOS token at the end of the sentence before returning, may address this issue.
Also, unconstrained entity and verb replacements have a negative influence on adversarial example generation for tasks such as NLI. Similarity constraints on word operations are essential to settle this problem: limiting the embedding distance of replacements may help, and another solution is introducing the inverse of the embedding distance into the pre-selection score.
6 Conclusion

In this paper, we propose MHA, which generates adversarial examples for natural language by adopting the M-H sampling approach. Experimental results show that our proposed MHA generates adversarial examples faster than the genetic baseline. The adversarial examples obtained from MHA are more fluent and may be more effective for adversarial training.
Acknowledgments

We would like to thank Lili Mou for his constructive suggestions. We would also like to thank the anonymous reviewers for their insightful comments.
References

- M. Alzantot, Y. Sharma, A. Elgohary, B.-J. Ho, M. Srivastava, and K.-W. Chang (2018). Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2890–2896.
- S. Baccianella, A. Esuli, and F. Sebastiani (2010). SentiWordNet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining. In LREC, Vol. 10, pp. 2200–2204.
- S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning (2015). A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
- S. Chib and E. Greenberg (1995). Understanding the Metropolis-Hastings algorithm. The American Statistician 49(4), pp. 327–335.
- J. Ebrahimi, A. Rao, D. Lowd, and D. Dou (2018). HotFlip: white-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 31–36.
- A. Esuli and F. Sebastiani (2006). SentiWordNet: a publicly available lexical resource for opinion mining. In LREC, Vol. 6, pp. 417–422.
- B. Harrison, C. Purdy, and M. O. Riedl (2017). Toward automated story generation with Markov chain Monte Carlo methods and deep neural networks. In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference.
- W. K. Hastings (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1), pp. 97–109.
- K. Kumagai, I. Kobayashi, D. Mochihashi, H. Asoh, T. Nakamura, and T. Nagai (2016). Human-like natural language generation using Monte Carlo tree search. In Proceedings of the INLG 2016 Workshop on Computational Creativity in Natural Language Generation, pp. 11–18.
- A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts (2011). Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pp. 142–150.
- N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics 21(6), pp. 1087–1092.
- N. Miao, H. Zhou, L. Mou, R. Yan, and L. Li (2018). CGMH: constrained sentence generation by Metropolis-Hastings sampling. arXiv preprint arXiv:1811.10996.
- A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019). Language models are unsupervised multitask learners.
- M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi (2016). Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603; ICLR 2017.