Soft Contextual Data Augmentation for Neural Machine Translation

05/25/2019 · by Jinhua Zhu, et al. · Microsoft · Institute of Computing Technology, Chinese Academy of Sciences · Sun Yat-sen University

While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is still very limited. In this paper, we present a novel data augmentation method for neural machine translation. Different from previous augmentation methods that randomly drop, swap or replace words with other words in a sentence, we softly augment a randomly chosen word in a sentence by a contextual mixture of multiple related words. More specifically, we replace the one-hot representation of a word by a distribution over the vocabulary provided by a language model, i.e., we replace the embedding of this word by a weighted combination of the embeddings of multiple semantically similar words. Since the weights of those words depend on the contextual information of the word to be replaced, the newly generated sentences capture much richer information than those from previous augmentation methods. Experimental results on both small-scale and large-scale machine translation datasets demonstrate the superiority of our method over strong baselines.


1 Introduction

Data augmentation is an important trick to boost the accuracy of deep learning methods by generating additional training samples. These methods have been widely used in many areas. For example, in computer vision, the training data are augmented by transformations like random rotation, resizing, mirroring and cropping Krizhevsky et al. (2012); Cubuk et al. (2018).

While similar random transformations have also been explored in natural language processing (NLP) tasks Xie et al. (2017), data augmentation is still not a common practice in neural machine translation (NMT). For a sentence, existing methods include randomly swapping two words, dropping a word, replacing a word with another one, and so on. However, due to the characteristics of text, these random transformations often result in significant changes in semantics.

A recent new method is contextual augmentation Kobayashi (2018); Wu et al. (2018), which replaces words with other words predicted by a language model at the corresponding word positions. While such a method can preserve semantics based on contextual information, this kind of augmentation still has one limitation: to generate new samples with adequate variation, it needs to sample multiple times. For example, given a sentence in which several words are to be replaced with words predicted by a language model, there can be exponentially many candidate sentences. Given that vocabulary sizes are usually large, it is almost impossible to leverage all the possible candidates to achieve good performance.

In this work, we propose soft contextual data augmentation, a simple yet effective data augmentation approach for NMT. Different from previous methods that randomly replace one word with another, we propose to augment NMT training data by replacing a randomly chosen word in a sentence with a soft word, which is a probabilistic distribution over the vocabulary. Such a distributional representation can capture a mixture of multiple candidate words and provides adequate variation in the augmented data. To ensure that the distribution preserves semantics similar to the original word, we calculate it based on the contextual information using a language model, which is pretrained on the training corpus.

To verify the effectiveness of our method, we conduct experiments on four machine translation tasks, namely the IWSLT German-to-English, Spanish-to-English and Hebrew-to-English tasks and the WMT English-to-German task. On all tasks, the experimental results show that our method obtains remarkable BLEU score improvements over strong baselines.

2 Related Work

We briefly review related work on data augmentation for NMT.

Artetxe et al. (2017) and Lample et al. (2017) randomly shuffle (swap) the words in a sentence, with the constraint that words are not moved further than a fixed small window size. Iyyer et al. (2015) and Lample et al. (2017) randomly drop some words in the source sentence when learning an autoencoder to help train the unsupervised NMT model. Xie et al. (2017) replace words with a placeholder token or with words sampled from the frequency distribution of the vocabulary, showing that data noising is an effective regularizer for NMT. Fadaee et al. (2017) propose to replace a common word by a low-frequency word in the target sentence, and to change its corresponding word in the source sentence, to improve the translation quality of rare words. Most recently, Kobayashi (2018) proposes to use the prior knowledge from a bidirectional language model to replace a word token in a sentence. Our work differs from these in that we replace the word representation with a soft distribution instead of another word token.

3 Method

In this section, we present our method in detail.

3.1 Background and Motivations

Given a source and target sentence pair $(x, y)$, where $x = (x_1, x_2, \ldots, x_{T_x})$ and $y = (y_1, y_2, \ldots, y_{T_y})$, a neural machine translation system models the conditional probability $P(y \mid x)$. NMT systems are usually based on an encoder-decoder framework with an attention mechanism Sutskever et al. (2014); Bahdanau et al. (2014). In general, the encoder first transforms the input sentence with words/tokens $x_1, x_2, \ldots, x_{T_x}$ into a sequence of hidden states $\{h_t\}_{t=1}^{T_x}$, and then the decoder takes the hidden states from the encoder as input to predict the conditional distribution of each target word/token $P(y_t \mid h, y_{<t})$ given the previous ground-truth target words/tokens. Similar to the NMT decoder, a language model is intended to predict the next-word distribution given the preceding words, but without another sentence as conditional input. In NMT, as well as in other NLP tasks, each word is assigned a unique ID and is thus represented as a one-hot vector. For example, the $i$-th word in the vocabulary (with size $|V|$) is represented as a $|V|$-dimensional vector $e_i$, whose $i$-th dimension is 1 and all other dimensions are 0.
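As a toy illustration of this one-hot representation (not from the paper's code; the vocabulary size below is made up):

```python
import numpy as np

def one_hot(i: int, vocab_size: int) -> np.ndarray:
    """Return the one-hot vector for the i-th word (0-indexed here)."""
    e = np.zeros(vocab_size)
    e[i] = 1.0
    return e

print(one_hot(2, 5))  # [0. 0. 1. 0. 0.] -- only the i-th dimension is 1
```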

Existing augmentation methods generate new training samples by replacing one word in the original sentences with another word Wang et al. (2018); Kobayashi (2018); Xie et al. (2017); Fadaee et al. (2017). However, due to the sparse nature of words, it is almost impossible for those methods to leverage all possible augmented data. First, given that the vocabulary is usually large, one word usually has multiple semantically related words as replacement candidates. Second, for a sentence, one may need to replace multiple words instead of a single word, so the number of possible sentences after augmentation increases exponentially. Therefore, these methods often need to augment one sentence multiple times, each time replacing a different subset of words in the original sentence with different candidate words from the vocabulary; even so, they still cannot guarantee adequate variation in the augmented sentences. This motivates us to augment training data in a soft way.

3.2 Soft Contextual Data Augmentation

Inspired by the above intuition, we propose to augment NMT training data by replacing a randomly chosen word in a sentence with a soft word. Different from the discrete nature of words and their one-hot representations in NLP tasks, we define a soft word as a distribution over the vocabulary of $|V|$ words. That is, for any word $w \in V$, its soft version is $P(w) = \big(p_1(w), p_2(w), \ldots, p_{|V|}(w)\big)$, where $p_j(w) \ge 0$ and $\sum_{j=1}^{|V|} p_j(w) = 1$.

Since $P(w)$ is a distribution over the vocabulary, one can sample a word with respect to this distribution to replace the original word $w$, as done in Kobayashi (2018). Different from this method, we directly use this distribution vector to replace a randomly chosen word in the original sentence. Suppose $E \in \mathbb{R}^{|V| \times d}$ is the embedding matrix of all the $|V|$ words. The embedding of the soft word $w$ is

$$e_w = P(w) E = \sum_{j=1}^{|V|} p_j(w) E_j, \qquad (1)$$

which is the expectation of the word embeddings over the distribution defined by the soft word.
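As a minimal sketch of Eq. (1), assuming NumPy and a toy vocabulary and embedding matrix (both hypothetical, not the paper's actual setup), the soft embedding is simply a probability-weighted average of the rows of $E$:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 5, 4                    # toy |V| and d
E = rng.normal(size=(vocab_size, emb_dim))    # embedding matrix E in R^{|V| x d}

# A soft word: a distribution P(w) over the vocabulary (entries sum to 1).
P_w = np.array([0.70, 0.15, 0.10, 0.03, 0.02])

# Eq. (1): the soft word's embedding is the expectation of the word embeddings
# under P(w), i.e. a weighted combination of all rows of E.
e_soft = P_w @ E                              # shape: (emb_dim,)

# An ordinary (hard) word is the special case where P(w) is one-hot.
e_hard = np.eye(vocab_size)[0] @ E            # identical to E[0]
```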

The distribution vector $P(w)$ of a word $w$ can be calculated in multiple ways. In this work, we leverage a pretrained language model to compute $P(w)$, conditioning on all the words preceding $w$. That is, for the $t$-th word $x_t$ in a sentence, we have

$$p_j(x_t) = LM(w_j \mid x_{<t}),$$

where $LM(w_j \mid x_{<t})$ denotes the probability of the $j$-th word $w_j$ in the vocabulary appearing after the sequence $x_1, x_2, \ldots, x_{t-1}$. Note that the language model is pretrained on the same training corpus as the NMT model. Thus the distribution calculated by the language model can be regarded as a smooth approximation of the original one-hot representation, which is very different from previous augmentation methods such as random swapping or replacement. Although this distributional vector is noisy, the noise is aligned with the training corpus.
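A hedged sketch of how $P(x_t)$ could be obtained from a left-to-right language model follows; `lm_logits` is a hypothetical stand-in for whatever pretrained LM is used (the paper pretrains the LM on the NMT training corpus), so only the softmax-over-prefix logic should be read as the actual recipe:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def lm_logits(prefix_ids: list) -> np.ndarray:
    """Hypothetical stand-in: unnormalized next-word scores over the vocabulary
    given the preceding tokens x_{<t}. In practice this is a pretrained LM."""
    rng = np.random.default_rng(len(prefix_ids))  # deterministic toy scores
    return rng.normal(size=5)                     # toy |V| = 5

def soft_word_distribution(sentence_ids: list, t: int) -> np.ndarray:
    """p_j(x_t) = LM(w_j | x_{<t}): next-word probabilities at position t,
    conditioned only on the words preceding x_t."""
    return softmax(lm_logits(sentence_ids[:t]))
```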

Figure 1 shows how the language model is combined with the encoder of the NMT model; the decoder of the NMT model is combined with the language model in the same way. In experiments, we randomly choose a word in the training data with probability $\gamma$ and replace it by its soft version (probability distribution).
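Putting the pieces together, here is a rough sketch (not the paper's code) of how an embedding layer could mix hard and soft representations during training; `soft_dist_fn` stands for the LM-based distribution from the previous sketch, and the value of `gamma` shown is illustrative, not the tuned setting:

```python
import numpy as np

def augment_embeddings(sentence_ids, E, soft_dist_fn, gamma=0.15, rng=None):
    """For each position t, with probability gamma feed the soft-word embedding
    P(x_t) @ E to the encoder/decoder instead of the ordinary embedding E[x_t].

    soft_dist_fn(sentence_ids, t) should return the LM distribution over the
    vocabulary at position t (e.g. soft_word_distribution above).
    """
    rng = rng or np.random.default_rng()
    rows = []
    for t, token_id in enumerate(sentence_ids):
        if rng.random() < gamma:
            rows.append(soft_dist_fn(sentence_ids, t) @ E)  # soft replacement
        else:
            rows.append(E[token_id])                        # keep the hard word
    return np.stack(rows)
```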

Figure 1: The overall architecture of our soft contextual data augmentation approach on the encoder side for source sentences. The decoder side for target sentences is similar.

Finally, it is worth pointing out that no additional monolingual data is used in our method. This is different from previous techniques, such as back translation, that rely on monolingual data Sennrich et al. (2015a); Gulcehre et al. (2015); Cheng et al. (2016); He et al. (2016); Hoang et al. (2018). We leave the exploration of leveraging monolingual data to future work.

4 Experiment

             IWSLT                      WMT
             De→En   Es→En   He→En      En→De
Base         34.79   41.58   33.64      28.40
+Swap        34.70   41.60   34.25      28.13
+Dropout     35.13   41.62   34.29      28.29
+Blank       35.37   42.28   34.37      28.89
+Smooth      35.45   41.69   34.61      28.97
+LMsample    35.40   42.09   34.31      28.73
Ours         35.78   42.61   34.91      29.70

Table 1: BLEU scores on four translation tasks.

In this section, we demonstrate the effectiveness of our method on four translation datasets of different scales. Translation quality is evaluated by case-sensitive BLEU score. We compare our approach with the following baselines (a rough code sketch of the simpler token-level baselines is given after the list):

  • Base: The original training strategy without any data augmentation;

  • Swap: Randomly swap words in nearby positions within a window size (Artetxe et al., 2017; Lample et al., 2017);

  • Dropout: Randomly drop word tokens (Iyyer et al., 2015; Lample et al., 2017);

  • Blank: Randomly replace word tokens with a placeholder token (Xie et al., 2017);

  • Smooth: Randomly replace word tokens with a sample from the unigram frequency distribution over the vocabulary (Xie et al., 2017);

  • LMsample: Randomly replace word tokens with words sampled from the output distribution of a language model (Kobayashi, 2018).
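For concreteness, here is a rough sketch of the simpler token-level baselines above (Swap, Dropout, Blank, Smooth); the exact implementations in the cited papers may differ, so treat this only as an illustration:

```python
import random

def swap(tokens, window=3):
    """Swap a random token with another one at most `window` positions away."""
    out = list(tokens)
    if len(out) < 2:
        return out
    i = random.randrange(len(out))
    lo, hi = max(0, i - window), min(len(out) - 1, i + window)
    j = random.randrange(lo, hi + 1)
    out[i], out[j] = out[j], out[i]
    return out

def dropout(tokens, p=0.1):
    """Randomly drop tokens with probability p (never return an empty list)."""
    kept = [t for t in tokens if random.random() >= p]
    return kept or list(tokens)

def blank(tokens, p=0.1, placeholder="<BLANK>"):
    """Randomly replace tokens with a placeholder token."""
    return [placeholder if random.random() < p else t for t in tokens]

def smooth(tokens, vocab, unigram_probs, p=0.1):
    """Randomly replace tokens with samples from the unigram frequency distribution."""
    return [random.choices(vocab, weights=unigram_probs)[0]
            if random.random() < p else t
            for t in tokens]
```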

All of the above methods except Swap involve a hyper-parameter $\gamma$: the probability that each word token is replaced during training. We set $\gamma$ to different values and report the best result for each method. As for Swap, we use a window size of 3, following Lample et al. (2017).

For our proposed method, we train two language models for each translation task: one for the source language and one for the target language. The training data for the language models is the corresponding source/target side of the bilingual translation dataset.

4.1 Datasets

We conduct experiments on the IWSLT {German, Spanish, Hebrew}-to-English ({De, Es, He}→En) and WMT English-to-German (En→De) translation tasks to verify our approach. For the IWSLT De→En task, we follow the same setup as Gehring et al. (2017); tst2010, tst2011, tst2012, dev2010 and dev2012 are concatenated as our test data. For the Es→En and He→En tasks, we use tst2013 as the validation set and tst2014 as the test set. For all IWSLT translation tasks, we use a joint source and target vocabulary built with byte-pair encoding (BPE) (Sennrich et al., 2015b). For WMT En→De translation, we again follow Gehring et al. (2017) to filter the sentence pairs used for training. We concatenate newstest2012 and newstest2013 as the validation set and use newstest2014 as the test set. The vocabulary is built with BPE sub-word types.

4.2 Model Architecture and Optimization

We adopt the state-of-the-art Transformer architecture (Vaswani et al., 2017) for both the language models and the NMT models in our experiments. For the IWSLT tasks, we follow the standard configuration, except that a) the dimension of the inner MLP layer and b) the number of attention heads are adjusted. For the WMT En→De task, we use the default configuration for the NMT model, but the language model uses a smaller setting in order to speed up the training procedure. All models are trained with the Adam optimizer (Kingma and Ba, 2014) using the default learning rate schedule of Vaswani et al. (2017). Note that after training the language models, their parameters are fixed while we train the NMT models.
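For reference, here is a small sketch of the learning rate schedule described in Vaswani et al. (2017) (linear warmup followed by inverse square-root decay); the model dimension and warmup steps shown are that paper's defaults, not necessarily the values used here:

```python
def transformer_lr(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    """lrate = d_model^{-0.5} * min(step^{-0.5}, step * warmup_steps^{-1.5})"""
    step = max(step, 1)
    return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)
```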

4.3 Main Results

The evaluation results on the four translation tasks are presented in Table 1. As we can see, our method consistently achieves improvements of about one BLEU point or more over the strong Transformer base system on all tasks. Compared with other augmentation methods, we find that 1) our method achieves the best results on all the translation tasks and 2) unlike the other methods, which are not uniformly strong across tasks, our method works well regardless of the dataset. In particular, on the large-scale WMT 2014 En→De dataset, although it already contains a large amount of parallel training sentence pairs, our method still outperforms the strong base system by 1.3 BLEU points, achieving a BLEU score of 29.70. These results clearly demonstrate the effectiveness of our approach.

4.4 Study

Figure 2: BLEU scores of each method on the IWSLT De→En dataset with different replacing probabilities $\gamma$.

As mentioned in Section 4, we vary the replacing probability $\gamma$ to study its effect on our approach and the other methods. Figure 2 shows the BLEU scores of each method on the IWSLT De→En dataset, from which we can see that our method yields a consistent BLEU improvement over a large range of $\gamma$ values. In contrast, the other methods easily lead to a performance drop below the baseline when $\gamma$ is large, and their improvements are limited for other settings of $\gamma$. This again demonstrates the superior performance of our method.

5 Conclusions and Future Work

In this work, we have presented soft contextual data augmentation for NMT, which replaces a randomly chosen word in a sentence with a soft distributional representation. The representation is a probability distribution over the vocabulary and is calculated from the contextual information of the sentence. Results on four machine translation tasks have verified the effectiveness of our method.

In the future, beyond the parallel bilingual corpora used for NMT training in this work, we are interested in applying our method to monolingual data. In addition, we plan to study our approach in other natural language tasks, such as text summarization.

References