Unsupervised Data Augmentation

04/29/2019 ∙ by Qizhe Xie, et al. ∙ Google ∙ Carnegie Mellon University

Despite its success, deep learning still requires large labeled datasets. Data augmentation has shown much promise in alleviating the need for more labeled data, but it has so far mostly been applied in supervised settings and achieved limited gains. In this work, we propose to apply data augmentation to unlabeled data in a semi-supervised learning setting. Our method, named Unsupervised Data Augmentation or UDA, encourages the model predictions to be consistent between an unlabeled example and an augmented unlabeled example. Unlike previous methods that use random noise such as Gaussian noise or dropout noise, UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods. This small twist leads to substantial improvements on six language tasks and three vision tasks even when the labeled set is extremely small. For example, on the IMDb text classification dataset, with only 20 labeled examples, UDA outperforms the state-of-the-art model trained on 25,000 labeled examples. On standard semi-supervised learning benchmarks, CIFAR-10 with 4,000 examples and SVHN with 1,000 examples, UDA outperforms all previous approaches and reduces the error rates of state-of-the-art methods by more than 30%: going from 7.66 to 5.27 and from 3.53 to 2.46 respectively. UDA also works well on datasets that have a lot of labeled data. For example, on ImageNet, with 1.3M extra unlabeled examples, UDA improves the top-1/top-5 accuracy from 78.28/94.36 to 79.04/94.45 when the full labeled set is used.


1 Introduction

Deep learning typically requires a lot of labeled data to succeed. Labeling data, however, is costly for every new task of interest. Making use of unlabeled data to improve deep learning has therefore been an important research direction. In this direction, semi-supervised learning (Chapelle et al., 2009) is one of the most promising approaches, and recent works can be grouped into three categories: (1) graph-based label propagation via graph convolution (Kipf and Welling, 2016) and graph embeddings (Weston et al., 2012), (2) modeling the prediction target as latent variables (Kingma et al., 2014), and (3) consistency / smoothness enforcing (Bachman et al., 2014; Laine and Aila, 2016; Miyato et al., 2018; Clark et al., 2018; Tarvainen and Valpola, 2017). Among them, methods in the last category, i.e., those based on smoothness enforcing, have been shown to work well on many tasks.

In a nutshell, the smoothness enforcing methods simply regularize the model’s prediction to be less sensitive to small perturbations applied to examples (labeled or unlabeled). Given an observed example, smoothness enforcing methods first create a perturbed version of it (e.g., typically by adding artificial noise such as Gaussian noise or dropout), and enforce the model predictions on the two examples to be similar. Intuitively, a good model should be invariant to any small perturbations that do not change the nature of an example. Under this generic framework, methods in this category differ mostly in the perturbation function, i.e., how the perturbed example is created.

In our paper, we propose to use state-of-the-art data augmentation methods found in supervised learning as the perturbation function in the smoothness enforcing framework, extending the prior works of Sajjadi et al. (2016) and Laine and Aila (2016). We show that better augmentation methods lead to greater improvements and that they can be used in many other domains. Our method, named Unsupervised Data Augmentation or UDA, minimizes the KL divergence between model predictions on the original example and on an example generated by data augmentation. Although data augmentation has been studied extensively and has led to significant improvements, it has mostly been applied in supervised learning settings (Simard et al., 1998; Krizhevsky et al., 2012; Cubuk et al., 2018; Yu et al., 2018). UDA, on the other hand, can directly apply state-of-the-art data augmentation methods to unlabeled data, which is available in much larger quantities, and therefore has the potential to work much better than standard supervised data augmentation.

We evaluate UDA on a wide variety of language and vision tasks. On six text classification tasks, our method achieves significant improvements over state-of-the-art models. Notably, on IMDb, UDA with 20 labeled examples outperforms the state-of-the-art model trained on 1250x more labeled data. We also evaluate UDA on the standard semi-supervised learning benchmarks CIFAR-10 and SVHN. UDA outperforms all existing semi-supervised learning methods by significant margins. On CIFAR-10 with 4,000 labeled examples, UDA achieves an error rate of 5.27, nearly matching the performance of the fully supervised model that uses 50,000 labeled examples. Furthermore, with a more advanced architecture, PyramidNet+ShakeDrop, UDA achieves a new state-of-the-art error rate of 2.7. On SVHN, UDA achieves an error rate of 2.85 with only 250 labeled examples, nearly matching the performance of the fully supervised model trained with 73,257 labeled examples. Finally, we also find UDA to be beneficial when there is a large amount of supervised data. Specifically, on ImageNet, UDA improves the top-1/top-5 accuracy from 55.09/77.26 to 68.66/88.52 with 10% of the labeled set, and from 78.28/94.36 to 79.04/94.45 when we use the full labeled set and an external dataset with 1.3M unlabeled examples.

Our contributions, which will be presented in the rest of the paper, are as follows:


  • First, we propose a training technique called Training Signal Annealing (TSA) that effectively prevents overfitting when much more unlabeled data is available than labeled data.

  • Second, we show that targeted data augmentation methods (such as AutoAugment (Cubuk et al., 2018)) give significant improvements over untargeted augmentations.

  • Third, we combine a set of data augmentations for NLP and show that our method works well and complements representation learning methods such as BERT (Devlin et al., 2018).

  • Fourth, we show significant leaps in performance compared to previous methods across a range of vision and language tasks.

  • Finally, we develop a method so that UDA can be applied even when the class distributions of labeled and unlabeled data are mismatched.

2 Unsupervised Data Augmentation (UDA)

In this section, we first formulate our task and then present the proposed method, UDA. Throughout this paper, we focus on classification problems and use $x$ to denote the input and $y^*$ (or simply $y$) to denote its ground-truth prediction target. We are interested in learning a model $p_\theta(y \mid x)$ to predict $y^*$ based on the input $x$, where $\theta$ denotes the model parameters. Finally, we use $L$ and $U$ to denote the sets of labeled and unlabeled examples respectively.

2.1 Background: Supervised Data Augmentation

Data augmentation aims at creating novel and realistic-looking training data by applying a transformation to an example, without changing its label. Formally, let $q(\hat{x} \mid x)$ be the augmentation transformation from which one can draw augmented examples $\hat{x}$ based on an original example $x$. For an augmentation transformation to be valid, it is required that any example $\hat{x}$ drawn from the distribution shares the same ground-truth label as $x$, i.e., $y(\hat{x}) = y(x)$. Given a valid augmentation transformation, we can simply minimize the negative log-likelihood on augmented examples.
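Using the notation above, the supervised objective with data augmentation described in this paragraph can be written as

$$\min_\theta \; \mathbb{E}_{(x, y^*) \in L} \, \mathbb{E}_{\hat{x} \sim q(\hat{x} \mid x)} \big[ -\log p_\theta(y^* \mid \hat{x}) \big],$$

i.e., the standard negative log-likelihood, evaluated on augmented examples drawn from $q(\hat{x} \mid x)$ rather than on the original inputs.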

Supervised data augmentation can be equivalently seen as constructing an augmented labeled set from the original supervised set and then training the model on the augmented set. Therefore, the augmented set needs to provide additional inductive biases to be more effective. How to design the augmentation transformation has, thus, become critical.

In recent years, there have been significant advancements in the design of data augmentations for NLP (Yu et al., 2018), vision (Krizhevsky et al., 2012; Cubuk et al., 2018) and speech (Hannun et al., 2014; Park et al., 2019) in supervised settings. Despite the promising results, data augmentation is mostly regarded as the "cherry on the cake" which provides a steady but limited performance boost, because these augmentations have so far only been applied to a set of labeled examples, which is usually of a small size. Motivated by this limitation, we develop UDA to apply effective data augmentations to unlabeled data, which is often available in much larger quantities.

2.2 Unsupervised Data Augmentation

As discussed in the introduction, a recent line of work in semi-supervised learning has been utilizing unlabeled examples to enforce smoothness of the model. The general form of these works can be summarized as follows:


  • Given an input $x$, compute the output distribution $p_\theta(y \mid x)$ and the distribution $p_\theta(y \mid x, \epsilon)$ on a perturbed version of the input, obtained by injecting a small noise $\epsilon$. The noise can be applied to $x$ or to hidden states, or be used to change the computation process.

  • Minimize a divergence metric between the two predicted distributions, $\mathcal{D}\big( p_\theta(y \mid x) \,\|\, p_\theta(y \mid x, \epsilon) \big)$.

This procedure enforces the model to be insensitive to the perturbation and hence smoother with respect to changes in the input (or hidden) space.

In this work, we present a simple twist to the existing smoothness / consistency enforcing works and extend prior works on using data augmentation as perturbations (Sajjadi et al., 2016; Laine and Aila, 2016). We propose to use state-of-the-art data augmentation targeted at different tasks as a particular form of perturbation and optimize the same smoothness or consistency enforcing objective on unlabeled examples. Specifically, following VAT (Miyato et al., 2018), we choose to minimize the KL divergence between the predicted distributions on an unlabeled example and an augmented unlabeled example:

$$\min_\theta \; \mathbb{E}_{x \in U} \, \mathbb{E}_{\hat{x} \sim q(\hat{x} \mid x)} \Big[ \mathrm{KL}\big( p_{\tilde{\theta}}(y \mid x) \,\big\|\, p_\theta(y \mid \hat{x}) \big) \Big] \qquad (1)$$

where $q(\hat{x} \mid x)$ is a data augmentation transformation and $\tilde{\theta}$ is a fixed copy of the current parameters $\theta$, indicating that the gradient is not propagated through $\tilde{\theta}$, as suggested by Miyato et al. (2018). The data augmentation transformation used here is the same as the augmentation used in supervised data augmentation, such as back translation for text and random cropping for images. Since it is costly to run back translation on the fly during training, we generate augmented examples offline. Multiple augmented examples are generated for each unlabeled example.

To use both labeled examples and unlabeled examples, we add the cross entropy loss on labeled examples and the consistency / smoothness objective defined in Equation 1 with a weighting factor $\lambda$ as our training objective, which is illustrated in Figure 1. Formally, the objective is defined as follows:

$$\min_\theta \; \mathbb{E}_{(x, y^*) \in L} \big[ -\log p_\theta(y^* \mid x) \big] \;+\; \lambda \, \mathbb{E}_{x \in U} \, \mathbb{E}_{\hat{x} \sim q(\hat{x} \mid x)} \Big[ \mathrm{KL}\big( p_{\tilde{\theta}}(y \mid x) \,\big\|\, p_\theta(y \mid \hat{x}) \big) \Big] \qquad (2)$$
Figure 1: Training objective for UDA, where M is a model that predicts a distribution of $y$ given $x$.

By minimizing the consistency loss, UDA allows label information to propagate from labeled examples to unlabeled ones. We set $\lambda$ to 1 for most of our experiments and use different batch sizes for the supervised objective and the unsupervised objective. We found that using a larger batch size for the unsupervised objective leads to better performance on some datasets.
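To make the two-term objective in Equations 1 and 2 concrete, the following is a minimal PyTorch-style sketch of the combined UDA loss, assuming a generic classifier `model` and pre-generated augmented unlabeled inputs; the function and tensor names are illustrative and not part of any released implementation.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_sup, y_sup, x_unsup, x_unsup_aug, lam=1.0):
    """Supervised cross entropy plus consistency (KL) between predictions
    on unlabeled examples and their augmented versions (Equations 1 and 2)."""
    # Supervised term: cross entropy on the labeled minibatch.
    sup_loss = F.cross_entropy(model(x_sup), y_sup)

    # Target distribution on the original unlabeled examples, computed with a
    # fixed copy of the parameters: no gradient flows through this branch.
    with torch.no_grad():
        target = F.softmax(model(x_unsup), dim=-1)

    # Predictions on the augmented unlabeled examples (receives gradients).
    log_pred_aug = F.log_softmax(model(x_unsup_aug), dim=-1)

    # KL(target || pred_aug), averaged over the unsupervised minibatch.
    consistency = F.kl_div(log_pred_aug, target, reduction="batchmean")

    return sup_loss + lam * consistency
```

The `torch.no_grad()` block plays the role of the fixed parameter copy $\tilde{\theta}$ in Equation 1; in practice the supervised and unsupervised minibatches are sampled with different batch sizes, as noted above.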

When compared to conventional perturbations such as Gaussian noise, dropout noise and simple augmentations such as affine transformations, we believe that data augmentations targeted at each task can serve as a more effective source of “noise”. Specifically, using targeted data augmentation as the perturbation function has several advantages:


  • Valid perturbations: Data augmentation methods that achieve great performance in supervised learning have the advantage of generating realistic augmented examples that share the same ground-truth labels with the original example. Hence, it is safe to encourage the smoothness or consistency between predictions on the original unlabeled example and the augmented unlabeled examples.

  • Diverse perturbations: Data augmentation can generate a diverse set of examples since it can make large modifications to the input example without changing its label, while the perturbations such as Gaussian or Bernoulli noise only make local changes. Encouraging smoothness on a diverse set of augmented examples can significantly improve the sample efficiency.

  • Targeted inductive biases: Different tasks require different inductive biases. As shown in AutoAugment (Cubuk et al., 2018), the data augmentation policy can be directly optimized to improve validation performance on each task. Such a performance-oriented augmentation policy can learn to supply the missing or most needed inductive biases in the original labeled set. We find that the augmentation policies found by AutoAugment work well in our semi-supervised learning setting, even though AutoAugment optimizes the model's performance in a supervised learning setting.

As we will show in the ablation study, diverse and valid augmentations that inject targeted inductive biases are key components that lead to significant performance improvements.

2.3 Augmentation Strategies for Different Tasks

As discussed in Section 2.2, data augmentation can be tailored to provide missing inductive biases specific to each task. In this section, we discuss three different augmentations used for different tasks and discuss the trade-off between diversity and validity for data augmentations. We leverage recent advancements on data augmentation and apply the following augmentation strategies:

AutoAugment for Image Classification.

For image classification, AutoAugment (Cubuk et al., 2018) uses reinforcement learning to search for an "optimal" combination of image augmentation operations directly based on validation performance, outperforming manually designed augmentation procedures by a clear margin. We use the augmentation policies found and open-sourced by AutoAugment (https://github.com/tensorflow/models/tree/master/research/autoaugment) for experiments on CIFAR-10, SVHN and ImageNet. We also use Cutout (DeVries and Taylor, 2017) for CIFAR-10 and SVHN since it can be composed with AutoAugment to achieve improved performance.

Back translation for Text Classification.

Back translation (Sennrich et al., 2015; Edunov et al., 2018) can generate diverse paraphrases while preserving the semantics of the original sentences and has been shown to lead to significant performance improvements for QANet in question answering (Yu et al., 2018). Hence, we employ a back translation system to paraphrase the training data for the sentiment classification datasets IMDb, Yelp-2, Yelp-5, Amazon-2 and Amazon-5. We find that the diversity of the paraphrases is more important than their quality or validity, so we employ random sampling with a tunable temperature instead of beam search for the generation. More specifically, we train English-to-French and French-to-English translation models using the WMT 14 corpus and perform back translation on each sentence rather than on the whole paragraph, since the parallel data in WMT 14 is for sentence-level translation while the input examples in the sentiment classification corpora are paragraphs. As shown in Figure 2, the paraphrases generated by back translation are very diverse while keeping similar semantic meanings.

Figure 2: Augmented examples using back translation and AutoAugment

TF-IDF based word replacing for Text Classification.

While back translation is good at maintaining the global semantics of the original sentence, there is no guarantee that it will keep certain words. On DBPedia, however, where the task is to predict the category of a Wikipedia page, some keywords are more informative than other words in determining the category. Therefore, we propose an augmentation method called TF-IDF based word replacing, which replaces uninformative words that usually have low TF-IDF scores while keeping keywords that have high TF-IDF scores. We refer readers to Appendix B for a detailed description.

2.4 Trade-off Between Diversity and Validity for Data Augmentation

Although state-of-the-art data augmentation methods can generate diverse and valid augmented examples, as discussed in Section 2.2, there is a trade-off between diversity and validity: diversity is achieved by changing a part of the original example, which naturally carries the risk of altering the ground-truth label. We find it beneficial to tune this trade-off for each data augmentation method.

For image classification, AutoAugment automatically finds the sweet spot between diversity and validity since it is optimized according to validation set performance in the supervised setting. For text classification, we tune the temperature of random sampling. On the one hand, with a temperature of 0, random sampling degenerates into greedy decoding and generates perfectly valid but identical paraphrases. On the other hand, with a temperature of 1, random sampling generates very diverse but barely readable paraphrases. We find that setting the Softmax temperature to an intermediate value between these two extremes leads to the best performance.
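To illustrate the temperature knob discussed above, here is a small NumPy sketch of temperature-controlled random sampling from a decoder's output distribution; the logits are placeholders and the helper is not tied to any particular translation model.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=np.random.default_rng()):
    """Sample a token id from logits softened/sharpened by a temperature.

    temperature -> 0 approaches greedy decoding (valid but identical outputs);
    temperature = 1 recovers plain random sampling (diverse but noisier outputs).
    """
    if temperature <= 0:
        return int(np.argmax(logits))          # greedy decoding in the limit
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                     # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Example: the same logits sampled at different temperatures.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_with_temperature(logits, t) for t in (0.0, 0.8, 1.0)])
```

Lower temperatures concentrate the distribution (more valid but less diverse paraphrases), while higher temperatures flatten it (more diverse but less valid).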

3 Additional Training Techniques

In this section, we introduce additional techniques for applying UDA in different scenarios. First, to allow the model to be trained on more unlabeled data without overfitting, we introduce a technique called Training Signal Annealing in Section 3.1. Then, to make the training signal stronger when the predictions are over-flat, we present three intuitive methods to sharpen the predictions in Section 3.2. Lastly, to apply UDA on out-of-domain unlabeled data, we introduce a simple method called Domain-relevance Data Filtering in Section 3.3.

3.1 Training Signal Annealing

Since it is much easier to obtain unlabeled data than labeled data, in practice, we often encounter a situation where there is a large gap between the amount of unlabeled data and that of labeled data. To enable UDA to take advantage of as much unlabeled data as possible, we usually need a large enough model, but a large model can easily overfit the supervised data of a limited size. To tackle this difficulty, we introduce a new training technique called Training Signal Annealing (TSA).

The main intuition behind TSA is to gradually release the training signals of the labeled examples, without overfitting them, as the model is trained on more and more unlabeled examples. Specifically, for each training step $t$, we set a threshold $\eta_t$ with $\frac{1}{K} \le \eta_t \le 1$, where $K$ is the number of categories. When the probability of the correct category $p_\theta(y^* \mid x)$ of a labeled example is higher than the threshold $\eta_t$, we remove this example from the loss function and only train on the other labeled examples in the minibatch. Formally, given a minibatch of labeled examples $B$, we replace the supervised objective with the following objective:

$$-\frac{1}{Z} \sum_{(x, y^*) \in B} \mathbb{I}\big( p_\theta(y^* \mid x) < \eta_t \big) \log p_\theta(y^* \mid x), \qquad Z = \sum_{(x, y^*) \in B} \mathbb{I}\big( p_\theta(y^* \mid x) < \eta_t \big),$$

where $\mathbb{I}(\cdot)$ is the indicator function and $Z$ is simply a re-normalization factor. Effectively, the threshold $\eta_t$ serves as a ceiling to prevent the model from over-training on examples that it is already confident about. When we gradually anneal $\eta_t$ from $\frac{1}{K}$ to $1$ during training, the model only slowly receives supervision from the labeled examples, largely alleviating the overfitting problem. Suppose $T$ is the total number of training steps and $t$ is the current training step. To account for different ratios of unlabeled data to labeled data, we consider three particular schedules of $\eta_t$, parameterized as $\eta_t = \alpha_t \cdot (1 - \frac{1}{K}) + \frac{1}{K}$, as shown in Figure 3.


  • log-schedule — The threshold is increased most rapidly at the beginning of training: $\alpha_t = 1 - \exp\!\big(-\tfrac{t}{T} \cdot 5\big)$;

  • linear-schedule — The threshold is increased linearly as training progresses: $\alpha_t = \tfrac{t}{T}$;

  • exp-schedule — The threshold is increased most rapidly at the end of training: $\alpha_t = \exp\!\big((\tfrac{t}{T} - 1) \cdot 5\big)$.

Intuitively, when the model is prone to overfit, e.g., when the problem is relatively easy or the number of labeled examples is very limited, the exp-schedule is the most suitable one as the supervised signal is mostly released at the end of training. Following a similar logic, when the model is less likely to overfit (e.g., when we have abundant labeled examples or when the model employs effective regularizations), the log-schedule can serve well.

Figure 3: Three schedules of TSA. We set $\eta_t = \alpha_t \cdot (1 - \frac{1}{K}) + \frac{1}{K}$, so that $\eta_t$ is increased from $\frac{1}{K}$ to $1$ as $\alpha_t$ goes from $0$ to $1$.
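Below is a minimal Python sketch of the three TSA schedules and the masked supervised loss described above, assuming the schedule shapes as reconstructed here (the scale factor of 5 in the log and exp schedules is an assumption); NumPy is used instead of any particular training framework.

```python
import numpy as np

def tsa_threshold(step, total_steps, num_classes, schedule="linear"):
    """Training Signal Annealing threshold eta_t, annealed from 1/K to 1."""
    frac = step / float(total_steps)
    if schedule == "log":        # released fastest at the beginning
        alpha = 1.0 - np.exp(-frac * 5.0)
    elif schedule == "linear":   # released at a constant rate
        alpha = frac
    elif schedule == "exp":      # released fastest at the end
        alpha = np.exp((frac - 1.0) * 5.0)
    else:
        raise ValueError(schedule)
    return alpha * (1.0 - 1.0 / num_classes) + 1.0 / num_classes

def tsa_supervised_loss(correct_probs, eta):
    """Masked negative log-likelihood: drop examples the model already
    predicts with probability above the threshold eta."""
    correct_probs = np.asarray(correct_probs)
    mask = correct_probs < eta
    if mask.sum() == 0:
        return 0.0                # every example is above the ceiling
    return float(-np.mean(np.log(correct_probs[mask])))
```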

3.2 Sharpening Predictions

We observe that the predicted distributions on unlabeled examples and augmented unlabeled examples tend to be over-flat across categories, in cases where the problem is hard and the number of labeled examples is very small. Consequently, the unsupervised training signal from the KL divergence is relatively weak and thus gets dominated by the supervised part. For example, on ImageNet, when we use 10% of the labeled set, the predicted distributions on unlabeled examples are much less sharp than the distributions on labeled examples. Therefore, we find it helpful to sharpen the predicted distribution produced on unlabeled examples and employ the following three intuitive techniques:


  • Confidence-based masking — We find it helpful to mask out examples that the current model is not confident about. Specifically, in each minibatch, the consistency loss term is computed only on examples whose highest predicted probability, i.e., $\max_y p_{\tilde{\theta}}(y \mid x)$, is greater than a threshold.

  • Entropy minimization — Entropy minimization (Grandvalet and Bengio, 2005) regularizes the predicted distribution on augmented examples to have a low entropy. To employ this technique, we add an entropy term to the overall objective.

  • Softmax temperature controlling — We tune the Softmax temperature $\tau$ when computing the predictions on original examples $x$. Specifically, $p_{\tilde{\theta}}(y \mid x)$ is computed as $\mathrm{Softmax}(z(x) / \tau)$, where $z(x)$ denotes the logits and $\tau$ is the temperature. A lower temperature corresponds to a sharper distribution.

In practice, we find combining confidence-based masking and Softmax temperature controlling to be most effective for settings with a very small amount of labeled data, while entropy minimization works well for cases with a relatively larger amount of labeled data.
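As a concrete illustration of the first and third techniques, here is a short PyTorch-style sketch of the consistency loss with confidence-based masking and temperature sharpening; the default threshold of 0.5 and temperature of 0.4 loosely follow the ImageNet setting described in Appendix D, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def sharpened_consistency_loss(logits_orig, logits_aug,
                               conf_threshold=0.5, temperature=0.4):
    """Consistency loss with confidence-based masking and temperature sharpening.

    logits_orig: logits on original unlabeled examples (treated as the target).
    logits_aug:  logits on augmented unlabeled examples (receives gradients).
    """
    with torch.no_grad():
        # Sharpen the target distribution with a low Softmax temperature.
        target = F.softmax(logits_orig / temperature, dim=-1)
        # Keep only examples the current model is confident about.
        mask = F.softmax(logits_orig, dim=-1).max(dim=-1).values >= conf_threshold

    log_pred = F.log_softmax(logits_aug, dim=-1)
    per_example_kl = F.kl_div(log_pred, target, reduction="none").sum(dim=-1)
    # Average the KL term over the confident examples only.
    return (per_example_kl * mask.float()).sum() / mask.float().sum().clamp(min=1.0)
```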

3.3 Domain-relevance Data Filtering

Ideally, we would like to make use of out-of-domain unlabeled data since it is usually much easier to collect, but the class distributions of out-of-domain data are usually mismatched with those of in-domain data. Due to this mismatch, using out-of-domain unlabeled data can hurt performance compared to not using it (Oliver et al., 2018). To obtain data relevant to the domain of the task at hand, we adopt a common technique for detecting out-of-domain data. We use our baseline model trained on the in-domain data to infer the labels of examples in a large out-of-domain dataset and pick out the examples (equally distributed among classes) that the model is most confident about. Specifically, for each category, we sort all out-of-domain examples by the predicted probability of belonging to that category and select the examples with the highest probabilities.
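A minimal sketch of this filtering step, assuming the in-domain baseline model's predicted class probabilities are already computed for every out-of-domain example; the array names and the per-class budget are illustrative.

```python
import numpy as np

def domain_relevance_filter(probs, per_class_budget):
    """Select out-of-domain examples the in-domain model is most confident about,
    equally distributed among classes.

    probs: array of shape (num_examples, num_classes) with predicted probabilities.
    Returns the sorted indices of the selected examples.
    """
    probs = np.asarray(probs)
    selected = []
    for c in range(probs.shape[1]):
        # Sort all out-of-domain examples by their probability of class c.
        order = np.argsort(-probs[:, c])
        selected.extend(order[:per_class_budget].tolist())
    return sorted(set(selected))
```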

4 Experiments

We apply UDA to a variety of language and vision tasks. Specifically, we show experiments on six text classification tasks in Section 4.1. Then, in Section 4.2, we compare UDA with other semi-supervised learning methods on standard vision benchmarks, CIFAR-10 and SVHN. Lastly, we evaluate UDA on ImageNet in Section 4.3 and provide ablation studies for TSA and augmentation methods in Section 4.4. We only present the information necessary to compare the empirical results here and refer readers to Appendix D and the code for implementation details.

4.1 Text Classification Experiments

Datasets.

We conduct experiments on six language datasets: IMDb, Yelp-2, Yelp-5, Amazon-2, Amazon-5 and DBPedia (Maas et al., 2011; Zhang et al., 2015), where DBPedia contains Wikipedia pages for category classification and all other datasets concern sentiment classification on different domains. In our semi-supervised setting, we set the number of supervised examples to 20 for the binary sentiment classification tasks IMDb, Yelp-2 and Amazon-2. For the five-way classification datasets Yelp-5 and Amazon-5, we use 2,500 examples (i.e., 500 examples per class). Finally, although DBPedia has 14 categories, the problem is relatively simple, so we set the number of training examples per class to 10. For unlabeled data, we use the whole training set for DBPedia and the concatenation of the training set and the unlabeled set for IMDb. We use large datasets of Yelp reviews and Amazon reviews (McAuley et al., 2015) as the unlabeled data for Yelp-2, Yelp-5, Amazon-2 and Amazon-5 (https://www.kaggle.com/yelp-dataset/yelp-dataset, http://jmcauley.ucsd.edu/data/amazon/).

Experiment settings.

We adopt the Transformer model (Vaswani et al., 2017) used in BERT (Devlin et al., 2018) as our baseline model due to its great performance on many tasks. We then consider four initialization schemes: we initialize our models with (a) a randomly initialized Transformer, (b) BERT_base, (c) BERT_large, or (d) BERT_finetune, i.e., BERT_large fine-tuned on in-domain unlabeled data. The last fine-tuning strategy is motivated by ELMo (Peters et al., 2018) and ULMFiT (Howard and Ruder, 2018), which show that fine-tuning language models on domain-specific data can lead to performance improvements. We do not pursue further experiments with BERT_finetune on DBPedia since fine-tuning BERT on DBPedia does not result in better performance than BERT_large in our preliminary experiments. This is probably because DBPedia is drawn from Wikipedia and BERT is already trained on the whole Wikipedia corpus. In all four settings, we compare the performance with and without UDA.

Main results.

The results for text classification are shown in Table 1 with three key observations.


  • Firstly, UDA consistently improves the performance regardless of the model initialization scheme. Most notably, even when BERT is further fine-tuned on in-domain data, UDA still significantly reduces the error rate from 6.50 to 4.20 on IMDb. This result shows that the benefits UDA provides are complementary to those of representation learning.

  • Secondly, with a significantly smaller amount of supervised examples, UDA can offer decent or even competitive performances compared to the SOTA model trained with full supervised data. In particular, on binary sentiment classification tasks, with only 20 supervised examples, UDA outperforms the previous SOTA trained on full supervised data on IMDb and gets very close on Yelp-2 and Amazon-2.

  • Finally, we also note that the five-category sentiment classification tasks turn out to be much more difficult than their binary counterparts, and there still exists a clear gap between UDA with 500 labeled examples per class and BERT trained on the entire supervised set. This suggests room for further improvement in the future.

Fully supervised baseline
Datasets IMDb Yelp-2 Yelp-5 Amazon-2 Amazon-5 DBpedia
(# Sup examples) (25k) (560k) (650k) (3.6m) (3m) (560k)
Pre-BERT SOTA 4.32 2.16 29.98 3.32 34.81 0.70
BERT_large 4.51 1.89 29.32 2.63 34.17 0.64
Semi-supervised setting
Initialization UDA IMDb Yelp-2 Yelp-5 Amazon-2 Amazon-5 DBpedia
(# Sup examples) (20) (20) (2.5k) (20) (2.5k) (140)
Random ✗ 43.27 40.25 50.80 45.39 55.70 41.14
Random ✓ 25.23 8.33 41.35 16.16 44.19 7.24
BERT_base ✗ 27.56 13.60 41.00 26.75 44.09 2.58
BERT_base ✓ 5.45 2.61 33.80 3.96 38.40 1.33
BERT_large ✗ 11.72 10.55 38.90 15.54 42.30 1.68
BERT_large ✓ 4.78 2.50 33.54 3.93 37.80 1.09
BERT_finetune ✗ 6.50 2.94 32.39 12.17 37.32 -
BERT_finetune ✓ 4.20 2.05 32.08 3.50 37.12 -
Table 1: Error rates on text classification datasets. In the fully supervised settings, the pre-BERT SOTAs include ULMFiT (Howard and Ruder, 2018) for Yelp-2 and Yelp-5, DPCNN (Johnson and Zhang, 2017) for Amazon-2 and Amazon-5, Mixed VAT (Sachan et al., 2018) for IMDb and DBPedia.

Results with different labeled set sizes.

We also evaluate the performance of UDA with different numbers of supervised examples. As shown in Figure 4, UDA leads to consistent improvements across all labeled set sizes. In the large-data regime, with the full training set of IMDb, UDA also provides robust gains. On Yelp-2, UDA trained with a small fraction of the labeled data outperforms the previous SOTA model trained on the full set of 560K labeled examples.

Figure 4: Accuracy on IMDb (a) and Yelp-2 (b) with different numbers of labeled examples.

4.2 Comparison with semi-supervised learning methods

Experiment settings.

Following the standard semi-supervised learning setting, we compare UDA with prior works on CIFAR-10 (Krizhevsky and Hinton, 2009) and SVHN (Netzer et al., 2011). We follow the settings in (Oliver et al., 2018) and employ Wide-ResNet-28-2 (Zagoruyko and Komodakis, 2016; He et al., 2016b) as our baseline model. We compare UDA with Pseudo-Label (Lee, 2013), an algorithm based on self-training; Virtual adversarial training (VAT) (Miyato et al., 2018), an algorithm that generates adversarial perturbations on the input; the Π-Model (Laine and Aila, 2016), which combines simple input augmentation with hidden state perturbations; Mean Teacher (Tarvainen and Valpola, 2017), which enforces smoothness on model parameters; and MixMatch (Berthelot et al., 2019), a concurrent work that unifies several prior works on semi-supervised learning.

Comparisons with existing semi-supervised learning methods.

In Figure 5, we compare UDA with existing works with varied sizes of labeled examples. UDA outperforms all existing methods by a clear margin, including MixMatch (Berthelot et al., 2019), a concurrent work on semi-supervised learning. For example, with 250 labeled examples, UDA achieves an error rate of 8.41 on CIFAR-10 and 2.85 on SVHN, outperforming MixMatch on both benchmarks. More interestingly, UDA matches the performance of models trained on the full supervised data when AutoAugment is not employed. In particular, UDA achieves an error rate of 5.27 on CIFAR-10 with 4,000 labeled examples and an error rate of 2.85 on SVHN with 250 labeled examples, matching the performance of our fully supervised model without AutoAugment, which achieves an error rate of 5.36 on CIFAR-10 with 50,000 labeled examples and an error rate of 2.84 on SVHN with 73,257 labeled examples. (The fully supervised baseline performance of Wide-ResNet-28-2 in MixMatch is higher than ours, although the performance of our implementation matches that reported in prior works. We hypothesize that the difference is due to the fact that MixMatch employs an Exponential Moving Average of model parameters.) Note that the difference between UDA and VAT is essentially the perturbation process. While the perturbations produced by VAT often contain high-frequency artifacts that do not exist in real images, data augmentation mostly generates diverse and realistic images. The performance difference between UDA and VAT shows the superiority of data augmentation based perturbations.

In Appendix C, we also compare UDA with recently proposed methods, including ICT (Verma et al., 2019) and mixmixup (Hataya and Nakayama, 2019), which enforce interpolation smoothness similar to mixup (Zhang et al., 2017), and LGA + VAT (Jackson and Schulman, 2019), an algorithm based on gradient similarity. UDA outperforms all previous approaches and reduces the error rates of the state-of-the-art methods by more than 30%.

Figure 5: Comparison with semi-supervised learning methods on CIFAR-10 (a) and SVHN (b) with varied numbers of labeled examples. The performances of the Π-Model, Pseudo-Label, VAT and Mean Teacher are reported in (Berthelot et al., 2019).

Results with more advanced architectures.

We also test whether UDA can benefit from more advanced architectures by using Shake-Shake (26 2x96d) (Gastaldi, 2017) and PyramidNet+ShakeDrop (Yamada et al., 2018) instead of Wide-ResNet-28-2. As shown in Table 2, UDA achieves an error rate of 2.7 when we employ PyramidNet+ShakeDrop, matching the performance of the fully supervised model without AutoAugment and outperforming the best previously reported semi-supervised error rate, achieved by MixMatch.

Methods # Sup Wide-ResNet-28-2 Shake-Shake ShakeDrop
Supervised 50k 5.4 2.9 2.7
AutoAugment 50k 4.3 2.0 1.5
UDA 4k 5.3 3.6 2.7
Table 2: Error rates on CIFAR-10 with different models. # Sup denotes the number of supervised examples. ShakeDrop denotes PyramidNet+ShakeDrop. With only 4,000 labeled examples, UDA matches the performance of fully supervised models without AutoAugment for Wide-ResNet-28-2 and PyramidNet+ShakeDrop.

4.3 ImageNet experiments

In previous sections, all datasets we considered have a relatively small number of training examples and classes. In addition, we only used in-domain unlabeled data in previous experiments, where the class distribution of the unlabeled data always matches that of the labeled data. To further test whether UDA can still excel on larger and more challenging datasets, we conduct experiments on ImageNet (Deng et al., 2009). We also develop a method to apply UDA to out-of-domain unlabeled data, which leads to performance improvements when we use the whole ImageNet as the supervised data.

Experiment settings.

To provide an informative evaluation, we conduct experiments in two settings with different numbers of supervised examples: (a) we use 10% of the supervised data of ImageNet while using all other data as unlabeled data; (b) we consider the fully supervised scenario where we keep all images in ImageNet as supervised data and obtain extra unlabeled data from the JFT dataset (Hinton et al., 2015; Chollet, 2017). We use the domain-relevance data filtering method to select 1.3M images from the JFT dataset.

Results.

As shown in Table 3, for the 10% supervised data setting, when compared to the supervised baseline, UDA improves the top-1 accuracy from 55.09 to 68.66 and the top-5 accuracy from 77.26 to 88.52. When compared with VAT + EntMin, a prior work on semi-supervised learning, UDA improves the top-5 accuracy from 83.39 to 88.52. As for the full ImageNet setting shown in Table 4, when compared with AutoAugment, UDA improves the top-1 accuracy from 78.28 to 79.04 and the top-5 accuracy from 94.36 to 94.45, with only 1.3M additional unlabeled examples. We expect further improvements with more unlabeled data, which we leave as future work.

Methods top-1 acc top-5 acc
Supervised 55.09 77.26
Pseudo-Label (Lee, 2013) - 82.41
VAT (Miyato et al., 2018) - 82.78
VAT + EntMin (Miyato et al., 2018) - 83.39
UDA 68.66 88.52
Table 3: Accuracy on ImageNet with 10% of the labeled set. The results for Pseudo-Label, VAT and VAT + EntMin are reported in (Zhai et al., 2019).
Methods top-1 / top-5 accuracy
Supervised 77.28 / 93.73
AutoAugment 78.28 / 94.36
UDA 79.04 / 94.45
Table 4: Accuracy on the full ImageNet. We use the full ImageNet training set as labeled data and another 1.3M unlabeled images selected from the JFT dataset as the unlabeled data.

4.4 Ablation Studies

In this section, we provide an analysis of when and how to use TSA, and of the effects of different augmentation methods, for researchers and practitioners.

Ablations on Training Signal Annealing (TSA).

We study the effect of TSA on two tasks with different amounts of unlabeled data: (a) Yelp-5: on this text classification task, we have millions of unlabeled examples while only having 2.5k supervised examples. We do not initialize the network with BERT in this study, to rule out the effect of having a pre-trained representation. (b) CIFAR-10: we have 50k unlabeled examples while having only 4k labeled examples.

As shown in Table 5, on Yelp-5, where there is a lot more unlabeled data than supervised data, TSA reduces the error rate from 50.81 to 41.35 when compared to the baseline without TSA. More specifically, the best performance is achieved when we postpone releasing the supervised training signal to the end of training, i.e., the exp-schedule leads to the best performance. On the other hand, the linear-schedule is the sweet spot on CIFAR-10 in terms of the speed of releasing supervised training signals, where the amount of unlabeled data is not much larger than that of the supervised data.

Ablations on augmentation methods.

Targeted augmentation methods such as AutoAugment have been shown to lead to significant performance improvements in supervised learning. In this study, we would like to investigate whether targeted augmentations are effective when applied to unlabeled data and whether improvements of augmentations in supervised learning can lead to improvements in our semi-supervised learning setting.

Firstly, as shown in Table 6, if we apply the augmentation policy found on SVHN by AutoAugment to CIFAR-10 (denoted by Switched Augment), the error rate increases from 5.10 to 5.59, which demonstrates the effectiveness of targeted data augmentations. Further, if we remove AutoAugment and only use Cutout, the error rate increases to 6.42. Finally, the error rate increases to 16.17 if we only use simple cropping and flipping as the augmentation. On SVHN, the effects of different augmentations are similar. These results show the importance of applying augmentation methods targeted at each task to inject the most needed inductive biases.

We also observe that the effectiveness of augmentation methods in supervised learning settings transfers to our semi-supervised settings. Specifically, in the fully supervised setting, Cubuk et al. (2018) also show that AutoAugment improves upon Cutout and that Cutout is more effective than basic augmentations, which aligns well with our observations in the semi-supervised setting. In our preliminary experiments for sentiment classification, we have also found that back translation works better than simple word dropping or word replacing in both supervised and semi-supervised settings, although word dropping or replacing can also improve the purely supervised baseline.

TSA schedule Yelp-5 CIFAR-10
(no TSA) 50.81 5.67
log-schedule 49.06 5.41
linear-schedule 45.41 5.10
exp-schedule 41.35 7.25

Table 5: Ablation study for Training Signal Annealing (TSA) on Yelp-5 and CIFAR-10. The shown numbers are error rates.

Augmentation CIFAR-10 SVHN
Cropping & flipping 16.17 8.27
Cutout 6.42 3.09
Switched Augment 5.59 2.74
AutoAugment 5.10 2.22

Table 6: Ablation study for data augmentation methods. Switched Augment means applying the policy found by AutoAugment on SVHN to CIFAR-10 and vice versa.

5 Related Work

Due to space limits, we only discuss the most relevant works here and refer readers to Appendix A for the complete related work. Most related to our method is a line of work that enforces classifiers to be smooth with respect to perturbations applied to the input examples or hidden representations. As explained earlier, works in this family mostly differ in how the perturbation is defined: Pseudo-ensemble (Bachman et al., 2014) directly applies Gaussian noise; the Π-Model (Laine and Aila, 2016) combines simple input augmentation with hidden state noise; VAT (Miyato et al., 2018, 2016) defines the perturbation by approximating the direction of change in the input space that the model is most sensitive to; Cross-view training (Clark et al., 2018) masks out part of the input data; Sajjadi et al. (2016) combine dropout and random max-pooling with affine transformations applied to the data as the perturbations. Apart from enforcing smoothness on the input examples and the hidden representations, another line of research enforces smoothness in the model parameter space. Works in this category include Mean Teacher (Tarvainen and Valpola, 2017), fast-Stochastic Weight Averaging (Athiwaratkun et al., 2018) and Smooth Neighbors on Teacher Graphs (Luo et al., 2018).

6 Conclusion

In this paper, we show that data augmentation and semi-supervised learning are well connected: better data augmentation can lead to significantly better semi-supervised learning. Our method, UDA, employs highly targeted data augmentations to generate diverse and realistic perturbations and enforces the model to be smooth with respect to these perturbations. We also propose a technique called TSA that effectively prevents UDA from overfitting the supervised data when a lot more unlabeled data is available. For text, UDA combines well with representation learning, e.g., BERT, and is very effective in the low-data regime, where state-of-the-art performance is achieved on IMDb with only 20 examples. For vision, UDA reduces error rates by more than 30% in heavily benchmarked semi-supervised learning setups. Lastly, UDA can effectively leverage out-of-domain unlabeled data and achieves improved performance on ImageNet, where we have a large amount of supervised data.

Acknowledgements

We want to thank Hieu Pham, Adams Wei Yu and Zhilin Yang for their tireless help to the authors on different stages of this project and thank Colin Raffel for pointing out the connections between our work and previous works. We also would like to thank Olga Wichrowska, Ekin Dogus Cubuk, Jiateng Xie, Guokun Lai, Yulun Du, Trieu Trinh, Ran Zhao, Ola Spyra, Brandon Yang, Daiyi Peng, Andrew Dai, Samy Bengio, Jeff Dean and the Google Brain team for insightful discussions and support to the work.

References

Appendix A Complete Related Work

Due to the long history of semi-supervised learning (SSL), we refer readers to [Chapelle et al., 2009] for a general review. More recently, many efforts have been made to renovate classic ideas into deep neural instantiations. For example, graph-based label propagation [Zhu et al., 2003] has been extended to neural methods via graph embeddings [Weston et al., 2012, Yang et al., 2016] and later graph convolutions [Kipf and Welling, 2016]. Similarly, with the variational auto-encoding framework and the REINFORCE algorithm, classic graphical-model-based SSL methods with the target variable being latent can also take advantage of deep architectures [Kingma et al., 2014, Maaløe et al., 2016, Yang et al., 2017]. Besides these direct extensions, it was found that training neural classifiers to classify out-of-domain examples into an additional class [Salimans et al., 2016] works very well in practice. Later, Dai et al. [2017] showed that this can be seen as an instantiation of low-density separation.

Apart from enforcing smoothness on the input examples and the hidden representations, another line of research enforces smoothness of model parameters, which is complementary to our method. For example, Mean Teacher [Tarvainen and Valpola, 2017] maintains a teacher model whose parameters are an ensemble of a student model's parameters and enforces consistency between the predictions of the teacher model and the student model. Recently, Athiwaratkun et al. [2018] proposed fast-Stochastic Weight Averaging, which improves the Π-Model and Mean Teacher by encouraging the model to explore a diverse set of plausible parameters. Smooth Neighbors on Teacher Graphs [Luo et al., 2018] constructs a similarity graph between unlabeled examples and combines input-level smoothness with model-level smoothness. It is worth noting that input-level smoothness and model-level smoothness usually jointly contribute to the SOTA performance in supervised learning, and hence combining them might lead to further performance improvements in the semi-supervised setting.

Also related to our work is the field of data augmentation research. Besides the conventional approaches and the two data augmentation methods mentioned in Section 2.1, a recent approach, MixUp [Zhang et al., 2017], goes beyond augmenting a single data point and performs interpolation of data pairs to achieve augmentation. Recently, Hernández-García and König [2018] have shown that data augmentation can be regarded as a kind of explicit regularization method similar to Dropout. Back translation [Sennrich et al., 2015, Edunov et al., 2018] and dual learning [He et al., 2016a, Cheng et al., 2016] can be regarded as performing data augmentation on monolingual data and have been shown to improve the performance of machine translation. Hu et al. [2017] apply a consistency loss on augmented examples and achieve strong performance on clustering and unsupervised hash learning. UDA is also well connected to invariant representation learning [Liang et al., 2018, Salazar et al., 2018], where the consistency loss is not only applied to the output layer but is also used for feature matching.

Diverse paraphrases generated by back translation have been a key component of the improved performance in our text classification experiments. We use random sampling instead of beam search for decoding, similar to the work by Edunov et al. [2018]. There are also recent works on generating diverse translations [He et al., 2018, Shen et al., 2019, Kool et al., 2019] that might lead to further improvements when used as data augmentations.

Apart from semi-supervised learning, unsupervised representation learning offers another way to utilize unsupervised data. Collobert and Weston [2008] demonstrated that word embeddings learned by language modeling can improve the performance significantly on semantic role labeling. Later, the pre-training of word embeddings was simplified and substantially scaled in Word2Vec [Mikolov et al., 2013] and Glove [Pennington et al., 2014]. More recently, Dai and Le [2015], Peters et al. [2018], Radford et al. [2018], Howard and Ruder [2018], Devlin et al. [2018] have shown that pre-training using language modeling and denoising auto-encoding leads to significant improvements on many tasks in the language domain. There is also a growing interest in self-supervised learning for vision [Zhai et al., 2019, Hénaff et al., 2019, Trinh et al., 2019]. In Section 4.1, we show that the proposed method and unsupervised representation learning can complement each other and jointly yield the state-of-the-art results.

Appendix B Details for TF-IDF based word replacing for Text Classification

We describe the TF-IDF based word replacing data augmentation method in this section. Ideally, we would like the augmentation method to generate both diverse and valid examples. Hence, the augmentation is designed to retain keywords and replace uninformative words with other uninformative words.

Specifically, suppose $\mathrm{IDF}(w)$ is the IDF score for word $w$ computed over the whole corpus, and $\mathrm{TF}(w)$ is the TF score for word $w$ in a sentence $x$. We compute the TF-IDF score as $\mathrm{TFIDF}(w) = \mathrm{TF}(w)\,\mathrm{IDF}(w)$. Suppose the maximum TF-IDF score over the words $x_1, \dots, x_{|x|}$ in a sentence $x$ is $C = \max_i \mathrm{TFIDF}(x_i)$. The probability of having word $x_i$ replaced is set to $p\,\frac{C - \mathrm{TFIDF}(x_i)}{Z}$, where $p$ is a per-token replacement probability hyperparameter and $Z$ is a normalization term. We would like each word to have a probability of $p$ of being replaced in expectation. Hence, we set $Z$ to $\frac{1}{|x|}\sum_i \big(C - \mathrm{TFIDF}(x_i)\big)$, where $|x|$ is the length of sentence $x$. We clip the replacement probability to 1 when it is greater than 1.

When a word is replaced, we sample another word from the whole vocabulary for the replacement. Intuitively, the new words should not be keywords, to prevent changing the ground-truth label of the sentence. To measure whether a word is a keyword, we compute a score for each word on the whole corpus. Specifically, we compute the score of word $w$ as $S(w) = \mathrm{freq}(w)\,\mathrm{IDF}(w)$, where $\mathrm{freq}(w)$ is the frequency of word $w$ on the whole corpus. We set the probability of sampling word $w$ to $\frac{\max_{w'} S(w') - S(w)}{Z'}$, where $Z'$ is a normalization term.
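A compact Python sketch of the augmentation just described, assuming the IDF scores and corpus word frequencies are precomputed; the scoring formulas mirror the reconstruction above, and the helper names and the value of p are illustrative.

```python
import numpy as np

def tfidf_word_replace(tokens, idf, word_freq, p=0.3, rng=np.random.default_rng()):
    """Replace uninformative (low TF-IDF) words with other uninformative words.

    tokens: list of words in one sentence.
    idf: dict word -> IDF score computed on the whole corpus.
    word_freq: dict word -> frequency on the whole corpus.
    p: per-token replacement probability hyperparameter (illustrative value).
    """
    # TF-IDF of each token within the sentence.
    tf = {w: tokens.count(w) / len(tokens) for w in set(tokens)}
    tfidf = np.array([tf[w] * idf.get(w, 0.0) for w in tokens])
    c = tfidf.max()
    z = (c - tfidf).mean() + 1e-8            # normalizer so E[replace prob] ~= p
    replace_prob = np.minimum(p * (c - tfidf) / z, 1.0)

    # Sampling distribution over the vocabulary that avoids corpus-level keywords.
    vocab = list(word_freq)
    score = np.array([word_freq[w] * idf.get(w, 0.0) for w in vocab])
    sample_weight = (score.max() - score) + 1e-8
    sample_prob = sample_weight / sample_weight.sum()

    out = list(tokens)
    for i, w in enumerate(tokens):
        if rng.random() < replace_prob[i]:
            out[i] = vocab[rng.choice(len(vocab), p=sample_prob)]
    return out
```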

Appendix C Additional Results on CIFAR-10 and SVHN

C.1 CIFAR-10 with 4,000 examples and SVHN with 1,000 examples

We present comparisons with the results reported by Oliver et al. [2018] and three recent works [Verma et al., 2019, Hataya and Nakayama, 2019, Jackson and Schulman, 2019]. The results are shown in Table 7. When compared with the previous SOTA model ICT [Verma et al., 2019], UDA reduces the error rate from 7.66 to 5.27 on CIFAR-10 and from 3.53 to 2.46 on SVHN, marking relative error reductions of 31% and 30%, respectively.

Methods CIFAR-10 SVHN
(4k) (1k)
Supervised 20.26 ± 0.38 12.83 ± 0.47
AutoAugment [Cubuk et al., 2018] 14.1 8.2
Pseudo-Label [Lee, 2013] 17.78 ± 0.57 7.62 ± 0.29
Π-Model [Laine and Aila, 2016] 16.37 ± 0.63 7.19 ± 0.27
Mean Teacher [Tarvainen and Valpola, 2017] 15.87 ± 0.28 5.65 ± 0.47
VAT [Miyato et al., 2018] 13.86 ± 0.27 5.63 ± 0.20
VAT + EntMin [Miyato et al., 2018] 13.13 ± 0.39 5.35 ± 0.19
LGA + VAT [Jackson and Schulman, 2019] 12.06 ± 0.19 6.58 ± 0.36
mixmixup [Hataya and Nakayama, 2019] 10 -
ICT [Verma et al., 2019] 7.66 ± 0.17 3.53 ± 0.07
UDA 5.27 ± 0.11 2.46 ± 0.17
Table 7: Comparison with existing methods on CIFAR-10 and SVHN with 4,000 and 1,000 labeled examples respectively. The results for Supervised, Pseudo-Label, Π-Model, Mean Teacher, VAT and VAT + EntMin are reported by [Oliver et al., 2018]. All compared methods use a common architecture, WRN-28-2 with 1.46M parameters, except AutoAugment. AutoAugment also does not use unlabeled data.

C.2 Results with different labeled set sizes

CIFAR-10

In Table 8, we show results for the compared methods of Figure 5(a). Pure supervised learning using 50,000 examples achieves an error rate of 5.36 without AutoAugment and 4.26 with AutoAugment. The performances of the baseline models are reproduced from MixMatch [Berthelot et al., 2019].

Methods / # Sup 250 500 1,000 2,000 4,000
Π-Model
Pseudo-Label
VAT
Mean Teacher
MixMatch
UDA 8.41 ± 0.46 6.91 ± 0.23 6.39 ± 0.32 5.84 ± 0.17 5.27 ± 0.11
Table 8: Error rate (%) for CIFAR-10.

SVHN

In Table 9, we show results for the compared methods of Figure 5(b). Pure supervised learning using 73,257 examples achieves an error rate of 2.84 without AutoAugment and 2.13 with AutoAugment.

Methods / # Sup 250 500 1,000 2,000 4,000
Π-Model
Pseudo-Label
VAT
Mean Teacher
MixMatch
UDA 2.85 ± 0.15 2.59 ± 0.15 2.46 ± 0.17 2.32 ± 0.08 2.32 ± 0.05
Table 9: Error rate (%) for SVHN.

Appendix D Experiment Details

In this section, we provide experiment details for the considered experiment settings.

D.1 Text Classification

Preprocessing. For all text classification datasets, we truncate the input to 512 subwords since BERT is pretrained with a maximum sequence length of 512, which leads to better performance than using a sequence length of 256 or 128. Further, when the length of an example is greater than 512, we keep the last 512 subwords instead of the first 512 subwords, as keeping the latter part of the input leads to better performance on IMDb.

Fine-tuning BERT on in-domain unsupervised data. We fine-tune the BERT model on in-domain unsupervised data using the code released by BERT. We try learning rates of 2e-5, 5e-5 and 1e-4, batch sizes of 32, 64 and 128, and numbers of training steps of 30k, 100k and 300k. We pick the fine-tuned models by the BERT loss on a held-out set instead of the performance on a downstream task.

Randomly initialized Transformer. For the experiments with a randomly initialized Transformer, we adopt the hyperparameters of BERT base, except that we only use 6 hidden layers and 8 attention heads. We also increase the dropout rate on the attention and the hidden states to 0.2. When we train UDA with randomly initialized architectures, we train for 500k or 1M steps on Amazon-5 and Yelp-5, where we have abundant unlabeled data.

BERT hyperparameters. Following the common BERT fine-tuning procedure, we keep a dropout rate of 0.1, and try learning rates of 1e-5, 2e-5 and 5e-5 and batch sizes of 32 and 128. We also tune the number of training steps, ranging from 30 to 100k, for various data sizes.

UDA hyperparameters. We set the weight on the unsupervised objective to 1 in all of our experiments. We use a batch size of 32 for the supervised objective since 32 is the smallest batch size on v3-32 Cloud TPU Pod. We use a batch size of 224 for the unsupervised objective when the Transformer is initialized with BERT so that the model can be trained on more unlabeled data. We find that generating one augmented example for each unlabeled example is enough for BERT.

All experiments in this part are performed on a v3-32 Cloud TPU Pod.

D.2 CIFAR-10 and SVHN

For hyperparameter tuning, we follow Oliver et al. [2018] and only tune the learning rate and the hyperparameters of our unsupervised objective. Other hyperparameters follow those of the released AutoAugment code. Since there are many more unlabeled examples than labeled examples, we use a larger batch size for the unsupervised objective. For example, in our CIFAR-10 experiments on TPUs, we use a batch size of 32 for the supervised loss and a batch size of 960 for the unsupervised loss. We train the model for 100k steps and set the weight on the unsupervised objective to 1. On GPUs, we find it works well to use batch sizes of 64 and 320 for the supervised and unsupervised losses respectively and to train for 400k steps. We generate 100 augmented examples for each unlabeled example. For the benchmark with 4,000 examples on CIFAR-10 and 1,000 examples on SVHN, we use the same labeled examples on which AutoAugment finds its optimal policy, since AutoAugment finds the optimal policies using 4,000 supervised examples on CIFAR-10 and 1,000 supervised examples on SVHN. We report the average performance and the standard deviation over ten runs.

For the experiments with Shake-Shake, we train UDA for 300k steps and use a batch size of 128 for the supervised objective and a batch size of 512 for the unsupervised objective. For the experiments with PyramidNet+ShakeDrop, we train UDA for 700k steps and use a batch size of 64 for the supervised objective and a batch size of 128 for the unsupervised objective. For both models, we use a learning rate of 0.03 and a cosine learning rate decay with one annealing cycle, following AutoAugment.

All experiments in this part are performed on a v3-32 Cloud TPU Pod.

D.3 ImageNet

Unless otherwise stated, we follow the standard hyperparameters used in an open-source implementation of ResNet (https://github.com/tensorflow/tpu/tree/master/models/official/resnet). For the 10% labeled set setting, we use a batch size of 512 for the supervised objective and a batch size of 15,360 for the unsupervised objective. We use a base learning rate of 0.3 that is decayed by a factor of 10 four times, and set the weight on the unsupervised objective to 20. We set the threshold for confidence-based masking to 0.5 and the Softmax temperature to 0.4. The model is trained for 40k steps. Experiments in this part are performed on a v3-64 Cloud TPU Pod.

For experiments on the full ImageNet, we use a batch size of 8,192 for the supervised objective and a batch size of 16,384 for the unsupervised objective. The weight on the unsupervised objective is set to 1. We use entropy minimization to sharpen the predictions. We use a base learning rate of 1.6 and decay it by a factor of 10 four times. Experiments in this part are performed on a v3-128 Cloud TPU Pod.