Temporal Ensembling for Semi-Supervised Learning

by Samuli Laine, et al.

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rates from the previous bests of 18.44% (SVHN) and 18.63% (CIFAR-10), and reducing them further by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.





1 Introduction

It has long been known that an ensemble of multiple neural networks generally yields better predictions than a single network in the ensemble. This effect has also been indirectly exploited when training a single network through dropout (srivastava2014), dropconnect (dropconnect), or stochastic depth (stochdepth) regularization methods, and in swapout networks (swapout), where training always focuses on a particular subset of the network, and thus the complete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea by forming ensemble predictions during training, using the outputs of a single network on different training epochs and under different regularization and input augmentation conditions. Our training still operates on a single network, but the predictions made on different epochs correspond to an ensemble prediction of a large number of individual sub-networks because of dropout regularization.

This ensemble prediction can be exploited for semi-supervised learning where only a small portion of training data is labeled. If we compare the ensemble prediction to the current output of the network being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of the unlabeled inputs. Therefore the labels inferred this way can be used as training targets for the unlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmentation. Indeed, without either, there would be much less reason to place confidence in whatever labels are inferred for the unlabeled training data.

We describe two ways to implement self-ensembling: the Π-model and temporal ensembling. Both approaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin. We furthermore observe that self-ensembling improves the classification accuracy in fully labeled cases as well, and provides tolerance against incorrect labels.

The recently introduced transform/stability loss of sajjadi16 is based on the same principle as our work, and the Π-model can be seen as a special case of it. The Π-model can also be seen as a simplification of the Γ-model of the ladder network by ladder, a previously presented network architecture for semi-supervised learning. Our temporal ensembling method has connections to the bootstrapping method of reed14 targeted for training with noisy labels.

2 Self-ensembling during training

We present two implementations of self-ensembling during training. The first one, the Π-model, encourages consistent network output between two realizations of the same input stimulus, under two different dropout conditions. The second method, temporal ensembling, simplifies and extends this by taking into account the network predictions over multiple previous training epochs.

We shall describe our methods in the context of traditional image classification networks. Let the training data consist of a total of N inputs, out of which M are labeled. The input stimuli, available for all training data, are denoted x_i, where i ∈ {1…N}. Let set L contain the indices of the labeled inputs, |L| = M. For every i ∈ L, we have a known correct label y_i ∈ {1…C}, where C is the number of different classes.

2.1 Π-model

Figure 1: Structure of the training pass in our methods. Top: Π-model. Bottom: temporal ensembling. Labels y_i are available only for the labeled inputs, and the associated cross-entropy loss component is evaluated only for those.
Require:  x_i = training stimuli
Require:  L = set of training input indices with known labels
Require:  y_i = labels for labeled inputs i ∈ L
Require:  w(t) = unsupervised weight ramp-up function
Require:  f_θ(x) = stochastic neural network with trainable parameters θ
Require:  g(x) = stochastic input augmentation function
   for t in [1, num_epochs] do
      for each minibatch B do
         z_i∈B ← f_θ(g(x_i∈B))                              ▷ evaluate network outputs for augmented inputs
         z̃_i∈B ← f_θ(g(x_i∈B))                              ▷ again, with different dropout and augmentation
         loss ← −(1/|B|) Σ_i∈(B∩L) log z_i[y_i]             ▷ supervised loss component
                + w(t) (1/(C|B|)) Σ_i∈B ‖z_i − z̃_i‖²        ▷ unsupervised loss component
         update θ using, e.g., Adam                          ▷ update network parameters
      end for
   end for
Algorithm 1  Π-model pseudocode.
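
Assuming the network outputs z_i are softmax probability vectors, the per-minibatch loss of Algorithm 1 can be sketched in NumPy as follows (the function names, shapes, and values are ours, for illustration only, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pi_model_loss(z, z_tilde, labels, labeled_mask, w_t):
    """Combined Pi-model loss for one minibatch.

    z, z_tilde   : softmax outputs of two stochastic forward passes, shape (B, C)
    labels       : integer class labels, shape (B,) (ignored where unlabeled)
    labeled_mask : boolean mask marking the labeled examples, shape (B,)
    w_t          : unsupervised weight at the current epoch
    """
    B, C = z.shape
    # Supervised term: cross-entropy over the labeled subset only,
    # normalized by minibatch size as in Algorithm 1.
    idx = np.where(labeled_mask)[0]
    ce = -np.sum(np.log(z[idx, labels[idx]] + 1e-12)) / B
    # Unsupervised term: squared difference between the two prediction
    # vectors, over all examples, normalized by the number of classes.
    mse = np.sum((z - z_tilde) ** 2) / (C * B)
    return ce + w_t * mse

# Two stochastic evaluations of the same 4-example minibatch (3 classes).
z       = softmax(rng.normal(size=(4, 3)))
z_tilde = softmax(rng.normal(size=(4, 3)))
labels  = np.array([0, 2, 1, 0])
mask    = np.array([True, True, False, False])  # only first two are labeled
loss = pi_model_loss(z, z_tilde, labels, mask, w_t=0.5)
```

When the two passes agree exactly (z = z̃), only the supervised term remains, which matches the behavior of the algorithm at w(t) = 0.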

The structure of the Π-model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. During training, we evaluate the network for each training input x_i twice, resulting in prediction vectors z_i and z̃_i. Our loss function consists of two components. The first component is the standard cross-entropy loss, evaluated for labeled inputs only. The second component, evaluated for all inputs, penalizes different predictions for the same training input x_i by taking the mean square difference between the prediction vectors z_i and z̃_i. (Squared difference gave slightly but consistently better results than cross-entropy loss in our tests.) To combine the supervised and unsupervised loss terms, we scale the latter by time-dependent weighting function w(t). By comparing the entire output vectors z_i and z̃_i, we effectively ask the “dark knowledge” (Hinton15) between the two evaluations to be close, which is a much stronger requirement compared to asking that only the final classification remains the same, which is what happens in traditional training.

It is important to notice that, because of dropout regularization, the network output during training is a stochastic variable. Thus two evaluations of the same input under the same network weights yield different results. In addition, Gaussian noise and augmentations such as random translation are evaluated twice, resulting in additional variation. The combination of these effects explains the difference between the prediction vectors z_i and z̃_i. This difference can be seen as an error in classification, given that the original input was the same, and thus minimizing it is a reasonable goal.
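
A toy illustration of this stochasticity, using inverted dropout on a fixed input (purely illustrative; not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout_layer(x, p=0.5):
    """Inverted dropout: each unit is kept with probability 1 - p and
    scaled by 1/(1 - p) so the expected activation is unchanged."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.ones(64)          # the same input stimulus both times
out1 = dropout_layer(x)  # two evaluations under the same weights
out2 = dropout_layer(x)  # generally produce different outputs
```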

In our implementation, the unsupervised loss weighting function w(t) ramps up, starting from zero, along a Gaussian curve during the first 80 training epochs. See the appendix for further details about this and other training parameters. In the beginning the total loss and the learning gradients are thus dominated by the supervised loss component, i.e., the labeled data only. We have found it to be very important that the ramp-up of the unsupervised loss component is slow enough—otherwise, the network gets easily stuck in a degenerate solution where no meaningful classification of the data is obtained.
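
As a sketch, the ramp-up can be implemented like this; the Gaussian form exp(−5(1−T)²) and the 80-epoch length follow the paper's training details, while the function name is ours:

```python
import numpy as np

def rampup(epoch, rampup_length=80):
    """Gaussian ramp-up for the unsupervised weight: rises smoothly from
    ~exp(-5) at epoch 0 to 1.0 at epoch `rampup_length`, then stays at 1."""
    if epoch >= rampup_length:
        return 1.0
    t = np.clip(epoch / rampup_length, 0.0, 1.0)
    return float(np.exp(-5.0 * (1.0 - t) ** 2))
```

The final unsupervised weight would be this value times a maximum weight (a separate hyperparameter); choosing the ramp too short risks the degenerate solutions described above.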

Our approach is somewhat similar to the Γ-model of the ladder network by ladder, but conceptually simpler. In the Π-model, the comparison is done directly on network outputs, i.e., after softmax activation, and there is no auxiliary mapping between the two branches such as the learned denoising functions in the ladder network architecture. Furthermore, instead of having one “clean” and one “corrupted” branch as in the Γ-model, we apply equal augmentation and noise to the inputs for both branches.

As shown in Section 3, the Π-model combined with a good convolutional network architecture provides a significant improvement over prior art in classification accuracy.

2.2 Temporal ensembling

Analyzing how the Π-model works, we could equally well split the evaluation of the two branches into two separate phases: first classifying the training set once without updating the weights θ, and then training the network on the same inputs under different augmentations and dropout, using the just obtained predictions as targets for the unsupervised loss component. As the training targets obtained this way are based on a single evaluation of the network, they can be expected to be noisy. Temporal ensembling alleviates this by aggregating the predictions of multiple previous network evaluations into an ensemble prediction. It also lets us evaluate the network only once during training, gaining an approximate 2x speedup over the Π-model.

Require:  x_i = training stimuli
Require:  L = set of training input indices with known labels
Require:  y_i = labels for labeled inputs i ∈ L
Require:  α = ensembling momentum, 0 ≤ α < 1
Require:  w(t) = unsupervised weight ramp-up function
Require:  f_θ(x) = stochastic neural network with trainable parameters θ
Require:  g(x) = stochastic input augmentation function
   Z ← 0_[N×C]                                              ▷ initialize ensemble predictions
   z̃ ← 0_[N×C]                                              ▷ initialize target vectors
   for t in [1, num_epochs] do
      for each minibatch B do
         z_i∈B ← f_θ(g(x_i∈B))                              ▷ evaluate network outputs for augmented inputs
         loss ← −(1/|B|) Σ_i∈(B∩L) log z_i[y_i]             ▷ supervised loss component
                + w(t) (1/(C|B|)) Σ_i∈B ‖z_i − z̃_i‖²        ▷ unsupervised loss component
         update θ using, e.g., Adam                          ▷ update network parameters
      end for
      Z ← αZ + (1−α)z                                        ▷ accumulate ensemble predictions
      z̃ ← Z / (1−α^t)                                        ▷ construct target vectors by bias correction
   end for
Algorithm 2   Temporal ensembling pseudocode. Note that the updates of Z and z̃ could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.

The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocode in Algorithm 2. The main difference to the Π-model is that the network and augmentations are evaluated only once per input per epoch, and the target vectors z̃ for the unsupervised loss component are based on prior network evaluations instead of a second evaluation of the network.

After every training epoch, the network outputs z_i are accumulated into ensemble outputs Z_i by updating Z_i ← αZ_i + (1−α)z_i, where α is a momentum term that controls how far the ensemble reaches into training history. Because of dropout regularization and stochastic augmentation, Z thus contains a weighted average of the outputs of an ensemble of networks from previous training epochs, with recent epochs having larger weight than distant epochs. For generating the training targets z̃, we need to correct for the startup bias in Z by dividing by factor (1−α^t). A similar bias correction has been used in, e.g., Adam (adam) and mean-only batch normalization (Salimans16). On the first training epoch, Z and z̃ are zero as no data from previous epochs is available. For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.
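
A minimal sketch of this accumulate-and-correct update (the helper name is ours; the constant synthetic outputs are chosen to show that the bias correction removes the startup bias exactly):

```python
import numpy as np

def temporal_ensemble_targets(per_epoch_outputs, alpha=0.6):
    """After each epoch t, accumulate Z <- alpha*Z + (1 - alpha)*z and
    return the bias-corrected targets z_tilde = Z / (1 - alpha**t)."""
    Z = np.zeros_like(per_epoch_outputs[0])
    targets = []
    for t, z in enumerate(per_epoch_outputs, start=1):
        Z = alpha * Z + (1.0 - alpha) * z
        targets.append(Z / (1.0 - alpha ** t))
    return targets

# If the network output were constant across epochs, the corrected target
# would recover that constant exactly from the very first epoch, even
# though Z itself starts at zero.
outputs = [np.full((2, 3), 0.5) for _ in range(5)]
targets = temporal_ensemble_targets(outputs)
```

This is the same correction used in Adam: without the division by (1 − α^t), early targets would be biased toward the zero initialization of Z.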

The benefits of temporal ensembling compared to the Π-model are twofold. First, the training is faster because the network is evaluated only once per input on each epoch. Second, the training targets z̃ can be expected to be less noisy than with the Π-model. As shown in Section 3, we indeed obtain somewhat better results with temporal ensembling than with the Π-model in the same number of training epochs. The downside compared to the Π-model is the need to store auxiliary data across epochs, and the new hyperparameter α. While the matrix Z can be fairly large when the dataset contains a large number of items and categories, its elements are accessed relatively infrequently. Thus it can be stored, e.g., in a memory mapped file.
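
For example, with NumPy the ensemble matrix can be backed by a memory-mapped file so only the rows touched by a minibatch are paged into RAM (the sizes and momentum value below are hypothetical, for illustration):

```python
import os
import tempfile

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dataset size: N items, C classes.
N, C = 100_000, 100
path = os.path.join(tempfile.mkdtemp(), "ensemble_Z.dat")

# Create the ensemble-prediction matrix on disk instead of in RAM
# (N * C * 4 bytes for float32).
Z = np.memmap(path, dtype=np.float32, mode="w+", shape=(N, C))
Z[:] = 0.0

# A per-epoch update touches each row once; here, one minibatch's worth.
alpha = 0.6
batch = slice(0, 256)
z_batch = rng.random((256, C), dtype=np.float32)
Z[batch] = alpha * Z[batch] + (1 - alpha) * z_batch
Z.flush()  # push dirty pages back to the file
```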

An intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z_i besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal16). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work.
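
As a sketch, tracking a running second raw moment alongside the mean yields a variance estimate via Var[z] = E[z²] − (E[z])²; the synthetic outputs and momentum value below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)

alpha = 0.9
# Running first and second raw moments of the (here: synthetic) outputs.
m1 = np.zeros(3)
m2 = np.zeros(3)
samples = rng.normal(loc=0.5, scale=0.1, size=(500, 3))
for t, z in enumerate(samples, start=1):
    m1 = alpha * m1 + (1 - alpha) * z
    m2 = alpha * m2 + (1 - alpha) * z ** 2
    corr = 1 - alpha ** t  # same startup-bias correction as for Z

# Bias-corrected per-component variance estimate: E[z^2] - E[z]^2.
var_est = m2 / corr - (m1 / corr) ** 2
```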

3 Results

Our network structure is given in the appendix, along with the test setup and all training parameters. We test the Π-model and temporal ensembling in two image classification tasks, CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.

Although it is rarely stated explicitly, we believe that our comparison methods do not use input augmentation, i.e., are limited to dropout and other forms of permutation-invariant noise. Therefore we report the error rates without augmentation, unless explicitly stated otherwise. Given that the ability of an algorithm to extract benefit from augmentation is also an important property, we report the classification accuracy using a standard set of augmentations as well. In purely supervised training the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and random translations, while SVHN is limited to random translations. By using these same augmentations we can compare against the best fully supervised results as well. After all, the fully supervised results should indicate the upper bound of obtainable accuracy.
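
A minimal sketch of such a standard augmentation, i.e., random translation plus optional horizontal flip (the helper and parameter names are ours, not from the paper; flipping would be disabled for SVHN, since digits are not mirror-symmetric):

```python
import numpy as np

rng = np.random.default_rng(3)

def augment(img, translate=2, hflip=True):
    """Randomly translate `img` by up to `translate` pixels in each axis
    (zero-padding the vacated region) and optionally flip horizontally."""
    h, w = img.shape[:2]
    dx, dy = rng.integers(-translate, translate + 1, size=2)
    out = np.zeros_like(img)
    # Copy the overlapping region shifted by (dx, dy).
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    out[dst_y, dst_x] = img[src_y, src_x]
    if hflip and rng.random() < 0.5:
        out = out[:, ::-1]
    return out

img = np.arange(16, dtype=np.float32).reshape(4, 4)
aug = augment(img)  # randomly shifted (and possibly flipped) copy
```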

3.1 CIFAR-10

Table 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels).