Selfie: Self-supervised Pretraining for Image Embedding

by   Trieu H. Trinh, et al.

We introduce a pretraining technique called Selfie, which stands for SELF-supervised Image Embedding. Selfie generalizes the concept of masked language modeling to continuous data, such as images. Given masked-out patches in an input image, our method learns to select the correct patch, among other "distractor" patches sampled from the same image, to fill in the masked location. This classification objective sidesteps the need for predicting exact pixel values of the target patches. The pretraining architecture includes a network of convolutional blocks to process patches followed by an attention pooling network to summarize the content of unmasked patches before predicting masked ones. During finetuning, we reuse the convolutional weights found by pretraining. We evaluate our method on three benchmarks (CIFAR-10, ImageNet 32×32, and ImageNet 224×224) with varying amounts of labeled data, from 5% to 100% of the training set, and find consistent improvements to ResNet-50 across all settings compared to the standard supervised training of the same network. Notably, on ImageNet 224×224 with only 60 examples per class (5%), our method improves the mean accuracy of ResNet-50 from 35.6% to 46.7%. Our pretraining method also improves ResNet-50 training stability, especially in the low data regime, by significantly lowering the standard deviation of test accuracies across datasets.




1 Introduction

A weakness of neural networks is that they often require a large amount of labeled data to perform well. Although self-supervised/unsupervised representation learning (Hinton et al., 2006; Bengio et al., 2007; Raina et al., 2007; Vincent et al., 2010) has been explored to address this weakness, most practical neural network systems today are trained with supervised learning (e.g., Hannun et al., 2014; He et al., 2016a; Wu et al., 2016). Making use of unlabeled data through unsupervised representation learning to improve the data-efficiency of neural networks remains an open challenge for the field.

Recently, language model pretraining has been suggested as a method for unsupervised representation learning in NLP (Dai and Le, 2015; Ramachandran et al., 2017; Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019). Most notably, Devlin et al. (2019) observed that bidirectional representations from input sentences are better than left-to-right or right-to-left representations alone. Based on this observation, they proposed masked language modeling, masking out words in a context to learn representations for text, a method known as BERT. This is crucially achieved by replacing the LSTM architecture with the feedforward Transformer architecture (Vaswani et al., 2017). The feedforward nature of the architecture makes BERT more readily applicable to images. Yet BERT still cannot be used for images directly, because images are continuous objects, unlike the discrete words in sentences. We hypothesize that bridging this last gap is key to translating the success of language model pretraining to the image domain.

In this paper, we propose a pretraining method called Selfie, which stands for SELF-supervised Image Embedding. Selfie generalizes BERT to continuous spaces, such as images. In Selfie, we propose to keep using a classification loss, because it is less sensitive to small changes in the image (such as the translation of an edge) than a regression loss, which reacts to small perturbations. Similar to BERT, we mask out a few patches in an image and try to reconstruct the original image. To enable the classification loss, we sample "distractor" patches from the same image and ask the model to classify the right patch to fill in a target masked location.

Experiments show that Selfie works well across many datasets, especially when the datasets have a small number of labeled examples. On CIFAR-10, ImageNet 32×32, and ImageNet 224×224, we observe consistent accuracy gains as we vary the amount of labeled data from 5% to 100% of the typical training sets. The gain tends to be biggest when the labeled set is small. For example, on ImageNet 224×224 with only 60 labeled examples per class, pretraining with our method improves the mean accuracy of ResNet-50 by 11.1%, going from 35.6% to 46.7%. Additional analysis on ImageNet 224×224 provides evidence that the benefit of self-supervised pretraining takes off significantly when there is at least an order of magnitude (10×) more unlabeled data than labeled data.

In addition to improving average accuracy, pretraining ResNet-50 on unlabeled data also stabilizes its training on the supervised task. We observe this by computing the standard deviation of the final test accuracy across 5 different runs for all experiments. On CIFAR-10 with 400 examples per class, the standard deviation of the final accuracy is reduced 3× compared to training with the original initialization method. Similarly, on ImageNet resized to 32×32, our pretraining process gives an 8× reduction in test accuracy variability when training on 5% of the full training set.

2 Method

Figure 1: An overview of Selfie. (Left) During pretraining, our method uses an encoder-decoder architecture: the encoder takes in a set of square patches from the input image, while the decoder takes in a different set. The encoder builds a single vector u that represents all of its input patches, using a patch processing network P followed by an attention pooling network A. The decoder then uses u to predict its own input patches from their positions. Instead of predicting the actual pixel content, the decoder classifies the correct patch against negative examples (distractors) with a cross-entropy loss. In our implementation, we use the first three blocks of ResNet-50 (equivalent to ResNet-36) for P and Transformer layers (Vaswani et al., 2017) for A. Square patches are processed independently by P to produce one feature vector per patch. (Right) During finetuning, ResNet-50 is applied to the full image. Its first three blocks are initialized from the pretrained P, and the network is finetuned end-to-end.

An overview of Selfie is shown in Figure 1. Similar to previous works in unsupervised/self-supervised representation learning, our method has two stages: (1) pretrain the model on unlabeled data and then (2) finetune on the target supervised task. For clarity, let us first focus on the finetuning stage. In this paper, our goal is to improve ResNet-50, so we pretrain the first three blocks of this architecture. (In our experiments, we found that using the first three convolution blocks gives similar results to the full network of four convolution blocks; during pretraining, therefore, only the first three blocks, i.e., ResNet-36, are used to save computation and memory.) Let us call this network P. The pretraining stage is therefore designed to train this network in an unsupervised fashion.

Now let us focus on the pretraining stage. In the pretraining stage, the patch processing network P is applied to small patches in an image to produce one feature vector per patch, for both the encoder and the decoder. In the encoder, the feature vectors are pooled together by an attention pooling network A to produce a single vector u. In the decoder, no pooling takes place; instead, the feature vectors are sent directly to the loss computation to form an unsupervised classification task. The representations from the encoder and decoder networks are jointly trained during pretraining to predict which patch is being masked out at a particular location, among other distracting patches.

In our implementation, to make sure the distractor patches are hard, we sample them from the same input image and also mask them out of the input image. Next, we describe in detail the interaction between the encoder and decoder networks during pretraining, as well as different design choices.

2.1 Pretraining Details

The main idea is to use one part of the input image to predict the rest of the image during this phase. To do so, we first sample different square patches from the input. These patches are then routed into the encoder or the decoder network depending on whether they are randomly chosen to be masked out. Take Figure 1 as an example: the unmasked patches are sent into the encoder, whereas the masked-out patches are sent into the decoder.

All patches are processed by the same patch processing network P. On the encoder side, the output vectors produced by P are routed into the attention pooling network A, which summarizes these representations into a single vector u. On the decoder side, P creates one output vector per masked patch. The decoder then queries the encoder by adding to u the location embedding of a patch selected at random among the masked positions, creating a query vector v. The vector v is used in a dot product with each of the decoder's output vectors to compute their similarity to the query. Having seen these dot products, the decoder has to decide which patch is the right one to fill in the chosen location. A cross-entropy loss is applied to this classification task, and the encoder and decoder are trained jointly with gradients back-propagated from this loss.
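The patch-selection step described above can be sketched numerically. The following is a minimal NumPy illustration of the classification loss as we understand it from the description, not the actual implementation; the function name and array shapes are our own.

```python
import numpy as np

def patch_selection_loss(u, loc_embs, h, targets):
    """Cross-entropy loss for selecting the right patch at each masked location.

    u        : (d,)   encoder summary vector of the unmasked patches
    loc_embs : (m, d) location embeddings of the m masked positions
    h        : (m, d) decoder output vectors, one per candidate patch
    targets  : (m,)   index of the correct candidate for each location
    """
    v = u[None, :] + loc_embs                      # one query vector per masked location
    logits = v @ h.T                               # (m, m) dot-product similarities
    logits -= logits.max(axis=1, keepdims=True)    # stabilize the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()
```

When a query is strongly aligned with its correct candidate, the loss approaches zero; mismatched targets yield a large loss, which is the gradient signal that trains both encoder and decoder.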

During this pretraining process, the encoder network learns to compress the information in the input image to a vector such that when seeded by a location of a missing patch, it can recover that patch accurately. To perform this task successfully, the network needs to understand the global content of the full image, as well as the local content of each individual patch and their relative relationship. This ability proves to be useful in the downstream task of recognizing and classifying objects.

Patch sampling method.

On small images of size 32×32, we use a patch size of 8×8, while on larger images of size 224×224, we use a patch size of 32×32. The patch size is intentionally chosen to divide the image evenly, so that the image can be cut into a grid as illustrated in Figure 1. To add more randomness to the positions of the image patches, we zero-pad 4 pixels on each side of the 32×32 images and then randomly crop the padded image back to its original size.
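As a concrete sketch of this sampling scheme (the function names and NumPy formulation are ours, not the paper's), cutting an evenly divisible image into a grid of square patches and applying the pad-then-crop augmentation can be written as:

```python
import numpy as np

def extract_patches(image, patch_size):
    """Cut an H x W x C image into non-overlapping square patches.
    Assumes patch_size divides both H and W evenly, as in the paper."""
    h, w, c = image.shape
    gh, gw = h // patch_size, w // patch_size
    grid = image.reshape(gh, patch_size, gw, patch_size, c)
    # reorder to (grid_row, grid_col, patch_h, patch_w, c), then flatten the grid
    return grid.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch_size, patch_size, c)

def pad_and_random_crop(image, pad=4, rng=np.random):
    """Zero-pad `pad` pixels on each side, then crop back to the original
    size at a random offset, adding randomness to patch positions."""
    h, w, _ = image.shape
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    top, left = rng.randint(0, 2 * pad + 1, size=2)
    return padded[top:top + h, left:left + w]
```

For a 32×32 image with 8×8 patches this yields the 4×4 grid of 16 patches used for the small-image datasets.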

Figure 2: From left to right: improvement in predictions during our pretraining process on ImageNet. The masked-out patches are highlighted with a white or red border. The model is trained to put the masked-out patches back into their original slots; a red border indicates a wrong prediction from the model. Here we display four different samples with the same masking positions fixed throughout the training process. At the beginning, the placements of the patches are mostly incorrect due to the random initialization of the model. As pretraining progresses from left to right, the model classifies more and more patches correctly, and the mistakes made in later stages usually confuse patches with similar content. For example, generic textures of the sky (row 3, step 2K), water (row 4, step 116K), or trees (row 2, step 10K) are generally interchangeable across locations.

Patch processing network.

In this work, we focus on improving ResNet-50 (He et al., 2016a) on various benchmarks by pretraining it on unlabeled data. For this reason, we use ResNet-50 as the patch processing network P. (Our implementation of ResNet-50 achieves 76.9 ± 0.2 top-1 accuracy on ImageNet, in line with results reported in the literature (He et al., 2016a; Zagoruyko and Komodakis, 2016; Huang et al., 2017).) As described before, only the first three blocks of ResNet-50 are used. Since the goal of P is to reduce any image patch to a single feature vector, we perform average pooling across the spatial dimensions of the output of ResNet-36.

Efficient implementation of mask prediction.

For more efficient use of computation, the decoder is implemented to predict multiple correct patches for multiple locations at the same time: besides finding the right patch for one masked location, the decoder also finds the right patches for the other masked locations. With three masked locations, as in the example above, we reuse three times as much computation from the encoder-decoder architecture. Our method is therefore analogous to solving a jigsaw puzzle in which a few patches are knocked out of the image and must be put back in their original locations. This procedure is demonstrated in Figure 2.

2.2 Attention Pooling

In this section, we describe in detail the attention pooling network introduced in Section 2.1 and the way positional embeddings are built for images in our work.

Transformer as pooling operation.

We make use of Transformer layers to perform pooling. Given a set of input vectors produced by applying the patch processing network P to different patches, we want to pool them into a single vector u that represents the entire image. There are multiple choices at this stage, including max pooling and average pooling. Here, we consider these choices special cases of the attention operation (where the softmax has a temperature approaching zero or infinity, respectively) and let the network learn to pool by itself. To do this, we learn a seed vector u0 with the same dimension as the patch vectors and feed it through the Transformer layers together with them. The output corresponding to the input u0 is the pooling result u; the remaining outputs are discarded.
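To illustrate why max and average pooling are limiting cases of attention, here is a single-layer, single-head sketch with an explicit scalar temperature (our own simplification; the real model stacks full Transformer layers and learns the pooling implicitly):

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attention_pool(patch_vecs, u0, temperature=1.0):
    """Pool n patch vectors into one vector: a seed vector u0 attends over them.
    As temperature -> infinity the weights become uniform (average pooling);
    as temperature -> 0 the weights concentrate on the highest-scoring patch
    (a form of max pooling)."""
    scores = patch_vecs @ u0 / temperature   # (n,) attention logits
    return softmax(scores) @ patch_vecs      # (d,) weighted sum of patch vectors
```

Letting the network learn the weighting (rather than fixing the temperature) is what the attention pooling network does.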

Attention block.

Each attention block follows the design in BERT (Devlin et al., 2019): a self-attention layer is followed by two fully connected layers that sequentially project the input vector to an intermediate size and back to the original hidden size. The only non-linearity, a GeLU, is applied at the intermediate layer. We apply dropout to the output, followed by a residual connection from the block's input and, finally, layer normalization.
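A minimal single-head NumPy sketch of such a block follows (our own simplification: dropout is omitted as in inference, the attention is single-head, and the weight names are hypothetical):

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def gelu(x):
    # tanh approximation of GeLU, as popularized by BERT
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

def layer_norm(x, eps=1e-6):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def attention_block(x, p):
    """x: (n, d) token vectors; p: dict of weight matrices.
    Self-attention, then two fully connected layers with a GeLU at the
    intermediate size; each sub-layer ends with residual + layer norm."""
    q, k, v = x @ p["wq"], x @ p["wk"], x @ p["wv"]
    att = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v
    x = layer_norm(x + att)               # residual + layer norm after attention
    ffn = gelu(x @ p["w1"]) @ p["w2"]     # project d -> intermediate -> d
    return layer_norm(x + ffn)            # residual + layer norm after the FFN
```

The real blocks additionally use multiple heads, dropout, and learned layer-norm gain and bias parameters.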

Positional embeddings.

For images of size 32×32, we learn a positional embedding vector for each of the 16 patches of size 8×8. Images of size 224×224, on the other hand, are divided into a 7×7 grid of patches of size 32×32. Since there are significantly more positions in this case, we decompose each positional embedding into two components: a row embedding and a column embedding. The resulting embedding is the sum of the two. For example, instead of learning 49 positional embeddings, we only need to learn 7 + 7 = 14 positional embeddings. This greatly reduces the number of parameters and helps regularize the model.
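The row + column decomposition can be sketched as follows (a hypothetical helper of our own; in the real model the row and column vectors are trainable parameters rather than random draws):

```python
import numpy as np

def factorized_positions(grid, dim, rng=np.random):
    """Build embeddings for a grid x grid layout from only 2 * grid vectors:
    position (r, c) receives row[r] + col[c]."""
    row = rng.randn(grid, dim)                 # stand-ins for learned row embeddings
    col = rng.randn(grid, dim)                 # stand-ins for learned column embeddings
    pos = row[:, None, :] + col[None, :, :]    # (grid, grid, dim) via broadcasting
    return pos.reshape(grid * grid, dim)
```

For the 7×7 grid this yields 49 position vectors from only 14 underlying embeddings.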

2.3 Finetuning Details

As mentioned above, in this phase the first three convolution blocks of ResNet-50 are initialized from the pretrained patch processing network. The last convolution block of ResNet-50 is initialized with the standard initialization method. ResNet-50 is then applied to the full image and finetuned end-to-end.

3 Experiments and Results

In the following sections, we investigate the performance of our proposed pretraining method, Selfie, on standard image datasets, namely CIFAR-10 and ImageNet. To simulate the scenario where much more unlabeled data is available than labeled data, we sample small fractions of these datasets to use as labeled data, while the whole dataset is used as unlabeled data for the pretraining task.

3.1 Datasets

We consider three different datasets: CIFAR-10, ImageNet resized to 32×32, and ImageNet at its original size (224×224). For each of these datasets, we simulate a scenario where an additional amount of unlabeled data is available besides the labeled data used for the original supervised task. For that purpose, we create four different subsets of the supervised training data with approximately 5%, 10%, 20%, and 100% of the total number of training examples. On CIFAR-10, we replace the 10% subset with one of 4,000 training examples (8%), as this setting is used in (Oliver et al., 2018; Cubuk et al., 2018). In all cases, the whole training set is used for pretraining (50K images for CIFAR-10 and 1.2M images for ImageNet).

3.2 Experimental setup

Model architecture.

We reuse all settings for the ResNet convolution blocks from ResNet-50v2, including hidden sizes and initialization (He et al., 2016b). Batch normalization is performed at the beginning of each residual block. For self-attention layers, we apply dropout on the attention weights and before each residual connection, with a drop rate of 10%. The sizes of all of our models are chosen such that each architecture has roughly 25M parameters and 50 layers, matching the size and depth of a standard ResNet-50. For attention pooling, three attention blocks are added on top of the patch processing network P, with their hidden size, intermediate size, and number of attention heads chosen to meet this parameter budget.

Model training.

Both the pretraining and finetuning tasks are trained using a Momentum optimizer with Nesterov momentum, with batch sizes chosen separately for CIFAR-10 and ImageNet. The learning rate is scheduled to decay along a cosine curve with a warm-up phase of 100 steps; the maximum learning rate is tuned per experiment. We do not use any extra regularization besides an L2 weight decay. Furthermore, as described in Section 2.1, we divide the images into non-overlapping square patches of size 8×8 or 32×32 during pretraining and sample a fraction of these patches to predict the remaining ones. We try two values for this fraction, 75% and 50%, and tune it as a hyper-parameter.
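The learning-rate schedule above (a short linear warm-up followed by cosine decay) can be sketched as follows; the maximum learning rate and total step count are tuned values not specified here, so the arguments below are placeholders:

```python
import math

def cosine_lr(step, total_steps, max_lr, warmup=100):
    """Linear warm-up to max_lr over `warmup` steps, then cosine decay to zero."""
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * max_lr * (1 + math.cos(math.pi * progress))
```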

Reporting results.

For each reported experiment, we first tune its hyper-parameters using 10% of the training data as a validation set, training the neural net on the remaining 90%. Once we obtain the best hyper-parameter setting, the neural network is retrained on 100% of the training data five times with different random seeds. We report the mean and standard deviation of these five runs.

3.3 Results

We report accuracies with and without pretraining across different labeled dataset sizes in Table 1. As the table shows, Selfie yields consistent improvements in test accuracy across all three benchmarks (CIFAR-10, ImageNet 32×32, and ImageNet 224×224) with varying amounts of labeled data. Notably, on ImageNet 224×224, a gain of 11.1% in absolute accuracy is achieved when we use only 5% of the labeled data. We find that pretrained models usually converge to a higher training loss but generalize significantly better on the test set than models with random initialization. This highlights the strong regularization effect of our proposed pretraining procedure. An example is shown in Figure 3 for training on the 10% subset of ImageNet.

Figure 3: Regularization effect of our pretraining method on the 10% subset of ImageNet. We track the training loss, test loss, and test accuracy during 120K steps of training, comparing a randomly initialized model to one initialized from a pretrained model.

Besides the gain in mean accuracy, training stability is also improved, as evidenced by the reduction in standard deviation in almost all experiments. When the unlabeled dataset is the same as the labeled dataset (Labeled Data Percentage = 100%), the gain becomes small, as expected.

                                  Labeled Data Percentage
                           5%           8%           20%          100%
CIFAR-10
  Supervised               75.9 ± 0.7   79.3 ± 1.0   88.3 ± 0.3   95.5 ± 0.2
  Selfie Pretrained        75.9 ± 0.4   80.3 ± 0.3   89.1 ± 0.5   95.7 ± 0.1
  Δ                         0.0         +1.0         +0.8         +0.2
                           5%           10%          20%          100%
ImageNet 32×32
  Supervised               13.1 ± 0.8   25.9 ± 0.5   32.7 ± 0.4   55.7 ± 0.6
  Selfie Pretrained        18.3 ± 0.1   30.2 ± 0.5   33.5 ± 0.2   56.4 ± 0.6
  Δ                        +5.2         +4.3         +0.8         +0.7
ImageNet 224×224
  Supervised               35.6 ± 0.7   59.6 ± 0.2   65.7 ± 0.2   76.9 ± 0.2
  Selfie Pretrained        46.7 ± 0.4   61.9 ± 0.2   67.1 ± 0.2   77.0 ± 0.1
  Δ                        +11.1        +2.3         +1.4         +0.1
Table 1: Test accuracy (%, mean ± standard deviation over five runs) of ResNet-50 with and without pretraining across datasets and labeled-data sizes. Δ is the absolute gain from pretraining.

Baseline Comparison.

We want to emphasize that our ResNet baselines are very strong compared to those in (He et al., 2016a). In particular, on CIFAR-10, our ResNet with purely supervised learning on 100% of the labeled data achieves 95.5% accuracy, better than the 94.8% achieved by DenseNet (Huang et al., 2017) and close to the 95.6% obtained by Wide-ResNet (Zagoruyko and Komodakis, 2016). Likewise, on ImageNet 224×224, our baseline reaches 76.9% accuracy, on par with the result reported in (He et al., 2016a) and surpassing the 76.2% accuracy of DenseNet (Huang et al., 2017). Our pretrained models further improve on these strong baselines.

Contrast to Other Works.

Notice that our classification accuracy of 77.0% on ImageNet 224×224 is also significantly better than previously reported results in unsupervised representation learning (Pathak et al., 2016; Oord et al., 2018; Kolesnikov et al., 2019). For example, in a comprehensive study by Kolesnikov et al. (2019), the best ImageNet accuracy across all pretraining methods is around 55.2%, well below the accuracy of our models. Similarly, the best accuracies reported for Context Encoders (Pathak et al., 2016) and Contrastive Predictive Coding (Oord et al., 2018) are 56.5% and 48.7%, respectively. We suspect this gap arises because past works did not finetune the representations learned by unsupervised learning.

Concurrent to our work, there are other attempts at using unlabeled data in semi-supervised learning settings. Hénaff et al. (2019) showed the effectiveness of pretraining in the low-data regime using a cross-entropy loss with negative samples, similar to our loss. However, their results are not directly comparable to ours because they employed a much larger network, ResNet-171, than the ResNet-50 architecture used throughout this work. Consistency training with label propagation has also achieved remarkable results: for example, the recent Unsupervised Data Augmentation (Xie et al., 2019) reports 94.7% accuracy on the 8% subset of CIFAR-10. We expect that our self-supervised pretraining method can be combined with label propagation for additional gains, as shown in (Zhai et al., 2019).

Finetuning on ResNet-36 + attention pooling.

In the previous experiments, we finetune ResNet-50, which is essentially ResNet-36 with one convolution block on top, dropping the attention pooling network used in pretraining. We also explore finetuning ResNet-36 + attention pooling and find that it slightly outperforms finetuning ResNet-50 in some cases (more in Section 4.2). We chose ResNet-50 for our main results as it is faster and facilitates better comparison with past work.

Finetuning Sensitivity and Mismatch to Pretraining.

Despite the encouraging results, we found difficulties in transferring pretrained models across tasks, such as from ImageNet to CIFAR. For the 100% subset of ImageNet, additional tuning of the pretraining phase on a development set was needed to achieve the result reported in Table 1. There is also a slight mismatch between our pretraining and finetuning settings: during pretraining, image patches are processed independently, whereas during finetuning the model sees the image as a whole. We hope to address these concerns in subsequent work.

4 Analysis

4.1 Pretraining benefits more when there is less labeled data

In this section, we conduct further experiments to better understand our method, Selfie, especially how it performs as we decrease the amount of labeled data. To do so, we evaluate test accuracy when finetuning on the 2%, 5%, 10%, 20%, and 100% subsets of ImageNet 224×224, together with purely supervised training at each of the five marks. As in previous sections, we average results across five different runs for a more stable assessment. As shown in Figure 4, the mean accuracy of ResNet improves drastically when there are at least an order of magnitude more unlabeled images than labeled ones (i.e., finetuning on at most the 10% subset). With less unlabeled data relative to labeled data, the gain quickly diminishes: at the 20% mark there is still a slight improvement of 1.4% in mean accuracy, while at the 100% mark the gain becomes minimal, 0.1%.

Figure 4: Pretraining with Selfie benefits the most when there is much more unlabeled data than labeled data. Left: mean accuracy across five runs on ImageNet for a purely supervised model versus one with pretraining. Right: mean accuracy gain from pretraining. The improvement diminishes quickly beyond the 10% mark, where there are 10 times more unlabeled images than labeled ones.

4.2 Self-attention as the last layer helps finetuning performance.

As mentioned in Section 3.3, we explore training ResNet-36 + attention pooling (both reused from the pretraining phase) on CIFAR-10 and ImageNet 224×224 in two settings: limited labeled data and full access to the labeled set. The architectures of the two networks are shown in Figure 5. Experimental results for these two architectures with and without pretraining are reported in Table 2.

Figure 5: (Left) ResNet-50 architecture. (Right) ResNet-36 + attention pooling architecture.
Figure 5: (Left) ResNet-50 architecture. (Right) ResNet-36 + attention pooling architecture.

Method                    ResNet-50    ResNet-36 + attention pooling    Δ
CIFAR-10 (8%)             80.3 ± 0.3   81.3 ± 0.1                       +1.0
ImageNet 224×224 (10%)    61.8 ± 0.2   62.1 ± 0.2                       +0.3
CIFAR-10 (100%)           95.7 ± 0.1   95.4 ± 0.2                       -0.3
ImageNet 224×224 (100%)   77.0 ± 0.1   77.5 ± 0.1                       +0.5
Table 2: Accuracy (%) of ResNet-50 and ResNet-36 + attention pooling after finetuning from pretrained weights found by Selfie, on limited and full labeled sets. The gain (Δ) indicates the improvement from using attention pooling in place of the last convolution block.

With pretraining on unlabeled data, ResNet-36 + attention pooling outperforms ResNet-50 on both datasets in the limited-data setting. On the full training set, the hybrid convolution-attention architecture gives a 0.5% gain on ImageNet 224×224. These results show promise for this hybrid architecture, which we plan to explore further in future work.

5 Related Work

Unsupervised representation learning for text.

Much of the success in unsupervised representation learning is in NLP. First, using language models to learn embeddings for words is commonplace in many NLP applications (Mikolov et al., 2013; Pennington et al., 2014). Building on this success, similar methods were proposed for sentence and paragraph representations (Le and Mikolov, 2014; Kiros et al., 2015). Recent successful methods, however, focus on the use of language models or "masked" language models as pretraining objectives (Dai and Le, 2015; Ramachandran et al., 2017; Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019). A principle common to all of these successful methods is context prediction: given some adjacent data and their locations, predict the missing words.

Unsupervised representation learning for images.

Recent successful methods in unsupervised representation learning for images can be divided into four categories: 1) predicting the rotation angle of an original image (e.g., Gidaris et al., 2018), 2) predicting whether a perturbed image belongs to the same category as an unperturbed one (Exemplar) (e.g., Dosovitskiy et al., 2016), 3) predicting the relative locations of patches (e.g., Doersch et al., 2015) or solving jigsaw puzzles (e.g., Noroozi and Favaro, 2016), and 4) inpainting (e.g., Huang et al., 2014; Pathak et al., 2016; Iizuka et al., 2017). Their success, however, has been limited to small datasets or small settings, and some resort to expensive joint training to surpass their purely supervised counterparts. On the challenging ImageNet benchmark, our method is the first to report gains both with and without additional unlabeled data, as shown in Table 1.


Selfie is also closely related to denoising autoencoders (Vincent et al., 2010), in which various kinds of noise are applied to the input and the model is required to reconstruct the clean input. The main difference between our method and denoising autoencoders is how the reconstruction step is done: our method focuses only on the missing patches and tries to select the right patch from among distractor patches. Our method is also related to Contrastive Predictive Coding (Oord et al., 2018), where negative sampling was likewise used to classify continuous objects.

Semi-supervised learning.

Semi-supervised learning is another branch of representation learning methods that takes advantage of both labeled and unlabeled data. Unlike unsupervised representation learning, semi-supervised learning does not need a separate fine-tuning stage to improve accuracy. Successful recent semi-supervised learning methods for deep learning are based on consistency training (Miyato et al., 2018; Sajjadi et al., 2016; Laine and Aila, 2016; Verma et al., 2019; Xie et al., 2019).

6 Conclusion

We introduce Selfie, a self-supervised pretraining technique that generalizes the concept of masked language modeling to continuous data, such as images. Given the masked-out position of a square patch in the input image, our method learns to select the target patch from negative samples drawn from the same image. This classification objective sidesteps the need to predict exact pixel values of the target patches. Experiments show that Selfie achieves significant gains when the labeled set is small relative to the unlabeled set. Besides the gain in mean accuracy across different runs, the standard deviation of the results is also reduced, thanks to a better initialization from our pretraining method. Our analysis demonstrates the revived potential of unsupervised pretraining over supervised learning, and shows that a hybrid convolution-attention architecture holds promise.


  • Bengio et al. (2007) Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems, pages 153–160.
  • Cubuk et al. (2018) Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. 2018. AutoAugment: Learning augmentation policies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Dai and Le (2015) Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079–3087.
  • Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Annual Conference of the North American Chapter of the Association for Computational Linguistics.
  • Doersch et al. (2015) Carl Doersch, Abhinav Gupta, and Alexei A Efros. 2015. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422–1430.
  • Dosovitskiy et al. (2016) Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. 2016. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1734–1747.
  • Gidaris et al. (2018) Spyros Gidaris, Praveer Singh, and Nikos Komodakis. 2018. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations.
  • Hannun et al. (2014) Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. 2014. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567.
  • He et al. (2016a) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778.
  • He et al. (2016b) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Identity mappings in deep residual networks. In European conference on computer vision, pages 630–645. Springer.
  • Hénaff et al. (2019) Olivier J. Hénaff, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aäron van den Oord. 2019. Data-efficient image recognition with contrastive predictive coding. CoRR, abs/1905.09272.
  • Hinton et al. (2006) Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554.
  • Howard and Ruder (2018) Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Annual Conference of the North American Chapter of the Association for Computational Linguistics.
  • Huang et al. (2017) Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708.
  • Huang et al. (2014) Jia-Bin Huang, Sing Bing Kang, Narendra Ahuja, and Johannes Kopf. 2014. Image completion using planar structure guidance. ACM Transactions on graphics (TOG), 33(4):129.
  • Iizuka et al. (2017) Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. 2017. Globally and locally consistent image completion. ACM Trans. Graph., 36(4):107:1–107:14.
  • Kiros et al. (2015) Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3294–3302.
  • Kolesnikov et al. (2019) Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. 2019. Revisiting self-supervised visual representation learning. arXiv preprint arXiv:1901.09005.
  • Laine and Aila (2016) Samuli Laine and Timo Aila. 2016. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations.
  • Le and Mikolov (2014) Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In

    International Conference on Machine Learning

    , pages 1188–1196.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
  • Miyato et al. (2018) Takeru Miyato, Shin-ichi Maeda, Shin Ishii, and Masanori Koyama. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence.
  • Noroozi and Favaro (2016) Mehdi Noroozi and Paolo Favaro. 2016. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69–84. Springer.
  • Oliver et al. (2018) Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. 2018. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems, pages 3235–3246.
  • Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
  • Pathak et al. (2016) Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. 2016. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536–2544.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In

    Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)

    , pages 1532–1543.
  • Peters et al. (2018) Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Annual Conference of the North American Chapter of the Association for Computational Linguistics.
  • Raina et al. (2007) Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. 2007.

    Self-taught learning: transfer learning from unlabeled data.

    In Proceedings of the 24th International Conference on Machine Learning, pages 759–766. ACM.
  • Ramachandran et al. (2017) Prajit Ramachandran, Peter J Liu, and Quoc V Le. 2017. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
  • Sajjadi et al. (2016) Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. 2016. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems, pages 1163–1171.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
  • Verma et al. (2019) Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz. 2019. Interpolation consistency training for semi-supervised learning. arXiv preprint arXiv:1903.03825.
  • Vincent et al. (2010) Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(Dec):3371–3408.
  • Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
  • Xie et al. (2019) Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation. arXiv preprint arXiv:1904.12848.
  • Zagoruyko and Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. 2016. Wide residual networks. In The British Machine Vision Conference.
  • Zhai et al. (2019) Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. 2019. : Self-supervised semi-supervised learning. arXiv preprint arXiv:1905.03670.