1 Introduction
Learning useful representations from unlabeled data is a challenging problem, and improvements over existing methods can have wide-reaching benefits. For example, consider the ubiquitous use of pretrained model components, such as word vectors
(Mikolov et al., 2013; Pennington et al., 2014) and context-sensitive encoders (Peters et al., 2018; Devlin et al., 2019), for achieving state-of-the-art results on hard NLP tasks. Similarly, large convolutional networks pretrained on large supervised corpora have been widely used to improve performance across the spectrum of computer vision tasks
(Donahue et al., 2014; Ren et al., 2015; He et al., 2017; Carreira and Zisserman, 2017). Admittedly, the necessity of pretrained networks for many vision tasks has been convincingly questioned in recent work (He et al., 2018). Nonetheless, the core motivations for unsupervised learning – namely, minimizing dependence on potentially costly corpora of manually annotated data – remain strong.
We propose an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context. This is analogous to a human learning to represent observations generated by a shared cause, e.g. the sights, scents, and sounds of baking, driven by a desire to predict other related observations, e.g. the taste of cookies. For a more concrete example, the shared context could be an image from the ImageNet training set, and multiple views of the context could be produced by repeatedly applying data augmentation to the image. Alternatively, one could produce multiple views of an image by repeatedly partitioning its pixels into “past” and “future” sets, with the considered partitions corresponding to a fixed autoregressive ordering, as in Contrastive Predictive Coding (CPC; van den Oord et al., 2018). The key idea is that maximizing mutual information between features extracted from multiple views of a shared context forces the features to capture information about higher-level factors (e.g., presence of certain objects or occurrence of certain events) that broadly affect the shared context.
We introduce a model for self-supervised representation learning based on local Deep InfoMax (DIM; Hjelm et al., 2019). Local DIM maximizes mutual information between a global summary feature vector, which depends on the full input, and a collection of local feature vectors pulled from an intermediate layer in the encoder. Our model extends local DIM in three key ways: it predicts features across independently-augmented versions of each input, it predicts features simultaneously across multiple scales, and it uses a more powerful encoder. Each of these modifications provides improvements over local DIM. Predicting across independently-augmented copies of an input and predicting at multiple scales are two simple ways of producing multiple views of the context provided by a single image. We also extend our model to mixture-based representations, and find that segmentation-like behaviour emerges as a natural side-effect. Section 3 discusses the model and training objective in detail.
We evaluate our model using standard datasets: CIFAR10, CIFAR100, STL10 (Coates et al., 2011), ImageNet (the ILSVRC-2012 version; Russakovsky et al., 2015), and Places205 (Zhou et al., 2014). We evaluate performance following the protocol described by Kolesnikov et al. (2019). Our model outperforms prior work on these datasets. It significantly improves on existing results for STL10, reaching over 94% accuracy with linear evaluation and no encoder fine-tuning. On ImageNet, we reach over 68% accuracy for linear evaluation, which beats the best prior result by over 12% and the best concurrent result by 7%. We reach 55% accuracy on the Places205 task using representations learned with ImageNet data, which beats the best prior result by 7%. Section 4 discusses the experiments in detail.
2 Related Work
One characteristic that distinguishes self-supervised learning from classic unsupervised learning is its reliance on procedurally-generated supervised learning problems. When developing a self-supervised learning method, one seeks to design a problem generator such that models must capture useful information about the data in order to solve the generated problems. Problems are typically generated from prior knowledge about useful structure in the data, rather than from explicit labels.
Self-supervised learning is gaining popularity across the NLP, vision, and robotics communities – e.g., (Devlin et al., 2019; Logeswaran and Lee, 2018; Sermanet et al., 2017; Dwibedi et al., 2018). Some seminal work on self-supervised learning for computer vision involves predicting spatial structure or color information that has been procedurally removed from the data. E.g., Doersch et al. (2015) and Noroozi and Favaro (2016) learn representations by learning to predict/reconstruct spatial structure. Zhang et al. (2016) introduce the task of predicting color information that has been removed by converting images to grayscale. Gidaris et al. (2018) propose learning representations by predicting the rotation of an image relative to a fixed reference frame, which works surprisingly well.
We approach self-supervised learning by maximizing mutual information between features extracted from multiple views of a shared context. For example, consider maximizing mutual information between features extracted from a video with most color information removed and features extracted from the original full-color video. Vondrick et al. (2018) showed that object tracking can emerge as a side-effect of optimizing this objective in the special case where the features extracted from the full-color video are simply the original video frames. Similarly, consider predicting how a scene would look when viewed from a particular location, given an encoding computed from several views of the scene from other locations. This task, explored by Eslami et al. (2018), requires maximizing mutual information between features from the multi-view encoder and the content of the held-out view. The general goal is to distill information from the available observations such that contextually-related observations can be identified among a set of plausible alternatives. Closely related work considers learning representations by predicting cross-modal correspondence (Arandjelović and Zisserman, 2017, 2018). While the mutual information bounds in Vondrick et al. (2018) and Eslami et al. (2018) rely on explicit density estimation, our model uses the contrastive bound from CPC (van den Oord et al., 2018), which has been further analyzed by McAllester and Stratos (2018) and Poole et al. (2019).

Evaluating new self-supervised learning methods presents some challenges. E.g., performance gains may be largely due to improvements in model architectures and training practices, rather than advances in the self-supervised learning component. This point was addressed by Kolesnikov et al. (2019), who found massive gains in standard metrics when existing methods were reimplemented with up-to-date architectures and optimized to run at larger scales. When evaluating our model, we follow their protocols and compare against their optimized results for existing methods. Some potential shortcomings of standard evaluation protocols have been noted by Goyal et al. (2019).
3 Method Description
Our model, which we call Augmented Multiscale DIM (AMDIM), extends the local version of Deep InfoMax introduced by Hjelm et al. (2019) in several ways. First, we maximize mutual information between features extracted from independently-augmented copies of each image, rather than between features extracted from a single, unaugmented copy of each image. (We focus on images in this paper, but the approach extends directly to, e.g., audio, video, and text.) Second, we maximize mutual information between multiple feature scales simultaneously, rather than between a single global and local scale. Third, we use a more powerful encoder architecture. Finally, we introduce mixture-based representations. We now describe local DIM and the components added by our new model.
3.1 Local DIM
Local DIM maximizes mutual information between global features $f_1(x)$, produced by a convolutional encoder $f$, and local features $f_7(x)_{ij}$, produced by an intermediate layer in $f$. The subscript 7 denotes features from the topmost encoder layer with spatial dimension $7 \times 7$, and the subscripts $i$ and $j$ index the two spatial dimensions of the array of activations in that layer. (The subscript refers to the layer's spatial dimension and should not be confused with its depth in the encoder.) Intuitively, this mutual information measures how much better we can guess the value of $f_7(x)_{ij}$ when we know the value of $f_1(x)$ than when we do not know the value of $f_1(x)$. Optimizing this relative ability to predict, rather than absolute ability to predict, helps avoid degenerate representations which map all observations to similar values. Such degenerate representations perform well in terms of absolute ability to predict, but poorly in terms of relative ability to predict.
For local DIM, the terms global and local uniquely define where features come from in the encoder and how they will be used. In AMDIM, this is no longer true. So, we will refer to the features that encode the data to condition on (global features) as antecedent features, and the features to be predicted (local features) as consequent features. We choose these terms based on their role in logic.
We can construct a distribution over (antecedent, consequent) feature pairs via ancestral sampling as follows: (i) sample an input $x \sim \mathcal{D}$, (ii) sample spatial indices $i \sim \mathcal{U}(i)$ and $j \sim \mathcal{U}(j)$, and (iii) compute features $f_1(x)$ and $f_7(x)_{ij}$. Here, $\mathcal{D}$ is the data distribution, and $\mathcal{U}(i)$ and $\mathcal{U}(j)$ denote uniform distributions over the range of valid spatial indices into the relevant encoder layer. We denote the marginal distributions over per-layer features as $p(f_1(x))$ and $p(f_7(x)_{ij})$. Given $\mathcal{D}$, $\mathcal{U}(i)$, and $\mathcal{U}(j)$, local DIM seeks an encoder $f$ that maximizes the mutual information in the joint distribution $p(f_1(x), f_7(x)_{ij})$.
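As a concrete illustration, the ancestral sampling procedure above can be sketched as follows. This is a toy numpy sketch, not our implementation: the array shapes and the `encoder` returning one global feature and one $7 \times 7$ map of local features are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(dataset, encoder):
    """Ancestral sampling of one (antecedent, consequent) feature pair."""
    x = dataset[rng.integers(len(dataset))]  # (i) sample an input x ~ D
    i, j = rng.integers(0, 7, size=2)        # (ii) sample spatial indices i, j
    f1, f7 = encoder(x)                      # (iii) compute the features
    return f1, f7[i, j]                      # antecedent, consequent
```

Repeating this over many inputs yields samples from the joint distribution; pairing an antecedent with consequents sampled from other inputs yields samples from the product of marginals.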
3.2 Noise-Contrastive Estimation
The best results with local DIM were obtained using a mutual information bound based on Noise-Contrastive Estimation (NCE; Gutmann and Hyvärinen, 2010), as used in various NLP applications (Ma and Collins, 2018), and applied to infomax objectives by van den Oord et al. (2018). This class of bounds has been studied in more detail by McAllester and Stratos (2018) and Poole et al. (2019).

We can maximize the NCE lower bound on $I(f_1(x); f_7(x)_{ij})$ by minimizing the following loss:

$$\mathbb{E}_{\left(f_1(x),\, f_7(x)_{ij}\right)} \Big[\, \mathbb{E}_{N_7} \big[\, \mathcal{L}_{\Phi}\!\left(f_1(x),\, f_7(x)_{ij},\, N_7\right) \big] \Big] \qquad (1)$$
The positive sample pair $(f_1(x), f_7(x)_{ij})$ is drawn from the joint distribution $p(f_1(x), f_7(x)_{ij})$. $N_7$ denotes a set of negative samples, comprising many “distractor” consequent features drawn independently from the marginal distribution $p(f_7(x)_{ij})$. Intuitively, the task of the antecedent feature is to pick its true consequent out of a large bag of distractors. The loss $\mathcal{L}_{\Phi}$ is a standard log-softmax, where the normalization is over a large set of matching scores produced by $\Phi$. Roughly speaking, $\Phi$ maps (antecedent, consequent) feature pairs onto scalar-valued scores, where higher scores indicate higher likelihood of a positive sample pair. We can write $\mathcal{L}_{\Phi}$ as follows:

$$\mathcal{L}_{\Phi}(f_1, f_7, N_7) = -\log \frac{\exp\!\left(\Phi(f_1, f_7)\right)}{\sum_{\tilde{f}_7 \in N_7 \cup \{f_7\}} \exp\!\left(\Phi(f_1, \tilde{f}_7)\right)} \qquad (2)$$
where we omit the spatial indices and the dependence on $x$ for brevity. Training in local DIM corresponds to minimizing the loss in Eqn. 1 with respect to $f$ and $\Phi$, which we assume to be represented by parametric function approximators, e.g. deep neural networks.
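To make the loss concrete, here is a minimal numpy sketch of the log-softmax in Eqn. 2 for a single positive pair, assuming the matching scores have already been computed; names and shapes are illustrative, not from our implementation.

```python
import numpy as np

def nce_loss(score_pos, scores_neg):
    """Log-softmax NCE loss (Eqn. 2) for one (antecedent, consequent) pair.

    score_pos:  scalar matching score for the positive pair.
    scores_neg: 1-D array of scores for the distractor consequents in N_7.
    """
    scores = np.concatenate([[score_pos], scores_neg])
    m = scores.max()  # subtract the max before exponentiating, for stability
    log_norm = m + np.log(np.exp(scores - m).sum())
    return log_norm - score_pos  # equals -log softmax(positive score)
```

With all scores equal, the loss is $\log(1 + |N_7|)$; it falls toward 0 as the positive score grows relative to the distractor scores.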
3.3 Efficient NCE Computation
We can efficiently compute the bound in Eqn. 1 for many positive sample pairs, using large negative sample sets, by using a simple dot product for the matching score $\Phi$:

$$\Phi(f_1, f_7) = \phi_1(f_1)^{\top} \phi_7(f_7) \qquad (3)$$
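As a sketch of why dot-product scores make this cheap: for a batch of embedded antecedents and consequents, all pairwise scores come from one matrix multiply, and every consequent belonging to another antecedent can serve as a negative sample. This is an illustrative numpy sketch under those assumptions, not our actual implementation.

```python
import numpy as np

def batched_nce_losses(emb_ante, emb_cons):
    """NCE losses for a batch of positive pairs, with in-batch negatives.

    emb_ante: (B, d) embedded antecedents, rows are phi_1(f_1).
    emb_cons: (B, d) embedded consequents; row b is the positive consequent
              for row b of emb_ante, and every other row serves as a negative.
    """
    scores = emb_ante @ emb_cons.T         # (B, B) pairwise dot products
    m = scores.max(axis=1, keepdims=True)  # stabilize the log-sum-exp
    log_norm = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    return log_norm - np.diag(scores)      # per-pair NCE losses
```

One matrix multiply thus provides every positive score and roughly $B^2$ negative scores per batch.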
The functions $\phi_1$ and $\phi_7$ non-linearly transform their inputs to some other vector space. Given a sufficiently high-dimensional vector space, in principle we should be able to approximate any (reasonable) class of functions we care about – which correspond to belief shifts like $\log \frac{p(f_7 \mid f_1)}{p(f_7)}$ in our case – via linear evaluation. The power of linear evaluation in high-dimensional spaces can be understood by considering Reproducing Kernel Hilbert Spaces (RKHS). One weakness of this approach is that it limits the rank of the set of belief shifts our model can represent when the vector space is finite-dimensional, an issue previously addressed in the context of language modeling by introducing mixtures (Yang et al., 2018). We provide pseudocode for the NCE bound in Figure 4.

When training with larger models on more challenging datasets, i.e. STL10 and ImageNet, we use some tricks to mitigate occasional instability in the NCE cost. The first trick is to add a weighted regularization term that penalizes the squared matching scores, i.e. $\lambda \Phi(f_1, f_7)^2$. We use the same NCE regularization weight $\lambda$ for all experiments. The second trick is to apply a soft clipping non-linearity to the scores, after computing the regularization term and before computing the log-softmax in Eqn. 2. For clipping score $s$ to the range $[-c, c]$, we applied the non-linearity $c \tanh(\frac{s}{c})$, which is linear around 0 and saturates as one approaches $\pm c$. We use the same $c$ for all experiments. We suspect there may be interesting formal and practical connections between regularization that restricts the variance/range/etc. of scores that go into the NCE bound, and things like the KL/information cost in Variational Autoencoders
(Kingma and Welling, 2013).

[Figure 4 caption (fragment): … consequents per image. For each true (antecedent, consequent) positive sample pair, we compute the NCE bound using all consequents associated with all other antecedents as negative samples. Our pseudocode is roughly based on PyTorch. We use dynamic programming in the log-softmax normalizations required by $\mathcal{L}_{\Phi}$. (c) bottom: our ImageNet encoder architecture.]

3.4 Data Augmentation
Our model extends local DIM by maximizing mutual information between features from augmented views of each input. We describe this with a few minor changes to our notation for local DIM. We construct the augmented feature distribution $p(f_1(x^a), f_7(x^b)_{ij})$ as follows: (i) sample an input $x \sim \mathcal{D}$, (ii) sample augmented images $x^a \sim \mathcal{A}(x)$ and $x^b \sim \mathcal{A}(x)$, (iii) sample spatial indices $i \sim \mathcal{U}(i)$ and $j \sim \mathcal{U}(j)$, and (iv) compute features $f_1(x^a)$ and $f_7(x^b)_{ij}$. We use $\mathcal{A}(x)$ to denote the distribution of images generated by applying stochastic data augmentation to $x$. For this paper, we apply some standard data augmentations: random resized crop, random jitter in color space, and random conversion to grayscale. We apply a random horizontal flip to $x$ before computing $x^a$ and $x^b$.
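As a toy illustration of drawing two views $x^a, x^b \sim \mathcal{A}(x)$, the sketch below uses a plain random crop and random grayscale conversion in place of the full augmentation set (color jitter is omitted and the crop size is arbitrary); it is not our training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=24):
    """One draw from a toy A(x): random crop plus random grayscale."""
    h, w, _ = img.shape
    i = rng.integers(0, h - crop + 1)
    j = rng.integers(0, w - crop + 1)
    out = img[i:i + crop, j:j + crop].astype(float)
    if rng.random() < 0.25:  # random conversion to grayscale
        out = out.mean(axis=2, keepdims=True).repeat(3, axis=2)
    return out

def two_views(img):
    """Sample (x^a, x^b); the horizontal flip is shared across both views."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return augment(img), augment(img)
```

Note that the flip is applied once to $x$ before sampling the two views, matching the text, while the crop and grayscale decisions are drawn independently per view.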
3.5 Multiscale Mutual Information
Our model further extends local DIM by maximizing mutual information across multiple feature scales. Consider features $f_n(x)_{ij}$ taken from position $(i, j)$ in the topmost layer of $f$ with spatial dimension $n \times n$. Using the procedure from the preceding subsection, we can construct joint distributions over pairs of features from any position in any layer, like: $p(f_1(x^a), f_5(x^b)_{ij})$, $p(f_1(x^a), f_7(x^b)_{ij})$, or $p(f_5(x^a)_{ij}, f_5(x^b)_{kl})$.
We can now define a family of $n$-to-$m$ infomax costs:

$$\mathbb{E}_{\left(f_n(x^a)_{ij},\, f_m(x^b)_{kl}\right)} \Big[\, \mathbb{E}_{N_m} \big[\, \mathcal{L}_{\Phi}\!\left(f_n(x^a)_{ij},\, f_m(x^b)_{kl},\, N_m\right) \big] \Big] \qquad (5)$$
where $N_m$ denotes a set of independent samples from the marginal over features from the topmost encoder layer with spatial dimension $m \times m$. For the experiments in this paper we maximize mutual information 1-to-5, 1-to-7, and 5-to-5. We uniformly sample locations for both features in each positive sample pair. These costs may look expensive to compute at scale, but it is actually straightforward to efficiently compute Monte Carlo approximations of the relevant expectations using many samples in a single pass through the encoder for each batch of pairs. Figure 4b illustrates our full model, which we call Augmented Multiscale Deep InfoMax (AMDIM).
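To illustrate how pairs for the different scale pairings can be drawn in one pass, here is a hypothetical sketch: it assumes the per-scale feature maps from both views have already been computed by the encoder, and samples one spatial location uniformly for each side of each pairing.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_feature(feat_map):
    """Uniformly sample one spatial location from an (n, n, d) feature map."""
    n = feat_map.shape[0]
    i, j = rng.integers(0, n, size=2)
    return feat_map[i, j]

def multiscale_pairs(feats_a, feats_b, pairings=((1, 5), (1, 7), (5, 5))):
    """Draw one (antecedent, consequent) pair per n-to-m scale pairing.

    feats_a, feats_b: dicts mapping spatial dim n -> (n, n, d) feature maps
    from the two augmented views of one image.
    """
    return [(sample_feature(feats_a[n]), sample_feature(feats_b[m]))
            for n, m in pairings]
```

Each returned pair then feeds the NCE loss for its scale pairing, so a single encoder pass per view serves all the costs at once.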
3.6 Encoder
Our model uses an encoder based on the standard ResNet (He et al., 2016a, b), with changes to make it suitable for DIM. Our main concern is controlling receptive fields. When the receptive fields for features in a positive sample pair overlap too much, the task becomes too easy and the model performs worse. Another concern is keeping the feature distributions stationary by avoiding padding.
The encoder comprises a sequence of blocks, with each block comprising multiple residual layers. The first layer in each block applies mean pooling to compute a base output, and computes residuals to add to the base output using a convolution, followed by a ReLU, and then a $1 \times 1$ convolution. Subsequent layers in the block are standard residual layers. The mean pooling compensates for not using padding, and the $1 \times 1$ layers control receptive field growth. Exhaustive details can be found in our code: https://github.com/Philip-Bachman/amdim-public. We train each model using 4 to 8 standard Tesla V100 GPUs. Other recent, strong self-supervised models are non-reproducible on standard hardware.

We use the encoder architecture in Figure 4c when working with ImageNet and Places205. We use reduced-resolution input for these datasets due to resource constraints. The argument order for Conv2d is (input dim, output dim, kernel width, stride, padding). The argument order for ResBlock is the same as for Conv2d, except the last argument (i.e. ndepth) gives block depth rather than padding. Parameters ndf and nrkhs determine the encoder feature dimension and the output dimension of the embedding functions $\phi_1$ and $\phi_7$. The embeddings $\phi_1(f_1)$ and $\phi_7(f_7)$ are computed by applying a small MLP via convolution. We use similar architectures for the other datasets, with minor changes to account for input sizes.
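Since controlling receptive fields is the main design concern, a small helper like the following (hypothetical, not from our code) makes it easy to check how a stack of (kernel, stride) choices grows the receptive field before committing to an architecture:

```python
def receptive_field(layers):
    """Receptive field of one output unit after a stack of conv/pool layers.

    layers: sequence of (kernel_width, stride) pairs, applied in order.
    Standard recurrence: rf grows by (k - 1) * jump, and jump multiplies by s.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf
```

For example, two stride-1 convolutions of width 3 give a receptive field of 5, while inserting a stride-2 pooling layer between them grows it to 8.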
3.7 Mixture-Based Representations
We now extend our model to use mixture-based features. For each antecedent feature $f_1$, we compute a set of mixture features $\{f_1^1, \dots, f_1^k\}$, where $k$ is the number of mixture components. We compute these features using a function $m_k$, i.e. $\{f_1^1, \dots, f_1^k\} = m_k(f_1)$. We represent $m_k$ using a fully-connected network with a single ReLU hidden layer and a residual connection between $f_1$ and each mixture feature $f_1^i$. When using mixture features, we maximize the following objective:

$$\underset{q}{\mathrm{maximize}} \;\; \sum_{i=1}^{k} q(i \mid f_7)\, s_{nce}(f_1^i, f_7) \;+\; \alpha\, \mathcal{H}(q) \qquad (6)$$
For each augmented image pair $(x^a, x^b)$, we extract mixture features $\{f_1^1, \dots, f_1^k\}$ and consequent features $f_7$. Here, $s_{nce}(f_1^i, f_7)$ denotes the NCE score between $f_1^i$ and $f_7$, computed as described in Figure 4c. This score gives the log-softmax term for the mutual information bound in Eqn. 2. We also add an entropy maximization term $\alpha \mathcal{H}(q)$.
In practice, given the scores $\{s_{nce}(f_1^i, f_7)\}$ assigned to consequent feature $f_7$ by the mixture features $\{f_1^1, \dots, f_1^k\}$, we can compute the optimal distribution $q$ as follows:

$$q(i \mid f_7) = \frac{\exp\!\left(s_{nce}(f_1^i, f_7)/\tau\right)}{\sum_{i'} \exp\!\left(s_{nce}(f_1^{i'}, f_7)/\tau\right)} \qquad (7)$$
where $\tau$ is a temperature parameter that controls the entropy of $q$. We motivate Eqn. 7 by analogy to Reinforcement Learning. Given the scores $\{s_{nce}(f_1^i, f_7)\}$, we could define $q$ using an indicator of the maximum score. But, when $q$ depends on the stochastic scores, this choice will be over-optimistic in expectation, since it will be biased towards scores which are pushed up by the stochasticity (which comes from sampling negative samples). Rather than take a maximum, we encourage $q$ to be less greedy by adding the entropy maximization term $\alpha \mathcal{H}(q)$. For any value of $\alpha$ in Eqn. 6, there exists a value of $\tau$ in Eqn. 7 such that computing $q$ via Eqn. 7 provides an optimal $q$ with respect to Eqn. 6. This directly relates to the formulation of optimal Boltzmann-type policies in the context of Soft Q-Learning; see, e.g., Haarnoja et al. (2017). In practice, we treat $\tau$ as a hyperparameter.
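Eqn. 7 is just a Boltzmann distribution over components. A small numpy sketch (with illustrative names) shows how $\tau$ interpolates between greedy argmax-like assignment and the uniform, maximum-entropy assignment:

```python
import numpy as np

def component_posterior(scores, tau):
    """q(i | consequent): softmax over NCE scores at temperature tau (Eqn. 7)."""
    z = np.asarray(scores, dtype=float) / tau
    z -= z.max()  # subtract the max for numerical stability
    q = np.exp(z)
    return q / q.sum()
```

Small $\tau$ concentrates $q$ on the best-scoring component, while large $\tau$ spreads it toward the uniform distribution.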
4 Experiments
We evaluate our model on standard benchmarks for self-supervised visual representation learning. We use CIFAR10, CIFAR100, STL10, ImageNet, and Places205. To measure performance, we first train an encoder using all examples from the training set (sans labels), and then train linear and MLP classifiers on top of the encoder features (sans backprop into the encoder). The final performance metric is the accuracy of these classifiers. This follows the evaluation protocol described by Kolesnikov et al. (2019). Our model outperforms prior work on these datasets.

On CIFAR10 and CIFAR100 we trained small models with size parameters (ndf=128, nrkhs=1024, ndepth=10), and large models with size parameters (ndf=320, nrkhs=1280, ndepth=10). On CIFAR10, the large model reaches 91.2% accuracy with linear evaluation and 93.1% accuracy with MLP evaluation. On CIFAR100, it reaches 70.2% and 72.8%. These results are comparable with slightly older fully-supervised models, and well ahead of other work on self-supervised feature learning. See Table 2
for a comparison with standard fully-supervised models.

On STL10, using size parameters (ndf=192, nrkhs=1536, ndepth=8), our model significantly improves on prior self-supervised results. STL10 was originally intended to test semi-supervised learning methods, and comprises 10 classes with a total of 5000 labeled examples. Strong results have been achieved on STL10 via semi-supervised learning, which involves fine-tuning some of the encoder parameters using the available labeled data. Examples of such results include Ji et al. (2019) and Berthelot et al. (2019), which achieve 88.8% and 94.4% accuracy, respectively. Our model reaches 94.2% accuracy on STL10 with linear evaluation, which compares favourably with semi-supervised results that fine-tune the encoder using the labeled data.

On ImageNet, using a model with size parameters (ndf=320, nrkhs=2536, ndepth=10) and a batch size of 1008, we reach 68.1% accuracy for linear evaluation, beating the best prior result by over 12% and the best concurrent results by 7% (Kolesnikov et al., 2019; Hénaff et al., 2019; Tian et al., 2019). Our model is significantly smaller than the models which produced those results, and it is reproducible on standard hardware. Using MLP evaluation, our model reaches 69.5% accuracy. Our linear and MLP evaluation results on ImageNet both surpass the original AlexNet trained end-to-end by a large margin. Table 3 provides results from single ablation tests on STL10 and ImageNet. We perform ablations on individual aspects of data augmentation and on the use of multiscale feature learning and NCE cost regularization. See Table 1 for a comparison with well-optimized results for prior and concurrent models. We also tested our model on an ImageNet-to-Places205 transfer task, which involves training the encoder on ImageNet and then training the evaluation classifiers on the Places205 data. Our model also beat prior results on that task. Performance with the transferred features is close to that of features learned on the Places205 data. See Table 1.
We include additional visualizations of model behaviour in Figure 16. See the figure caption for more information. Briefly, though our model generally performs well, it does exhibit some characteristic weaknesses that provide interesting subjects for future work. Intriguingly, when we incorporate mixture-based representations, segmentation behaviour emerges as a natural side-effect. The mixture-based model is more sensitive to hyperparameters, and we have not had time to tune it for ImageNet. However, the qualitative behaviour on STL10 is exciting, and we observe roughly a 1% boost in performance with a simple bag-of-features approach for using the mixture features during evaluation.
[Figure caption: Visualizing behaviour of AMDIM. (a) and (b) combine two things – KNN retrieval based on cosine similarity between global features, and the matching scores between global and local features. (a) is from ImageNet and (b) is from Places205. Each leftmost column shows a query image, whose global feature was used to retrieve the 7 most similar images. For each query, we visualize similarity between its global feature and the local features from the retrieved images. On ImageNet, good retrieval is often based on similarity focused on the main object, while poor retrieval depends more on background similarity. The pattern is more diffuse for Places205. (c) and (d) visualize the data augmentation that produces paired images, and three types of similarity between their global and local features. (e, f, g, h): models trained on STL10 with 2, 3, 3, and 4 components in the top-level mixtures. For each image pair (left/right), the mixture components were inferred from one view, and we visualize the posteriors over those components for the features from the other view. We compute the posteriors as described in Section 3.7.]

5 Discussion
We introduced an approach to selfsupervised learning based on maximizing mutual information between arbitrary features extracted from multiple views of a shared context. Following this approach, we developed a model called Augmented Multiscale Deep InfoMax (AMDIM), which improves on prior results while remaining computationally practical. Our approach extends to a variety of domains, including video, audio, text, etc. E.g., we expect that capturing natural relations using multiple views of local spatiotemporal contexts in video could immediately improve our model.
Worthwhile subjects for future research include: modifying the AMDIM objective to work better with standard architectures, improving scalability and running on better infrastructure, further work on mixturebased representations, and examining (formally and empirically) the role of regularization in the NCEbased mutual information bound. We believe contrastive selfsupervised learning has a lot to offer, and that AMDIM represents a particularly effective approach.
References
 Arandjelović and Zisserman [2017] Relja Arandjelović and Andrew Zisserman. Look, listen and learn. International Conference on Computer Vision (ICCV), 2017.
 Arandjelović and Zisserman [2018] Relja Arandjelović and Andrew Zisserman. Objects that sound. European Conference on Computer Vision (ECCV), 2018.
 Berthelot et al. [2019] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. MixMatch: A holistic approach to semi-supervised learning. arXiv:1905.02249 [cs.LG], 2019.

 Carreira and Zisserman [2017] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 Coates et al. [2011] Adam Coates, Honglak Lee, and Andrew Y Ng. An analysis of single layer networks in unsupervised feature learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
 Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
 Doersch and Zisserman [2017] Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. International Conference on Computer Vision (ICCV), 2017.
 Doersch et al. [2015] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. International Conference on Computer Vision (ICCV), 2015.

 Donahue et al. [2014] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. International Conference on Machine Learning (ICML), 2014.
 Dosovitskiy et al. [2014] Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Thomas Brox, and Martin Riedmiller. Discriminative unsupervised feature learning with exemplar convolutional neural networks. Advances in Neural Information Processing Systems (NIPS), 2014.
 Dwibedi et al. [2018] Debidatta Dwibedi, Jonathan Tompson, Corey Lynch, and Pierre Sermanet. Learning actionable representations from visual observations. International Conference on Intelligent Robots and Systems (IROS), 2018.

 Eslami et al. [2018] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, David P Reichert, Lars Buesing, Theophane Weber, Oriol Vinyals, Dan Rosenbaum, Neil Rabinowitz, Helen King, Chloe Hillier, Matt Botvinick, Daan Wierstra, Koray Kavukcuoglu, and Demis Hassabis. Neural scene representation and rendering. Science, 2018.
 Gidaris et al. [2018] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. International Conference on Learning Representations (ICLR), 2018.
 Goyal et al. [2019] Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and benchmarking self-supervised visual representation learning. arXiv:1905.01235 [cs.CV], 2019.
 Gutmann and Hyvärinen [2010] Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
 Haarnoja et al. [2017] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energybased policies. International Conference on Machine Learning (ICML), 2017.
 He et al. [2016a] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. Conference on Computer Vision and Pattern Recognition (CVPR), 2016a.
 He et al. [2016b] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. European Conference on Computer Vision (ECCV), 2016b.
 He et al. [2017] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. International Conference on Computer Vision (ICCV), 2017.
 He et al. [2018] Kaiming He, Ross Girshick, and Piotr Dollár. Rethinking ImageNet pre-training. arXiv:1811.08883 [cs.CV], 2018.
 Hénaff et al. [2019] Olivier J Hénaff, Ali Razavi, Carl Doersch, S M Ali Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv:1905.09272 [cs.LG], 2019.
 Hjelm et al. [2019] R Devon Hjelm, Alex Fedorov, Samuel LavoieMarchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. International Conference on Learning Representations (ICLR), 2019.
 Ji et al. [2019] Xu Ji, João F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. arXiv:1807.06653, 2019.
 Kingma and Welling [2013] Diederik P Kingma and Max Welling. Autoencoding variational bayes. arXiv:1312.6114, 2013.
 Kolesnikov et al. [2019] Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. arXiv:1901.09005, 2019.
 Lim et al. [2019] Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. arXiv:1905.00397 [cs.LG], 2019.
 Logeswaran and Lee [2018] Lajanugen Logeswaran and Honglak Lee. An efficient framework for learning sentence representations. International Conference on Learning Representations (ICLR), 2018.

 Ma and Collins [2018] Zhuang Ma and Michael Collins. Noise-contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
 McAllester and Stratos [2018] David McAllester and Karl Stratos. Formal limitations on the measurement of mutual information. arXiv:1811.04251 [cs.IT], 2018.
 Mikolov et al. [2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems (NIPS), 2013.
 Noroozi and Favaro [2016] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. European Conference on Computer Vision (ECCV), 2016.
 Pennington et al. [2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. Empirical Methods in Natural Language Processing (EMNLP), 2014.
 Peters et al. [2018] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
 Poole et al. [2019] Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. International Conference on Machine Learning (ICML), 2019.
 Ren et al. [2015] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems (NIPS), 2015.
 Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C Berg, and Li FeiFei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 2015.
 Sermanet et al. [2017] Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-contrastive networks: Self-supervised learning from video. International Conference on Robotics and Automation (ICRA), 2017.
 Srivastava et al. [2015] Rupesh Kumar Srivastava, Klaus Greff, and Jurgen Schmidhuber. Training very deep networks. Advances in Neural Information Processing Systems (NIPS), 2015.
 Tian et al. [2019] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv:1906.05849 [cs.LG], 2019.
 van den Oord et al. [2018] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv:1807.03748, 2018.
 Vondrick et al. [2018] Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, and Kevin Murphy. Tracking emerges by colorizing videos. European Conference on Computer Vision (ECCV), 2018.
 Yang et al. [2018] Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. International Conference on Learning Representations (ICLR), 2018.
 Zagoruyko and Komodakis [2016] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. British Machine Vision Conference (BMVC), 2016.

 Zhang et al. [2016] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. European Conference on Computer Vision (ECCV), 2016.
 Zhou et al. [2014] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using places database. Advances in Neural Information Processing Systems (NIPS), 2014.