Representation Learning with Contrastive Predictive Coding

07/10/2018 · by Aaron van den Oord, et al. · Google

While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.


1 Introduction

Learning high-level representations from labeled data with layered differentiable models in an end-to-end fashion is one of the biggest successes in artificial intelligence so far. These techniques have made manually specified features largely redundant and have greatly improved the state of the art in several real-world applications krizhevsky2012imagenet ; hinton2012deep ; sutskever2014sequence . However, many challenges remain, such as data efficiency, robustness and generalization.

Improving representation learning requires features that are less specialized towards solving a single supervised task. For example, when pre-training a model for image classification, the induced features transfer reasonably well to other image classification domains, but they lack information, such as color or the ability to count, that is irrelevant for classification but relevant for e.g. image captioning showandtell . Similarly, features that are useful for transcribing human speech may be less suited for speaker identification or music genre prediction. Thus, unsupervised learning is an important stepping stone towards robust and generic representation learning.

Despite its importance, unsupervised learning is yet to see a breakthrough similar to supervised learning: modeling high-level representations from raw observations remains elusive. Further, it is not always clear what the ideal representation is and if it is possible that one can learn such a representation without additional supervision or specialization to a particular data modality.

One of the most common strategies for unsupervised learning has been to predict future, missing or contextual information. This idea of predictive coding elias1955predictive ; atal1970adaptive is one of the oldest techniques in signal processing for data compression. In neuroscience, predictive coding theories suggest that the brain predicts observations at various levels of abstraction rao1999predictive ; friston2005theory . Recent work in unsupervised learning has successfully used these ideas to learn word representations by predicting neighboring words mikolov2013efficient . For images, predicting color from grey-scale or the relative position of image patches has also been shown useful zhang2016colorful ; Doersch_2015_ICCV . We hypothesize that these approaches are fruitful partly because the context from which we predict related values is often conditionally dependent on the same shared high-level latent information. By casting this as a prediction problem, we automatically infer these features of interest to representation learning.

In this paper we propose the following: first, we compress high-dimensional data into a much more compact latent embedding space in which conditional predictions are easier to model. Secondly, we use powerful autoregressive models in this latent space to make predictions many steps in the future. Finally, we rely on Noise-Contrastive Estimation gutmann2010noise for the loss function, in similar ways that have been used for learning word embeddings in natural language models, allowing the whole model to be trained end-to-end. We apply the resulting model, Contrastive Predictive Coding (CPC), to widely different data modalities: images, speech, natural language and reinforcement learning, and show that the same mechanism learns interesting high-level information in each of these domains, outperforming other approaches.

Figure 1: Overview of Contrastive Predictive Coding, the proposed representation learning approach. Although this figure shows audio as input, we use the same setup for images, text and reinforcement learning.

2 Contrastive Predictive Coding

We start this section by motivating and giving intuitions behind our approach. Next, we introduce the architecture of Contrastive Predictive Coding (CPC). After that we explain the loss function, which is based on Noise-Contrastive Estimation. Lastly, we discuss work related to CPC.

2.1 Motivation and Intuitions

The main intuition behind our model is to learn representations that encode the underlying shared information between different parts of the (high-dimensional) signal, while discarding low-level information and noise that is more local. In time series and high-dimensional modeling, approaches that use next-step prediction exploit the local smoothness of the signal. When predicting further into the future, the amount of shared information becomes much lower, and the model needs to infer more global structure. These 'slow features' wiskott2002slow that span many time steps are often more interesting (e.g., phonemes and intonation in speech, objects in images, or the story line in books).

One of the challenges of predicting high-dimensional data is that unimodal losses such as mean-squared error and cross-entropy are not very useful, and powerful conditional generative models which need to reconstruct every detail in the data are usually required. But these models are computationally intense, and waste capacity at modeling the complex relationships in the data x, often ignoring the context c. For example, images may contain thousands of bits of information while the high-level latent variables such as the class label contain much less information (10 bits for 1,024 categories). This suggests that modeling p(x|c) directly may not be optimal for the purpose of extracting shared information between x and c. When predicting future information we instead encode the target x (future) and context c (present) into compact distributed vector representations (via non-linear learned mappings) in a way that maximally preserves the mutual information of the original signals x and c, defined as

I(x; c) = \sum_{x, c} p(x, c) \log \frac{p(x | c)}{p(x)}    (1)

By maximizing the mutual information between the encoded representations (which is bounded by the MI between the input signals), we extract the underlying latent variables the inputs have in common.
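As a concrete sanity check, Equation 1 can be evaluated directly for a small discrete joint distribution. The following sketch (the distributions and their numbers are hypothetical, chosen only for illustration) computes I(x; c) in nats:

```python
import math

def mutual_information(p_joint):
    """I(x; c) = sum over (x, c) of p(x, c) * log(p(x|c) / p(x)), in nats."""
    p_x, p_c = {}, {}
    for (x, c), p in p_joint.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_c[c] = p_c.get(c, 0.0) + p
    mi = 0.0
    for (x, c), p in p_joint.items():
        if p > 0:
            # p / p_c[c] is p(x|c); divide by the marginal p(x)
            mi += p * math.log((p / p_c[c]) / p_x[x])
    return mi

# Hypothetical joint over two binary variables that share information.
print(mutual_information({(0, 0): 0.4, (0, 1): 0.1,
                          (1, 0): 0.1, (1, 1): 0.4}))  # ~0.19 nats

# An independent joint carries no shared information.
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))  # 0.0
```

The correlated joint yields a positive MI, while the independent one gives exactly zero, matching the intuition that MI measures shared information.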

2.2 Contrastive Predictive Coding

Figure 1 shows the architecture of Contrastive Predictive Coding models. First, a non-linear encoder g_enc maps the input sequence of observations x_t to a sequence of latent representations z_t = g_enc(x_t), potentially with a lower temporal resolution. Next, an autoregressive model g_ar summarizes all z_{<=t} in the latent space and produces a context latent representation c_t = g_ar(z_{<=t}).

As argued in the previous section, we do not predict future observations x_{t+k} directly with a generative model p_k(x_{t+k} | c_t). Instead we model a density ratio which preserves the mutual information between x_{t+k} and c_t (Equation 1) as follows (see the next sub-section for further details):

f_k(x_{t+k}, c_t) \propto \frac{p(x_{t+k} | c_t)}{p(x_{t+k})}    (2)

where \propto stands for 'proportional to' (i.e., up to a multiplicative constant). Note that the density ratio f can be unnormalized (it does not have to integrate to 1). Although any positive real score can be used here, we use a simple log-bilinear model:

f_k(x_{t+k}, c_t) = \exp( z_{t+k}^T W_k c_t )    (3)

In our experiments a linear transformation W_k c_t is used for the prediction, with a different W_k for every step k. Alternatively, non-linear networks or recurrent neural networks could be used.
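A minimal sketch of this log-bilinear score might look as follows; the dimensions, the random weights W_k and the inputs are all illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the paper does not prescribe these.
d_z, d_c, K = 4, 3, 12                 # latent dim, context dim, max steps

W = rng.normal(size=(K, d_z, d_c))     # one linear map W_k per step k

def score(z_future, c_t, k):
    """Log-bilinear score f_k = exp(z_{t+k}^T W_k c_t).

    W_k c_t is the prediction of the future latent, so the score measures
    agreement between predicted and actual future latents; it is positive
    and unnormalized, as the density-ratio formulation allows.
    """
    prediction = W[k] @ c_t            # shape (d_z,)
    return np.exp(z_future @ prediction)

z_future = rng.normal(size=d_z)
c_t = rng.normal(size=d_c)
print(score(z_future, c_t, k=0))       # a positive density-ratio estimate
```

Because the score only enters the loss through ratios against other scores, the missing normalization constant cancels out.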

By using a density ratio f(x_{t+k}, c_t) and inferring z_{t+k} with an encoder, we relieve the model from modeling the high-dimensional distribution x_{t+k}. Although we cannot evaluate p(x) or p(x|c) directly, we can use samples from these distributions, allowing us to use techniques such as Noise-Contrastive Estimation gutmann2010noise ; mnih2012fast ; jozefowicz2016exploring and Importance Sampling bengio2008is that are based on comparing the target value with randomly sampled negative values.

In the proposed model, either of z_t and c_t could be used as representation for downstream tasks. The autoregressive model output c_t can be used if extra context from the past is useful. One such example is speech recognition, where the receptive field of z_t might not contain enough information to capture phonetic content. In other cases, where no additional context is required, z_t might instead be better. If the downstream task requires one representation for the whole sequence, as in e.g. image classification, one can pool the representations from either z_t or c_t over all locations.

Finally, note that any type of encoder and autoregressive model can be used in the proposed framework. For simplicity we opted for standard architectures such as strided convolutional layers with resnet blocks for the encoder, and GRUs cho2014learning for the autoregressive model. More recent advancements in autoregressive modeling such as masked convolutional architectures oord2016wavenet ; aaron2016pixelcnn or self-attention networks attentionNIPS2017 could help improve results further.

2.3 InfoNCE Loss and Mutual Information Estimation

Both the encoder and autoregressive model are trained to jointly optimize a loss based on NCE, which we will call InfoNCE. Given a set X = {x_1, ..., x_N} of N random samples containing one positive sample from p(x_{t+k} | c_t) and N - 1 negative samples from the 'proposal' distribution p(x_{t+k}), we optimize:

\mathcal{L}_N = - \mathbb{E}_X \left[ \log \frac{f_k(x_{t+k}, c_t)}{\sum_{x_j \in X} f_k(x_j, c_t)} \right]    (4)

Optimizing this loss will result in f_k(x_{t+k}, c_t) estimating the density ratio in Equation 2. This can be shown as follows.

The loss in Equation 4 is the categorical cross-entropy of classifying the positive sample correctly, with \frac{f_k}{\sum_X f_k} being the prediction of the model. Let us write the optimal probability for this loss as p(d = i | X, c_t), with [d = i] being the indicator that sample x_i is the 'positive' sample. The probability that sample x_i was drawn from the conditional distribution p(x_{t+k} | c_t) rather than the proposal distribution p(x_{t+k}) can be derived as follows:

p(d = i | X, c_t) = \frac{p(x_i | c_t) \prod_{l \neq i} p(x_l)}{\sum_{j=1}^{N} p(x_j | c_t) \prod_{l \neq j} p(x_l)} = \frac{\frac{p(x_i | c_t)}{p(x_i)}}{\sum_{j=1}^{N} \frac{p(x_j | c_t)}{p(x_j)}}    (5)

As we can see, the optimal value for f(x_{t+k}, c_t) in Equation 4 is proportional to \frac{p(x_{t+k} | c_t)}{p(x_{t+k})}, and this is independent of the choice of the number of negative samples N - 1.

Though not required for training, we can evaluate a lower bound on the mutual information between the variables c_t and x_{t+k} as follows:

I(x_{t+k}; c_t) \geq \log(N) - \mathcal{L}_N,

which becomes tighter as N becomes larger. Also observe that minimizing the InfoNCE loss \mathcal{L}_N maximizes a lower bound on the mutual information. For more details see the Appendix.
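The InfoNCE loss and the resulting MI lower bound can be sketched in a few lines of numpy. The column layout (positive sample in column 0) and all numbers here are illustrative conventions, not prescribed by the paper:

```python
import numpy as np

def info_nce_loss(scores):
    """InfoNCE loss (Equation 4) for a batch of score rows.

    scores: array of shape (B, N) holding f_k values, with the positive
    sample in column 0 and N-1 negatives in the remaining columns.
    """
    log_scores = np.log(scores)
    # numerically stable log-softmax normalizer over each row
    m = log_scores.max(axis=1, keepdims=True)
    log_norm = (m + np.log(np.exp(log_scores - m)
                           .sum(axis=1, keepdims=True))).squeeze(1)
    # negative log-probability of the positive sample, averaged over rows
    return float(-(log_scores[:, 0] - log_norm).mean())

rng = np.random.default_rng(1)
N = 8
scores = np.exp(rng.normal(size=(64, N)))
scores[:, 0] *= 50.0                 # positives systematically easier to spot
loss = info_nce_loss(scores)
mi_lower_bound = np.log(N) - loss    # I(x_{t+k}; c_t) >= log(N) - L_N
print(loss, mi_lower_bound)
```

With uniform scores the loss reduces to log(N) and the bound collapses to zero; the more the positives stand out, the larger the implied mutual information.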

2.4 Related Work

CPC is a new method that combines predicting future observations (predictive coding) with a probabilistic contrastive loss (Equation 4). This allows us to extract slow features, which maximize the mutual information of observations over long time horizons. Contrastive losses and predictive coding have individually been used in different ways before, which we will now discuss.

Contrastive loss functions have been used by many authors in the past. For example, the techniques proposed by chopra2005learning ; weinberger2009distance ; schroff2015facenet were based on triplet losses using a max-margin approach to separate positive from negative examples. More recent work includes Time Contrastive Networks sermanet2017time which proposes to minimize distances between embeddings from multiple viewpoints of the same scene, whilst maximizing distances between embeddings extracted from different timesteps. In Time Contrastive Learning NIPS2016_6395 a contrastive loss is used to predict the segment-ID of multivariate time-series as a way to extract features and perform nonlinear ICA.

There has also been work and progress on defining prediction tasks from related observations as a way to extract useful representations, and many of these have been applied to language. In Word2Vec mikolov2013efficient neighbouring words are predicted using a contrastive loss. Skip-thought vectors kiros2015skip and Byte mLSTM radford2017learning are alternatives which go beyond word prediction with a Recurrent Neural Network, and use maximum likelihood over sequences of observations. In computer vision, wang2015unsupervised use a triplet loss on tracked video patches so that patches from the same object at different timesteps are more similar to each other than to random patches. Doersch_2015_ICCV ; noroozi2016unsupervised propose to predict the relative position of patches in an image, and in zhang2016colorful color values are predicted from greyscale images.

3 Experiments

We present benchmarks on four different application domains: speech, images, natural language and reinforcement learning. For every domain we train CPC models and probe what the representations contain with either a linear classification task or qualitative evaluations, and in reinforcement learning we measure how the auxiliary CPC loss speeds up learning of the agent.

3.1 Audio

Figure 2: t-SNE visualization of audio (speech) representations for a subset of 10 speakers (out of 251). Every color represents a different speaker.
Figure 3: Average accuracy of predicting the positive sample in the contrastive loss for 1 to 20 latent steps in the future of a speech waveform. The model predicts up to 200ms in the future as every step consists of 10ms of audio.
  Method                    ACC
  Phone classification
    Random initialization   27.6
    MFCC features           39.7
    CPC                     64.6
    Supervised              74.6
  Speaker classification
    Random initialization    1.87
    MFCC features           17.6
    CPC                     97.4
    Supervised              98.5
Table 1: LibriSpeech phone and speaker classification results. For phone classification there are 41 possible classes and for speaker classification 251. All models used the same architecture and the same audio input sizes.
  Method                    ACC
  #steps predicted
    2 steps                 28.5
    4 steps                 57.6
    8 steps                 63.6
    12 steps                64.6
    16 steps                63.8
  Negative samples from
    Mixed speaker           64.6
    Same speaker            65.5
    Mixed speaker (excl.)   57.3
    Same speaker (excl.)    64.6
    Current sequence only   65.2
Table 2: LibriSpeech phone classification ablation experiments. More details can be found in Section 3.1.

For audio, we use a 100-hour subset of the publicly available LibriSpeech dataset panayotov2015librispeech . Although the dataset does not provide labels other than the raw text, we obtained force-aligned phone sequences with the Kaldi toolkit povey2011kaldi and pre-trained models on LibriSpeech (www.kaldi-asr.org/downloads/build/6/trunk/egs/librispeech/). We have made the aligned phone labels and our train/test split available for download on Google Drive (https://drive.google.com/drive/folders/1BhJ2umKH3whguxMwifaKtSra0TgAbtfb). The dataset contains speech from 251 different speakers.

The encoder architecture g_enc used in our experiments consists of a strided convolutional neural network that runs directly on the 16 kHz PCM audio waveform. We use five convolutional layers with strides [5, 4, 2, 2, 2], filter sizes [10, 8, 4, 4, 4] and 512 hidden units with ReLU activations. The total downsampling factor of the network is 160, so that there is a feature vector for every 10ms of speech, which is also the rate of the phoneme sequence labels obtained with Kaldi. We then use a GRU RNN cho2014learning for the autoregressive part of the model, with a 256-dimensional hidden state. The output of the GRU at every timestep is used as the context c_t, from which we predict 12 timesteps in the future using the contrastive loss. We train on sampled audio windows of length 20480. We use the Adam optimizer kingma2014adam with a learning rate of 2e-4, and use 8 GPUs each with a minibatch of 8 examples from which the negative samples in the contrastive loss are drawn. The model is trained until convergence, which happens roughly at 300,000 updates.
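The arithmetic behind these numbers can be checked directly:

```python
# Check the encoder geometry described above: strides [5, 4, 2, 2, 2]
# compose to a total downsampling factor of 160, so a 20480-sample window
# of 16 kHz audio yields one latent vector per 10 ms of speech.
strides = [5, 4, 2, 2, 2]

factor = 1
for s in strides:
    factor *= s

window = 20480                         # training window length in samples
latents = window // factor             # latent timesteps per window
ms_per_latent = 1000 * factor / 16000  # milliseconds of audio per latent

print(factor, latents, ms_per_latent)  # 160 128 10.0
```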

Figure 3 shows the accuracy of the model at predicting latents in the future, from 1 to 20 timesteps. We report the average number of times the logit for the positive sample is higher than for the negative samples in the probabilistic contrastive loss. This figure also shows that the objective is neither trivial nor impossible, and as expected the prediction task becomes harder as the target is further away.

To understand the representations extracted by CPC, we measure the phone prediction performance with a linear classifier trained on top of these features, which shows how linearly separable the relevant classes are under these features. We extract the outputs of the GRU (256 dimensional), i.e. c_t, for the whole dataset after model convergence and train a multi-class linear logistic regression classifier. The results are shown in Table 1 (top). We compare the accuracy with three baselines: representations from a randomly initialized model (i.e., the encoder and GRU are untrained), MFCC features, and a model that is trained end-to-end supervised with the labeled data. The randomly initialized and supervised models have the same architecture as the one used to extract the CPC representations. The fully supervised model serves as an indication of what is achievable with this architecture. We also found that not all the information encoded is linearly accessible: when we used a single hidden layer instead, the accuracy increased from 64.6 to 72.5, which is closer to the accuracy of the fully supervised model.

Table 2 gives an overview of two ablation studies of CPC for phone classification. In the first set we vary the number of steps the model predicts, showing that predicting multiple steps is important for learning useful features. In the second set we compare different strategies for drawing negative samples, all predicting 12 steps (which gave the best result in the first ablation). In the mixed-speaker experiment the negative samples contain examples of different speakers (first row), in contrast to the same-speaker experiment (second row). In the third and fourth experiments we exclude the current sequence when drawing negative samples (so only other examples in the minibatch are present in X), and in the last experiment we only draw negative samples from within the current sequence (thus all samples are from the same speaker).

Beyond phone classification, Table 1 (bottom) shows the accuracy of predicting speaker identity (out of 251) with a linear classifier trained on the same representation (we do not average utterances over time). Interestingly, CPCs capture both speaker identity and speech contents, as demonstrated by the good accuracies attained with a simple linear classifier, which also comes close to the oracle, fully supervised networks.

Additionally, Figure 2 shows a t-SNE visualization maaten2008visualizing of how discriminative the embeddings are for speaker voice-characteristics. It is important to note that the window size (the maximum context size for the GRU) has a big impact on the performance, and longer segments would give better results. Our model had a maximum of 20480 timesteps to process, which is slightly longer than a second.

3.2 Vision

Figure 4: Visualization of Contrastive Predictive Coding for images (2D adaptation of Figure 1).
Figure 5: Every row shows image patches that activate a certain neuron in the CPC architecture.

In our visual representation experiments we use the ILSVRC ImageNet competition dataset ILSVRC15 . The ImageNet dataset has been used to evaluate unsupervised vision models by many authors wang2015unsupervised ; Doersch_2015_ICCV ; donahue2016adversarial ; zhang2016colorful ; noroozi2016unsupervised ; doersch2017multi . We follow the same setup as doersch2017multi and use a ResNet v2 101 architecture he2016identity as the image encoder to extract CPC representations (note that this encoder is not pretrained). We did not use Batch-Norm ioffe2015batch . After unsupervised training, a linear layer is trained to measure classification accuracy on ImageNet labels.

The training procedure is as follows: from a 256x256 image we extract a 7x7 grid of 64x64 crops with 32 pixels overlap. Simple data augmentation proved helpful on both the 256x256 images and the 64x64 crops. The 256x256 images are randomly cropped from a 300x300 image, horizontally flipped with a probability of 50% and converted to greyscale. For each of the 64x64 crops we randomly take a 60x60 subcrop and pad it back to a 64x64 image.
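A possible implementation of this patch extraction (a sketch; the paper does not specify code, and the augmentations are omitted) is:

```python
import numpy as np

def extract_patch_grid(image, patch=64, stride=32):
    """Cut an image into the overlapping patch grid described above.

    With a 256x256 input, 64x64 patches and a 32-pixel stride this gives
    the 7x7 grid used in the paper: (256 - 64) // 32 + 1 = 7 per axis.
    """
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    return np.stack([
        np.stack([image[r * stride:r * stride + patch,
                        c * stride:c * stride + patch]
                  for c in range(cols)])
        for r in range(rows)
    ])

image = np.zeros((256, 256, 3))
print(extract_patch_grid(image).shape)  # (7, 7, 64, 64, 3)
```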

Each crop is then encoded by the ResNet-v2-101 encoder. We use the outputs from the third residual block, and spatially mean-pool to get a single 1024-d vector per 64x64 patch. This results in a 7x7x1024 tensor. Next, we use a PixelCNN-style autoregressive model aaron2016pixelcnn (a convolutional row-GRU PixelRNN aaron2016pixelrnn gave similar results) to make predictions about the latent activations in following rows top-to-bottom, visualized in Figure 4. We predict up to five rows from the 7x7 grid, and we apply the contrastive loss for each patch in the row. We used the Adam optimizer with a learning rate of 2e-4 and trained on 32 GPUs, each with a batch size of 16.

For the linear classifier trained on top of the CPC features we use SGD with a momentum of 0.9, a learning rate schedule of 0.1, 0.01 and 0.001 for 50k, 25k and 10k updates and batch size of 2048 on a single GPU. Note that when training the linear classifier we first spatially mean-pool the 7x7x1024 representation to a single 1024 dimensional vector. This is slightly different from doersch2017multi which uses a 3x3x1024 representation without pooling, and thus has more parameters in the supervised linear mapping (which could be advantageous).

Tables 3 and 4 show the top-1 and top-5 classification accuracies compared with the state-of-the-art. Despite being relatively domain agnostic, CPCs improve upon the state-of-the-art by 9% absolute in top-1 accuracy, and 4% absolute in top-5 accuracy.

  Method                                  Top-1 ACC
  Using AlexNet conv5
    Video wang2015unsupervised                 29.8
    Relative Position Doersch_2015_ICCV        30.4
    BiGan donahue2016adversarial               34.8
    Colorization zhang2016colorful             35.2
    Jigsaw noroozi2016unsupervised *           38.1
  Using ResNet-V2
    Motion Segmentation doersch2017multi       27.6
    Exemplar doersch2017multi                  31.5
    Relative Position doersch2017multi         36.2
    Colorization doersch2017multi              39.6
    CPC                                        48.7
Table 3: ImageNet top-1 unsupervised classification results. *Jigsaw is not directly comparable to the other AlexNet results because of architectural differences.
  Method                                  Top-5 ACC
    Motion Segmentation (MS)                   48.3
    Exemplar (Ex)                              53.1
    Relative Position (RP)                     59.2
    Colorization (Col)                         62.5
    Combination of MS + Ex + RP + Col          69.3
    CPC                                        73.6
Table 4: ImageNet top-5 unsupervised classification results. Previous results with MS, Ex, RP and Col were taken from doersch2017multi and are the best reported results on this task.

3.3 Natural Language

Method MR CR Subj MPQA TREC
Paragraph-vector le2014distributed 74.8 78.1 90.5 74.2 91.8
Skip-thought vector kiros2015skip 75.5 79.3 92.1 86.9 91.4
Skip-thought + LN ba2016layernorm 79.5 82.6 93.4 89.0 -
CPC 76.9 80.1 91.2 87.7 96.8
Table 5: Classification accuracy on five common NLP benchmarks. We follow the same transfer learning setup from Skip-thought vectors kiros2015skip and use the BookCorpus dataset as source. le2014distributed is an unsupervised approach to learning sentence-level representations. kiros2015skip is an alternative unsupervised learning approach. ba2016layernorm is the same skip-thought model with layer normalization trained for 1M iterations.

Our natural language experiments follow closely the procedure from kiros2015skip which was used for the skip-thought vectors model. We first learn our unsupervised model on the BookCorpus dataset zhu2015aligning , and evaluate the capability of our model as a generic feature extractor by using CPC representations for a set of classification tasks. To cope with words that are not seen during training, we employ vocabulary expansion the same way as kiros2015skip , where a linear mapping is constructed between word2vec and the word embeddings learned by the model.

For the classification tasks we used the following datasets: movie review sentiment (MR) pang2005seeing , customer product reviews (CR) hu2004mining , subjectivity/objectivity (Subj) pang2004sentimental , opinion polarity (MPQA) wiebe2005annotating and question-type classification (TREC) li2002learning . As in kiros2015skip we train a logistic regression classifier and evaluate with 10-fold cross-validation for MR, CR, Subj and MPQA, and use the train/test split for TREC. The L2 regularization weight was chosen via cross-validation (therefore nested cross-validation for the first four datasets).

Our model consists of a simple sentence encoder (a 1D-convolution + ReLU + mean-pooling) that embeds a whole sentence into a 2400-dimensional vector, followed by a GRU (2400 hidden units) which predicts up to 3 future sentence embeddings with the contrastive loss. We used the Adam optimizer with a learning rate of 2e-4, trained on 8 GPUs, each with a batch size of 64. We found that more advanced sentence encoders did not significantly improve the results, which may be due to the simplicity of the transfer tasks (e.g., in MPQA most datapoints consist of one or a few words), and the fact that bag-of-words models usually perform well on many NLP tasks wang2012nlpclassification .
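A rough sketch of such a sentence encoder, with made-up small dimensions in place of the paper's 2400-d embeddings and random untrained weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up small dimensions; the paper uses 2400-d sentence embeddings.
vocab, d_emb, d_out, kernel = 1000, 64, 128, 3

E = 0.1 * rng.normal(size=(vocab, d_emb))           # word embedding table
W = 0.1 * rng.normal(size=(kernel * d_emb, d_out))  # 1D-conv filter bank

def encode_sentence(token_ids):
    """Sentence encoder sketch: 1D-convolution + ReLU + mean-pooling."""
    x = E[token_ids]                                # (T, d_emb)
    T = len(token_ids)
    # express the 1D convolution as a matmul over sliding windows
    windows = np.stack([x[t:t + kernel].reshape(-1)
                        for t in range(T - kernel + 1)])
    h = np.maximum(windows @ W, 0.0)                # ReLU
    return h.mean(axis=0)                           # mean-pool over time

z = encode_sentence([5, 17, 42, 7, 99])
print(z.shape)  # (128,)
```

The mean-pooling makes the output length-independent, which is what lets one fixed-size vector summarize a whole sentence.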

Results on the evaluation tasks are shown in Table 5, where we compare our model against other models that have been evaluated on the same datasets. The performance of our method is very similar to that of the skip-thought vector model, with the advantage that it does not require a powerful LSTM as word-level decoder, and is therefore much faster to train. Although this is a standard transfer learning benchmark, we found that models that learn better relationships in the children's books did not necessarily perform better on the target tasks (which are very different: movie reviews etc.). We note that better results zhao2015self ; radford2017learning have been published on these target datasets, by transfer learning from a different source task.

3.4 Reinforcement Learning

Figure 6: Reinforcement Learning results for 5 DeepMind Lab tasks used in lasse2018impala . Black: batched A2C baseline, Red: with auxiliary contrastive loss.

Finally, we evaluate the proposed unsupervised learning approach on five reinforcement learning tasks in the 3D environments of DeepMind Lab beattie2016deepmind : rooms_watermaze, explore_goal_locations_small, seekavoid_arena_01, lasertag_three_opponents_small and rooms_keys_doors_puzzle.

This setup differs from the previous three. Here, we take the standard batched A2C mnih2016asynchronous agent as base model and add CPC as an auxiliary loss. We do not use a replay buffer, so the predictions have to adapt to the changing behavior of the policy. The learned representation encodes a distribution over its future observations.

Following the same approach as lasse2018impala , we perform a random search over the entropy regularization weight, the learning rate and the epsilon hyperparameter of RMSProp hinton2012neural . The unroll length for the A2C is 100 steps and we predict up to 30 steps in the future to derive the contrastive loss. The baseline agent consists of a convolutional encoder which maps every input frame into a single vector, followed by a temporal LSTM. We use the same encoder as in the baseline agent and only add the linear prediction mappings for the contrastive loss, resulting in minimal overhead. This also showcases the simplicity of implementing our method on top of an existing architecture that has been designed and tuned for a particular task. We refer to lasse2018impala for all other hyperparameter and implementation details.

Figure 6 shows that for 4 out of the 5 games the performance of the agent improves significantly with the contrastive loss after training on 1 billion frames. For lasertag_three_opponents_small the contrastive loss neither helps nor hurts. We suspect that this is due to the task design, which does not require memory and thus yields a purely reactive policy.

4 Conclusion

In this paper we presented Contrastive Predictive Coding (CPC), a framework for extracting compact latent representations to encode predictions over future observations. CPC combines autoregressive modeling and noise-contrastive estimation with intuitions from predictive coding to learn abstract representations in an unsupervised fashion. We tested these representations in a wide variety of domains: audio, images, natural language and reinforcement learning and achieve strong or state-of-the-art performance when used as stand-alone features. The simplicity and low computational requirements to train the model, together with the encouraging results in challenging reinforcement learning domains when used in conjunction with the main loss are exciting developments towards useful unsupervised learning that applies universally to many more data modalities.

5 Acknowledgements

We would like to thank Andriy Mnih, Andrew Zisserman, Alex Graves and Carl Doersch for their helpful comments on the paper and Lasse Espeholt for making the A2C baseline available.

References

Appendix A Appendix

A.1 Estimating the Mutual Information with InfoNCE

By optimizing InfoNCE, the CPC loss we defined in Equation 4, we are maximizing the mutual information between c_t and x_{t+k} (which is bounded by the MI between the underlying input signals). This can be shown as follows.

As already shown in Section 2.3, the optimal value for f(x_{t+k}, c_t) is given by \frac{p(x_{t+k} | c_t)}{p(x_{t+k})}. Inserting this back into Equation 4 and splitting X into the positive example and the negative examples X_neg results in:

\mathcal{L}_N^{opt} = - \mathbb{E}_X \log \left[ \frac{\frac{p(x_{t+k} | c_t)}{p(x_{t+k})}}{\frac{p(x_{t+k} | c_t)}{p(x_{t+k})} + \sum_{x_j \in X_{neg}} \frac{p(x_j | c_t)}{p(x_j)}} \right]    (6)

= \mathbb{E}_X \log \left[ 1 + \frac{p(x_{t+k})}{p(x_{t+k} | c_t)} \sum_{x_j \in X_{neg}} \frac{p(x_j | c_t)}{p(x_j)} \right]    (7)

\approx \mathbb{E}_X \log \left[ 1 + \frac{p(x_{t+k})}{p(x_{t+k} | c_t)} (N - 1) \, \mathbb{E}_{x_j} \frac{p(x_j | c_t)}{p(x_j)} \right]    (8)

= \mathbb{E}_X \log \left[ 1 + \frac{p(x_{t+k})}{p(x_{t+k} | c_t)} (N - 1) \right]    (9)

\geq \mathbb{E}_X \log \left[ \frac{p(x_{t+k})}{p(x_{t+k} | c_t)} N \right]    (10)

= - I(x_{t+k}; c_t) + \log(N)    (11)

Therefore, I(x_{t+k}; c_t) \geq \log(N) - \mathcal{L}_N. This trivially also holds for other f that obtain a worse (higher) \mathcal{L}_N. The approximation in Equation 8 becomes more accurate as N increases; at the same time \log(N) also increases, so it is useful to use large values of N.
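The bound I(x_{t+k}; c_t) >= log(N) - L_N can be checked numerically on a toy discrete joint distribution by plugging the optimal f (the exact density ratio) into the InfoNCE loss; the distribution and sample counts below are illustrative:

```python
import math
import random

random.seed(0)

# Known joint over binary (x, c); hypothetical numbers for illustration.
p_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {0: 0.5, 1: 0.5}
p_c = {0: 0.5, 1: 0.5}

true_mi = sum(p * math.log((p / p_c[c]) / p_x[x])
              for (x, c), p in p_joint.items())

def ratio(x, c):
    """Optimal f: the exact density ratio p(x|c) / p(x)."""
    return (p_joint[(x, c)] / p_c[c]) / p_x[x]

def sample_joint():
    r, acc = random.random(), 0.0
    for xc, p in p_joint.items():
        acc += p
        if r < acc:
            return xc
    return xc  # guard against floating-point rounding

N, trials = 8, 20000
loss = 0.0
for _ in range(trials):
    x_pos, c = sample_joint()
    # one positive plus N-1 negatives drawn from the marginal p(x)
    xs = [x_pos] + [random.choice([0, 1]) for _ in range(N - 1)]
    fs = [ratio(x, c) for x in xs]
    loss -= math.log(fs[0] / sum(fs))
loss /= trials

print(math.log(N) - loss, "<=", true_mi)  # the bound holds up to MC noise
```

With the optimal scores the estimated lower bound comes out below the closed-form MI, as the derivation above predicts.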

InfoNCE is also related to MINE [54]. Without loss of generality, let us write f(x, c) = e^{F(x, c)}; then inserting this into Equation 4 gives:

\mathbb{E}_X \left[ \log \frac{e^{F(x, c)}}{\sum_{x_j \in X} e^{F(x_j, c)}} \right]    (12)

= \mathbb{E}_X \left[ F(x, c) - \log \sum_{x_j \in X} e^{F(x_j, c)} \right]    (13)

= \mathbb{E}_X \left[ F(x, c) - \log \left( e^{F(x, c)} + \sum_{x_j \in X_{neg}} e^{F(x_j, c)} \right) \right]    (14)

\leq \mathbb{E}_X \left[ F(x, c) - \log \sum_{x_j \in X_{neg}} e^{F(x_j, c)} \right]    (15)

The right-hand side of Equation 15 is equivalent to the MINE estimator (up to a constant), so we maximize a lower bound on this estimator. We found that using MINE directly gave identical performance when the task was non-trivial, but became very unstable when the target was easy to predict from the context (e.g., when predicting a single step in the future and the target overlaps with the context).