Representation Learning with Video Deep InfoMax

07/27/2020 ∙ by R Devon Hjelm, et al.

Self-supervised learning has made unsupervised pretraining relevant again for difficult computer vision tasks. The most effective self-supervised methods involve prediction tasks based on features extracted from diverse views of the data. Deep InfoMax (DIM) is a self-supervised method which leverages the internal structure of deep networks to construct such views, forming prediction tasks between local features which depend on small patches in an image and global features which depend on the whole image. In this paper, we extend DIM to the video domain by leveraging similar structure in spatio-temporal networks, producing a method we call Video Deep InfoMax (VDIM). We find that drawing views from both natural-rate sequences and temporally-downsampled sequences yields results on Kinetics-pretrained action recognition tasks which match or outperform prior state-of-the-art methods that use more costly large-time-scale transformer models. We also examine the effects of data augmentation and fine-tuning methods, achieving SoTA by a large margin when training only on the UCF-101 dataset.




1 Introduction

We begin by considering difficult real-world machine learning tasks involving high-dimensional vision data (e.g., Soomro et al., 2012; Lin et al., 2014; Russakovsky et al., 2015; Everingham et al., 2015; Krishna et al., 2017; Song et al., 2017; Sharma et al., 2018; Sun et al., 2019b, to name a few), such as those used to train complex vision systems to make decisions in real-world environments. Successful vision systems often rely on internal representations that encode information that translates, through further processing, to correct decisions on these downstream tasks. Training systems to encode useful information is a core focus of research on representation learning, which has provided crucial support for the surge in useful AI tools across numerous domains (Brock et al., 2018; Tan and Le, 2019; Liu et al., 2019; Clark et al., 2019; Brown et al., 2020).

While supervised learning has received significant attention, unsupervised learning holds great promise for representation learners because it derives the learning signal from implicit structure in the data itself, rather than from external human annotation (Goodfellow et al., 2016). Annotation can be problematic for various reasons: because it is expensive to collect (Zhu and Goldberg, 2009), because it is prone to human error (Devillers et al., 2005; Nettleton et al., 2010), or because it provides no guarantees when transferring the resulting representations to other tasks (Sylvain et al., 2019).

Unsupervised learning helps to discover representations that more closely resemble a code of the underlying factors of the data, thus implying a broader range of utility in deployment scenarios (Bengio et al., 2013). For instance, successfully identifying an airplane in an image could rely on identifying sky in the background – a feature that may not translate well to other tasks. In contrast, models capable of generating data representative of a full target distribution must contain some understanding of the underlying factors, such as the parts of the airplane and their spatial relationships. Since this kind of understanding encompasses simpler tasks like classification, we expect models that encode these factors to be able to classify well (Donahue et al., 2016; Dumoulin et al., 2016; Donahue and Simonyan, 2019) as a corollary, and to perform well on other downstream tasks (Ulusoy and Bishop, 2005; Pathak et al., 2016; Radford et al., 2018).

Self-supervised learning (Doersch et al., 2015; Doersch and Zisserman, 2017; Kolesnikov et al., 2019; Goyal et al., 2019) involves unsupervised methods which procedurally generate supervised tasks based on informative structure in the underlying data. These methods often require generating multiple views from the data, e.g. by: selecting random crops (Bachman et al., 2019), shuffling patches (Noroozi and Favaro, 2016), rotating images (Jing and Tian, 2018), removing color information (Zhang et al., 2016), applying data augmentation (Tian et al., 2019), etc. The model is then trained to predict various relationships among these views, such as identifying whether two random crops belong to the same image, predicting how to order shuffled patches, predicting the way an image has been rotated, etc. Deciding how to generate useful views is central to the success of self-supervised learning, as not all view-based tasks will encourage the model to learn good features – e.g., models can “cheat” on colorization tasks in the RGB space (Zhang et al., 2016). Despite this challenge, self-supervised learning has led to accelerated progress in representation learning, making unsupervised methods competitive with supervised learning on many vision tasks (Chen et al., 2020; Caron et al., 2020).

While many of the success stories in self-supervised learning for vision come from research on static images, this setting seems inherently limited: static images contain little information about the dynamic nature of objects, their permanence and coherence, and their affordances compared to videos. In this work, we explore learning video representations using Deep InfoMax (Hjelm et al., 2018; Bachman et al., 2019), a contrastive self-supervised method that generates views by data augmentation and by selecting localized features expressed within a deep neural network encoder. DIM provides an opportunity to explore the role of views in the spatio-temporal setting, as these can be generated by leveraging the natural structure available from a spatio-temporal convolutional neural network, in addition to data augmentation and subsampling techniques. We use views in conjunction with a contrastive loss (Oord et al., 2018) to learn representations in an unsupervised way, and we evaluate learned representations by fine-tuning them for downstream action classification. We achieve comparable or better performance on action recognition with less pretraining compared to larger transformer models on the Kinetics dataset (Carreira et al., 2018), and surpass prior work by a large margin with a model trained using only the UCF-101 dataset (Soomro et al., 2012). We also provide a detailed ablation on the fine-tuning procedure, which is crucial to downstream performance but understudied in prior related work (Han et al., 2019; Sun et al., 2019a).

2 Background and Related Work

Machine learning classification systems in vision often surpass human performance on static images (He et al., 2016; Touvron et al., 2020), yet harder tasks that rely on understanding the objectness or affordances of an image subject or subjects, such as localization, semantic segmentation, etc., still pose major challenges for vision systems (Everingham et al., 2015; Hall et al., 2020). This is in contrast to human perception, where understanding these properties is at least as fundamental as the ability to reason about an object’s class, if not more so (Tenenbaum et al., 2011). But the human experience of the world, unlike that of machine learning systems, is dynamic and interactive, so it is perhaps unreasonable to expect a model to learn about objects and their properties without being exposed to data or interacting with it in a similar way.

One wonders then why so many important vision systems meant to function in a dynamic world are largely based on models trained on static images (Yuan et al., 2019). The reasons have been mostly practical and computational. One obvious challenge is that annotation for dynamic/interactive data, like video, is more time-consuming and costly to collect (Vondrick et al., 2013). In addition, labels of dynamic processes recorded in video, such as action, intent, etc., can be highly subjective, and their annotation requires models to adhere to a particular human interpretation of a complex scene (Reidsma and op den Akker, 2008). These challenges suggest that minimizing reliance on supervised learning is an appealing approach for building representation-based systems for video.

In this direction, unsupervised video representations have begun to show promise (Wang and Gupta, 2015; Srivastava et al., 2015). Generative models for video have improved in recent work (Clark et al., 2019), but these may be too demanding: the ability to generate a dynamic scene with pixel-level detail may not be necessary for learning useful video representations. Self-supervised learning asks less of models and has made progress in learning useful video representations that perform well on a number of important benchmark tasks (Zou et al., 2012; Misra et al., 2016; Fernando et al., 2017; Sermanet et al., 2018; Vondrick et al., 2018; Jing and Tian, 2018; Ahsan et al., 2019; Dwibedi et al., 2019; El-Nouby et al., 2019; Kim et al., 2019; Xu et al., 2019). Contrastive learning is a popular type of self-supervised learning, which has proven effective for video (Gordon et al., 2020; Knights et al., 2020) and reinforcement learning tasks (Oord et al., 2018; Anand et al., 2019; Srinivas et al., 2020; Mazoure et al., 2020; Schwarzer et al., 2020). The most successful contrastive methods for video borrow components from models that process language data, such as recurrent neural networks (Han et al., 2019) and transformers (Sun et al., 2019a). The aptness of these architectures and their associated priors for video is based on analogy to discrete, language-like data, but it is still unclear whether these models justify their expense in the video setting. In contrast, we focus on contrastive learning using only convolutions to process the spatio-temporal data, leveraging the natural internal structure of a spatio-temporal CNN to yield diverse views of the data. Finally, we use downsampling to generate multiple views as additional augmentation (reminiscent of slow-fast networks for video processing, see Feichtenhofer et al., 2019), and we show that this is important to downstream performance.

3 Method

Figure 1: Left: High-level view of VDIM. We first transform a sequence of frames from a video clip using a randomly generated view operator. This splits the sequence into multiple subsequences, which may overlap, and applies random data augmentation to each subsequence. We then process each subsequence independently using a spatio-temporal encoder to extract features representing different views of the data, which differ due to the random view operator and limited receptive fields in the encoder. We then maximize mutual information between the features from the various views. Right: Contrastive learning on local features. Each set of local features is used in a set of k-way classification tasks. Each pair of local features from the “antecedent” features (in blue, can be thought of as a “query”) and the correct “consequent” features (in green, can be thought of as the correct “key”) are compared against all local features coming from negative samples.

Our model draws components from Augmented Multiscale Deep InfoMax (AMDIM, Bachman et al., 2019), whose architecture was specifically designed to encode features that depend on limited receptive fields. As is standard in AMDIM, our model has three primary components: a “view generator”, a joint encoder + mutual information-estimating architecture, and a loss based on a lower bound on the mutual information. We illustrate our model in Figure 1.


3.1 The view generator

We process the input, which in our case is a sequence of frames (each example is a 4D tensor with dimensions corresponding to time, height, width, and color channels), with a random augmentation operator that generates different views. This can be understood as a single view operator that maps an input sequence to a set of subsequences. For example, the operator might first sample two subsequences, apply downsampling to one or both, then apply pixel-wise data augmentation to every frame in each subsequence.

As is evident in other self-supervised and contrastive methods, the exact form of the “view operator” is crucial to success in downstream tasks (Tian et al., 2019; Chen et al., 2020). In this work, we compose the view operator from the following operations:

  1. First, a sequence of frames (a video clip) is drawn from the data. In order to standardize the input to a fixed number of time steps, we randomly draw a set of clips from a fixed-length moving window with a stride of 1. If the total clip length is less than this window size, we fill by repeating the last frame.

  2. The clip is then split into two possibly overlapping subsequences of frames. The sizes of these subsequences depend on the downsampling described below: they are chosen so that, after downsampling, both final subsequences have the same length. For example, for two disjoint subsequences where only the second is downsampled, the second receives proportionally more of the frames than the first.

  3. Each subsequence is independently downsampled by a predetermined factor, resulting in two subsequences of equal length.

  4. Data augmentation is applied to each subsequence independently, with every frame within a subsequence being augmented with the same augmentation parameters. The augmentations chosen here are the standard ones from image tasks: random resized crop, random color jitter, random grayscale, and random rotation. As an alternative to random color jitter and grayscale, we can convert each frame from RGB to Lab and then randomly drop a color channel.

As the above augmentation is random, we can consider this process as drawing a view operator from a distribution, which we call the “view generator”. The view generator is specified by the operations, their ordering, and any parameters associated with random sampling, such as uniform sampling over brightness or contrast. Because the operator generates multiple views from the same input, we refer to its output as the set of augmented views of that input.
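
The four steps above can be sketched as follows. The helper names, the fixed-window convention, and the single flip augmentation are illustrative assumptions for exposition, not the paper's exact operator:

```python
import numpy as np

def sample_clip(video, window, rng):
    """Step 1: draw a fixed-length window from the clip; if the clip is
    shorter than the window, pad by repeating the last frame."""
    if len(video) < window:
        pad = np.repeat(video[-1:], window - len(video), axis=0)
        video = np.concatenate([video, pad], axis=0)
    start = rng.integers(0, len(video) - window + 1)
    return video[start:start + window]

def view_operator(clip, view_len, downsample, rng):
    """Steps 2-4: split into two (here overlapping) subsequences, downsample
    the second, and apply one augmentation (a horizontal flip, as a stand-in
    for the full set) with shared parameters across each subsequence."""
    a = clip[:view_len]                           # natural-rate view
    b = clip[:view_len * downsample:downsample]   # temporally downsampled view
    views = []
    for sub in (a, b):
        if rng.random() < 0.5:                    # same decision for every frame
            sub = sub[:, :, ::-1]                 # flip the width axis (T, H, W, C)
        views.append(sub)
    return views
```

Both subsequences come out with the same temporal length, so they can be batched through the same encoder.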

3.2 Architecture

Our architecture is a combination of those found in AMDIM and R(2+1)D (Tran et al., 2018), similar to S3D (Xie et al., 2017) as used in our strongest baseline (Sun et al., 2019a). R(2+1)D is a convolutional block that first applies a 2D convolution over the spatial dimensions of feature maps, followed by a 1D convolution over the temporal dimension. In our version, we take the AMDIM base architecture, expanding convolutions that directly follow spatial convolutions so that they operate as 1D convolutional layers over the temporal dimension. This is more lightweight than using full 3D convolutions and maintains the property of the AMDIM architecture that features relatively deep in the network are still “local” – that is, they correspond to limited spatio-temporal patches of the input (i.e., do not have receptive fields that cover the complete input, as is the case with a standard ResNet, see He et al., 2016).
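
To see why the factorization is lighter than a full 3D convolution, one can compare parameter counts. The sketch below fixes the intermediate channel width to the output width for simplicity (an assumption; R(2+1)D as published chooses the intermediate width to match the 3D parameter budget):

```python
def conv3d_params(c_in, c_out, t, k):
    """Weights in a full t x k x k spatio-temporal convolution (no bias)."""
    return c_in * c_out * t * k * k

def conv2plus1d_params(c_in, c_out, t, k, c_mid=None):
    """Weights in the factorized block: a 1 x k x k spatial convolution
    into c_mid channels, followed by a t x 1 x 1 temporal convolution."""
    c_mid = c_out if c_mid is None else c_mid
    return c_in * c_mid * k * k + c_mid * c_out * t

full = conv3d_params(64, 64, t=3, k=3)            # 64*64*27 weights
factored = conv2plus1d_params(64, 64, t=3, k=3)   # 64*64*9 + 64*64*3 weights
```

With these (hypothetical) channel widths, the factorized block uses well under half the weights of the full 3D kernel.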

The encoder transforms augmented video subsequences into a set of feature spaces, one per layer. Each feature space is a product of local feature spaces, each spanned by features that depend on a specific spatio-temporal “patch” of the input. The feature spaces are naturally ordered, such that features become progressively less “local” (they depend on progressively larger spatio-temporal patches) as the input is processed layer-by-layer through the encoder. The number of local spaces also decreases with depth, finally resulting in a “global” feature space that depends on the full input.

3.3 Mutual information estimation and maximization

Mutual information can be estimated with a classifier (Belghazi et al., 2018; Hjelm et al., 2018; Oord et al., 2018; Poole et al., 2019), in our case by training a model to discern sets of features extracted from the same video clip from features drawn at random from the product of marginals over a set of clips. Let f_{l,i}(x^1) and f_{l,j}(x^2) denote the local features at layer l and location i (resp. j) that the encoder f extracts from the two augmented views, x^1 and x^2, of an input sequence x. Let L denote a subset of all layers in the encoder where we will apply a mutual information maximization objective via a contrastive loss. We introduce a set of contrastive models phi_l, one per layer in L, each of which operates on a single layer across all locations and processes the features into a space where we can compute pairwise scores (in our case, using a dot product). As in AMDIM, each phi_l is modeled by an MLP with residual connections (we implement the location-wise MLPs using convolutions for efficient parallel processing).

We then estimate the mutual information between all pairs of local features from select layers: for every layer pair and location pair tuple (l, i, l', j), we compute a scalar-valued score as:

s_{l,i,l',j}(x) = phi_l(f_{l,i}(x^1))^T phi_{l'}(f_{l',j}(x^2)).   (1)

Next, we require scores corresponding to “negative” counterfactuals to be compared against the “positive” pairs. We compute these negative scores from pairs obtained from two possibly different inputs, x and x' (e.g., different subsequences from different video clips):

s~_{l,i,l',j}(x, x') = phi_l(f_{l,i}(x^1))^T phi_{l'}(f_{l',j}(x'^2)),   (2)

where the view operator applied to x' is drawn independently of the one applied to x. Let N denote a set of sequences sampled at random from the dataset. The infoNCE lower bound (Oord et al., 2018) to the mutual information associated with the tuple (l, i, l', j) is:

I_{l,i,l',j} >= E_{x,N} [ log ( exp(s_{l,i,l',j}(x)) / ( exp(s_{l,i,l',j}(x)) + sum_{x' in N} exp(s~_{l,i,l',j}(x, x')) ) ) ].   (3)

Our final objective maximizes this bound across all locations over the complete set of “antecedent” feature layers L_1 and “consequent” feature layers L_2:

max sum_{l in L_1} sum_{l' in L_2} sum_{i,j} I_{l,i,l',j}.   (4)

As our model is based on Deep InfoMax, we call this extension to spatio-temporal video data Video Deep InfoMax (VDIM). We draw the contrasting set N from the other samples in the same batch as x.
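
For each (layer, location) pair, the infoNCE objective amounts to a k-way classification of the positive pair against the negatives. A minimal numpy sketch of this contrastive loss, using a dot-product score (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def infonce_loss(query, pos_key, neg_keys):
    """infoNCE contrastive loss for one (layer, location) pair:
    classify the positive key against len(neg_keys) negatives,
    with dot-product scores."""
    scores = np.concatenate([[query @ pos_key], neg_keys @ query])
    scores = scores - scores.max()      # numerical stability
    # negative log-softmax of the positive entry
    return -(scores[0] - np.log(np.exp(scores).sum()))
```

Minimizing this loss over many (query, key) pairs maximizes the lower bound on mutual information; at chance, the loss equals log(K) for K candidates.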

4 Experiments

We now demonstrate that models pretrained with Video Deep InfoMax (VDIM) perform well on downstream action recognition tasks for video data. We first conduct an ablation study on the effect of the view generator, showing that the type of augmentation has an effect on downstream performance. We then pretrain on a large-scale video dataset, showing that such large-scale pretraining yields impressive results.

We use the following datasets in our study:

  • UCF-101 (Soomro et al., 2012) is an action recognition benchmarking dataset with 101 action classes and roughly 13,000 videos. UCF-101 has three train-test splits. Videos range from a few seconds to a few minutes long (most last only seconds). For Kinetics-pretrained models, we evaluate on all three splits, while for UCF-101-pretrained models, we pretrain and evaluate on only the first split to ensure that training samples are aligned across pretraining and downstream evaluation.

  • HMDB-51 (human motion database, Kuehne et al., 2011) is another publicly available dataset for benchmarking action recognition models. HMDB-51 is composed of 51 human action classes drawn from several thousand clips. HMDB-51 differs from UCF-101 in that the clips are generally much shorter (most last only a few seconds).

  • Kinetics-600 (Carreira et al., 2018) is a video dataset of several hundred thousand videos from YouTube, corresponding to 600 action categories, that is commonly used for pretraining and downstream evaluation. Unlike UCF-101 and HMDB-51, Kinetics is not a truly “public” dataset, as access to all samples is difficult or impossible (for instance, when videos are removed from YouTube). Our version of Kinetics was acquired in March 2020.

For all of the video datasets, we sample sequences of frames during training by first sampling a video clip, then subsampling a set of contiguous frames. Each frame is resized during random-resized-crop augmentation, and the final length of every sequence is chosen to ensure that the subsequences that represent views have equal length after downsampling. All models are optimized with Adam, and batch norm is used on the output of the encoder as well as in the contrastive models.

Figure 2: a) Basic convolutional block (R(2+1)D, Tran et al., 2018) used in our model. b) Residual block composed of a single residual convolutional block followed by R(2+1)D blocks. The convolutions help control the receptive fields of features, allowing VDIM to draw views by selecting high-level features.
Index  Block      View?
1      ConvBlock
2      ConvBlock
3      ResBlock
4      ResBlock
5      ResBlock   Local 1
6      ResBlock   Local 2
7      ResBlock
8      ResBlock   Global
Table 1: The full encoder model using the components in Figure 2.
Figure 3: Left: Pretraining. The view operator selects two subsequences, the second with a fixed offset, downsamples the second subsequence, and applies data augmentation to both independently. The second subsequence is selected such that its final length after downsampling matches the first. These subsequences are used as input to the encoder, and the outputs (from select antecedent and consequent feature layers) are used as input to a pair of contrastive models. In one variation of VDIM, the features from the second subsequence are split temporally, and their element-wise difference is sent to the contrastive model. Right: Classification. For downstream evaluation, the view operator selects a set of subsequences, each with a fixed offset, and downsampling is performed on each along with independent data augmentation. The output features of the encoder are concatenated (feature-wise) and used as input to a small classifier.

Architecture details are provided in Table 1, with model components depicted in Figure 2. The architecture is modeled after AMDIM, which limits features’ receptive fields (we call these local features) until the last two residual blocks (which we call global features, since their receptive fields cover the entire input). We pick the antecedent layers to cover a reasonably representative set of views: one global and two locals with differently-sized receptive fields, while the consequent layers are chosen to correspond only to the global view to save memory. The reverse pairing, with antecedent and consequent roles swapped, is also used. The contrastive models are residual MLPs with a single hidden layer.

We present a high-level overview of the model used in pretraining in Figure 3 (left). During pretraining, the view operator selects the first view as the first frames of the clip. The second view is drawn from a fixed offset, and its length is selected such that, after downsampling, its final length matches the first. The outputs of the first view are drawn from the antecedent layers, those of the second from the consequent layers, and each contrastive model is used in Equations 1 and 2 to yield the contrastive loss. In one variant, the features of the second view are split in half temporally, and features constructed from the temporal difference are used in the contrastive loss.
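
The temporal-difference variant can be sketched as follows, assuming features laid out with time as the leading axis (the shapes and the helper name are assumptions for illustration):

```python
import numpy as np

def temporal_difference(features):
    """Split a (time, locations, channels) feature array in half along
    the time axis and return the element-wise difference of the halves."""
    t = features.shape[0]
    assert t % 2 == 0, "an even temporal length is assumed"
    first, second = features[: t // 2], features[t // 2:]
    return second - first
```

The resulting difference features are then sent to the contrastive model in place of the raw consequent features.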

4.1 The effects of hyperparameters on fine-tuning

On our set of benchmarks, key results presented in prior work come from first doing unsupervised pretraining with a large video dataset, then fine-tuning the encoder end-to-end on the benchmark task using a supervised loss. We found that evaluation results varied widely depending on the fine-tuning procedure, mostly due to overfitting effects, because we lack the protections typically used in supervised learning, such as dropout (Srivastava et al., 2014). Prior works rely on a decaying learning-rate schedule (Sun et al., 2019a), but do not specify the schedule and/or do not provide experimental results that support their choices. To fill this gap in understanding, we undertook an ablation study on the relevant hyperparameters.

A high-level overview of downstream evaluation is given in Figure 3 (right). The view operator selects subsequences, each with a fixed offset, such that after downsampling every subsequence has the same final length. Each subsequence is processed by the encoder, the outputs are concatenated, and the final set of features is sent to a classifier with a single hidden layer and dropout. For action recognition evaluation on the test split, we average the class-assignment probabilities across all samples from the same video clip.
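
Test-time aggregation can be sketched with a hypothetical helper (not the paper's exact implementation): softmax each sampled clip's logits, average the probabilities across clips from the same video, then take the argmax.

```python
import numpy as np

def video_prediction(clip_logits):
    """Average class-assignment probabilities over all clips sampled from
    one video (softmax per clip, then mean) and return the argmax class."""
    z = clip_logits - clip_logits.max(axis=1, keepdims=True)   # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return int(probs.mean(axis=0).argmax())
```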

Figure 4: Ablation over initial learning rate and decay factor. An optimal set of hyperparameters for downstream fine-tuning was found.

We perform the following ablations on the above structure on a pretrained UCF-101 model (downsampling of 2, color jitter + random grayscale, difference-based contrastive learning, no rotation; see the next section for details), fine-tuning on UCF-101 and evaluating on the UCF-101 test set:

  1. We varied the number of views (larger view counts hit memory limits) and found that more views translated to better performance (two-tailed Pearson test).

  2. Lab dropout vs. color jitter + random grayscale. We found that Lab dropout improved downstream performance.

  3. We selected among several downsampling factors and found that less downsampling gave better results.

  4. We varied the initial learning rate along with the decay schedule. In this case, we decayed the learning rate at a fixed decay factor on a fixed step schedule. Results are presented in Figure 4.

Based on this ablation study, we adopt a consistent fine-tuning procedure across all subsequent experiments: Lab dropout (instead of color jitter and random grayscale), no downsampling, the largest number of views that fits in memory, and the best-performing initial learning rate and decay factor from the ablation.

4.2 Ablation study on the effect of the self-supervised task layers and view generator on downstream performance

We test VDIM by pretraining and then fine-tuning on the UCF-101 dataset, ablating the augmentation across the following axes:

  1. Selecting fewer layers for the antecedent features. Removing one of the local layers had little effect on performance, but removing both local layers greatly reduced it.

  2. Lab dropout vs. random color jitter and grayscale. This had the strongest effect on downstream performance, with color jitter + random grayscale performing best.

  3. Randomly rotating the clip (spatially, with the same rotation across each view, maintaining the temporal order). No discernible effect of using rotation was found.

  4. With two sequences of frames, we select the second set to start either at the same frame as the first (no offset) or at a fixed offset after its beginning. We found that using no offset performed slightly better.

  5. Taking the temporal difference with the consequent features (see Figure 3). We found this improved performance.

  6. We downsampled the second clip by various factors, including no downsampling. Measured across all experimental parameters above, downsampling had a very small effect. However, when used in conjunction with difference features and color jitter, there is a small benefit to using downsampling; increasing the downsampling factor further had no significant effect on downstream performance.

From these ablation experiments, the best model uses color jitter + random grayscale, no random rotation, and temporal differences for the consequent features, computed from an offset, downsampled second subsequence.

Model Test acc (%) Params
Shuffle and Learn (Misra et al., 2016) 50.9
O3N (Fernando et al., 2017) 60.3
DPC (Han et al., 2019) 60.6 14.2M
Skip-Clip (El-Nouby et al., 2019) 64.4 14.2M
TCE (Knights et al., 2020) 67.0
VDIM (avg ablate: color jitt) 69.0 17.3M
VDIM (avg ablate: color jitt, down, and diff feat) 71.1
VDIM (ablation best) 74.1
Table 2: Results on fine-tuning on UCF-101 after pretraining on UCF-101. Split 1 train/test was used for both pretraining and fine-tuning. Parameter counts are included for reference, when available. As we performed an ablation using the UCF-101 test set for model selection, some averages across the ablation are provided for reference.

Our final comparison to several baselines pretrained and fine-tuned on UCF-101 is presented in Table 2. We compare against Shuffle and Learn (Misra et al., 2016), Dense Predictive Coding (DPC, Han et al., 2019), Temporally Coherent Embeddings (TCE, Knights et al., 2020), O3N (Fernando et al., 2017), and Skip-Clip (El-Nouby et al., 2019). Even if we consider the average performance of fine-tuned VDIM models pretrained under different sets of hyperparameters (across all ablation hyperparameters except the use of color jitter and random grayscale, as this had the most obvious effect), VDIM outperforms the best baseline when pretraining and fine-tuning on UCF-101 (69.0% vs. 67.0%; see Table 2). Our best result is the highest test accuracy known to us when training only on UCF-101 without hand-crafted features (e.g., see Zou et al., 2012).

4.3 Kinetics-pretraining for downstream tasks

We now show that pretraining VDIM on a large-scale dataset, Kinetics-600 (Carreira et al., 2018), yields strong downstream performance on the UCF-101 and HMDB-51 datasets. We pretrained on the Kinetics dataset across multiple GPUs with no rotation, no temporal difference (omitted in this experiment because of unrelated hardware issues), downsampling, and color jitter + random grayscale. We then fine-tune using the same procedure described in the ablation studies above. For HMDB-51, the clips are generally much shorter than in UCF-101, resulting in 6-7 times fewer minibatches per epoch, so we shortened the learning-rate decay schedule to accommodate (no hyperparameter tuning was performed on this schedule); we also reduced the number of views. We fine-tune independently on all three UCF-101 or HMDB-51 splits and compute the final test accuracy as the average over test splits.

Model UCF-101 acc HMDB acc Parameters
3D-RotNet (Jing and Tian, 2018) 62.9 33.7 33.6M
DPC (Han et al., 2019) 75.7 35.7 32.6M
CBT (Sun et al., 2019a) 79.5 44.5 20M
VDIM 79.7 49.2 17.3M
Table 3: A comparison of models pretrained on Kinetics-600 and fine-tuned on either UCF-101 or HMDB-51. Average test performance across all three splits is reported.

We present results in Table 3. Baselines include Contrastive Bidirectional Transformers (CBT, Sun et al., 2019a), DPC, and 3D-RotNet (Jing and Tian, 2018). (The number of parameters for CBT was calculated from the stated 11M parameters for the visual transformer plus 9M parameters for S3D as stated in Xie et al. (2017).) CBT uses curriculum learning during pretraining, first using rotation prediction (Jing and Tian, 2018), followed by a contrastive loss on features processed by a transformer. VDIM performs comparably to CBT on UCF-101 and outperforms it on HMDB-51 by nearly 5%, despite the greater compute cost of training CBT.

5 Conclusion and Acknowledgements

We show that extracting and using feature-wise views, together with video-specific data augmentation, in contrastive learning is an effective way to pretrain video models for downstream action recognition. Specifically, we show this can be done by extracting views from features naturally expressed in a deep spatio-temporal encoder, a method we call Video Deep InfoMax (VDIM). Our results show that there is strong promise in spatio-temporal encoders for unsupervised pretraining on video data, and that data augmentation on video data is an important component of unsupervised representation learning. Finally, we would like to thank Tong Wang for his important contributions to developing this work.


  • U. Ahsan, R. Madhok, and I. Essa (2019) Video jigsaw: unsupervised learning of spatiotemporal context for video action recognition. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 179–189.
  • A. Anand, E. Racah, S. Ozair, Y. Bengio, M. Côté, and R. D. Hjelm (2019) Unsupervised state representation learning in Atari. In Advances in Neural Information Processing Systems, pp. 8766–8779.
  • P. Bachman, R. D. Hjelm, and W. Buchwalter (2019) Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pp. 15509–15519.
  • M. I. Belghazi, A. Baratin, S. Rajeshwar, S. Ozair, Y. Bengio, A. Courville, and D. Hjelm (2018) Mutual information neural estimation. In International Conference on Machine Learning, pp. 531–540.
  • Y. Bengio, A. Courville, and P. Vincent (2013) Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8), pp. 1798–1828.
  • A. Brock, J. Donahue, and K. Simonyan (2018) Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096.
  • T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
  • M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin (2020) Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882.
  • J. Carreira, E. Noland, A. Banki-Horvath, C. Hillier, and A. Zisserman (2018) A short note about Kinetics-600. arXiv preprint arXiv:1808.01340.
  • T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709.
  • A. Clark, J. Donahue, and K. Simonyan (2019) Adversarial video generation on complex datasets. arXiv preprint arXiv:1907.06571.
  • L. Devillers, L. Vidrascu, and L. Lamel (2005) Challenges in real-life emotion annotation and machine learning based detection. Neural Networks 18 (4), pp. 407–422.
  • C. Doersch, A. Gupta, and A. A. Efros (2015) Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422–1430. Cited by: §1.
  • C. Doersch and A. Zisserman (2017) Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2051–2060. Cited by: §1.
  • J. Donahue, P. Krähenbühl, and T. Darrell (2016) Adversarial feature learning. arXiv preprint arXiv:1605.09782. Cited by: §1.
  • J. Donahue and K. Simonyan (2019) Large scale adversarial representation learning. In Advances in Neural Information Processing Systems, pp. 10541–10551. Cited by: §1.
  • V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville (2016) Adversarially learned inference. arXiv preprint arXiv:1606.00704. Cited by: §1.
  • D. Dwibedi, Y. Aytar, J. Tompson, P. Sermanet, and A. Zisserman (2019) Temporal cycle-consistency learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1801–1810. Cited by: §2.
  • A. El-Nouby, S. Zhai, G. W. Taylor, and J. M. Susskind (2019) Skip-clip: self-supervised spatiotemporal representation learning by future clip order ranking. arXiv preprint arXiv:1910.12770. Cited by: §2, §4.2, Table 2.
  • M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman (2015) The pascal visual object classes challenge: a retrospective. International Journal of Computer Vision 111 (1), pp. 98–136. Cited by: §1, §2.
  • C. Feichtenhofer, H. Fan, J. Malik, and K. He (2019) Slowfast networks for video recognition. In Proceedings of the IEEE international conference on computer vision, pp. 6202–6211. Cited by: §2.
  • B. Fernando, H. Bilen, E. Gavves, and S. Gould (2017) Self-supervised video representation learning with odd-one-out networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3636–3645. Cited by: §2, §4.2, Table 2.
  • I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT press. Cited by: §1.
  • D. Gordon, K. Ehsani, D. Fox, and A. Farhadi (2020) Watching the world go by: representation learning from unlabeled videos. arXiv preprint arXiv:2003.07990. Cited by: §2.
  • P. Goyal, D. Mahajan, A. Gupta, and I. Misra (2019) Scaling and benchmarking self-supervised visual representation learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6391–6400. Cited by: §1.
  • D. Hall, F. Dayoub, J. Skinner, H. Zhang, D. Miller, P. Corke, G. Carneiro, A. Angelova, and N. Sünderhauf (2020) Probabilistic object detection: definition and evaluation. In IEEE Winter Conference on Applications of Computer Vision (WACV), Cited by: §2.
  • T. Han, W. Xie, and A. Zisserman (2019) Video representation learning by dense predictive coding. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 0–0. Cited by: §1, §2, §4.2, Table 2, Table 3.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §2, §3.2.
  • R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio (2018) Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670. Cited by: §1, §3.3.
  • L. Jing and Y. Tian (2018) Self-supervised spatiotemporal feature learning by video geometric transformations. arXiv preprint arXiv:1811.11387 2 (7), pp. 8. Cited by: §1, §2, §4.3, Table 3.
  • D. Kim, D. Cho, and I. S. Kweon (2019) Self-supervised video representation learning with space-time cubic puzzles. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 8545–8552. Cited by: §2.
  • J. Knights, A. Vanderkop, D. Ward, O. Mackenzie-Ross, and P. Moghadam (2020) Temporally coherent embeddings for self-supervised video representation learning. arXiv preprint arXiv:2004.02753. Cited by: §2, §4.2, Table 2.
  • A. Kolesnikov, X. Zhai, and L. Beyer (2019) Revisiting self-supervised visual representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1920–1929. Cited by: §1.
  • R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, et al. (2017) Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123 (1), pp. 32–73. Cited by: §1.
  • H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre (2011) HMDB: a large video database for human motion recognition. In 2011 International Conference on Computer Vision, pp. 2556–2563. Cited by: 2nd item.
  • T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1.
  • Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1.
  • B. Mazoure, R. T. d. Combes, T. Doan, P. Bachman, and R. D. Hjelm (2020) Deep reinforcement and infomax learning. arXiv preprint arXiv:2006.07217. Cited by: §2.
  • I. Misra, C. L. Zitnick, and M. Hebert (2016) Shuffle and learn: unsupervised learning using temporal order verification. In European Conference on Computer Vision, pp. 527–544. Cited by: §2, §4.2, Table 2.
  • D. F. Nettleton, A. Orriols-Puig, and A. Fornells (2010) A study of the effect of different types of noise on the precision of supervised learning techniques. Artificial intelligence review 33 (4), pp. 275–306. Cited by: §1.
  • M. Noroozi and P. Favaro (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69–84. Cited by: §1.
  • A. v. d. Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Cited by: §1, §2, §3.3, §3.3.
  • D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2536–2544. Cited by: §1.
  • B. Poole, S. Ozair, A. v. d. Oord, A. A. Alemi, and G. Tucker (2019) On variational bounds of mutual information. arXiv preprint arXiv:1905.06922. Cited by: §3.3.
  • A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. Cited by: §1.
  • D. Reidsma and R. op den Akker (2008) Exploiting ‘subjective’ annotations. In Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics, pp. 8–16. Cited by: §2.
  • O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV). Cited by: §1.
  • M. Schwarzer, A. Anand, R. Goel, R. D. Hjelm, A. Courville, and P. Bachman (2020) Data-efficient reinforcement learning with momentum predictive representations. Cited by: §2.
  • P. Sermanet, C. Lynch, Y. Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain (2018) Time-contrastive networks: self-supervised learning from video. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1134–1141. Cited by: §2.
  • P. Sharma, N. Ding, S. Goodman, and R. Soricut (2018) Conceptual captions: a cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of ACL. Cited by: §1.
  • S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser (2017) Semantic scene completion from a single depth image. Proceedings of 30th IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §1.
  • K. Soomro, A. R. Zamir, and M. Shah (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402. Cited by: §1, §1, 1st item.
  • A. Srinivas, M. Laskin, and P. Abbeel (2020) Curl: contrastive unsupervised representations for reinforcement learning. arXiv preprint arXiv:2004.04136. Cited by: §2.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §4.1.
  • N. Srivastava, E. Mansimov, and R. Salakhutdinov (2015) Unsupervised learning of video representations using lstms. In International conference on machine learning, pp. 843–852. Cited by: §2.
  • C. Sun, F. Baradel, K. Murphy, and C. Schmid (2019a) Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743. Cited by: §1, §2, §3.2, §4.1, §4.3, Table 3.
  • P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, V. Vasudevan, W. Han, J. Ngiam, H. Zhao, A. Timofeev, S. Ettinger, M. Krivokon, A. Gao, A. Joshi, Y. Zhang, J. Shlens, Z. Chen, and D. Anguelov (2019b) Scalability in perception for autonomous driving: waymo open dataset. External Links: 1912.04838 Cited by: §1.
  • T. Sylvain, L. Petrini, and D. Hjelm (2019) Locality and compositionality in zero-shot learning. arXiv preprint arXiv:1912.12179. Cited by: §1.
  • M. Tan and Q. V. Le (2019) Efficientnet: rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946. Cited by: §1.
  • J. B. Tenenbaum, C. Kemp, T. L. Griffiths, and N. D. Goodman (2011) How to grow a mind: statistics, structure, and abstraction. science 331 (6022), pp. 1279–1285. Cited by: §2.
  • Y. Tian, D. Krishnan, and P. Isola (2019) Contrastive multiview coding. arXiv preprint arXiv:1906.05849. Cited by: §1, §3.1.
  • H. Touvron, A. Vedaldi, M. Douze, and H. Jégou (2020) Fixing the train-test resolution discrepancy: fixefficientnet. arXiv preprint arXiv:2003.08237. Cited by: §2.
  • D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri (2018) A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 6450–6459. Cited by: §3.2, Figure 2.
  • I. Ulusoy and C. M. Bishop (2005) Generative versus discriminative methods for object recognition. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, pp. 258–265. Cited by: §1.
  • C. Vondrick, D. Patterson, and D. Ramanan (2013) Efficiently scaling up crowdsourced video annotation. International journal of computer vision 101 (1), pp. 184–204. Cited by: §2.
  • C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy (2018) Tracking emerges by colorizing videos. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 391–408. Cited by: §2.
  • X. Wang and A. Gupta (2015) Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802. Cited by: §2.
  • S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy (2017) Rethinking spatiotemporal feature learning for video understanding. arXiv preprint arXiv:1712.04851 1 (2), pp. 5. Cited by: §3.2, footnote 5.
  • D. Xu, J. Xiao, Z. Zhao, J. Shao, D. Xie, and Y. Zhuang (2019) Self-supervised spatiotemporal learning via video clip order prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10334–10343. Cited by: §2.
  • Y. Yuan, X. Chen, and J. Wang (2019) Object-contextual representations for semantic segmentation. arXiv preprint arXiv:1909.11065. Cited by: §2.
  • R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In European conference on computer vision, pp. 649–666. Cited by: §1.
  • X. Zhu and A. B. Goldberg (2009) Introduction to semi-supervised learning. Synthesis lectures on artificial intelligence and machine learning 3 (1), pp. 1–130. Cited by: §1.
  • W. Zou, S. Zhu, K. Yu, and A. Y. Ng (2012) Deep learning of invariant features via simulated fixations in video. In Advances in neural information processing systems, pp. 3203–3211. Cited by: §2, §4.2.