Learning transferable visual representations is a key challenge in computer vision. The aim is to learn a representation function once, such that it can be transferred to a plethora of downstream tasks. In the context of image classification, models trained on large amounts of labeled data excel in transfer learning, but there is a growing concern that this approach may not be effective for more challenging downstream tasks. Recent advances, such as contrastive self-supervised learning combined with strong data augmentation, present a promising avenue.
We consider the problem of learning image representations from uncurated videos. While these videos are noisy and unlabeled, they contain abundant natural variations of the objects present in them. Furthermore, videos decompose temporally into a hierarchy of videos, shots, and frames, which can be used to define pretext tasks for self-supervised learning. We extend this hierarchy to its natural continuation, namely the spatial decomposition of frames into objects. We then use the “video, shot, frame, object” hierarchy to define a more holistic pretext task. In this setting, we are given uncurated videos and an off-the-shelf pre-trained object detector, and we propose a method of supplementing the loss function with cues at the object level.
Videos, at the frame and shot level, convey global scene structure, and different frames and shots provide a natural data augmentation of the scene. This makes videos a good fit for contrastive learning losses that rely on heavy data augmentation to learn scene representations. At the object level, videos also provide rich information about the structure of individual objects. This can be valuable for tasks such as orientation estimation, counting, and object detection. Furthermore, object-centric representations can generalize to scenes constituted as a novel combination of known objects. Intuitively, each occurrence of an object forms a natural augmentation for objects of that class. Finally, one can exploit the fact that the same object appears in consecutive frames to learn representations which are more robust to perturbations and distribution shifts. Contrastive learning in this setting is illustrated in Figure 1.
We extend the framework from VIVI to include object-level cues using an off-the-shelf object detector.
We demonstrate improvements using object-level cues on recent few-shot transfer learning benchmarks and out-of-distribution generalization benchmarks. In particular, the method improves over the baseline on 18 of 19 few-shot learning tasks and on all 8 out-of-distribution generalization tasks.
We ablate various aspects of the setup to reveal the source of the observed benefits, including (i) randomizing the object classes and locations, (ii) using only the object labels, (iii) using the detector as a classifier in a semi-supervised setup, (iv) cotraining with ImageNet labels, and (v) using larger ResNet models.
2 Related work
Self-supervised image representation learning.
The self-supervised signal is provided through a pretext task (i.e., converting the problem into a supervised one), such as reconstructing the input, predicting the spatial context [12, 56], learning to colorize the image, or predicting the rotation of the image. Other popular approaches include clustering [5, 81, 17] and, more recently, generative modeling [14, 41, 15]. A promising recent line of work casts the problem as mutual information maximization between representations of different views of the same image. These views can come from augmentations or corruptions of the same input image [33, 2, 6, 27], or from considering different color channels as separate views.
Representation learning from videos.
The order of frames in a video provides a useful learning signal [54, 45, 18, 74]. Information from temporal ordering can be combined with spatial ordering to infer object relations  or co-occurrence statistics . Other pretext tasks include predicting the playback rate , or clustering .
Orthogonal to the pretext tasks, one could use the paradigm of slow feature learning in videos. This line of work dates back to , which developed a method to learn slow varying signals in time series, and inspired several recent works [26, 38, 84, 23]. Our loss at the frame level uses insights from slow feature learning in the form of temporal coherence [55, 58].
learn temporally coherent representations for the patches. Our method learns representations for the objects within a fully convolutional neural network. Other approaches have investigated learning specific structures to represent the objects in video [52, 25].
Object level supervision.
In terms of self-supervision, we follow  to learn from the natural hierarchy present in videos and make use of the losses studied therein. In contrast to , we incorporate object-level information in the final loss and show that it leads to benefits both for few-shot transfer learning and out-of-distribution generalization. Incorporating pixel, object, and patch information for learning and improving video representations was also considered in [73, 70, 84, 83, 19]. In contrast to these works, we do not rely on a strong tracker trained on a similar distribution, but on an off-the-shelf, parameter-efficient object detector. Furthermore, we learn representations for images, not videos. Contemporary works also use object information for learning video representations, or for training graph neural networks on videos.
Self-supervision via video-shot-frame hierarchy.
A video can be decomposed into a hierarchy of shots, frames and objects which is illustrated in Figure 2. For the first two levels in the hierarchy, we follow the setup from , named VIVI, which we summarize here.
At the shot level, VIVI learns, in a contrastive manner, to embed shots such that they are predictive of other shots in the same video. At the frame level, VIVI learns to embed frames such that frames from the same shot are closer to each other relative to frames from other shots. In particular, the shot-level loss is an instance of the InfoNCE loss between shot representations. VIVI (1) maps each frame in a shot to a representation, (2) aggregates these frame representations into a shot representation, and (3) predicts the representation of the next shot given the sequence of preceding shots, resulting in the loss:
where the critic is used to compute the similarity between shots, and the remaining terms denote the total number of videos in a mini-batch and the number of prediction steps into the future. In practice, optimization is more stable when contrasting against shot representations from the entire batch of videos.
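As a concrete illustration, the shot-level objective can be sketched as an InfoNCE loss over a batch of predicted and true next-shot representations. This is a minimal NumPy sketch, not the paper's implementation: the learned bilinear critic is replaced by a plain dot product, and the temperature is an assumption.

```python
import numpy as np

def infonce_shot_loss(pred, shots, temperature=1.0):
    """InfoNCE sketch: pred[i] is the predicted next-shot representation
    for video i, shots[i] the true one; all other shots in the batch act
    as negatives. A dot product stands in for the bilinear critic."""
    logits = pred @ shots.T / temperature           # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # positives on the diagonal
```

Minimizing this loss pushes each prediction toward its own next shot and away from the shots of other videos in the batch.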
Contrastive learning is also applied at the frame level, based on the intuition that frames within a shot typically contain similar scenes. VIVI learns to assign a higher similarity to frame representations coming from the same shot by applying the triplet loss defined in  (cf. Figure 2). In particular, for a fixed frame, frames coming from the same shot are considered positives, while frames from other shots are negatives. Denoting positive pairs and negatives accordingly, the semi-hard loss can be written as:
Extending the hierarchy with object-level losses.
Data augmentation is a key ingredient behind recent advances in representation learning [53, 6, 67, 8]. These augmentations are usually obtained by applying synthetic transformations such as random cropping, left-right flipping, color distortion, blurring, or adding noise. However, the independent, non-rigid movement of an object against its background, as seen in real video data, is hard to replicate with synthetic augmentations.
To exploit these natural augmentations, which occur in video, we use a contrastive loss that encourages representations of objects of the same category to be closer together than representations of objects from different categories (cf. Figure 1). To construct this loss, we apply an off-the-shelf object detector to all frames and extract the bounding boxes and class labels. Given the representation of each bounding box (discussed below), we use a triplet loss where objects from the same class form positive pairs, and objects from different classes form negative pairs.
In particular, given the embedding of the b-th bounding box, together with the embeddings of its positive and negative examples, we apply the following loss per frame:
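This per-frame object loss can be illustrated with a small NumPy sketch. The margin value and the hardest-positive/hardest-negative mining used here are assumptions for illustration, not the paper's exact semi-hard variant.

```python
import numpy as np

def object_triplet_loss(embs, labels, margin=1.0):
    """Per-frame triplet-loss sketch over detected objects.
    embs: (N, d) box embeddings, labels: (N,) detector class ids.
    Boxes of the same class are positives, different classes negatives."""
    losses = []
    for a in range(len(embs)):
        d = np.linalg.norm(embs - embs[a], axis=1)   # distances to the anchor
        pos = labels == labels[a]
        pos[a] = False                               # exclude the anchor itself
        neg = labels != labels[a]
        if not pos.any() or not neg.any():
            continue
        # hardest positive vs. hardest negative for this anchor
        losses.append(max(0.0, d[pos].max() - d[neg].min() + margin))
    return float(np.mean(losses)) if losses else 0.0
```

When classes are well separated in embedding space the loss vanishes; overlapping classes incur a positive penalty.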
Representations of bounding boxes.
A simple approach to obtain these representations would be to crop all bounding boxes and feed them through the network. However, this is computationally prohibitive, and we instead propose a method which reuses the feature grid of ResNet50 models, illustrated in Figure 3.
Consider an image with a given height, width, and number of channels. A fully convolutional ResNet50 maps this image to a spatial grid of features. Given a bounding box, we represent it by the feature vector at the grid location corresponding to the box center. This approach is conceptually similar to the RoI max pooling used in Fast R-CNN. Given its computational efficiency and the fact that the effective receptive field is concentrated at the center, we chose this simple alternative.
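The center lookup can be sketched as follows. The grid size of H/32 x W/32 corresponds to a standard ResNet-50 output; the rounding and clipping conventions are assumptions.

```python
import numpy as np

def box_center_feature(feature_grid, box, image_hw):
    """Represent a bounding box by the feature-grid cell under its center.
    feature_grid: (H/32, W/32, C) fully convolutional ResNet-50 output,
    box: (ymin, xmin, ymax, xmax) in pixels, image_hw: (H, W)."""
    gh, gw, _ = feature_grid.shape
    cy = (box[0] + box[2]) / 2.0 / image_hw[0]   # relative center y in [0, 1]
    cx = (box[1] + box[3]) / 2.0 / image_hw[1]   # relative center x in [0, 1]
    iy = min(int(cy * gh), gh - 1)               # grid row under the center
    ix = min(int(cx * gw), gw - 1)               # grid column under the center
    return feature_grid[iy, ix]                  # a single C-dimensional vector
```

All boxes thus share one forward pass through the backbone, and each box costs only a single indexing operation.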
Final loss function.
We combine the losses using positive coefficients and as
This formulation enables a study of the benefits of each of the losses and leads to practical recommendations.
Headroom analysis using ImageNet.
This data source provides a vast quantity of labeled images and should help the model improve on tasks which require fine-grained detail of specific object classes. In particular, we apply an affine map to the representation extracted by the network, followed by a softmax layer and a corresponding cross-entropy loss, weighted by a hyperparameter balancing the impact of this additional loss.
4 Experimental setup
4.1 Architectures and training details
Unless otherwise specified, all experiments are performed on a ResNet50 V2 with batch normalization. For the shot prediction function, we use an LSTM with 256 hidden units. We parameterize the critic function as a bilinear form. All frames are augmented using the same policy as , using random cropping, left-right flipping, and color distortions. The coordinates of object bounding boxes are recalculated accordingly. All models are trained using batch size 512 for 120k iterations of stochastic gradient descent with momentum. The learning rate decreases by a factor of 10 after 90k and 110k training steps. When cotraining, we train for 100k iterations and decrease the learning rate after 70k, 85k, and 95k iterations. Shots and frames are sampled using the same method as : for each video, we sample a sequence of four shots, and we sample eight frames from each shot.
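The step-wise schedule above can be sketched as a piecewise-constant function; the base learning rate is elided in the text, so it is left as a parameter here.

```python
def learning_rate(step, base_lr):
    """Piecewise-constant schedule sketch: decay by 10x after 90k steps
    and again after 110k steps (the non-cotraining schedule)."""
    if step < 90_000:
        return base_lr
    if step < 110_000:
        return base_lr / 10
    return base_lr / 100
```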
The coefficients and weigh the loss contributions in Equation 3. We set following , and , although we found that a wider range of values leads to the same performance (cf. Figure 6 in the Appendix).
The experiments on cotraining use group normalization with weight standardization, instead of batch normalization, for a fair comparison to . When cotraining, we sample a batch from each dataset at every step: we compute the three-level loss (3) on the sampled videos, and the classification log-loss on the sampled ImageNet images. Cotrained models train with a batch size of 512 for videos and 2048 for images for 100k iterations, using the learning rate schedule described above. Images are preprocessed using the inception crop function from .
We train on videos from the YT8M dataset and cotrain with ImageNet. The videos are sampled at Hz, and we run the detector, a MobileNet with a single-shot multibox detector (SSD) head, trained on OpenImagesV4. The detector runs at ms per frame on a V100 GPU. Table 6 in the appendix shows how often common objects are detected in the video frames. As the detector has been trained on OpenImagesV4, we use its 600-category label space for constructing positive and negative pairs for . We use the feature grid from ResNet block 4 to construct representations for objects in a frame and limit the number of objects in each frame to a maximum of . We discard objects with a detection score below , which accounts for approximately % of the detected objects. Figure 5 shows a histogram of the detection scores. Finally, since YT8M is a dynamic dataset, our video training set contains those videos still available in May 2020, for a total of million training and one million validation videos. The baselines were re-trained on this new dataset.
| Method | Training data | Supervision | Mean | Natural | Specialized | Structured |
|---|---|---|---|---|---|---|
| Transitive Invariance | YouTube 100k | Tracklets | 44.2 | 35.0 | 61.8 | 43.4 |
| MT | ImageNet & SoundNet | Tracklets | 59.2 | 51.9 | 78.9 | 55.8 |
| Boxes and labels at random | YT8M | None | 60.3 | 55.2 | 78.0 | 55.0 |
| Boxes at random coordinates | YT8M | Detector | 63.4 | 57.5 | 81.1 | 59.7 |
| Distilling from ImageNet | YT8M | Classifier | 63.1 | 59.6 | 81.6 | 57.0 |
| Also predict cross entropy | YT8M | Detector | 64.9 | 60.5 | 81.3 | 60.5 |
We evaluate two aspects of the learned representations: how well they transfer to novel classification tasks, and how robust the resulting classifiers are to distribution shifts.
The main objective of this work is learning image representations that transfer well to novel, previously unseen tasks. To empirically validate our approach, we report results on the Visual Task Adaptation Benchmark (VTAB), a suite of 19 image classification tasks. The tasks are organized into three categories, Natural, Specialized, and Structured, the last containing scene understanding tasks (CLEVR-dist, CLEVR-count, dSprites-orient, dSprites-pos, sNORB-azimuth, sNORB-elevation, DMLab, KITTI). For more details, please refer to .
We consider transfer learning in the low-data regime, where each task has only 1000 labeled samples available. The evaluation protocol is the same as in [68, 42, 79, 60]: for each dataset we (i) train on samples and validate on samples using our learned model as initialization, (ii) sweep over two learning rates (, ) and two learning rate schedules (10k steps with decay every 3k, or 2.5k steps with decay every 750), and then (iii) pick the learning rate and schedule according to the highest validation accuracy and retrain the model using all samples. We report statistical significance at the level using a Welch's two-sided t-test based on 12 independent runs. The error bars in the diagrams indicate bootstrapped 95% confidence intervals.
As discussed in Section 3, we were guided by the intuition that the model should learn to be more invariant to natural augmentations. We thus expect our model to be more robust and generalize better to out-of-distribution (OOD) images.
We follow two recent OOD studies [29, 11] and evaluate robustness as accuracy on a suite of 8 datasets measuring various robustness aspects. These datasets are defined on the ImageNet label space: (1) ImageNet-A measures accuracy on natural images from the web that were adversarially selected against a ResNet50 trained from scratch on ImageNet. (2) ImageNet-C measures accuracy on ImageNet samples under perturbations such as blur, pixelation, and compression artifacts. (3) ImageNet-V2 presents a new test set for the ImageNet dataset. (4) ObjectNet consists of crowd-sourced images in which participants were asked to photograph objects in unusual poses and against unusual backgrounds. (5-8) ImageNet-Vid, ImageNet-Vid-pm-k, YT-BB-Robust, and YT-BB-Robust-pm-k present frames from video sequences. We measure both the accuracy on the anchor frame, denoted as anchor accuracy, and the worst-case accuracy over the 20 neighboring frames, denoted as pm-k accuracy.
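The two video-robustness metrics can be sketched as follows, assuming predictions are collected for each anchor frame together with its 2k neighbors (k=10 gives the 20 neighboring frames); the array layout is an assumption.

```python
import numpy as np

def anchor_and_pmk_accuracy(preds, labels, k=10):
    """preds: (N, 2k+1) predicted class ids per anchor window (anchor in
    column k), labels: (N,) true class per anchor. pm-k accuracy requires
    a correct prediction on the anchor AND on every neighboring frame."""
    preds = np.asarray(preds)
    anchor_acc = np.mean(preds[:, k] == labels)
    pmk_acc = np.mean(np.all(preds == labels[:, None], axis=1))
    return float(anchor_acc), float(pmk_acc)
```

By construction pm-k accuracy is never higher than anchor accuracy; the gap between them measures sensitivity to small temporal perturbations.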
We also evaluate our models on the texture-shape dataset from . Our method uses a contrastive loss to learn specifically from objects, encouraging objects with different appearances to have similar representations. As such, we hypothesize that our models have a higher shape bias relative to texture bias.  provide a dataset to measure the texture-shape bias. The test set consists of images whose texture has been stylized. Each image has a label according to its shape, and a label according to the stylization of its texture. We report the fraction of correct predictions based on shape, as proposed by the authors. For further details, we refer to .
Table 1 shows our results on the Visual Task Adaptation Benchmark (VTAB). We observe statistically significant improvements over the baseline  which demonstrate the benefits of supplementing the self-supervised hierarchy with object level supervision. The detailed results are presented in Figure 4.
Rows 1 and 2 in Table 1 compare against two prior works on representation learning from videos: Transitive Invariance (TI)  and Multi-task Self-Supervised Visual Learning (MT) . TI uses context-based self-supervision together with tracking in videos to formulate a pretext task for representation learning, and row 1 shows the performance of their pre-trained VGG-16 checkpoint. MT uses a variety of pretext tasks, including motion segmentation, colorization, and exemplar learning, and row 2 shows the performance of their ResNet101 (up to block 3) checkpoint.
Ablation 1: Randomizing the location and the class.
The object-level loss is made possible through additional supervision provided by an object detector pre-trained on OpenImagesV4. The detector contributes to representation learning by annotating object positions and object category labels in video frames, and here we ablate these two sources: we evaluate the contribution (1) when the class of an object is known, but not its coordinates, and (2) when neither the class nor the location is known. The results are detailed in Table 1. Randomizing both the label and the coordinates of the objects destroys all signal from the detector. Row boxes and labels at random shows the results of this ablation, and we observe that the performance is below the VIVI baseline, as expected. In contrast, when we randomize the object locations but maintain the correct labels, we obtain an improvement over the baseline (row boxes at random). Interestingly, the VTAB score on structured datasets, %, equals the accuracy where both the class and location are known.
Ablation 2: Frame-level labels from an ImageNet-pretrained model.
We further investigate the effectiveness of knowing frame-level labels by obtaining soft labels from an ImageNet-pretrained model, effectively distilling the ImageNet model on YT8M frames . Its performance is noted in Table 1, row distilling from ImageNet. Interestingly, this distilled model scores higher on natural datasets, but lower on structured datasets, than the proposed method. These differences show how various upstream signals affect downstream tasks differently.
Ablation 3: Distilling the object detector.
We distill a ResNet50 on YT8M, where the training instances are cropped objects and the labels are assigned by the object detector. The distilled ResNet50 achieves a VTAB score of 57.1%, compared to % for the proposed method. At the same time, a non-pretrained ResNet of the same capacity achieves % when trained on the 1000 downstream labels. Hence, the detector clearly provides a strong training signal, but it can be exploited to a higher degree by coupling it with a self-supervised loss, as in the proposed method.
Ablation 4: Semi-supervised learning.
One can also utilize the detector to label the frames and use the labeled data as additional training data . To this end, we apply a linear classifier on the bounding box representations in order to classify the object as one of 600 OpenImagesV4 classes. The predictions of the object detector are treated as ground truth labels and a binary cross-entropy loss is added to the loss in Equation 3. This approach increases the VTAB score from % to %. We also investigated using this loss as a replacement for in Equation 3. However, this performed worse, scoring %, which highlights the advantage of the contrastive formulation.
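This auxiliary classifier can be sketched as follows. Treating the detector's class as a one-hot target for per-class sigmoids is our reading of the binary cross-entropy setup, and all names and shapes here are illustrative.

```python
import numpy as np

def box_class_bce(box_embs, det_labels, W, b):
    """Semi-supervised auxiliary loss sketch: a linear classifier on
    bounding-box representations, trained with binary cross-entropy
    against the detector's predicted class (treated as ground truth).
    Shapes: box_embs (N, d), det_labels (N,), W (d, C), b (C,)."""
    logits = box_embs @ W + b
    targets = np.zeros_like(logits)
    targets[np.arange(len(det_labels)), det_labels] = 1.0  # one-hot targets
    p = 1.0 / (1.0 + np.exp(-logits))                      # per-class sigmoid
    eps = 1e-9
    bce = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    return float(bce.mean())
```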
| Pre-trained backbone | mAP |
|---|---|
| Our method () | 40.4 |
| Only object level () | 39.3 |

Table 3: Detection performance on the MS-COCO dataset using various pre-trained backbones.
Effect of the contrastive loss.
Lastly, we present a diagnostic for our training procedure. The object-level loss is designed to embed objects of the same class closer together. We verify that this is indeed the case, in comparison to VIVI, by measuring the fraction of nearest neighbors per object that belong to the same category. Table 2 shows the progression of this metric during training. Our method results in significantly more nearest neighbors belonging to the same class as the query object, verifying that our loss function and training procedure achieve the desired outcome.
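This diagnostic can be sketched as follows; the choice of k=1 neighbors and the Euclidean metric are assumptions.

```python
import numpy as np

def same_class_nn_fraction(embs, labels, k=1):
    """Diagnostic sketch: for each object embedding, find its k nearest
    neighbors (excluding itself) and report the fraction that share the
    query object's class."""
    d = np.linalg.norm(embs[:, None] - embs[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # indices of k nearest neighbors
    return float(np.mean(labels[nn] == labels[:, None]))
```

A fraction near 1 indicates that the embedding clusters objects by class, which is the intended effect of the object-level loss.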
Evaluation on detection.
Our model learns from videos at the object level. It is natural to expect that a ResNet50 backbone pre-trained using our method will perform well when fine-tuned for downstream object detection. To this end, we fine-tune a RetinaNet architecture  on the MS-COCO object detection dataset . Images are rescaled and randomly cropped during training. We train the model for 60 epochs with an initial learning rate of and a batch size of 256.
Results are shown in Table 3. Pre-training using our method improves upon the VIVI baseline by % mAP. Training on only the object-level loss is % mAP behind using all three levels of the hierarchy. These results suggest that the learned representations are indeed more object-centric and that learning from all three levels combined yields representations more effective for downstream object detection.
Co-training with ImageNet.
Table 4 shows the resulting accuracies on VTAB when cotraining with ImageNet. Compared to the cotrained VIVI baseline, our method with its object-level loss increases the VTAB score from % to %. This increase in accuracy is modest in comparison to those in Table 1. We argue that ImageNet is a clean, curated dataset, whereas YT8M is noisy. Adding cotraining with clean ImageNet improves the accuracy on natural datasets from % to %; it is therefore not surprising that adding noisier supervision at the object level does not yield large gains in this setting. We repeat the experiment with a higher-capacity, 3x wider ResNet50. Again we observe modest, but statistically significant, improvements over VIVI. The largest improvement is on the structured datasets, where the score increases from % to %. These experiments highlight an interesting dichotomy between the natural and structured subsets of VTAB: learning with ImageNet yields improvements on natural datasets, while using the detector yields improvements on structured datasets.
Table 5 presents the classification accuracies on the eight robustness datasets. To obtain predictions in the ImageNet label space, we fine-tune our learned representation and report results in row fine tuning. Our method compares favorably to the baseline on all datasets, which confirms the intuition that extending the video-shot-frame hierarchy to objects results in more robust image representations. The robustness results for the cotrained models are presented in Table 5, row cotraining. As expected, the results improve across all datasets. The final two columns of Table 5 report the delta between anchor accuracy and pm-k accuracy; in three out of four cases our method scores a lower (better) delta.
We evaluate our models on the texture-shape dataset from . As the evaluation is done using the ImageNet label space, we use the same models that we evaluated on the robustness datasets. First, we evaluate the models trained with an additional linear layer on the ImageNet training set. The VIVI model, using only the video-shot-frame hierarchy, scores a shape fraction of on the provided dataset. Using our method to learn from the video-shot-frame-object hierarchy, the shape fraction increases to . A higher shape fraction indicates a better model, as the network's predictions rely more on the shape of the object. Similarly, cotrained models improve from to when using our method to learn from objects in video. These results indicate a promising direction for future research.
We have presented a hierarchy, videos-shots-frames-objects, to learn representations from video at multiple levels of granularity. Through our method, the learned representations transfer better to downstream image classification tasks and exhibit higher accuracy on out-of-distribution datasets. We identify three aspects for future research.
A taxonomy for learning transferable representations.
Our results show that the different learning signals present in videos benefit transfer learning to Natural, Specialized, or Structured image classification tasks in specific ways. We consider our work part of a larger line of research that creates a taxonomy indexing learning methods by their effect on transfer learning to specific datasets, similarly to , which outlined a taxonomy of multi-modal learning. To give examples: we have evaluated our method using the VTAB benchmark, consisting of Natural, Specialized, and Structured image classification tasks. Using the noisy videos from YT8M mainly improves transferability to Specialized and Structured tasks. Using the clean images from ImageNet improves transferability to Natural tasks. Our method, which receives implicit supervision from OpenImagesV4, shows the highest improvement on Structured tasks. Thus, different sources of supervision improve transferability to different tasks. We believe that understanding how different data and learning methods improve performance on different groups of datasets is a central research question in transfer learning today, and that this work contributes to this grand challenge by providing insight into the benefits of learning from uncurated video data.
Learning about objects without labels.
Our method uses an off-the-shelf detector to identify objects. As the detector was trained on labeled data, learning at the object level of the hierarchy uses implicit supervision. Contemporary literature has focused on other self-supervised methods to improve learning from video. For example, one could derive signals about objects using optical flow or keypoint detection [35, 36]. Combining these ideas with our paradigm of learning from the hierarchy might provide a useful research direction.
Learning about entire videos instead of image representations.
Our method shows improved transfer learning and robustness for single-image models. This improvement raises the question of how these results will translate to video understanding. Recently, there has been interest in video recognition  and video action localization . We look forward to testing our learning methods on these tasks.
Improved robustness from learning about objects.
We have shown that our method results in more robust image classifiers. This observation suggests that learning about objects, invariant to other parts of the image, improves robustness. Many computer vision tasks concern objects; we therefore suggest that object-centered representations will contribute to developments in robustness.
-  TensorFlow Object Detection API. Tensorflow hub: Open images v4 with ssd. https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1. Accessed: 2020-07-22.
-  Philip Bachman, R. Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, 2019.
-  Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in Neural Information Processing Systems, 2019.
-  Lucas Beyer, Xiaohua Zhai, Avital Oliver, and Alexander Kolesnikov. S4L: Self-supervised semi-supervised learning. In International Conference on Computer Vision, 2019.
-  Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In European Conference on Computer Vision, 2018.
-  Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. arXiv, abs/2002.05709, 2020.
-  Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv, abs/2003.04297, 2020.
-  Ekin Dogus Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data. arXiv, abs/1805.09501, 2018.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.
-  Josip Djolonga, Jessica Yung, Michael Tschannen, Rob Romijnders, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Matthias Minderer, Alexander D’Amour, Dan Moldovan, Sylvan Gelly, Neil Houlsby, Xiaohua Zhai, and Mario Lucic. On robustness and transferability of convolutional neural networks. arXiv, abs/2007.08558, 2020.
-  Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In International Conference on Computer Vision, 2015.
-  Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In International Conference on Computer Vision, 2017.
-  Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In International Conference on Learning Representations, 2017.
-  Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In Advances in Neural Information Processing Systems, 2019.
-  Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin A. Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on Pattern Analysis and Machine Intelligence, 2016.
-  Alexey Dosovitskiy, Jost Tobias Springenberg, Martin A. Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, 2014.
-  Basura Fernando, Hakan Bilen, Efstratios Gavves, and Stephen Gould. Self-supervised video representation learning with odd-one-out networks. In Conference on Computer Vision and Pattern Recognition, 2017.
-  Ruohan Gao, Dinesh Jayaraman, and Kristen Grauman. Object-centric representation learning from unlabeled videos. In Asian Conference on Computer Vision, 2016.
-  Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019.
-  Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations, 2018.
-  Ross B. Girshick. Fast R-CNN. In International Conference on Computer Vision, 2015.
-  Daniel Gordon, Kiana Ehsani, Dieter Fox, and Ali Farhadi. Watching the world go by: Representation learning from unlabeled videos. arXiv, abs/2003.07990, 2020.
-  Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun. Unsupervised feature learning from temporal data. In International Conference on Learning Representations, 2015.
-  Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. arXiv, abs/2003.01460, 2020.
-  Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In International Conference on Computer Vision, 2019.
-  Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. arXiv, abs/1911.05722, 2019.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, 2016.
-  Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv, abs/1807.01697, 2018.
-  Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. arXiv, abs/1907.07174, 2019.
-  Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
-  Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. arXiv, abs/1503.02531, 2015.
-  R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019.
-  Phillip Isola, Daniel Zoran, Dilip Krishnan, and Edward H. Adelson. Learning visual groups from co-occurrences in space and time. arXiv, abs/1511.06811, 2015.
-  Allan Jabri, Andrew Owens, and Alexei A. Efros. Space-time correspondence as a contrastive random walk. arXiv, abs/2006.14613, 2020.
-  Tomas Jakab, Ankush Gupta, Hakan Bilen, and Andrea Vedaldi. Self-supervised learning of interpretable keypoints from unlabelled videos. In Conference on Computer Vision and Pattern Recognition, 2020.
-  Dinesh Jayaraman, Frederik Ebert, Alexei A. Efros, and Sergey Levine. Time-agnostic prediction: Predicting predictable video frames. In International Conference on Learning Representations, 2019.
-  Dinesh Jayaraman and Kristen Grauman. Slow and steady feature analysis: Higher order temporal coherence in video. In Conference on Computer Vision and Pattern Recognition, 2016.
-  Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The kinetics human action video dataset. arXiv, abs/1705.06950, 2017.
-  Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv, abs/2004.11362, 2020.
-  Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, 2014.
-  Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Large scale learning of general visual representations for transfer. arXiv, abs/1912.11370, 2019.
-  Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In Conference on Computer Vision and Pattern Recognition, 2019.
-  Zihang Lai and Weidi Xie. Self-supervised video representation learning for correspondence flow. In British Machine Vision Conference, 2019.
-  Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In International Conference on Computer Vision, 2017.
-  Hyodong Lee, Joonseok Lee, Joe Yue-Hei Ng, and Paul Natsev. Large scale video representation learning via relational graph clustering. In Conference on Computer Vision and Pattern Recognition, 2020.
-  Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In International Conference on Computer Vision, 2017.
-  Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context. In European Conference on Computer Vision, 2014.
-  Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: single shot multibox detector. In European Conference on Computer Vision, 2016.
-  Wenjie Luo, Yujia Li, Raquel Urtasun, and Richard S. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2016.
-  Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In International Conference on Learning Representations, 2016.
-  Matthias Minderer, Chen Sun, Ruben Villegas, Forrester Cole, Kevin P. Murphy, and Honglak Lee. Unsupervised learning of object structure and dynamics from videos. In Advances in Neural Information Processing Systems, 2019.
-  Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. arXiv, abs/1912.01991, 2019.
-  Ishan Misra, C. Lawrence Zitnick, and Martial Hebert. Shuffle and learn: Unsupervised learning using temporal order verification. In European Conference on Computer Vision, 2016.
-  Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In International Conference on Machine Learning, 2009.
-  Mehdi Noroozi, Hamed Pirsiavash, and Paolo Favaro. Representation learning by learning to count. In International Conference on Computer Vision, 2017.
-  Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan L. Yuille. Weight standardization. arXiv, abs/1903.10520, 2019.
-  Vignesh Ramanathan, Kevin D. Tang, Greg Mori, and Fei-Fei Li. Learning temporal embeddings for complex video analysis. In International Conference on Computer Vision, 2015.
-  Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In International Conference on Machine Learning, 2019.
-  Google Research. Github: Task adaptation. https://github.com/google-research/task_adaptation. Accessed: 2020-07-22.
-  Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Conference on Computer Vision and Pattern Recognition, 2018.
-  Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Conference on Computer Vision and Pattern Recognition, 2015.
-  Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. A systematic framework for natural perturbations from videos. arXiv, abs/1906.02168, 2019.
-  Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Conference on Computer Vision and Pattern Recognition, 2016.
-  Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. In International Conference on Machine Learning, 2015.
-  Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition, 2015.
-  Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv, abs/1906.05849, 2019.
-  Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Neil Houlsby, Sylvain Gelly, and Mario Lucic. Self-supervised learning of video-induced visual invariances. In Conference on Computer Vision and Pattern Recognition, 2020.
-  Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv, abs/1807.03748, 2018.
-  Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In International Conference on Computer Vision, 2015.
-  Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In European Conference on Computer Vision, 2018.
-  Xiaolong Wang, Kaiming He, and Abhinav Gupta. Transitive invariance for self-supervised visual representation learning. In International Conference on Computer Vision, 2017.
-  Xiaolong Wang, Allan Jabri, and Alexei A. Efros. Learning correspondence from the cycle-consistency of time. In Conference on Computer Vision and Pattern Recognition, 2019.
-  Donglai Wei, Joseph J. Lim, Andrew Zisserman, and William T. Freeman. Learning and using the arrow of time. In Conference on Computer Vision and Pattern Recognition, 2018.
-  Laurenz Wiskott and Terrence J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 2002.
-  Yuxin Wu and Kaiming He. Group normalization. International Journal of Computer Vision, 2020.
-  Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, and Qixiang Ye. Video playback rate perception for self-supervised spatio-temporal representation learning. In Conference on Computer Vision and Pattern Recognition, 2020.
-  Amir Roshan Zamir, Alexander Sax, William B. Shen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Conference on Computer Vision and Pattern Recognition, 2018.
-  Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, and Neil Houlsby. The visual task adaptation benchmark. arXiv, abs/1910.04867, 2019.
-  Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In European Conference on Computer Vision, 2016.
-  Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In International Conference on Computer Vision, 2019.
-  Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberk Cinbis, David F. Fouhey, Ivan Laptev, and Josef Sivic. Cross-task weakly supervised learning from instructional videos. In Conference on Computer Vision and Pattern Recognition, 2019.
-  Will Y Zou, Andrew Y Ng, and Kai Yu. Unsupervised learning of visual invariance with temporal coherence. In Advances in Neural Information Processing Systems, 2011.
-  Will Y. Zou, Andrew Y. Ng, Shenghuo Zhu, and Kai Yu. Deep learning of invariant features via simulated fixations in video. In Advances in Neural Information Processing Systems, 2012.
Appendix A Statistics on the annotated YT8M
This section reports statistics on YT8M, annotated with the object detector. We annotate each frame of YT8M with the detector and store the five objects with the highest detection scores. Our method relies on objects recurring multiple times within a video, and it works better when objects occur multiple times in the selected frames. Table 6 therefore shows statistics for the objects that occur in the most videos. For each object class, we count how often the class recurs in the frames sampled with the strategy from . For example, an object of class Footwear occurs in percent of videos, and each of those videos contains, on average, instances of the Footwear class.
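The recurrence statistics described above can be computed with a simple pass over the detections. This is a minimal sketch, not the paper's code: it assumes a hypothetical input format in which each video is a list of sampled frames, and each frame is a list of `(class_name, score)` detections (the top five per frame).

```python
from collections import Counter

def recurrence_stats(videos):
    """For each class, compute (fraction of videos containing the class,
    mean number of instances per video that contains it).

    `videos`: list of videos; each video is a list of frames; each frame
    is a list of (class_name, score) detections (top-5 per frame).
    """
    videos_with = Counter()   # number of videos containing each class
    instances = Counter()     # total instances over those videos
    for video in videos:
        per_video = Counter()
        for frame in video:
            for cls, _score in frame:
                per_video[cls] += 1
        for cls, n in per_video.items():
            videos_with[cls] += 1
            instances[cls] += n
    total = len(videos)
    return {
        cls: (videos_with[cls] / total, instances[cls] / videos_with[cls])
        for cls in videos_with
    }
```

The second value is the "Recurrence" column of Table 6: it is averaged only over videos that actually contain the class, so it is always at least 1.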
We discard objects with a low detection score. Figure 5 shows the fraction of boxes below a given threshold. All methods in this work use a threshold of , which discards about 3 percent of the objects. We experimented with higher thresholds, but these resulted in worse VTAB scores.
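The two filtering steps above (keep the top five boxes per frame, drop low-score boxes) can be sketched as follows. This is an illustrative snippet under the same hypothetical `(class_name, score)` detection format as before; the concrete threshold value is not reproduced here.

```python
def top_k_above_threshold(detections, threshold, k=5):
    """Keep at most k detections per frame, highest score first,
    discarding any box whose score is below the threshold."""
    kept = sorted((d for d in detections if d[1] >= threshold),
                  key=lambda d: d[1], reverse=True)
    return kept[:k]

def fraction_discarded(all_scores, threshold):
    """Fraction of detection scores below the threshold (cf. Figure 5)."""
    if not all_scores:
        return 0.0
    return sum(s < threshold for s in all_scores) / len(all_scores)
```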
| Label name | Videos (%) | Recurrence |
Appendix B Sensitivity to hyperparameters
Our experiments use three important hyperparameters, which we set using the validation sets from the VTAB benchmark. This section shows the sweeps so that one can judge the sensitivity to each hyperparameter. Figure 6 shows the search for the hyperparameter from Equation 3. Figure 7 shows the search for the positive coefficient on the cross-entropy loss in the experiment of Table 1, row "Also predict cross entropy". Figure 8 shows the search for the positive coefficient on the cross-entropy loss when learning from the soft labels from ImageNet, for Table 1, row "Distilling from ImageNet".
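The coefficient swept in Figures 7 and 8 enters the objective as a simple weighted sum of the contrastive term and a cross-entropy term. The following sketch illustrates this combination with a plain-Python cross-entropy; the function names and the scalar interface are illustrative, not the paper's implementation.

```python
import math

def softmax_cross_entropy(logits, label):
    """Numerically stable cross-entropy of one example against a hard label."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]

def total_loss(contrastive, logits, label, alpha):
    """Contrastive objective plus an alpha-weighted cross-entropy term.
    `alpha` is the positive coefficient swept on the VTAB validation sets."""
    return contrastive + alpha * softmax_cross_entropy(logits, label)
```

Setting `alpha = 0` recovers the purely contrastive objective, so the sweep directly measures how much supervised signal helps.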
| ResNet50 from scratch | 42.1 | 26.9 | 65.8 | 43.6 | 37.7 | 11.0 | 23.0 | 40.2 | 13.3 | 3.9 | 59.3 | 63.1 | 84.8 | 41.6 | 73.5 | 54.8 | 38.5 | 35.8 | 37.3 | 87.9 | 20.9 | 36.9 | 36.9 |
| Transitive Invariance | 44.2 | 35.0 | 61.8 | 43.3 | 54.9 | 7.1 | 38.3 | 28.2 | 32.3 | 7.4 | 77.0 | 63.1 | 84.1 | 50.0 | 50.0 | 61.7 | 12.7 | 35.0 | 59.3 | 86.1 | 21.1 | 29.2 | 41.6 |
| ImageNet supervised (3x) | 69.5 | 72.6 | 83.8 | 59.5 | 85.6 | 61.0 | 69.6 | 88.8 | 90.9 | 37.4 | 75.0 | 78.0 | 95.7 | 82.5 | 78.9 | 61.4 | 64.6 | 45.3 | 60.5 | 93.2 | 32.9 | 36.6 | 81.5 |
| Detector backbone | 61.6 | 60.0 | 80.4 | 53.5 | 84.3 | 38.2 | 48.4 | 77.4 | 58.6 | 25.2 | 88.1 | 70.6 | 94.0 | 73.4 | 83.5 | 58.2 | 42.8 | 47.8 | 46.4 | 73.4 | 39.4 | 42.9 | 77.4 |
| Rand boxes and labels | 60.3 | 55.1 | 80.0 | 54.9 | 75.7 | 28.2 | 49.6 | 76.7 | 53.1 | 14.7 | 88.0 | 71.3 | 93.8 | 74.0 | 80.8 | 60.8 | 55.7 | 34.5 | 50.7 | 94.0 | 37.2 | 37.0 | 69.5 |
| Distilling from ImageNet | 63.1 | 59.6 | 81.5 | 56.9 | 81.3 | 35.4 | 58.4 | 75.5 | 54.3 | 24.9 | 87.5 | 73.0 | 95.4 | 75.8 | 82.0 | 61.0 | 50.0 | 47.0 | 50.7 | 89.3 | 36.5 | 41.8 | 79.3 |
| Include CE loss | 64.9 | 60.5 | 81.3 | 60.5 | 83.9 | 38.9 | 55.2 | 76.2 | 59.2 | 20.7 | 89.3 | 71.2 | 94.7 | 77.0 | 82.3 | 63.7 | 65.6 | 50.8 | 52.8 | 94.2 | 34.7 | 40.4 | 81.7 |
| Filter half of detections (video only) | 61.9 | 57.6 | 80.3 | 56.4 | 79.7 | 31.8 | 50.4 | 78.3 | 59.2 | 14.5 | 89.2 | 71.0 | 94.3 | 74.9 | 81.0 | 58.4 | 50.6 | 48.4 | 52.2 | 90.9 | 34.9 | 41.6 | 74.4 |
| VIVI (3x) | 70.5 | 72.6 | 83.8 | 62.0 | 88.0 | 54.3 | 69.4 | 89.6 | 87.9 | 34.6 | 84.2 | 72.9 | 95.3 | 82.3 | 84.9 | 58.3 | 74.5 | 46.3 | 67.8 | 92.1 | 33.1 | 44.1 | 80.2 |