Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey

02/16/2019 ∙ by Longlong Jing, et al. ∙ CUNY

Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, self-supervised learning methods, a subset of unsupervised learning methods, have been proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminology of this field are described. Then the common deep neural network architectures used for self-supervised learning are summarized. Next, the main components and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used image and video datasets and the existing self-supervised visual feature learning methods. Then, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. Finally, the paper concludes with a set of promising future directions for self-supervised visual feature learning.


1 Introduction

1.1 Motivation

Due to their powerful ability to learn different levels of general visual features, deep neural networks have been used as the basic structure of many computer vision applications such as object detection [1, 2, 3], semantic segmentation [4, 5, 6], image captioning [7], etc. Models trained on large-scale image datasets like ImageNet are widely used as pre-trained models and fine-tuned for other tasks for two main reasons: (1) the parameters learned from large-scale diverse datasets provide a good starting point, so networks trained on other tasks converge faster; (2) a network trained on a large-scale dataset has already learned a hierarchy of features, which helps reduce the over-fitting problem during training on other tasks, especially when the datasets of those tasks are small or the training labels are scarce.

The performance of deep convolutional neural networks (ConvNets) greatly depends on their capacity and the amount of training data. Different kinds of network architectures have been developed to increase the capacity of network models, and ever larger datasets have been collected. Various networks, including AlexNet [8], VGG [9], GoogLeNet [10], ResNet [11], and DenseNet [12], and large-scale datasets such as ImageNet [13] and OpenImage [14], have been proposed to train very deep ConvNets. With sophisticated architectures and large-scale datasets, the performance of ConvNets keeps surpassing the state of the art on many computer vision tasks [15, 1, 4, 7, 16].

However, the collection and annotation of large-scale datasets are time-consuming and expensive. As one of the most widely used datasets for pre-training very deep 2D convolutional neural networks (2DConvNets), ImageNet [13] contains about 1.3 million labeled images covering 1,000 classes, where each image is labeled by human workers with one class label. Compared to image datasets, the collection and annotation of video datasets are more expensive due to the temporal dimension. The Kinetics dataset [17], which is mainly used to train ConvNets for video human action recognition, consists of hundreds of thousands of videos covering hundreds of human action categories, with each video lasting around 10 seconds. It took many Amazon Mechanical Turk workers a great deal of time to collect and annotate a dataset at such a scale.

Fig. 1: The general pipeline of self-supervised learning. Visual features are learned through the process of training ConvNets to solve a pre-defined pretext task. After the self-supervised pretext task training is finished, the learned parameters serve as a pre-trained model and are transferred to other downstream computer vision tasks by fine-tuning. The performance on these downstream tasks is used to evaluate the quality of the learned features. During knowledge transfer to downstream tasks, usually only the general features from the first several layers are transferred.

To avoid time-consuming and expensive data annotations, many self-supervised methods have been proposed to learn visual features from large-scale unlabeled images or videos without using any human annotations. A popular solution is to propose various pretext tasks for networks to solve: the networks are trained by optimizing the objective functions of the pretext tasks, and the features are learned through this process. Various pretext tasks have been proposed for self-supervised learning, including colorizing grayscale images [18], image inpainting [19], image jigsaw puzzles [20], etc. The pretext tasks share two common properties: (1) visual features of images or videos need to be captured by ConvNets to solve the pretext tasks; (2) pseudo labels for the pretext task can be automatically generated based on the attributes of images or videos.

The general pipeline of self-supervised learning is shown in Fig. 1. During the self-supervised training phase, a pre-defined pretext task is designed for ConvNets to solve, and the pseudo labels for the pretext task are automatically generated based on some attributes of the data. The ConvNet is then trained to optimize the objective function of the pretext task. After the self-supervised training is finished, the learned visual features can be transferred to downstream tasks as pre-trained models (especially when only relatively little labeled data is available) to improve performance and overcome over-fitting. Generally, shallow layers capture general low-level features like edges, corners, and textures, while deeper layers capture task-related high-level features. Therefore, visual features from only the first several layers are transferred during the supervised downstream task training phase.
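As a concrete instance of the pseudo-label generation step of this pipeline, the following NumPy sketch builds training pairs for a rotation-recognition pretext task in the spirit of [36]: each unlabeled image is rotated by a random multiple of 90 degrees, and the rotation index serves as the automatically generated pseudo label. The batch size and image shape are illustrative, not from any specific paper.

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Generate (input, pseudo-label) pairs for a rotation-prediction
    pretext task: each image is rotated by 0/90/180/270 degrees, and
    the rotation index is the automatically generated pseudo label."""
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))      # pseudo label in {0, 1, 2, 3}
        xs.append(np.rot90(img, k=k))    # rotate image by k * 90 degrees
        ys.append(k)
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32, 3))      # a toy batch of unlabeled images
x, y = make_rotation_batch(images, rng)  # inputs and pseudo labels
```

No human annotation is involved at any point; a ConvNet trained to predict `y` from `x` must capture object shape and orientation cues to succeed.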

1.2 Term Definition

To make this survey easy to read, we first define the terms used in the remaining sections.

  • Human-annotated label: Human-annotated labels refer to labels of data that are manually annotated by human workers.

  • Pseudo label: Pseudo labels are automatically generated labels based on data attributes for pretext tasks.

  • Pretext Task: Pretext tasks are pre-designed tasks for networks to solve, and visual features are learned by learning objective functions of pretext tasks.

  • Downstream Task: Downstream tasks are computer vision applications that are used to evaluate the quality of features learned by self-supervised learning. These applications can greatly benefit from the pre-trained models when training data are scarce. In general, human-annotated labels are needed to solve the downstream tasks. However, in some applications, the downstream task can be the same as the pretext task without using any human-annotated labels.

  • Supervised Learning: Supervised learning indicates learning methods using data with fine-grained human-annotated labels to train networks.

  • Semi-supervised Learning: Semi-supervised learning refers to learning methods using a small amount of labeled data in conjunction with a large amount of unlabeled data.

  • Weakly-supervised Learning: Weakly supervised learning refers to learning methods that learn with coarse-grained labels or inaccurate labels. The cost of obtaining weak supervision labels is generally much lower than that of the fine-grained labels used by supervised methods.

  • Unsupervised Learning: Unsupervised learning refers to learning methods without using any human-annotated labels.

  • Self-supervised Learning: Self-supervised learning is a subset of unsupervised learning methods. Self-supervised learning refers to learning methods in which ConvNets are explicitly trained with automatically generated labels. This review only focuses on self-supervised learning methods for visual feature learning with ConvNets in which the features can be transferred to multiple different computer vision tasks.

Since no human annotations are needed to generate pseudo labels during self-supervised training, very large-scale datasets can be used. Trained with these pseudo labels, self-supervised methods have achieved promising results, and the performance gap to supervised methods on downstream tasks is narrowing. This paper provides a comprehensive survey of deep ConvNet-based self-supervised visual feature learning methods. The key contributions of this paper are as follows:

  • To the best of our knowledge, this is the first comprehensive survey about self-supervised visual feature learning with deep ConvNets which will be helpful for researchers in this field.

  • An in-depth review of recently developed self-supervised learning methods and datasets.

  • Quantitative performance analysis and comparison of the existing methods are provided.

  • A set of possible future directions for self-supervised learning is pointed out.

2 Formulation of Different Learning Schemas

Based on the training labels, visual feature learning methods can be grouped into the following four categories: supervised, semi-supervised, weakly supervised, and unsupervised. In this section, the four types of learning methods are compared and key terminologies are defined.

2.1 Supervised Learning Formulation

For supervised learning, given a dataset X, for each datum X_i in X there is a corresponding human-annotated label Y_i. For a set of N labeled training data D = {X_i}_{i=1}^{N}, the training loss function is defined as:

loss(D) = \min_{\theta} \frac{1}{N} \sum_{i=1}^{N} loss(X_i, Y_i)    (1)
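The inner term of Eq. (1) can be any per-example loss; the sketch below evaluates the common cross-entropy choice with NumPy on a toy batch. The logits and labels are illustrative values, not from the survey.

```python
import numpy as np

def supervised_loss(logits, labels):
    """Average cross-entropy over N labeled examples: the empirical
    objective that the minimization in Eq. (1) is taken over."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    n = len(labels)
    return -log_probs[np.arange(n), labels].mean()   # (1/N) sum loss(X_i, Y_i)

logits = np.array([[4.0, 0.0, 0.0], [0.0, 5.0, 0.0]])  # toy network predictions
labels = np.array([0, 1])                              # human-annotated labels Y_i
loss = supervised_loss(logits, labels)
```

Training then amounts to minimizing this quantity over the network parameters with gradient descent.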

Trained with accurate human-annotated labels, supervised learning methods have obtained breakthrough results on different computer vision applications [8, 4, 1, 16]. However, data collection and annotation are usually expensive and may require special skills. Therefore, semi-supervised, weakly supervised, and unsupervised learning methods were proposed to reduce the cost.

2.2 Semi-Supervised Learning Formulation

For semi-supervised visual feature learning, given a small labeled dataset X with N data and a large unlabeled dataset Z with M data, for each datum X_i in X there is a corresponding human-annotated label Y_i. For the labeled training data {X_i}_{i=1}^{N} and the unlabeled training data {Z_j}_{j=1}^{M}, the training loss function is defined as:

loss(D) = \min_{\theta} \frac{1}{N} \sum_{i=1}^{N} loss(X_i, Y_i) + \frac{1}{M} \sum_{j=1}^{M} loss(R(Z_j, X))    (2)

where R(Z_j, X) is a task-specific function that represents the relation between each unlabeled training datum Z_j and the labeled dataset X.

2.3 Weakly Supervised Learning Formulation

For weakly supervised visual feature learning, given a dataset X, for each datum X_i in X there is a corresponding coarse-grained label C_i. For a set of N training data D = {X_i}_{i=1}^{N}, the training loss function is defined as:

loss(D) = \min_{\theta} \frac{1}{N} \sum_{i=1}^{N} loss(X_i, C_i)    (3)

Since the cost of weak supervision is much lower than that of the fine-grained labels used by supervised methods, large-scale datasets are relatively easier to obtain. Recently, several papers proposed to learn image features from web-collected images using hashtags as category labels [21, 22], and obtained very good performance [21].

2.4 Unsupervised Learning Formulation

Unsupervised learning refers to learning methods that do not need any human-annotated labels. This type of method includes fully unsupervised learning methods, which do not need any labels at all, as well as self-supervised learning methods, in which networks are explicitly trained with automatically generated pseudo labels that do not involve any human annotation.

2.4.1 Self-supervised Learning

Recently, many self-supervised learning methods for visual feature learning have been developed without using any human-annotated labels [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. Some papers refer to this type of learning method as unsupervised learning [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. Compared to supervised learning methods, which require a data pair (X_i, Y_i) where Y_i is annotated by human labor, self-supervised learning also trains with data X_i along with a pseudo label P_i, where P_i is automatically generated for a pre-defined pretext task without involving any human annotation. The pseudo label P_i can be generated by using attributes of images or videos, such as the context of images [20, 36, 18, 19], or by traditional hand-designed methods [49, 50, 51].

Given a set of N training data D = {X_i}_{i=1}^{N} with automatically generated pseudo labels P_i, the training loss function is defined as:

loss(D) = \min_{\theta} \frac{1}{N} \sum_{i=1}^{N} loss(X_i, P_i)    (4)

As long as the pseudo labels are automatically generated without involving human annotations, the methods belong to self-supervised learning. Recently, self-supervised learning methods have achieved great progress. This paper focuses on self-supervised learning methods that are mainly designed for visual feature learning, where the learned features can be transferred to multiple visual tasks and support new tasks with limited labeled data. This paper summarizes these self-supervised feature learning methods from different perspectives, including network architectures, commonly used pretext tasks, datasets, and applications.

3 Common Deep Network Architectures

Regardless of the learning category, the methods share similar network architectures. This section reviews common architectures for learning both image and video features.

3.1 Architectures for Learning Image Features

Various 2DConvNets have been designed for image feature learning. Here, five milestone architectures for image feature learning including AlexNet [8], VGG [9], GoogLeNet [10], ResNet [11], and DenseNet [12] are reviewed.

3.1.1 AlexNet

AlexNet obtained a large improvement in image classification performance on the ImageNet dataset compared to the previous state-of-the-art methods [8]. With the support of powerful GPUs, AlexNet, which has about 60 million parameters, was trained on ImageNet with about 1.2 million images. As shown in Fig. 2, the architecture of AlexNet has 8 layers, of which 5 are convolutional layers and 3 are fully connected layers. A ReLU is applied after each convolutional layer. Most of the network parameters come from the fully connected layers, so with this scale of parameters the network can easily over-fit. Therefore, different techniques are applied to avoid the over-fitting problem, including data augmentation, dropout, and normalization.
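A quick tally illustrates why the fully connected layers dominate the parameter count. The sketch below uses layer shapes of the ungrouped (single-tower) AlexNet variant that is commonly used today; biases are omitted, so the counts are approximate rather than the original paper's exact figures.

```python
# Approximate weight-parameter tally for an ungrouped AlexNet variant.
conv = {                      # (kernel_h, kernel_w, in_ch, out_ch)
    "conv1": (11, 11, 3, 96),
    "conv2": (5, 5, 96, 256),
    "conv3": (3, 3, 256, 384),
    "conv4": (3, 3, 384, 384),
    "conv5": (3, 3, 384, 256),
}
fc = {                        # (in_features, out_features)
    "fc6": (6 * 6 * 256, 4096),
    "fc7": (4096, 4096),
    "fc8": (4096, 1000),
}
conv_params = sum(h * w * ci * co for h, w, ci, co in conv.values())
fc_params = sum(i * o for i, o in fc.values())
fc_share = fc_params / (conv_params + fc_params)   # fraction held by FC layers
```

Under these shapes the fully connected layers hold well over 90% of all weights, which is exactly why dropout is applied to them.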

Fig. 2: The architecture of AlexNet [8]. The numbers indicate the number of channels of each feature map. Figure is reproduced based on AlexNet [8].

3.1.2 VGG

VGG was proposed by Simonyan and Zisserman and won first place in the localization task and second place in the classification task of the ILSVRC-2014 competition [9]. Simonyan and Zisserman proposed networks of various depths, of which the 16-layer VGG is the most widely used due to its moderate model size and superior performance. The architecture of VGG-16 is shown in Fig. 3. It has 13 convolutional layers grouped into five convolution blocks. The main difference between VGG and AlexNet is that AlexNet uses large convolution strides and large kernel sizes, while all the convolution kernels in VGG have the same small size (3×3) and a small convolution stride (1). Large kernels lead to too many parameters and a large model size, while a large convolution stride may cause the network to miss fine features in the lower layers. The smaller kernel size makes the training of very deep convolutional neural networks feasible while still preserving the fine-grained information in the network.
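The arithmetic behind this design choice is easy to check: two stacked 3×3 convolutions see the same 5×5 input region as a single 5×5 convolution but need fewer weights (and add an extra non-linearity). The channel width below is an assumed value for illustration.

```python
def conv_weights(k, c_in, c_out):
    """Weight count of one k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

c = 256  # assumed channel width, kept equal across layers for a fair comparison
one_5x5 = conv_weights(5, c, c)        # a single 5x5 layer
two_3x3 = 2 * conv_weights(3, c, c)    # a stack of two 3x3 layers

# Stacking two 3x3 convolutions (stride 1) grows the receptive field
# to 3 + (3 - 1) = 5, matching the single 5x5 layer.
receptive_field = 3 + (3 - 1)
```

With equal channel widths the 3×3 stack uses 18/25 of the parameters of the 5×5 layer, and the saving compounds as kernels grow.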

Fig. 3: The architecture of VGG [9]. Figure is reproduced based on VGG [9].

3.1.3 ResNet

VGG demonstrated that deeper networks can obtain better performance. However, deeper networks are more difficult to train due to two problems: gradient vanishing and gradient explosion. ResNet, proposed by He et al., uses skip connections in convolution blocks, sending the previous feature map to the next convolution block, to overcome gradient vanishing and gradient explosion [11]. The details of the skip connection are shown in Fig. 4. With the skip connection, training very deep neural networks on GPUs becomes feasible.

Fig. 4: The architecture of Residual block [11]. The identity mapping can effectively reduce gradient vanishing and explosion which make the training of very deep network feasible. Figure is reproduced based on ResNet [11].

In ResNet [11], He et al. also evaluated networks of different depths for image classification. Due to its smaller model size and superior performance, ResNet is often used as the base network for other computer vision tasks. Convolution blocks with skip connections are also widely used as basic building blocks.
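A minimal NumPy sketch of the residual idea, stripped of convolutions and batch normalization (dense layers stand in for them): the input skips over the transform branch and is added back before the final activation, so when the residual branch outputs zero the block reduces to an identity-plus-ReLU mapping.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Simplified residual block on flat feature vectors: the identity
    path x is added to the two-layer residual branch before the ReLU."""
    out = relu(x @ w1)        # first transform (stands in for conv + BN)
    out = out @ w2            # second transform, pre-activation
    return relu(out + x)      # skip connection: identity + residual

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
# With zero residual weights the block collapses to ReLU(x): the
# identity mapping the skip connection is designed to preserve.
y = residual_block(x, np.zeros((64, 64)), np.zeros((64, 64)))
```

Because gradients flow through the identity path unchanged, stacking many such blocks avoids the vanishing-gradient problem described above.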

3.1.4 GoogLeNet

Fig. 5: The architecture of Inception block [10]. Figure is reproduced based on GoogLeNet [10].

GoogLeNet, a 22-layer deep network, was proposed by Szegedy et al. and won the ILSVRC-2014 classification challenge with a top-5 error of 6.67% [10]. Compared to previous work that built deeper networks, Szegedy et al. explored building a wider network in which each layer has multiple parallel convolution layers. The basic block of GoogLeNet is the inception block, which consists of parallel convolution layers with different kernel sizes, combined with 1×1 convolutions for dimension reduction. The architecture of the inception block of GoogLeNet is shown in Fig. 5. With a carefully crafted design, they increased the depth and width of the network while keeping the computational cost constant.
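At the shape level, an inception block is simple: its parallel branches all preserve the spatial size, and their outputs are concatenated along the channel axis. The sketch below only tracks shapes; the branch channel counts are illustrative values, not necessarily GoogLeNet's actual configuration.

```python
import numpy as np

# Shape-level sketch of an inception block: four parallel branches
# produce feature maps of the same spatial size, and the block output
# is their concatenation along the channel axis.
h, w = 28, 28
branches = {
    "1x1":       np.zeros((64, h, w)),
    "3x3":       np.zeros((128, h, w)),  # preceded by a 1x1 reduction
    "5x5":       np.zeros((32, h, w)),   # preceded by a 1x1 reduction
    "pool_proj": np.zeros((32, h, w)),   # max-pool followed by a 1x1 projection
}
out = np.concatenate(list(branches.values()), axis=0)  # channels add up
```

The 1×1 reductions in front of the expensive 3×3 and 5×5 branches are what keep the computational cost constant as the width grows.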

3.1.5 DenseNet

Fig. 6: The architecture of the Dense Block proposed in DenseNet [12]. Figure is reproduced based on [12].

Most networks, including AlexNet, VGG, and ResNet, follow a hierarchical architecture. Images are fed to the network and features are extracted by different layers: the shallow layers extract low-level general features, while the deep layers extract high-level task-specific features [52]. However, when a network goes deeper, the deeper layers may lose direct access to the low-level features the network needs to accomplish the task.

To alleviate this problem, Huang et al. proposed the dense connection to send all the features before a convolution block as the input to the next convolution block in the neural network [12]. As shown in Fig. 6, the output features of all the previous convolution blocks serve as the input to the current block. In this way, the shallower blocks focus on the low-level general features while the deeper blocks can focus on the high-level task-specific features.
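The channel bookkeeping of a dense block can be sketched in a few lines: every layer receives the concatenation of all earlier feature maps and contributes k new channels, where k is the growth rate. The numbers below are illustrative, not DenseNet's actual configuration.

```python
def dense_block_channels(c_in, growth_rate, num_layers):
    """Input channel count seen by each layer of a dense block, where
    every layer concatenates all previous features and adds k new ones."""
    seen = []
    c = c_in
    for _ in range(num_layers):
        seen.append(c)          # this layer sees all previous feature maps
        c += growth_rate        # and contributes growth_rate more channels
    return seen, c

seen, c_out = dense_block_channels(c_in=64, growth_rate=32, num_layers=4)
```

The linear growth of `seen` shows how later layers keep direct access to early low-level features, which is precisely the problem the dense connection is meant to solve.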

3.2 Architectures for Learning Video Features

To extract both spatial and temporal information from videos, several architectures have been designed for video feature learning, including 2DConvNet-based methods [53], 3DConvNet-based methods [16], and LSTM-based methods [54]. The 2DConvNet-based methods apply a 2DConvNet to every single frame, and the image features of multiple frames are fused as video features. The 3DConvNet-based methods employ 3D convolution operations to simultaneously extract both spatial and temporal features from multiple frames. The LSTM-based methods employ LSTMs to model long-term dynamics within a video. This section briefly summarizes these three types of architectures for video feature learning.

3.2.1 Two-Stream Network

Fig. 7: The general architecture of the two-stream network, which includes one spatial stream and one temporal stream. Figure is reproduced based on [53].

Videos are generally composed of varying numbers of frames. To recognize actions in a video, networks are required to capture appearance features as well as temporal dynamics from frame sequences. As shown in Fig. 7, a two-stream 2DConvNet-based network was proposed by Simonyan and Zisserman for human action recognition, using one 2DConvNet to capture spatial features from the RGB stream and another 2DConvNet to capture temporal features from the optical flow stream [53]. Optical flow encodes the boundaries of moving objects; therefore, the temporal-stream ConvNet can more easily capture the motion information within the frames.

Experiments showed that fusing the two streams can significantly improve action recognition accuracy. Later, this work was extended to multi-stream networks [55, 56, 57, 58, 59] to fuse features from different types of inputs such as dynamic images [60] and differences of frames [61].
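One simple way to fuse the two streams is late fusion: average the class probabilities produced by each stream (weighted averaging and SVM-based fusion are other options used in the literature). The per-class scores below are made-up values for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical per-class scores from the two streams for one clip.
spatial_logits = np.array([2.0, 0.5, 0.1])   # RGB (appearance) stream
temporal_logits = np.array([0.2, 2.5, 0.1])  # optical-flow (motion) stream

# Late fusion: average the class probability distributions.
fused = (softmax(spatial_logits) + softmax(temporal_logits)) / 2.0
prediction = int(np.argmax(fused))
```

Here the motion stream's confident vote flips the final decision, illustrating how the two streams can complement each other.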

3.2.2 Spatiotemporal Convolutional Neural Network

3D convolution operation was first proposed in 3DNet [62] for human action recognition. Compared to 2DConvNets which individually extract the spatial information of each frame and then fuse them together as video features, 3DConvNets are able to simultaneously extract both spatial and temporal features from multiple frames.

C3D [16] is a VGG-like 11-layer 3DConvNet designed for human action recognition. The network contains 8 convolutional layers and 2 fully connected layers. All the kernels have the size of 3×3×3, and the convolution stride is fixed to 1 pixel. Due to its powerful ability to simultaneously extract both spatial and temporal features from multiple frames, the network achieved state-of-the-art results on several video analysis tasks, including human action recognition [63], action similarity labeling [64], scene classification [65], and object recognition in videos [66].

The input of C3D is 16 consecutive RGB frames, from which the appearance and temporal cues of 16-frame clips are extracted. However, the paper on long-term temporal convolutions (LTC) [67] argues that, for long-lasting actions, 16 frames are insufficient to represent the whole action. Therefore, larger numbers of frames were employed to train 3DConvNets, achieving better performance than C3D [67, 68].
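The shape arithmetic of such 3D convolutions is worth making explicit: 3×3×3 kernels with stride 1 and padding 1 preserve the (time, height, width) extent, and it is the pooling layers that shrink the temporal axis. The clip size below follows C3D-style 16-frame inputs; the exact crop size is an assumption for illustration.

```python
def conv3d_out(size, kernel, stride=1, pad=1):
    """Output length along one axis of a 3D convolution."""
    return (size + 2 * pad - kernel) // stride + 1

# A C3D-style setting: 3x3x3 kernels, stride 1, padding 1 preserve
# the full (time, height, width) extent of a 16-frame clip.
t, h, w = 16, 112, 112
shape = tuple(conv3d_out(s, kernel=3) for s in (t, h, w))

# A 2x2x2 max-pool with stride 2 is what halves every dimension,
# including time -- this is how the clip is progressively summarized.
pooled = tuple(s // 2 for s in shape)
```

Counting how many such poolings fit explains why a 16-frame clip supports only a limited temporal hierarchy, which motivates the longer inputs used by LTC [67].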

With the success of applying 3D convolutions to video analysis tasks, various 3DConvNet architectures have been proposed [69, 70, 71]. Hara et al. proposed 3DResNet by replacing all the 2D convolution layers in ResNet with 3D convolution layers, and showed performance comparable to the state of the art on the action recognition task on several datasets [70].

3.2.3 Recurrent Neural Network

Fig. 8: The architecture of long-term recurrent convolutional networks (LRCN) [54]. LSTM is employed to model the long term temporal information within a frame sequence. Figure is reproduced based on [54].

Due to their ability to model the temporal dynamics within a sequence, recurrent neural networks (RNNs) are often applied to videos as ordered frame sequences. Compared to the standard RNN [72], long short-term memory (LSTM) [73] uses memory cells to store, modify, and access internal states, which better models the long-term temporal relationships within video frames.

Based on the advantages of the LSTM, Donahue et al. proposed long-term recurrent convolutional networks (LRCN) for human action recognition [54]. The framework of the LRCN is shown in Fig. 8. An LSTM is sequentially applied to the features extracted by a ConvNet to model the temporal dynamics in the frame sequence. By modeling a video as a frame sequence with the LSTM, this model is able to explicitly capture the long-term temporal dynamics within a video. Later, this model was extended to a deeper LSTM for action recognition [74, 75], video captioning [76], and gesture recognition tasks [77].
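A minimal NumPy sketch (not the actual LRCN implementation) of the core idea: a single LSTM cell is stepped over per-frame ConvNet features, and its final hidden state serves as a video-level feature. Dimensions and random weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_over_frames(frame_feats, W, U, b, hidden):
    """Run a single-layer LSTM over per-frame features, LRCN-style:
    the final hidden state summarizes the whole frame sequence."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in frame_feats:                  # one recurrent step per frame
        z = W @ x + U @ h + b              # all four gate pre-activations
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                  # update the memory cell
        h = o * np.tanh(c)                 # expose the gated state
    return h

rng = np.random.default_rng(0)
feat_dim, hidden, frames = 32, 16, 10
frame_feats = rng.standard_normal((frames, feat_dim))  # stand-in ConvNet features
W = rng.standard_normal((4 * hidden, feat_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
video_feature = lstm_over_frames(frame_feats, W, U, b, hidden)
```

In the full LRCN, a classifier is applied on top of the hidden states; here the point is only how the memory cell carries information across frames.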

3.3 Summary of ConvNet Architectures

Deep ConvNets have demonstrated great potential in various computer vision tasks, and visualizations of image and video features have shown that these networks truly learn meaningful features required by the corresponding tasks [52, 78, 79, 80]. However, one common drawback is that these networks easily over-fit when training data are scarce, since each network has millions of parameters.

Take 3DResNet as an example: when trained from scratch on the UCF101 action recognition dataset [63], its accuracy is relatively low. However, when the network is first pre-trained with human-annotated class labels on the large-scale Kinetics dataset and then fine-tuned on UCF101, its accuracy increases substantially. Pre-trained models on large-scale datasets can thus speed up the training process and improve performance on relatively small datasets. However, collecting and annotating large-scale datasets is very expensive and time-consuming.

In order to obtain pre-trained models from large-scale datasets without expensive human annotations, many self-supervised learning methods were proposed to learn image and video features from pre-designed pretext tasks. The next section describes the general pipeline of the self-supervised image and video feature learning.

4 Commonly used Pretext and Downstream Tasks

Fig. 9: Self-supervised visual feature learning schema. The ConvNet is trained by minimizing errors between pseudo labels and predictions of the ConvNet. Since the pseudo labels are automatically generated, no human annotations are involved during the whole process.

Most existing self-supervised learning approaches follow the schema shown in Fig. 9. Generally, a pretext task is defined for ConvNets to solve, and visual features are learned through the process of accomplishing this pretext task. The pseudo labels for the pretext task can be automatically generated without human annotations. The ConvNet is optimized by minimizing the error between its predictions and the pseudo labels. After training on the pretext task is finished, ConvNet models that can capture visual features of images or videos are obtained.

4.1 Learning Visual Features from Pretext Tasks

To relieve the burden of large-scale dataset annotation, a pretext task is generally designed for networks to solve while pseudo labels for the pretext task are automatically generated based on data attributes. Many pretext tasks have been designed and applied for self-supervised learning such as foreground object segmentation [81], image inpainting [19], clustering [44], image colorization [82], temporal order verification [40], visual audio correspondence verification [25], and so on. Effective pretext tasks ensure that semantic features are learned through the process of accomplishing the pretext tasks.

Take image colorization as an example: it is the task of colorizing gray-scale images into colorful images. To generate realistic colorful images, networks are required to learn the structure and context information of images. In this pretext task, the data X_i is a gray-scale image, which can be generated by performing a linear transformation on the RGB image, while the pseudo label P_i is the RGB image itself. The training pair (X_i, P_i) can be generated in real time with negligible cost. Self-supervised learning with other pretext tasks follows a similar pipeline.
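The pair-generation step above can be sketched in a few lines of NumPy. The linear transformation here uses the ITU-R BT.601 luma weights, one common choice; any fixed linear RGB-to-gray mapping works the same way.

```python
import numpy as np

def colorization_pair(rgb):
    """Build an (input, pseudo-label) pair for the colorization pretext
    task: the input is a gray-scale image from a fixed linear combination
    of the RGB channels, and the pseudo label is the RGB image itself."""
    weights = np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 luma weights
    gray = rgb @ weights                        # H x W gray-scale input X_i
    return gray, rgb                            # (X_i, P_i)

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))                   # a toy "unlabeled" image
x, p = colorization_pair(rgb)
```

Since the pair is derived entirely from the unlabeled image itself, arbitrarily many training examples come for free.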

4.2 Commonly Used Pretext Tasks

Fig. 10: Categories of pretext tasks for self-supervised visual feature learning: generation-based, context-based, free semantic label-based, and cross modal-based.

According to the data attributes used to design pretext tasks, as shown in Fig. 10, we summarize the pretext tasks into four categories: generation-based, context-based, free semantic label-based, and cross modal-based.

Generation-based Methods: This type of methods learns visual features by solving pretext tasks that involve image or video generation.

  • Image Generation: Visual features are learned through the process of image generation tasks. This type of methods includes image colorization [18], image super-resolution [15], image inpainting [19], and image generation with Generative Adversarial Networks (GANs) [83, 84].

  • Video Generation: Visual features are learned through the process of video generation tasks. This type of methods includes video generation with GANs [85, 86] and video prediction [37].

Context-based pretext tasks: The design of context-based pretext tasks mainly employs contextual features of images or videos, such as context similarity, spatial structure, and temporal structure.

  • Context Similarity: Pretext tasks are designed based on the context similarity between image patches. This type of methods includes image clustering-based methods [34, 44], and graph constraint-based methods [43].

  • Spatial Context Structure: Pretext tasks used to train ConvNets are based on the spatial relations among image patches. This type of methods includes image jigsaw puzzles [20, 87, 88, 89], context prediction [41], and geometric transformation recognition [36, 28], etc.

  • Temporal Context Structure: The temporal order of frames in videos is used as the supervision signal. The ConvNet is trained to verify whether an input frame sequence is in the correct order [40, 90] or to recognize the order of a frame sequence [39].
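The temporal-order supervision signal above is cheap to generate. The following stdlib-only sketch, a simplified version of the sampling used by methods like [40], draws a triplet of frame indices and labels it 1 if kept in natural temporal order and 0 if shuffled; the frame count is an assumed value.

```python
import random

def order_verification_pair(num_frames, rng):
    """Generate one (frame-index tuple, pseudo-label) pair for temporal
    order verification: label 1 for natural order, 0 for a shuffled one."""
    triplet = sorted(rng.sample(range(num_frames), 3))
    if rng.random() < 0.5:
        return tuple(triplet), 1                 # correct temporal order
    shuffled = triplet[:]
    while shuffled == triplet:                   # force a wrong ordering
        rng.shuffle(shuffled)
    return tuple(shuffled), 0

rng = random.Random(0)
pairs = [order_verification_pair(30, rng) for _ in range(100)]
```

A ConvNet trained on such pairs must reason about motion continuity to tell the two classes apart, which is exactly the feature these methods aim to learn.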

Free Semantic Label-based Methods: This type of pretext tasks trains networks with automatically generated semantic labels. The labels are generated by traditional hand-designed algorithms [50, 51] or by game engines [30]. The pretext tasks include moving object segmentation [91, 81], contour detection [47, 30], relative depth prediction [92], etc.

Cross Modal-based Methods: This type of pretext tasks trains ConvNets to verify whether two different channels of input data correspond to each other. This type of methods includes visual-audio correspondence verification [93, 25], RGB-flow correspondence verification [24], and egomotion [94, 95].

4.3 Commonly Used Downstream Tasks for Evaluation

To evaluate the quality of the image or video features learned by self-supervised methods, the parameters learned through self-supervised learning are employed as pre-trained models and then fine-tuned on downstream tasks such as image classification, semantic segmentation, object detection, and action recognition. The performance of the transfer learning on these high-level vision tasks demonstrates the generalization ability of the learned features. If the ConvNets of self-supervised learning can learn general features, then the pre-trained models can serve as a good starting point for other vision tasks that require capturing similar features from images or videos.

Image classification, semantic segmentation, and object detection are usually used as the tasks to evaluate the generalization ability of image features learned by self-supervised learning methods, while human action recognition in videos is used to evaluate the quality of video features obtained from self-supervised learning methods. Below are brief introductions of the commonly used high-level tasks for visual feature evaluation.

4.3.1 Semantic Segmentation

Semantic segmentation, the task of assigning semantic labels to each pixel in images, is of great importance in many applications such as autonomous driving, human-machine interaction, and robotics. The community has recently made promising progress and various networks have been proposed such as Fully Convolutional Network (FCN) [4], DeepLab [5], PSPNet [6] and datasets such as PASCAL VOC [96], CityScape [97], ADE20K [98].

Fig. 11: The framework of the Fully Convolutional Neural Network proposed for semantic segmentation [4]. Figure is reproduced based on [4].

Among all these methods, FCN [4] is a milestone work for semantic segmentation, since it started the era of applying fully convolutional networks (FCNs) to this task. The architecture of FCN is shown in Fig. 11. A 2DConvNet such as AlexNet, VGG, or ResNet is used as the base network for feature extraction, while the fully connected layers are replaced by transposed convolution layers to obtain the dense prediction. The network is trained end-to-end with pixel-wise annotations.

When using semantic segmentation as a downstream task to evaluate the quality of image features learned by self-supervised learning methods, the FCN is initialized with the parameters trained on the pretext task and fine-tuned on the semantic segmentation dataset. The performance on the semantic segmentation task is then evaluated and compared with that of other self-supervised methods.

4.3.2 Object Detection

Object detection, the task of localizing the positions of objects in images and recognizing their categories, is also very important for many computer vision applications such as autonomous driving, robotics, and scene text detection. Recently, many datasets such as MSCOCO [99] and OpenImage [14] have been proposed for object detection, and many ConvNet-based models [1], [2], [3], [100], [101], [102], [103], [104] have achieved great performance.

Fast-RCNN [2] is a two-stage network for object detection. The framework of Fast-RCNN is shown in Fig. 12. Object proposals are projected onto the feature maps produced by a convolutional neural network; the pooled features of each proposal are then fed to several fully connected layers to generate the bounding boxes and categories of the objects.
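The step that turns variable-sized proposals into fixed-size inputs for the fully connected layers is RoI max-pooling. A minimal sketch for a single-channel feature map follows (illustrative, not the actual Fast-RCNN implementation; the box coordinates are hypothetical and given in feature-map units):

```python
import numpy as np

def roi_pool(feat, box, out=2):
    """Crop box = (x1, y1, x2, y2) from feat and max-pool it to out x out,
    so every proposal yields a fixed-size feature regardless of its shape."""
    x1, y1, x2, y2 = box
    region = feat[y1:y2, x1:x2]
    ys = np.array_split(np.arange(region.shape[0]), out)  # row bins
    xs = np.array_split(np.arange(region.shape[1]), out)  # column bins
    return np.array([[region[np.ix_(r, c)].max() for c in xs] for r in ys])

feat = np.arange(64, dtype=float).reshape(8, 8)  # toy feature map
pooled = roi_pool(feat, (1, 1, 7, 5), out=2)     # hypothetical proposal box
print(pooled.shape)  # (2, 2)
```

Because every proposal is reduced to the same shape, the same fully connected head can score all proposals, whatever their original size.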

Fig. 12: The pipeline of the Fast-RCNN for object detection. Figure is reproduced based on [3].

When using object detection as the downstream task to evaluate the quality of self-supervised image features, a network trained with the pretext task on large unlabeled data serves as the pre-trained model for Fast-RCNN [2] and is fine-tuned on object detection datasets; the performance on the object detection task is then evaluated to demonstrate the generalization ability of the self-supervised learned features.

4.3.3 Image Classification

Image classification is the task of recognizing the category of objects in each image. Many networks have been designed for this task such as AlexNet [8], VGG [9], ResNet [11], GoogLeNet [10], and DenseNet [12]. Usually, only one class label is available for each image although the image may contain objects of different classes.

When choosing image classification as a downstream task to evaluate the quality of image features learned by self-supervised methods, the self-supervised learned model is applied to each image to extract features, which are then used to train a classifier such as a Support Vector Machine (SVM) [105]. The classification performance on the testing data is compared with that of other self-supervised models to evaluate the quality of the learned features.
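A common variant of this protocol trains a linear classifier (a "linear probe") on frozen features. The sketch below uses synthetic features in place of a real self-supervised encoder and plain logistic regression in place of an SVM; all data and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "frozen features" from a hypothetical self-supervised encoder:
# two classes drawn from two well-separated Gaussians in 16-D.
feats = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(2, 1, (50, 16))])
labels = np.array([0] * 50 + [1] * 50)

# Linear probe: logistic regression on top of the frozen features
# (an SVM as in [105] would play the same role).
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    g = p - labels                               # log-loss gradient
    w -= 0.1 * feats.T @ g / len(labels)
    b -= 0.1 * g.mean()

acc = ((feats @ w + b > 0) == labels).mean()
print(acc > 0.9)  # good features make the classes linearly separable
```

The point of the protocol is that only the linear head is trained: high accuracy then indicates that the frozen features themselves are linearly discriminative.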

4.3.4 Human Action Recognition

Human action recognition is the task of identifying what people are doing in videos, given a list of pre-defined action classes. Generally, videos in human action recognition datasets contain only one action each [63, 106, 17]. Both spatial and temporal features are needed to accomplish this task.

The action recognition task is often used to evaluate the quality of video features learned by self-supervised methods. The network is first trained on unlabeled video data with pretext tasks, then fine-tuned on action recognition datasets with human annotations to recognize the actions. The testing performance on the action recognition task is compared with that of other self-supervised learning methods to evaluate the quality of the learned features.

4.3.5 Qualitative Evaluation

In addition to these quantitative evaluations of the learned features, some qualitative visualization methods are also used to evaluate the quality of self-supervised features. Three methods are often used for this purpose: kernel visualization, feature map visualization, and image retrieval visualization [44, 36, 28, 41].

Kernel Visualization: The kernels of the first convolution layer learned with pretext tasks are qualitatively visualized and compared with the kernels of supervised models. The similarity between the kernels learned by supervised and self-supervised models indicates the effectiveness of the self-supervised methods [44, 28].

Feature Map Visualization: Feature maps are visualized to show the attention of networks. A larger activation indicates that the neural network pays more attention to the corresponding region in the image. Feature maps are usually qualitatively visualized and compared with those of supervised models [36, 28].

Nearest Neighbor Retrieval: In general, images with similar appearance are close in feature space. The nearest neighbor method is used to find the top nearest neighbors of a query in the feature space learned by the self-supervised model [40, 41, 43].
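A minimal sketch of such retrieval with cosine similarity, using random stand-in features in place of features produced by a real self-supervised model:

```python
import numpy as np

def top_k_neighbors(query, gallery, k=3):
    """Rank gallery features by cosine similarity to a query feature."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))[:k]  # indices of the k most similar items

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 32))             # stand-in learned features
query = gallery[7] + 0.01 * rng.normal(size=32)  # near-duplicate of item 7
print(top_k_neighbors(query, gallery)[0])  # 7: its near-duplicate ranks first
```

In the qualitative evaluations cited above, the retrieved images themselves are inspected: semantically similar neighbors suggest the learned feature space captures semantics rather than low-level statistics.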

| Dataset | Data Type | Size | Synthetic | Label |
|---|---|---|---|---|
| ImageNet [13] | Image | Over a million images | No | Object category label |
| Places [107] | Image | Millions of images | No | Scene category label |
| Places365 [108] | Image | Millions of images | No | Scene category label |
| SUNCG [109] | Image | Images | Yes | Depth, volumetric data |
| MNIST [110] | Image | 70,000 images | No | Digit class label |
| SVHN [111] | Image | Over 600,000 images | No | Digit class label |
| CIFAR10 [112] | Image | 60,000 images | No | Object category label |
| STL-10 [113] | Image | 113,000 images | No | Object category label |
| PASCAL VOC [96] | Image | Images | No | Category label, bounding box, segmentation mask |
| YFCC100M [114] | Image/Video | 100 million media objects | No | Hashtags |
| SceneNet RGB-D [115] | Video | 5 million images | Yes | Depth, instance segmentation, optical flow |
| Moment-in-Time [116] | Video | One million 3-second videos | No | Video category class |
| Kinetics [17] | Video | 10-second videos | No | Human action class |
| AudioSet [117] | Video | 10-second videos | No | Audio event class |
| KITTI [118] | Video | Videos | No | Data captured by various sensors |
| UCF101 [63] | Video | 13,320 videos | No | Human action class |
| HMDB51 [106] | Video | Around 7,000 videos | No | Human action class |

TABLE I: Summary of commonly used image and video datasets. Note that image datasets can be used to learn image features, while video datasets can be used to learn both image and video features.

5 Datasets

This section summarizes the commonly used image and video datasets for training and evaluating self-supervised visual feature learning methods. Self-supervised learning methods can be trained on images or videos by discarding human-annotated labels; therefore, any dataset collected for supervised learning can be used for self-supervised visual feature learning without its human-annotated labels. The quality of the learned features is normally evaluated by fine-tuning on high-level vision tasks with relatively small, accurately labeled datasets, such as video action recognition, object detection, and semantic segmentation. It is worth noting that networks trained on synthetic datasets for visual feature learning are considered self-supervised in this paper, since the labels of synthetic datasets are automatically generated by game engines and no human annotations are involved. Table I summarizes the commonly used image and video datasets.

5.1 Image Datasets

  • ImageNet: The ImageNet dataset [13] contains more than one million images uniformly distributed into 1,000 classes and is organized according to the WordNet hierarchy. Each image is assigned only one class label. ImageNet is the most widely used dataset for self-supervised image feature learning.

  • Places: The Places dataset [107] is proposed for scene recognition and contains millions of images covering hundreds of scene categories, with a large number of images per category.

  • Places365: Places365 is the second generation of the Places database, built for high-level visual understanding tasks such as scene context, object recognition, action and event prediction, and theory-of-mind inference [108]. It contains millions of images covering 365 scene classes, with thousands of training images per class.

  • SUNCG: The SUNCG dataset is a large synthetic 3D scene repository of indoor scenes, consisting of a large number of different scenes with manually created, realistic room and furniture layouts [109]. Synthetic depth, object-level semantic labels, and volumetric ground truth are available.

  • MNIST: MNIST is a dataset of handwritten digits consisting of 70,000 images, of which 60,000 belong to the training set and the remaining 10,000 are for testing [110]. All digits have been size-normalized and centered in fixed-size 28 × 28 images.

  • SVHN: SVHN is a dataset for recognizing digits and numbers in natural scene images, obtained from house numbers in Google Street View images [111]. The dataset consists of over 600,000 images, and all digits have been resized to a fixed resolution of 32 × 32 pixels.

  • CIFAR10: The CIFAR10 dataset is a collection of tiny images for the image classification task [112]. It consists of 60,000 images of size 32 × 32 covering 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The dataset is balanced, with 6,000 images per class.

  • STL-10: The STL-10 dataset is specifically designed for developing unsupervised feature learning methods [113]. It consists of 5,000 labeled training images, 8,000 testing images, and 100,000 unlabeled images covering 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, and truck.

  • PASCAL Visual Object Classes (VOC): The VOC 2012 dataset [96] contains 20 object categories spanning vehicles, household objects, animals, and others: aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, TV/monitor, bird, cat, cow, dog, horse, sheep, and person. Each image in this dataset has pixel-level segmentation annotations, bounding box annotations, and object class annotations. The dataset has been widely used as a benchmark for object detection, semantic segmentation, and classification tasks, and is split into three subsets for training, validation, and private testing [96]. All the self-supervised image representation learning methods are evaluated on this dataset with the three tasks.

5.2 Video Datasets

  • YFCC100M: The Yahoo Flickr Creative Commons 100 Million dataset (YFCC100M) is a large public multimedia collection from Flickr, consisting of 100 million media objects, of which around 99.2 million are images and around 0.8 million are videos [114]. Statistics on the hashtags used in the YFCC100M dataset show that the data distribution is severely unbalanced [119].

  • SceneNet RGB-D: The SceneNet RGB-D dataset is a large indoor synthetic video dataset which consists of 5 million rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses [115]. It provides pixel-level annotations for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction [115].

  • Moment in Time: The Moment-in-Time dataset is a large, balanced, and diverse dataset for video understanding [116]. The dataset consists of one million video clips, each lasting around 3 seconds, covering a broad set of action classes. The videos capture visual and/or audible actions produced by humans, animals, objects, or nature [116].

  • Kinetics: The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos [17]. It consists of hundreds of thousands of video clips covering 400 human action classes, with at least 400 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class.

  • AudioSet: AudioSet consists of human-labeled 10-second sound clips drawn from YouTube videos, covering an ontology of hundreds of audio event classes [117]. The event classes cover a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds. This dataset is mainly used for self-supervised learning from video and audio correspondence [26].

  • KITTI: The KITTI dataset is collected by driving a car around a city; the car is equipped with various sensors including a high-resolution RGB camera, a gray-scale stereo camera, a 3D laser scanner, and a combined GPS/IMU system providing high-precision GPS measurements and IMU accelerations [118]. Videos of the various modalities captured by these sensors are available in this dataset.

  • UCF101: UCF101 is a widely used video dataset for human action recognition [63]. It consists of 13,320 video clips, totaling more than 27 hours, belonging to 101 action categories. The videos have a spatial resolution of 320 × 240 pixels and a 25 FPS frame rate. The dataset has been widely used for evaluating human action recognition performance. In the self-supervised scenario, self-supervised models are fine-tuned on this dataset, and the action recognition accuracy is reported to evaluate the quality of the learned features.

  • HMDB51: Compared to other datasets, HMDB51 is a smaller video dataset for human action recognition. It contains around 7,000 video clips belonging to 51 human action categories [106]. The videos have a height of 240 pixels and a 30 FPS frame rate. In the self-supervised scenario, self-supervised models are fine-tuned on this dataset to evaluate the quality of the learned video features.

6 Image Feature Learning

In this section, three groups of self-supervised image feature learning methods are reviewed: generation-based methods, context-based methods, and free semantic label-based methods. A list of self-supervised image feature learning methods can be found in Table II. Since cross modal-based methods mainly learn features from videos, and most methods of this type can be used for both image and video feature learning, they are reviewed in the video feature learning section.

| Method | Category | Contribution |
|---|---|---|
| GAN [83] | Generation | Forerunner of GAN |
| DCGAN [120] | Generation | Deep convolutional GAN for image generation |
| WGAN [121] | Generation | Proposed WGAN, which makes the training of GANs more stable |
| BiGAN [122] | Generation | Bidirectional GAN to project data into the latent space |
| SelfGAN [123] | Multiple | Uses rotation recognition and GAN for self-supervised learning |
| ColorfulColorization [18] | Generation | Poses image colorization as a classification task |
| Colorization [82] | Generation | Uses image colorization as the pretext task |
| AutoColor [124] | Generation | Trains a ConvNet to predict per-pixel color histograms |
| Split-Brain [42] | Generation | Uses a split-brain auto-encoder as the pretext task |
| Context Encoder [19] | Generation | Employs a ConvNet to solve image inpainting |
| CompletNet [125] | Generation | Employs two discriminators to guarantee local and global consistency |
| SRGAN [15] | Generation | Employs GAN for single image super-resolution |
| SpotArtifacts [126] | Generation | Learns by recognizing synthetic artifacts in images |
| ImproveContext [33] | Context | Techniques to improve context-based self-supervised learning methods |
| Context Prediction [41] | Context | Learns by predicting the relative position of two patches from an image |
| Jigsaw [20] | Context | Image patch Jigsaw puzzle as the pretext task |
| Damaged Jigsaw [89] | Multiple | Learns by solving jigsaw puzzles, inpainting, and colorization together |
| Arbitrary Jigsaw [88] | Context | Learns with jigsaw puzzles of arbitrary grid size and dimension |
| DeepPermNet [127] | Context | A new method to solve image patch jigsaw puzzles |
| RotNet [36] | Context | Learns by recognizing rotations of images |
| Boosting [34] | Multiple | Uses clustering to boost self-supervised learning methods |
| JointCluster [128] | Context | Joint learning of deep representations and image clusters |
| DeepCluster [44] | Context | Uses clustering as the pretext task |
| ClusterEmbedding [129] | Context | Deep embedded clustering for self-supervised learning |
| GraphConstraint [43] | Context | Learns with image pairs mined with Fisher Vector |
| Ranking [38] | Context | Learns by ranking video frames with a triplet loss |
| PredictNoise [46] | Context | Learns by mapping images to a uniform distribution over a manifold |
| MultiTask [32] | Multiple | Uses multiple pretext tasks for self-supervised feature learning |
| Learning2Count [130] | Context | Learns by counting visual primitives |
| Watching Move [81] | Free Semantic Label | Learns by grouping pixels of moving objects in videos |
| Edge Detection [81] | Free Semantic Label | Learns by detecting edges |
| Cross Domain [81] | Free Semantic Label | Utilizes synthetic data and its labels rendered by game engines |

TABLE II: Summary of self-supervised image feature learning methods based on the category of pretext tasks. Multiple means the method explicitly or implicitly uses multiple pretext tasks for image feature learning.

6.1 Generation-based Image Feature Learning

Generation-based self-supervised methods learn image features through the process of generating images, including image generation with GANs (to generate fake images), super-resolution (to generate high-resolution images), image inpainting (to predict missing image regions), and image colorization (to colorize gray-scale images). For these tasks, the pseudo training labels are usually the images themselves, and no human-annotated labels are needed during training; therefore, these methods belong to the self-supervised learning family.

The pioneering work on image generation-based methods is the autoencoder [131], which learns to compress an image into a low-dimensional vector and then, through a series of layers, uncompress it into an image close to the original. With an autoencoder, a network can reduce an image to a lower-dimensional vector that contains the main information of the original image. Current image generation-based methods follow a similar idea but with different pipelines to learn visual features through the process of image generation.
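For the linear case, the optimal autoencoder has a closed-form solution given by PCA. The sketch below (synthetic data; real autoencoders are nonlinear and trained by gradient descent) compresses 16-dimensional data that actually lies on an 8-dimensional subspace and reconstructs it almost perfectly, illustrating how a low-dimensional code can retain the main information:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 samples in 16-D that live on an 8-D subspace.
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 16))

# Optimal *linear* autoencoder in closed form: encode with the top
# principal directions of the centered data.
U, S, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
W = Vt[:8].T                          # encoder: 16 -> 8 (decoder is W.T)
code = (X - X.mean(0)) @ W            # compressed representation
recon = code @ W.T + X.mean(0)        # reconstruction from the code
err = np.abs(recon - X).max()
print(err < 1e-8)  # True: the 8-D code retains all the information
```

Deep autoencoders replace the single linear map with stacked nonlinear layers, which is what allows them to capture more than a linear subspace.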

6.1.1 Image Generation with GAN

The Generative Adversarial Network (GAN) is a type of deep generative model proposed by Goodfellow et al. [83]. A GAN model generally consists of two networks: a generator, which generates images from latent vectors, and a discriminator, which distinguishes whether an input image was generated by the generator. By playing this two-player game, the discriminator forces the generator to generate realistic images, while the generator forces the discriminator to improve its differentiation ability. During training, the two networks compete against each other and make each other stronger.

The common architecture for the image generation from latent variables task is shown in Fig. 13. The generator is trained to map any latent vector sampled from the latent space to an image, while the discriminator is forced to distinguish whether an image comes from the real data distribution or the generated data distribution. The discriminator is therefore required to capture semantic features from images to accomplish the task, and its parameters can serve as a pre-trained model for other computer vision tasks.

Fig. 13: The pipeline of Generative Adversarial Networks [83]. By playing the two-player game, the discriminator forces the generator to generate realistic images, while the generator forces the discriminator to improve its differentiation ability.

Mathematically, the generator G is trained to learn the distribution p_data of real image data so as to generate realistic data that is indistinguishable from the real data, while the discriminator D is trained to distinguish samples from the real data distribution from samples drawn from the generated distribution p_g. The min-max game between the generator and the discriminator is formulated as:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],   (5)

where x is a sample from the real data distribution p_data(x) and G(z) is data generated from a latent vector z sampled from p_z(z).

The discriminator D is trained to maximize the probability assigned to the real data (that is, to maximize D(x)) and to minimize the probability assigned to the generated data (that is, to minimize D(G(z))). The generator G is trained to generate data close to the real data, that is, to maximize the discriminator's output D(G(z)) on generated samples.
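As a concrete, deliberately simplified sketch: assuming the discriminator's sigmoid outputs on a batch of real and generated samples are already available, the two loss terms implied by Eq. (5) can be estimated as follows. The non-saturating generator loss (maximizing log D(G(z)) rather than minimizing log(1 − D(G(z)))) is the variant commonly used in practice:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Monte-Carlo estimates of the two sides of the min-max objective.
    d_real: discriminator outputs D(x) on real samples, in (0, 1).
    d_fake: discriminator outputs D(G(z)) on generated samples."""
    d_loss = -(np.log(d_real + eps).mean() + np.log(1 - d_fake + eps).mean())
    g_loss = -np.log(d_fake + eps).mean()  # non-saturating generator loss
    return d_loss, g_loss

# A confident discriminator (high D(x), low D(G(z))) has a low loss,
# while the generator's loss stays high until it fools the discriminator.
print(gan_losses(np.array([0.9, 0.8]), np.array([0.1, 0.2])))
```

In a full training loop, each loss would be backpropagated into its own network while the other network's parameters are held fixed.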

Most methods for image generation from random variables do not need any human-annotated labels. However, the main purpose of this type of task is to generate realistic images rather than to obtain better performance on downstream applications. Generally, Inception Scores are used to evaluate the quality of the generated images [132, 133], and only a few methods evaluated the quality of the features learned by the discriminator on high-level tasks and compared them with other methods [123, 122, 120].

Adversarial training can help a network capture the distribution of the real data and generate realistic data, and it has been widely used in computer vision tasks such as image generation [134, 135], video generation [85], [86], super-resolution [15], image translation [136], and image inpainting [125, 19]. When no human-annotated labels are involved, the method falls into the self-supervised learning category.

6.1.2 Image Generation with Inpainting

Fig. 14: Qualitative illustration of image inpainting task. Given an image with a missing region (a), a human artist has no trouble inpainting it (b). Automatic inpainting using context encoder proposed in [19] trained with L2 reconstruction loss and adversarial loss is shown in (c). Figure is reproduced based on [19].

Image inpainting is the task of predicting arbitrary missing regions of an image based on the rest of it. A qualitative illustration of the image inpainting task is shown in Fig. 14. Fig. 14(a) is an image with a missing region, while Fig. 14(c) is the prediction of the network. To correctly predict missing regions, networks are required to learn common knowledge, including the colors and structures of common objects. Only with such knowledge can networks infer missing regions from the rest of an image.

By analogy with auto-encoders, Pathak et al. made the first step of training a ConvNet to generate the contents of an arbitrary image region based on the rest of the image [19]. Their contributions are twofold: using a ConvNet to tackle the image inpainting problem, and using an adversarial loss to help the network generate a realistic hypothesis. Most recent methods follow a similar pipeline [125]. Usually, there are two networks: a generator network that generates the missing region with a pixel-wise reconstruction loss, and a discriminator network that distinguishes whether the input image is real with an adversarial loss. With the adversarial loss, the network is able to generate a sharper and more realistic hypothesis for the missing image region. Both networks are able to learn semantic features from images that can be transferred to other computer vision tasks. However, only Pathak et al. [19] studied the transfer learning performance of the generator parameters learned from the image inpainting task.

The generator network, which is a fully convolutional network, has two parts: an encoder and a decoder. The input of the encoder is the image to be inpainted, and the context encoder learns semantic features of the image; the context decoder predicts the missing region based on these features. The generator is required to understand the content of the image in order to generate a plausible hypothesis, while the discriminator is trained to distinguish whether the input image is an output of the generator. To accomplish the image inpainting task, both networks are required to learn semantic features of images.
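The pixel-wise reconstruction term can be sketched as an L2 loss restricted to the missing region (illustrative only; [19] additionally weights the loss and combines it with the adversarial term, and the "mean fill" prediction below is just a toy baseline):

```python
import numpy as np

def masked_l2_loss(pred, target, mask):
    """Pixel-wise L2 reconstruction loss averaged over the missing region
    (mask == 1) only, in the spirit of context-encoder inpainting."""
    return (((pred - target) ** 2) * mask).sum() / mask.sum()

rng = np.random.default_rng(0)
img = rng.random((32, 32))                 # toy ground-truth image
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1                       # central missing region
corrupted = img * (1 - mask)               # what the encoder would see
pred = corrupted + mask * img.mean()       # trivial "fill with the mean" baseline
print(masked_l2_loss(pred, img, mask))
```

Because the loss is computed only inside the mask, the generator is rewarded purely for how well it hallucinates the missing content, not for copying the visible pixels.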

6.1.3 Image Generation with Super Resolution

Image super-resolution (SR) is the task of enhancing the resolution of images. With the help of fully convolutional networks, finer and more realistic high-resolution images can be generated from low-resolution images. SRGAN is a generative adversarial network for single image super-resolution proposed by Ledig et al. [15]. The insight of this approach is to take advantage of a perceptual loss which consists of an adversarial loss and a content loss. With the perceptual loss, SRGAN is able to recover photo-realistic textures from heavily downsampled images and shows significant gains in perceptual quality.

There are two networks: a generator, which enhances the resolution of the input low-resolution image, and a discriminator, which distinguishes whether an input image is an output of the generator. The loss function for the generator is the pixel-wise L2 loss plus the content loss, defined as the similarity between the features of the predicted high-resolution image and those of the original high-resolution image, while the loss for the discriminator is a binary classification loss. Compared to networks that only minimize the Mean Squared Error (MSE), which generally leads to high peak signal-to-noise ratios but a lack of high-frequency details, SRGAN is able to recover the fine details of the high-resolution image, since the adversarial loss pushes the output toward the natural image manifold through the discriminator network.
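The generator objective can be sketched as a content loss computed in the feature space of some feature extractor φ plus a weighted adversarial term. Here φ is a toy stand-in for the VGG feature extractor used in [15], and the data and weight are illustrative:

```python
import numpy as np

def generator_loss(phi, sr, hr, d_sr, w_adv=1e-3):
    """SRGAN-style generator objective: content loss in the feature space of
    phi plus a weighted adversarial term. phi is a toy stand-in for a VGG
    feature extractor; the weight w_adv is illustrative."""
    content = ((phi(sr) - phi(hr)) ** 2).mean()
    adversarial = -np.log(d_sr + 1e-8).mean()  # -log D(G(LR))
    return content + w_adv * adversarial

phi = lambda x: x.reshape(4, -1).mean(axis=1)  # toy "feature extractor"
sr = np.full(16, 0.9)                          # super-resolved output
hr = np.ones(16)                               # ground-truth high-res image
loss = generator_loss(phi, sr, hr, np.array([0.6]))
print(loss)
```

Measuring the content loss in feature space rather than pixel space is what lets the generator favor perceptually convincing textures over blurry pixel-wise averages.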

The networks trained for the image super-resolution task are able to learn semantic features of images. As with other GANs, the parameters of the discriminator network can be transferred to other downstream tasks; however, this transfer learning performance has not yet been tested. Mainly the quality of the enhanced images is compared to evaluate the performance of these networks.

6.1.4 Image Generation with Colorization

Fig. 15: The architecture of image colorization proposed in [18]. The figure is from [18] with author’s permission.

Image colorization is the task of predicting a plausible color version of a photograph given its gray-scale version as input. A qualitative illustration of the image colorization task is shown in Fig. 15. To correctly colorize each pixel, networks need to recognize objects and group pixels of the same part together. Therefore, visual features are learned in the process of accomplishing this task.

Many deep learning-based colorization methods have been proposed in recent years [18, 137, 138]. A straightforward approach is to employ a fully convolutional neural network, consisting of an encoder for feature extraction and a decoder for color hallucination, and to optimize it with an L2 loss between the predicted colors and the original colors. Zhang et al. proposed to handle the underlying uncertainty by posing the task as a classification problem and used class rebalancing to increase the diversity of predicted colors [18]. The framework for image colorization proposed by Zhang et al. is shown in Fig. 15. Trained on large-scale image collections, the method shows great results and fools humans on a substantial fraction of the trials in a colorization test.
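Posing colorization as classification requires quantizing the continuous color space into discrete bins that serve as per-pixel class labels. The sketch below uses a simple uniform grid over normalized chroma values (illustrative only; [18] quantizes the in-gamut bins of the Lab ab space):

```python
import numpy as np

def quantize_colors(ab, grid=10):
    """Map continuous chroma values in [-1, 1] to discrete bin ids so that
    colorization becomes per-pixel classification over grid*grid classes."""
    idx = np.clip(((ab + 1) / 2 * grid).astype(int), 0, grid - 1)
    return idx[..., 0] * grid + idx[..., 1]  # one class id per pixel

rng = np.random.default_rng(0)
ab = rng.uniform(-1, 1, size=(8, 8, 2))  # chroma channels of a toy image
labels = quantize_colors(ab)             # per-pixel classification targets
print(labels.shape)  # (8, 8)
```

The network then predicts a distribution over these bins at each pixel and is trained with cross-entropy, which handles the inherent ambiguity of colorization better than an L2 regression to a single color.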

Some works specifically employ the image colorization task as the pretext task for self-supervised image representation learning [82, 18, 124, 42]. After the colorization training is finished, the features learned through the colorization process are evaluated on other downstream high-level tasks with transfer learning.

6.2 Context-Based Image Feature Learning

Context-based pretext tasks mainly employ the context features of images, including context similarity, spatial structure, and temporal structure, as supervision signals. ConvNets learn features through the process of solving pretext tasks designed based on these context attributes of images.

6.2.1 Learning with Context Similarity

Fig. 16: The architecture of DeepClustering [44]. The features of images are iteratively clustered and the cluster assignments are used as pseudo-labels to learn the parameters of the ConvNet. The figure is from [44] with author’s permission.

Clustering is a method of grouping similar data into the same clusters. Due to its powerful ability to group data by their attributes, it is widely used in many fields such as machine learning, image processing, and computer graphics, and many classical clustering algorithms have been proposed for various applications [139].

In the self-supervised scenario, clustering methods are mainly employed as a tool to cluster image data. A naive method is to cluster the images based on hand-designed features such as HOG [140], SIFT [141], or Fisher Vector [49]. After clustering, images within one cluster have a small distance in feature space, while images from different clusters have a large distance; the smaller the distance in feature space, the more similar the images appear in RGB space. A ConvNet can then be trained to classify the data by using the cluster assignments as pseudo class labels. To accomplish this task, the ConvNet needs to learn the invariance within each class and the variance across different classes; in this way, it learns the semantic meaning of images.

The existing methods that use clustering variants as the pretext task follow these principles [44, 43, 34, 128, 129]. First, the images are clustered such that images from the same cluster have a smaller distance and images from different clusters have a larger distance. Then a ConvNet is trained to recognize the cluster assignment of each image [44, 34] or to recognize whether two images are from the same cluster [43]. The pipeline of DeepCluster, a clustering-based method, is shown in Fig. 16. DeepCluster iteratively clusters images with K-means and uses the resulting assignments as supervision to update the weights of the network; it is the current state-of-the-art method for self-supervised image representation learning.
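One iteration of the clustering step can be sketched with plain k-means, whose assignments would then serve as pseudo-labels for a classification loss (illustrative only; DeepCluster clusters ConvNet features, not raw data, and re-clusters every epoch; the farthest-point initialization here is a simplification):

```python
import numpy as np

def cluster_pseudo_labels(feats, k=3, iters=10):
    """Plain k-means with farthest-point initialization; the resulting
    cluster ids act as pseudo-labels for training a classifier."""
    centers = [feats[0]]
    for _ in range(k - 1):                       # spread out initial centers
        d = np.min([((feats - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(feats[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                       # standard k-means updates
        assign = ((feats[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([feats[assign == c].mean(0) for c in range(k)])
    return assign

rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(m, 0.1, (20, 8)) for m in (0, 3, 6)])
pseudo_labels = cluster_pseudo_labels(feats)
print(np.unique(pseudo_labels).size)  # 3 distinct pseudo-classes
```

In the full pipeline, the network is then trained to predict these pseudo-labels, its improved features are re-clustered, and the two steps alternate.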

6.2.2 Learning with Spatial Context Structure

Images contain rich spatial context information, such as the relative positions of different patches within an image, which can be used to design pretext tasks for self-supervised learning. The pretext task can be to predict the relative positions of two patches from the same image [41], or to recognize the order of a shuffled sequence of patches from the same image [20, 89, 88]. The context of full images can also be used as a supervision signal, for example by recognizing the rotation angles of whole images [36]. To accomplish these pretext tasks, ConvNets need to learn spatial context information such as the shapes of objects and the relative positions of their parts.
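For the rotation pretext task, training pairs come for free: each image yields four rotated copies whose rotation index serves as the classification label, which a sketch makes concrete:

```python
import numpy as np

def rotation_batch(img):
    """RotNet-style pretext data: the four planar rotations of an image,
    with the rotation index (0, 90, 180, 270 degrees) as the class label."""
    return [np.rot90(img, k) for k in range(4)], [0, 1, 2, 3]

img = np.arange(12).reshape(3, 4)   # toy "image"
views, labels = rotation_batch(img)
print([v.shape for v in views])  # [(3, 4), (4, 3), (3, 4), (4, 3)]
```

The network is then trained with a standard 4-way cross-entropy loss; recognizing the rotation requires understanding the canonical orientation of objects, which is what drives feature learning.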

Fig. 17: The visualization of the Jigsaw Image Puzzle [20]. (a) is an image with sampled image patches, (b) is an example of shuffled image patches, and (c) shows the correct order of the sampled patches. Figure is reproduced based on [20].

The method proposed by Doersch et al. is one of the pioneering works using spatial context cues for self-supervised visual feature learning [41]. Random pairs of image patches are extracted from each image, and a ConvNet is trained to recognize the relative position of the two patches. To solve this puzzle, ConvNets need to recognize objects in images and learn the relationships among different parts of objects. To prevent the network from learning trivial solutions, such as simply using edges of patches to accomplish the task, heavy data augmentation is applied during the training phase.

Following this idea, more methods have been proposed to learn image features by solving more difficult spatial puzzles [20, 89, 88, 87, 27]. As illustrated in Fig. 17, one typical work by Noroozi et al. attempts to solve an image Jigsaw puzzle with a ConvNet [20]. Fig. 17(a) is an image with sampled image patches, Fig. 17(b) is an example of shuffled image patches, and Fig. 17(c) shows the correct order of the sampled patches. The shuffled image patches are fed to the network, which is trained to recognize their correct spatial locations by learning spatial context structures of images such as object color, structure, and high-level semantic information.

Given N image patches from an image, there are N! possible permutations, and a network is very unlikely to recognize all of them because of the ambiguity of the task. To limit the number of permutations, the Hamming distance is usually employed to choose only a subset of permutations with relatively large pairwise Hamming distance. Only the selected permutations are then used to train the ConvNet to recognize the permutation of shuffled image patches [88, 89, 35, 20].
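The permutation-subset selection can be sketched with a greedy strategy that repeatedly adds the permutation farthest (in minimum Hamming distance) from those already chosen; the sizes here are tiny for illustration, whereas the Jigsaw task uses 9 tiles and a much larger subset:

```python
import numpy as np
from itertools import permutations

def select_permutations(n_tiles=4, n_select=5):
    """Greedily pick a subset of permutations with large pairwise Hamming
    distance; each chosen permutation becomes one pretext class."""
    perms = np.array(list(permutations(range(n_tiles))))
    chosen = [0]                                   # start from the identity
    for _ in range(n_select - 1):
        # distance of every candidate to its nearest already-chosen permutation
        d = np.min([(perms != perms[c]).sum(1) for c in chosen], axis=0)
        chosen.append(int(d.argmax()))             # add the farthest candidate
    return perms[chosen]

subset = select_permutations()
print(subset.shape)  # (5, 4): 5 well-separated orderings of 4 tiles
```

Keeping the chosen permutations far apart ensures that no two pretext classes differ by only a subtle patch swap, which keeps the classification task well posed.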

The main principle of designing puzzle tasks is to find a task that is neither too difficult nor too easy for a network to solve. If it is too difficult, the network may not converge due to the ambiguity of the task; if it is too easy, the network may learn trivial solutions. Therefore, a reduction in the search space is usually employed to control the difficulty of the task.

6.3 Free Semantic Label-based Image Feature Learning

Free semantic labels refer to labels with semantic meanings that are obtained without involving any human annotation. Generally, free semantic labels such as segmentation masks, depth images, optical flow, and surface normal images can be rendered by game engines or generated by hard-coded programs. Since these semantic labels are automatically generated, methods that use such synthetic datasets, or use them in conjunction with large unlabeled image or video datasets, are considered self-supervised learning methods.

6.3.1 Learning with Labels Generated by Game Engines

Given models of various objects and layouts of environments, game engines are able to render realistic images and provide accurate pixel-level labels. Since game engines can generate large-scale datasets with negligible cost, various game engines such as Airsim [142] and Carla [143] have been used to generate large-scale synthetic datasets with high-level semantic labels including depth, contours, surface normal, segmentation mask, and optical flow for training deep networks. An example of an RGB image with its generated accurate labels is shown in Fig. 18.

Fig. 18: An example of an indoor scene generated by a game engine [115]. For each synthetic image, the corresponding depth, instance segmentation, and optical flow can be automatically generated by the engine.

Game engines can generate realistic images with accurate pixel-level labels at very low cost. However, due to the domain gap between synthetic and real-world images, a ConvNet trained purely on synthetic images cannot be directly applied to real-world images. To utilize synthetic datasets for self-supervised feature learning, the domain gap needs to be explicitly bridged, so that the ConvNet trained with the semantic labels of the synthetic dataset can be effectively applied to real-world images.

To overcome this problem, Ren and Lee proposed an unsupervised feature-space domain adaptation method based on adversarial learning [30]. As shown in Fig. 19, the network predicts surface normals, depth, and instance contours for the synthetic images, while a discriminator network is employed to minimize the difference between the feature-space domains of real-world and synthetic data. Aided by adversarial training and the accurate semantic labels of synthetic images, the network is able to capture visual features for real-world images.

Fig. 19: The architecture for utilizing synthetic and real-world images for self-supervised feature learning [30]. Figure is reproduced based on [30].

Compared to other pretext tasks, which implicitly force ConvNets to learn semantic features, this type of method is trained with accurate semantic labels which explicitly force ConvNets to learn features that are highly related to the objects in images.

6.3.2 Learning with Labels Generated by Hard-coded Programs

Applying hard-coded programs is another way to automatically generate semantic labels such as saliency, foreground masks, contours, and depth for images and videos. With these methods, very large-scale datasets with generated semantic labels can be used for self-supervised feature learning. This type of method generally has two steps: (1) generate labels by employing hard-coded programs on images or videos, and (2) train ConvNets with the generated labels.

Various hard-coded programs have been applied to generate labels for self-supervised learning, including foreground object segmentation [81], edge detection [47], and relative depth prediction [92]. Pathak et al. proposed to learn features by training a ConvNet to segment foreground objects in each frame of a video, where the label is the mask of moving objects [81]. Li et al. proposed to learn features by training a ConvNet for edge prediction, where the labels are motion edges obtained from flow fields of videos [47]. Jing et al. proposed to learn features by training a ConvNet to predict relative scene depth, where the labels are generated from optical flow [92].
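The label-generation step can be illustrated with a deliberately simplified, hypothetical "hard-coded program": a frame-difference detector that produces a binary motion mask as a pseudo-label. The cited works use more sophisticated detectors (motion segmentation, flow-based depth), so this is only a sketch of the idea:

```python
def motion_mask(prev_frame, next_frame, threshold=20):
    """Pseudo-label generation: a binary mask marking pixels whose intensity
    changed notably between two consecutive grayscale frames (2D lists)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_p, row_n)]
            for row_p, row_n in zip(prev_frame, next_frame)]
```

A ConvNet would then be trained to predict such masks from a single frame, which forces it to locate the (likely moving) foreground objects.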

Regardless of the kind of labels used to train the ConvNet, the general idea of this type of method is to distill knowledge from hard-coded detectors. The hard-coded detector can be an edge detector, a saliency detector, a relative depth detector, etc. As long as no human annotations are involved in the design of the detectors, the detectors can be used to generate labels for self-supervised training.

Compared to other self-supervised learning methods, the supervision signal in these pretext tasks consists of semantic labels which can directly drive the ConvNet to learn semantic features. However, one drawback is that the semantic labels generated by hard-coded detectors are usually very noisy, which needs to be specifically handled.

7 Video Feature Learning

This section reviews the self-supervised methods for learning video features. As listed in Table III, they can be categorized into four classes: generation-based methods, context-based methods, free semantic label-based methods, and cross modal-based methods.

Video features can be obtained by various kinds of networks, including 2DConvNets, 3DConvNets, and LSTMs combined with 2DConvNets or 3DConvNets. When a 2DConvNet is employed for video self-supervised feature learning, the 2DConvNet is able to extract both image and video features after the self-supervised pretext task training is finished.

Method SubCategory Code Contribution
VideoGAN [85] Generation Forerunner of video generation with GAN
MocoGAN [86] Generation Decomposing motion and content for video generation with GAN
TemporalGAN [144] Generation Decomposing temporal and image generator for video generation
Video Colorization [145] Generation Employing video colorization as the pretext task
Un-LSTM [37] Generation Forerunner of video prediction with LSTM
ConvLSTM [146] Generation Employing Convolutional LSTM for video prediction
MCNet [147] Generation Disentangling motion and content for video prediction
LSTMDynamics [148] Generation Learning by predicting long-term temporal dynamic in videos
Video Jigsaw [87] Context Learning by jointly reasoning about spatial and temporal context
Transitive [31] Context Learning inter and intra instance variations with a Triplet loss
3DRotNet [28] Context Learning by recognizing rotations of video clips
CubicPuzzles [27] Context Learning by solving video cubic puzzles
ShuffleLearn [40] Context Employing temporal order verification as the pretext task
LSTMPermute [149] Context Learning by temporal order verification with LSTM
OPN [39] Context Using frame sequence order recognition as the pretext task
O3N [29] Context Learning by identifying odd video sequences

ArrowTime [90] Context Learning by recognizing the arrow of time in videos
TemporalCoherence [150] Context Learning with the temporal coherence of features of frame sequence
FlowNet [151] Cross Modal Forerunner of optical flow estimation with ConvNet
FlowNet2 [152] Cross Modal Better architecture and better performance on optical flow estimation
UnFlow [153] Cross Modal An unsupervised loss for optical flow estimation
CrossPixel [23] Cross Modal Learning by predicting motion from a single image as the pretext task
CrossModel [24] Cross Modal Optical flow and RGB correspondence verification as pretext task
AVTS [25] Cross Modal Visual and Audio correspondence verification as pretext task
AudioVisual [26] Cross Modal Jointly modeling visual and audio as fused multisensory representation
LookListenLearn [93] Cross Modal Forerunner of Audio-Visual Correspondence for self-supervised learning
AmbientSound [154] Cross Modal Predicting a statistical summary of the sound from a video frame
EgoMotion [155] Cross Modal Learning by predicting camera motion and the scene structure from videos
LearnByMove [94] Cross Modal Learning by predicting the camera transformation from a pairs of images
TiedEgoMotion [95] Cross Modal Learning from ego-motor signals and video sequence
GoNet [156] Cross Modal Jointly learning monocular depth, optical flow and ego-motion estimation from videos
DepthFlow [157] Cross Modal Depth and optical flow learning using cross-task consistency from videos
VisualOdometry [158] Cross Modal An unsupervised paradigm for deep visual odometry learning
ActivesStereoNet [159] Cross Modal End-to-end self-supervised learning of depth from active stereo systems
TABLE III: Summary of self-supervised video feature learning methods based on the category of pretext tasks.

7.1 Generation-based Video Feature Learning

Learning from video generation refers to methods in which visual features are learned through the process of video generation without using any human-annotated labels. This type of method includes video generation with GANs [85], video colorization [145], and video prediction [37]. For these pretext tasks, the pseudo training label usually is the video itself and no human-annotated labels are needed during training; therefore, these methods belong to self-supervised learning.

7.1.1 Learning from Video Generation

Fig. 20: The architecture of the generator in VideoGAN for video generation with GAN proposed in [85]. The figure is from [85] with the authors' permission.

After GAN-based methods obtained breakthrough results in image generation, researchers employed GANs to generate videos [86, 85, 144]. One pioneering work of video generation with GANs is VideoGAN [85]; the architecture of its generator network is shown in Fig. 20. To model the motion of objects in videos, a two-stream network is proposed for video generation, in which one stream models the static regions of videos as background and the other stream models moving objects as foreground [85]. Videos are generated by combining the foreground and background streams. The underlying assumption is that each random variable in the latent space represents one video clip. This method is able to generate videos with dynamic content. However, Tulyakov et al. argued that this assumption increases the difficulty of generation and instead proposed MoCoGAN, which uses the combination of two subspaces to represent a video by disentangling the content and motion in videos [86]. One subspace is a content space, in which each variable represents one identity; the other is a motion space, in which a trajectory represents the motion of that identity. With the two subspaces, the network is able to generate videos with a higher inception score.

The generator learns to map latent vectors from the latent space into videos, while the discriminator learns to distinguish real-world videos from generated videos. Therefore, the discriminator needs to capture semantic features from videos to accomplish this task. Since no human-annotated labels are used in these frameworks, they belong to self-supervised learning methods. After the video generation training on a large-scale unlabeled dataset is finished, the parameters of the discriminator can be transferred to other downstream tasks [85].

7.1.2 Learning from Video Colorization

Temporal coherence in videos refers to the fact that consecutive frames within a short time span have a similar, coherent appearance. The coherence of color can be used to design pretext tasks for self-supervised learning. One way to utilize color coherence is to use video colorization as a pretext task for self-supervised video feature learning.

Video colorization is the task of colorizing gray-scale frames into colorful frames. Vondrick et al. proposed to constrain colorization models to solve video colorization by learning to copy colors from a reference frame [145]. Given a reference RGB frame and a gray-scale image, the network needs to learn the internal connection between them to colorize the gray-scale image.
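The color-copying idea can be sketched as soft attention over reference-frame features: for each target pixel, the predicted color is a similarity-weighted mixture of reference colors. The single-channel sketch below is a minimal illustration of this mechanism, not the exact model of [145]:

```python
import math

def copy_colors(ref_feats, ref_colors, tgt_feats, temperature=1.0):
    """For each target pixel feature, copy a soft mixture of reference
    colors, weighted by softmax feature similarity (pointer/attention)."""
    out = []
    for f_t in tgt_feats:
        # dot-product similarity between target and each reference feature
        sims = [sum(a * b for a, b in zip(f_t, f_r)) / temperature
                for f_r in ref_feats]
        m = max(sims)                              # for numerical stability
        weights = [math.exp(s - m) for s in sims]
        z = sum(weights)
        weights = [w / z for w in weights]
        out.append(sum(w * c for w, c in zip(weights, ref_colors)))
    return out
```

Training the feature extractor so that copied colors match the ground-truth frame forces features of the same object to stay similar across time.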

Another perspective is to tackle video colorization with a fully convolutional neural network. Tran et al. proposed a U-shaped convolutional neural network for video colorization [160]. The network is an encoder-decoder based 3DConvNet: the input is a grayscale video clip, while the output is a colorful video clip. The encoder is a stack of 3D convolution layers that extract features, while the decoder is a stack of 3D deconvolution layers that generate colorful video clips from the extracted features.

The color coherence in videos is a strong supervision signal. However, only a few works have studied employing it for self-supervised video feature learning [145]. More work can be done on using color coherence as a supervision signal for self-supervised video feature learning.

7.1.3 Learning from Video Prediction

Fig. 21: The architecture for video prediction task proposed by [147]. Figure is reproduced based on [147].

Video prediction is the task of predicting future frame sequences based on a limited number of frames of a video. To predict future frames, the network must learn the changes in appearance within the given frame sequence. The pioneering work applying deep learning to video prediction is Un-LSTM [37]. Due to the powerful ability of LSTMs to model long-term dynamics in videos, an LSTM is used in both the encoder and the decoder [37].

Many methods have been proposed for video prediction [37, 147, 161, 162, 163, 164, 165]. Due to its superior ability to model temporal dynamics, most of them use an LSTM or an LSTM variant to encode temporal dynamics in videos or to infer future frames [37, 147, 146, 164, 165]. These methods can be employed for self-supervised feature learning without using human annotations.

Most of the frameworks follow the encoder-decoder pipeline, in which the encoder models spatial and temporal features from the given video clips and the decoder generates future frames based on the features extracted by the encoder. Fig. 21 shows the pipeline of MCNet proposed by Villegas et al. [147]. MCNet is built on an encoder-decoder convolutional neural network and a convolutional LSTM for video prediction. It has two encoders: a content encoder that captures the spatial layout of an image, and a motion encoder that models the temporal dynamics within video clips. The spatial and temporal features are concatenated and fed to the decoder to generate the next frame. By separately modeling temporal and spatial features, this model can effectively generate future frames recursively.

Video prediction is a self-supervised learning task and the learned features can be transferred to other tasks. However, no work has been done to study the generalization ability of features learned by video prediction. Generally, the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) are employed to evaluate the difference between the generated frame sequence and the ground-truth frame sequence.
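For reference, PSNR follows directly from the mean squared error between a predicted and a ground-truth frame, PSNR = 10 log10(MAX^2 / MSE); a minimal sketch over flattened pixel lists:

```python
import math

def psnr(pred, target, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a predicted frame and the
    ground-truth frame, both given as flat lists of pixel intensities."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the generated frames are closer to the ground truth; SSIM complements it by comparing local structure rather than raw pixel error.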

7.2 Temporal Context-based Learning

Fig. 22: The pipeline of Shuffle and Learn [40]. The network is trained to verify whether the input frames are in correct temporal order. Figure is reproduced based on [40].

Videos consist of frame sequences of various lengths that carry rich spatial and temporal information. The inherent temporal information within videos can be used as a supervision signal for self-supervised feature learning. Various pretext tasks have been proposed that utilize temporal context relations, including temporal order verification [40, 29, 90] and temporal order recognition [39, 27]. Temporal order verification is to verify whether a sequence of input frames is in the correct temporal order, while temporal order recognition is to recognize the order of a sequence of input frames.

As shown in Fig. 22, Misra et al. proposed to use temporal order verification as the pretext task to learn image features from videos with a 2DConvNet [40]. The approach has two main steps: (1) frames with significant motion are sampled from videos according to the magnitude of optical flow; (2) the sampled frames are shuffled and fed to the network, which is trained to verify whether the input data is in the correct order. To successfully verify the order of the input frames, the network is required to capture subtle differences between the frames, such as the movement of a person. Therefore, semantic features can be learned through the process of accomplishing this task. The temporal order recognition tasks use networks of similar architecture.
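The two data-preparation steps above can be sketched as follows; the function name, the per-frame flow magnitudes, and the tuple format are illustrative assumptions:

```python
import random

def make_order_verification_pair(flow_mags, n_frames=3, rng=random):
    """Sample the frames with the largest motion; return an in-order
    (positive, label 1) tuple and a shuffled (negative, label 0) tuple."""
    # indices of the n frames with the largest optical-flow magnitude,
    # kept in their original temporal order
    idx = sorted(sorted(range(len(flow_mags)), key=lambda i: -flow_mags[i])[:n_frames])
    positive = (tuple(idx), 1)
    shuffled = list(idx)
    while shuffled == idx:       # ensure the negative is actually out of order
        rng.shuffle(shuffled)
    negative = (tuple(shuffled), 0)
    return positive, negative
```

The network then receives the frames at these indices and predicts the binary order-verification label.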

However, these methods usually suffer from a heavy dataset preparation step. The frame sequences used to train the network are selected based on the magnitude of optical flow, and the computation of optical flow is expensive and slow. Therefore, more straightforward and time-efficient methods are needed for self-supervised video feature learning.

7.3 Cross Modal-based Learning

Cross modal-based learning methods usually learn video features from the correspondence of multiple data streams including RGB frame sequence, optical flow sequence, audio data, and camera pose.

In addition to the rich temporal and spatial information in videos, optical flow sequences can be generated to specifically indicate the motion in videos, and frame differences can be computed at negligible cost to indicate the boundaries of moving objects. Similarly, audio data also provide a useful hint about the content of videos. Based on the type of data used, these methods fall into three groups: (1) methods that learn features by using the RGB and optical flow correspondence [24, 23], (2) methods that learn features by utilizing the video and audio correspondence [25, 93], and (3) methods that learn by utilizing the correspondence between egocentric video and ego-motor sensor signals [95, 94]. Usually, the network is trained to recognize whether the two kinds of input data correspond to each other [24, 25], or is trained to learn the transformation between different modalities [94].

7.3.1 Learning from RGB-Flow Correspondence

Optical flow encodes object motions between adjacent frames, while RGB frames contain appearance information. The correspondence of the two types of data can be used to learn general features [23, 24, 151, 152]. This type of pretext tasks include optical flow estimation [151, 152] and RGB and optical flow correspondence verification [23].

Sayed et al. proposed to learn video features by verifying whether the input RGB frames and the optical flow correspond to each other. Two networks are employed: one extracts features from the RGB input and the other extracts features from the optical flow input [24]. To verify whether the two inputs correspond to each other, the network needs to capture the mutual information between the two modalities. The mutual information across different modalities usually has higher semantic meaning compared to information that is modality-specific. Through this pretext task, mutual information that is invariant to a specific modality can be captured by the ConvNet.

Optical flow estimation is another type of pretext task that can be used for self-supervised video feature learning. Fischer et al. proposed FlowNet, an end-to-end convolutional neural network for optical flow estimation from two consecutive frames [151, 152]. To correctly estimate optical flow from two frames, the ConvNet needs to capture the appearance changes between the two frames. Optical flow estimation can be used for self-supervised feature learning because the flow labels can be automatically generated by simulators such as game engines, or by hard-coded programs, without human annotation.

7.3.2 Learning from Visual-Audio Correspondence

Recently, some researchers proposed to use the correspondence between visual and audio streams to design “Visual-Audio Correspondence” learning task [93, 25, 26, 154].

Fig. 23: The architecture of video and audio correspondence verification task [93].

The general framework of this type of pretext task is shown in Fig. 23. There are two subnetworks: the vision subnetwork and the audio subnetwork. The input of the vision subnetwork is a single frame or a stack of image frames, and the vision subnetwork learns to capture visual features of the input data. The audio subnetwork is a 2DConvNet whose input is the Fast Fourier Transform (FFT) of the audio from the video. Positive data are sampled by extracting video frames and audio from the same time of one video, while negative training data are generated by extracting video frames and audio from different videos or from different times of the same video. Therefore, the networks are trained to discover the correlation between video data and audio data to accomplish this task.
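The positive/negative pair construction just described can be sketched as follows; the clip representation and function name are illustrative assumptions:

```python
import random

def sample_av_pair(clips, rng=random):
    """clips: list of (video_id, timestamp). Positive pair = frame and audio
    from the same clip and time (label 1); negative pair = audio drawn from
    a different clip (label 0)."""
    i = rng.randrange(len(clips))
    j = rng.randrange(len(clips))
    while j == i:                 # negative audio must come from elsewhere
        j = rng.randrange(len(clips))
    frame, audio_pos, audio_neg = clips[i], clips[i], clips[j]
    return (frame, audio_pos, 1), (frame, audio_neg, 0)
```

The two subnetworks are then trained jointly with a binary correspondence loss on these labels.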

Since the inputs of the ConvNets are two kinds of data, the networks are able to learn the two kinds of information jointly by solving the pretext task. The two networks obtained very good performance on the downstream applications [25].

7.3.3 Ego-motion

Self-driving cars are usually equipped with various sensors, so large-scale egocentric videos along with ego-motor signals can be easily collected at very low cost by driving a car in the street. Recently, some researchers proposed to use the correspondence between visual signals and motor signals for self-supervised feature learning [155, 94, 95].

Fig. 24: The architecture of camera pose transformation estimation from egocentric videos [94].

The underlying intuition of this type of method is that a self-driving car can be treated as a camera moving in a scene, and thus the egomotion of the visual data captured by the camera is the same as that of the car. Therefore, the correspondence between visual data and egomotion can be utilized for self-supervised feature learning. A typical network using the ego-motor signal, proposed by Agrawal et al. for self-supervised image feature learning [94], is shown in Fig. 24. The inputs to the network are two frames sampled from an egocentric video within a short time. The labels for the network indicate the rotation and translation relation between the two sampled images, which can be derived from the odometry data of the dataset. With this task, the ConvNet is forced to identify visual elements that are present in both sampled images.
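Deriving such labels from odometry can be sketched in the planar (2D) case: given two poses, the label is the relative translation expressed in the first camera's frame plus the rotation angle. This simplified sketch (the cited works handle full 3D motion) uses an illustrative function name:

```python
import math

def relative_motion(pose_a, pose_b):
    """Each pose is (x, y, heading) from odometry. Returns the translation
    expressed in the first camera's frame and the heading change, which
    together serve as the pretext-task label for the two sampled frames."""
    dx, dy = pose_b[0] - pose_a[0], pose_b[1] - pose_a[1]
    # rotate the world-frame displacement into the first camera's frame
    c, s = math.cos(-pose_a[2]), math.sin(-pose_a[2])
    tx = c * dx - s * dy
    ty = s * dx + c * dy
    dtheta = pose_b[2] - pose_a[2]
    return tx, ty, dtheta
```

The ConvNet is then trained to regress (or classify a discretization of) this transformation from the two image frames.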

The ego-motor signal is a type of accurate supervision signal. In addition to directly applying it for self-supervised feature learning, it has also been used for unsupervised learning of depth and ego-motion [155]. All these networks can be used for self-supervised feature learning and transferred to downstream tasks.

8 Performance Comparison

This section compares the performance of image and video feature self-supervised learning methods on public datasets. For image feature self-supervised learning, the performance on downstream tasks including image classification, semantic segmentation, and object detection is compared. For video feature self-supervised learning, the performance on a downstream task, human action recognition in videos, is reported.

8.1 Performance of Image Feature Learning

As described in Section 4.3, the quality of features learned by self-supervised models is evaluated by fine-tuning them on downstream tasks such as semantic segmentation, object detection, and image classification. This section summarizes the performance of existing image feature self-supervised learning methods.

Table IV lists the image classification performance on the ImageNet [13] and Places [107] datasets. During self-supervised pretext task training, most of the methods are trained on the ImageNet dataset with AlexNet as the base network, without using the category labels. After the pretext task self-supervised training is finished, a linear classifier is trained on top of different frozen convolutional layers of the ConvNet on the training splits of the ImageNet and Places datasets. The classification performance on the two datasets is used to demonstrate the quality of the learned features.

As shown in Table IV, the overall performance of the self-supervised models is lower than that of models trained either with ImageNet labels or with Places labels. Among all the self-supervised methods, DeepCluster [44] achieved the best performance on the two datasets. Three conclusions can be drawn from the table: (1) The features from different layers always benefit from self-supervised pretext task training; the performance of self-supervised learning methods is always better than that of a model trained from scratch. (2) All of the self-supervised methods perform well with features from the conv3 and conv4 layers but worse with features from the conv1, conv2, and conv5 layers. This is probably because shallow layers capture general low-level features, while deep layers capture pretext task-related features. (3) When there is a domain gap between the dataset for pretext task training and the dataset of the downstream task, self-supervised learning methods are able to reach performance comparable to that of the model trained with ImageNet labels.
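The linear-evaluation protocol behind Table IV can be sketched in miniature: features from a frozen layer are treated as fixed inputs, and only a logistic-regression classifier on top is trained. The toy features below stand in for frozen conv-layer activations, and the function names are illustrative:

```python
import math

def train_linear_probe(feats, labels, lr=0.5, epochs=200):
    """Logistic-regression probe on frozen features: only the linear weights
    w and bias b are updated, mimicking the linear-classifier evaluation."""
    dim = len(feats[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid prediction
            g = p - y                         # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

In the actual protocol, the probe's classification accuracy on the downstream dataset is the reported measure of feature quality.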

ImageNet Places
Method Pretext Tasks conv1 conv2 conv3 conv4 conv5 conv1 conv2 conv3 conv4 conv5
Places labels [8]
ImageNet labels [8]
Random(Scratch) [8]
ColorfulColorization [18] Generation
BiGAN [122] Generation
SplitBrain [42] Generation
ContextEncoder [19] Context
ContextPrediction [41] Context
Jigsaw [20] Context 18.2
Learning2Count [130] Context 23.3 33.9
DeepClustering [44] Context 32.3 41.0 39.6 38.2 39.2 39.8 34.7
TABLE IV: Linear classification on ImageNet and Places datasets using activations from the convolutional layers of an AlexNet as features. "Convn" means the linear classifier is trained based on the n-th convolution layer of AlexNet. "Places Labels" and "ImageNet Labels" indicate using a supervised model trained with human-annotated labels as the pre-trained model.
Method Pretext Tasks Classification Detection Segmentation
ImageNet Labels [8]
Random(Scratch) [8]
ContextEncoder  [19] Generation
BiGAN  [122] Generation
ColorfulColorization  [18] Generation
SplitBrain [42] Generation
RankVideo  [38] Context
PredictNoise  [46] Context
JigsawPuzzle [20] Context
ContextPrediction [41] Context
Learning2Count [130] Context
DeepClustering [44] Context 73.7 55.4 45.1
WatchingVideo  [81] Free Semantic Label
CrossDomain  [30] Free Semantic Label
AmbientSound  [154] Cross Modal
TiedToEgoMotion  [95] Cross Modal
EgoMotion  [94] Cross Modal
TABLE V: Comparison of the self-supervised image feature learning methods on classification, detection, and segmentation on the Pascal VOC dataset. "ImageNet Labels" indicates using a supervised model trained with human-annotated labels as the pre-trained model.

In addition to image classification, object detection and semantic segmentation are also used as downstream tasks to evaluate the quality of features learned by self-supervised learning. Usually, ImageNet (with category labels discarded) is used for self-supervised pretext task pre-training, while AlexNet is used as the base network and fine-tuned on the three tasks. Table V lists the performance on the image classification, object detection, and semantic segmentation tasks on the PASCAL VOC dataset. The performance of classification and detection is obtained by testing the model on the test split of the PASCAL VOC 2007 dataset, while the performance of semantic segmentation is obtained by testing the model on the validation split of the PASCAL VOC 2012 dataset.

As shown in Table V, the performance of the self-supervised models on the segmentation and detection tasks is very close to that of the supervised method trained with ImageNet labels during pre-training. Specifically, the margins of the performance differences on the object detection and semantic segmentation tasks are less than %, which indicates that the features learned by self-supervised learning have good generalization ability. Among all the self-supervised learning methods, DeepClustering [44] obtained the best performance on all the tasks.

8.2 Performance of Video Feature Learning

Method Pretext Task UCF101 HMDB51
Kinetics Labels* [70]
VideoGAN [85] Generation
VideoRank [38] Context
ShuffleLearn [40] Context
OPN [29] Context
RL [35] Context
AOT [90] Context
3DRotNet [28] Context 33.7
CubicPuzzle* [27] Context 65.8 33.7
RGB-Flow [24] Cross Modal
PoseAction [48] Cross Modal
TABLE VI: Comparison of the existing self-supervised methods for action recognition on the UCF101 and HMDB51 datasets. * indicates the average accuracy over three splits. "Kinetics Labels" indicates using a supervised model trained with human-annotated labels as the pre-trained model.

For self-supervised video feature learning methods, the human action recognition task is used to evaluate the quality of the learned features. Various video datasets have been used for self-supervised pre-training, and different network architectures have been used as the base network. Usually, after the pretext task pre-training is finished, the networks are fine-tuned and tested on the commonly used UCF101 and HMDB51 datasets for the human action recognition task. Table VI compares the performance of existing self-supervised video feature learning methods on the UCF101 and HMDB51 datasets.

As shown in Table VI, the best fine-tuning performance on UCF101 is less than %. However, the supervised model trained with Kinetics labels can easily obtain an accuracy of more than %. The performance of the self-supervised models is still much lower than that of the supervised model. More effective self-supervised video feature learning methods are desired.

8.3 Summary

Based on these results, conclusions can be drawn about the performance and reproducibility of the self-supervised learning methods.

Performance: For image feature self-supervised learning, due to well-designed pretext tasks, the performance of self-supervised methods is comparable to that of supervised methods on some downstream tasks, especially object detection and semantic segmentation. The margins of the performance differences on the object detection and semantic segmentation tasks are less than %, which indicates that the features learned by self-supervised learning have good generalization ability. However, the performance of video feature self-supervised learning methods is still much lower than that of supervised models on downstream tasks. The best performance of the 3DConvNet-based methods on the UCF101 dataset is more than % lower than that of the supervised model [70]. The poor performance of 3DConvNet self-supervised learning methods is probably because 3DConvNets usually have more parameters, which makes them prone to over-fitting, and because video feature learning is more complex due to the temporal dimension of videos.

Reproducibility: For the image feature self-supervised learning methods, most networks use AlexNet as the base network, pre-train on the ImageNet dataset, and are then evaluated on the same downstream tasks for quality evaluation. Also, the code of most methods is released, which is a great help for reproducing results. However, for video self-supervised learning, various datasets and networks have been used for self-supervised pre-training; therefore, it is unfair to directly compare the different methods. Furthermore, some methods use UCF101, a relatively small video dataset, for self-supervised pre-training. With a dataset of this size, the power of a more expressive model such as a 3DConvNet may not be fully exploited, and the model may suffer from severe over-fitting. Therefore, larger datasets should be used for video feature self-supervised pre-training.

Evaluation Metrics: Another observation is that more evaluation metrics are needed to evaluate the quality of the learned features at different levels. The current practice is to use the performance on downstream tasks to indicate the quality of the features. However, this evaluation metric gives no insight into what the network actually learned through self-supervised pre-training. More evaluation metrics such as network dissection [78] should be employed to analyze the interpretability of the self-supervised learned features.
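The downstream-task protocol itself is usually a linear probe: the pre-trained network is frozen, its activations are treated as fixed feature vectors, and only a linear classifier is trained on top, with its test accuracy serving as the feature-quality proxy. The sketch below illustrates this protocol on precomputed features; the function name `linear_probe` and the plain full-batch gradient-descent loop are illustrative assumptions for a minimal self-contained example, not the implementation of any reviewed method.

```python
import math

def linear_probe(train_feats, train_labels, test_feats, test_labels,
                 num_classes, lr=0.5, epochs=300):
    """Backbone is frozen: features arrive as precomputed vectors.
    Train only a linear softmax classifier on them and report test
    accuracy -- the common proxy for self-supervised feature quality."""
    dim = len(train_feats[0])
    w = [[0.0] * dim for _ in range(num_classes)]
    b = [0.0] * num_classes
    n = len(train_feats)
    for _ in range(epochs):
        # Accumulate the full-batch softmax cross-entropy gradient.
        gw = [[0.0] * dim for _ in range(num_classes)]
        gb = [0.0] * num_classes
        for x, y in zip(train_feats, train_labels):
            logits = [sum(wi * xi for wi, xi in zip(w[c], x)) + b[c]
                      for c in range(num_classes)]
            m = max(logits)  # subtract max for numerical stability
            exps = [math.exp(l - m) for l in logits]
            z = sum(exps)
            for c in range(num_classes):
                err = exps[c] / z - (1.0 if c == y else 0.0)
                for j in range(dim):
                    gw[c][j] += err * x[j]
                gb[c] += err
        for c in range(num_classes):
            for j in range(dim):
                w[c][j] -= lr * gw[c][j] / n
            b[c] -= lr * gb[c] / n
    # Accuracy of the linear classifier on held-out features.
    correct = 0
    for x, y in zip(test_feats, test_labels):
        logits = [sum(wi * xi for wi, xi in zip(w[c], x)) + b[c]
                  for c in range(num_classes)]
        if logits.index(max(logits)) == y:
            correct += 1
    return correct / len(test_feats)
```

The same routine evaluates any backbone: only the feature-extraction step changes, which is precisely why downstream accuracy alone says little about *what* the frozen network has learned.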

9 Future Directions

Self-supervised learning methods have achieved great success, obtaining performance close to that of supervised models on some computer vision tasks. Here, some future directions of self-supervised learning are discussed.

Learning Features from Synthetic Data: A rising trend in self-supervised learning is to train networks with synthetic data, which can easily be rendered by game engines with very limited human involvement. With the help of game engines, millions of synthetic images and videos with accurate pixel-level annotations can be easily generated, and with such detailed annotations, various pretext tasks can be designed to learn features from synthetic data. One problem that needs to be solved is how to bridge the domain gap between synthetic data and real-world data. Only a few works have explored self-supervised learning from synthetic data, using GANs to bridge the domain gap [30, 166]. With more large-scale synthetic data becoming available, more such self-supervised learning methods are likely to be proposed.

Learning from Web Data: Another rising trend is to train networks with web-collected data [167, 22, 168] based on their existing associated tags. With search engines, millions of images and videos can be downloaded from websites like Flickr and YouTube at negligible cost. In addition to the raw data, titles, keywords, and reviews are also available and can be used as extra information to train networks. With carefully curated queries, the web data retrieved by reliable search engines can be relatively clean. With large-scale web data and their associated metadata, the performance of self-supervised methods may be boosted. One open problem in learning from web data is how to handle the noise in the data and its associated metadata.

Learning Spatiotemporal Features from Videos: Self-supervised image feature learning has been well studied, and the performance margin between supervised and self-supervised models is very small on some downstream tasks such as semantic segmentation and object detection. However, self-supervised video spatiotemporal feature learning with 3DConvNets is not well addressed yet. More effective pretext tasks that are specifically designed to learn spatiotemporal features from videos are needed.
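As one concrete example of the kind of temporal pretext task this direction calls for, a sampler in the spirit of the order-verification methods surveyed earlier ([40], [39]) can manufacture supervision directly from unlabeled video: a short clip is either kept in temporal order or shuffled, and the binary order label is the free training signal. The sketch below is illustrative only; the function name and its parameters are assumptions, not a published recipe.

```python
import random

def temporal_order_sample(frames, clip_len=3, seed=None):
    """Draw a short clip from an unlabeled video and return it with a
    free binary label: 1 if the frames are in temporal order, 0 if
    they were shuffled. No human annotation is required."""
    rng = random.Random(seed)
    start = rng.randrange(len(frames) - clip_len + 1)
    indices = list(range(start, start + clip_len))
    label = rng.randint(0, 1)
    if label == 0:
        # Shuffle until the order actually changes.
        shuffled = indices[:]
        while shuffled == indices:
            rng.shuffle(shuffled)
        indices = shuffled
    return [frames[i] for i in indices], label
```

A network trained to predict this label must attend to motion across frames, which is exactly the spatiotemporal signal that purely spatial pretext tasks miss.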

Learning with Data from Different Sensors: Most existing self-supervised visual feature learning methods focus only on images or videos. However, when other types of data from different sensors are available, the constraints between the different data types can serve as additional supervision to train networks to learn features [155]. Self-driving cars are usually equipped with various sensors, including RGB cameras, gray-scale cameras, 3D laser scanners, and high-precision GPS and IMU units. Very large-scale datasets can easily be collected while driving, and the correspondence between data captured by different devices can be used as a supervision signal for self-supervised feature learning.

Learning with Multiple Pretext Tasks: Most existing self-supervised visual feature learning methods learn features by training a ConvNet to solve a single pretext task. Different pretext tasks provide different supervision signals, which can help the network learn more representative features. Only a few works have explored multi-task learning for self-supervised features [32, 30]. More work can be done on self-supervised feature learning with multiple pretext tasks.
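In practice, a multi-pretext-task objective is typically just a weighted sum of the per-task losses, with the weights balancing supervision signals of different scales. A minimal sketch follows; the helper name and the equal-weight default are assumptions for illustration.

```python
def combined_pretext_loss(task_losses, weights=None):
    """Combine per-pretext-task losses (e.g. rotation prediction,
    jigsaw, colorization) into one scalar training objective.
    Tasks without an explicit weight default to 1.0."""
    weights = weights or {}
    return sum(weights.get(name, 1.0) * value
               for name, value in task_losses.items())
```

Choosing the weights is itself non-trivial, since tasks whose losses have larger magnitudes would otherwise dominate the shared backbone's gradients.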

10 Conclusion

Self-supervised image feature learning with deep convolutional neural networks has achieved great success, and the performance margin between self-supervised and supervised methods on some downstream tasks has become very small. This paper has extensively reviewed recent deep convolutional neural network-based methods for self-supervised image and video feature learning from all perspectives, including common network architectures, pretext tasks, algorithms, datasets, performance comparisons, discussions, and future directions. The comparative summary of the methods, datasets, and performance in tabular form clearly demonstrates their properties, which will benefit researchers in the computer vision community.

References

  • [1] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in CVPR, pp. 580–587, 2014.
  • [2] R. Girshick, “Fast R-CNN,” in ICCV, 2015.
  • [3] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in NIPS, pp. 91–99, 2015.
  • [4] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, pp. 3431–3440, 2015.
  • [5] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” TPAMI, 2018.
  • [6] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in CVPR, pp. 2881–2890, 2017.
  • [7] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in CVPR, pp. 3156–3164, 2015.
  • [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, pp. 1097–1105, 2012.
  • [9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” ICLR, 2015.
  • [10] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” CVPR, 2015.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, pp. 770–778, 2016.
  • [12] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” in CVPR, vol. 1, p. 3, 2017.
  • [13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in CVPR, pp. 248–255, IEEE, 2009.
  • [14] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, and T. Duerig, “The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale,” arXiv preprint arXiv:1811.00982, 2018.
  • [15] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in CVPR.
  • [16] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3D convolutional networks,” in ICCV, 2015.
  • [17] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al., “The kinetics human action video dataset,” arXiv preprint arXiv:1705.06950, 2017.
  • [18] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in ECCV, pp. 649–666, Springer, 2016.
  • [19] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in CVPR, pp. 2536–2544, 2016.
  • [20] M. Noroozi and P. Favaro, “Unsupervised learning of visual representions by solving jigsaw puzzles,” in ECCV, 2016.
  • [21] D. Mahajan, R. B. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten, “Exploring the limits of weakly supervised pretraining,” in ECCV, pp. 185–201, 2018.
  • [22] W. Li, L. Wang, W. Li, E. Agustsson, and L. Van Gool, “Webvision database: Visual learning and understanding from web data,” arXiv preprint arXiv:1708.02862, 2017.
  • [23] A. Mahendran, J. Thewlis, and A. Vedaldi, “Cross pixel optical flow similarity for self-supervised learning,” arXiv preprint arXiv:1807.05636, 2018.
  • [24] N. Sayed, B. Brattoli, and B. Ommer, “Cross and learn: Cross-modal self-supervision,” arXiv preprint arXiv:1811.03879, 2018.
  • [25] B. Korbar, D. Tran, and L. Torresani, “Cooperative learning of audio and video models from self-supervised synchronization,” in NIPS, pp. 7773–7784, 2018.
  • [26] A. Owens and A. A. Efros, “Audio-visual scene analysis with self-supervised multisensory features,” arXiv preprint arXiv:1804.03641, 2018.
  • [27] D. Kim, D. Cho, and I. S. Kweon, “Self-supervised video representation learning with space-time cubic puzzles,” arXiv preprint arXiv:1811.09795, 2018.
  • [28] L. Jing and Y. Tian, “Self-supervised spatiotemporal feature learning by video geometric transformations,” arXiv preprint arXiv:1811.11387, 2018.
  • [29] B. Fernando, H. Bilen, E. Gavves, and S. Gould, “Self-supervised video representation learning with odd-one-out networks,” in CVPR, 2017.
  • [30] Z. Ren and Y. J. Lee, “Cross-domain self-supervised multi-task feature learning using synthetic imagery,” in CVPR, 2018.
  • [31] X. Wang, K. He, and A. Gupta, “Transitive invariance for self-supervised visual representation learning,” in ICCV, 2017.
  • [32] C. Doersch and A. Zisserman, “Multi-task self-supervised visual learning,” in ICCV, 2017.
  • [33] T. N. Mundhenk, D. Ho, and B. Y. Chen, “Improvements to context based self-supervised learning,” in CVPR, 2018.
  • [34] M. Noroozi, A. Vinjimoor, P. Favaro, and H. Pirsiavash, “Boosting self-supervised learning via knowledge transfer,” arXiv preprint arXiv:1805.00385, 2018.
  • [35] U. Büchler, B. Brattoli, and B. Ommer, “Improving spatiotemporal self-supervision by deep reinforcement learning,” in ECCV, pp. 770–786, 2018.
  • [36] S. Gidaris, P. Singh, and N. Komodakis, “Unsupervised representation learning by predicting image rotations,” in ICLR, 2018.
  • [37] N. Srivastava, E. Mansimov, and R. Salakhutdinov, “Unsupervised Learning of Video Representations using LSTMs,” in ICML, 2015.
  • [38] X. Wang and A. Gupta, “Unsupervised learning of visual representations using videos,” in ICCV, 2015.
  • [39] H.-Y. Lee, J.-B. Huang, M. Singh, and M.-H. Yang, “Unsupervised representation learning by sorting sequences,” in ICCV, pp. 667–676, IEEE, 2017.
  • [40] I. Misra, C. L. Zitnick, and M. Hebert, “Shuffle and learn: unsupervised learning using temporal order verification,” in ECCV, pp. 527–544, Springer, 2016.
  • [41] C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual representation learning by context prediction,” in ICCV, pp. 1422–1430, 2015.
  • [42] R. Zhang, P. Isola, and A. A. Efros, “Split-brain autoencoders: Unsupervised learning by cross-channel prediction,” in CVPR, 2017.
  • [43] D. Li, W.-C. Hung, J.-B. Huang, S. Wang, N. Ahuja, and M.-H. Yang, “Unsupervised visual representation learning by graph-based consistent constraints,” in ECCV, 2016.
  • [44] M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering for unsupervised learning of visual features,” in ECCV, 2018.
  • [45] E. Hoffer, I. Hubara, and N. Ailon, “Deep unsupervised learning through spatial contrasting,” arXiv preprint arXiv:1610.00243, 2016.
  • [46] P. Bojanowski and A. Joulin, “Unsupervised learning by predicting noise,” arXiv preprint arXiv:1704.05310, 2017.
  • [47] Y. Li, M. Paluri, J. M. Rehg, and P. Dollár, “Unsupervised learning of edges,” CVPR, pp. 1619–1627, 2016.
  • [48] S. Purushwalkam and A. Gupta, “Pose from action: Unsupervised learning of pose features based on motion,” arXiv preprint arXiv:1609.05420, 2016.
  • [49] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek, “Image classification with the fisher vector: Theory and practice,” IJCV, vol. 105, no. 3, pp. 222–245, 2013.
  • [50] A. Faktor and M. Irani, “Video segmentation by non-local consensus voting.,” in BMVC, vol. 2, 2014.
  • [51] O. Stretcu and M. Leordeanu, “Multiple frames matching for object discovery in video.,” in BMVC, vol. 1, 2015.
  • [52] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, “Understanding deep learning requires rethinking generalization,” arXiv preprint arXiv:1611.03530, 2016.
  • [53] K. Simonyan and A. Zisserman, “Two-Stream Convolutional Networks for Action Recognition in Videos,” in NIPS, 2014.
  • [54] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in CVPR, 2015.
  • [55] C. Feichtenhofer, A. Pinz, and R. Wildes, “Spatiotemporal residual networks for video action recognition,” in NIPS, pp. 3468–3476, 2016.
  • [56] C. Feichtenhofer, A. Pinz, and R. P. Wildes, “Spatiotemporal multiplier networks for video action recognition,” in CVPR.
  • [57] C. Feichtenhofer, A. Pinz, and A. Zisserman, “Convolutional two-stream network fusion for video action recognition,” in CVPR, pp. 1933–1941, 2016.
  • [58] L. Wang, Y. Qiao, and X. Tang, “Action recognition with trajectory-pooled deep-convolutional descriptors,” in CVPR, pp. 4305–4314, 2015.
  • [59] L. Wang, Y. Xiong, Z. Wang, and Y. Qiao, “Towards good practices for very deep two-stream convnets,” arXiv preprint arXiv:1507.02159, 2015.
  • [60] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould, “Dynamic image networks for action recognition,” in CVPR, 2016.
  • [61] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, “Temporal segment networks: towards good practices for deep action recognition,” in ECCV, 2016.
  • [62] S. Ji, W. Xu, M. Yang, and K. Yu, “3D convolutional neural networks for human action recognition,” TPAMI, vol. 35, no. 1, pp. 221–231, 2013.
  • [63] K. Soomro, A. R. Zamir, and M. Shah, “UCF101: A dataset of 101 human actions classes from videos in the wild,” CRCV-TR, vol. 12-01, 2012.
  • [64] X. Peng, L. Wang, X. Wang, and Y. Qiao, “Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice,” CVIU, vol. 150, pp. 109–125, 2016.
  • [65] C. Feichtenhofer, A. Pinz, and R. P. Wildes, “Bags of spacetime energies for dynamic scene recognition,” in CVPR, pp. 2681–2688, 2014.
  • [66] X. Ren and M. Philipose, “Egocentric recognition of handled objects: Benchmark and analysis,” in CVPRW, pp. 1–8, IEEE, 2009.
  • [67] G. Varol, I. Laptev, and C. Schmid, “Long-term Temporal Convolutions for Action Recognition,” TPAMI, 2017.
  • [68] L. Jing, X. Yang, and Y. Tian, “Video you only look once: Overall temporal convolutions for action recognition,” JVCIR, vol. 52, pp. 58–65, 2018.
  • [69] J. Carreira and A. Zisserman, “Quo vadis, action recognition? a new model and the kinetics dataset,” in CVPR, pp. 4724–4733, IEEE, 2017.
  • [70] K. Hara, H. Kataoka, and Y. Satoh, “Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet,” in CVPR, pp. 18–22, 2018.
  • [71] Z. Qiu, T. Yao, and T. Mei, “Learning spatio-temporal representation with pseudo-3d residual networks,” in ICCV, 2017.
  • [72] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE transactions on neural networks, vol. 5, no. 2, pp. 157–166, 1994.
  • [73] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [74] J. Y. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond Short Snippets: Deep Networks for Video Classification,” in CVPR, 2015.
  • [75] Z. Li, K. Gavrilyuk, E. Gavves, M. Jain, and C. G. Snoek, “Videolstm convolves, attends and flows for action recognition,” CVIU, vol. 166, pp. 41–50, 2018.
  • [76] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko, “Sequence to sequence – video to text,” in ICCV, 2015.
  • [77] P. Molchanov, X. Yang, S. Gupta, K. Kim, S. Tyree, and J. Kautz, “Online detection and classification of dynamic hand gestures with recurrent 3d convolutional neural network,” in CVPR, pp. 4207–4215, 2016.
  • [78] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, “Network dissection: Quantifying interpretability of deep visual representations,” in CVPR, 2017.
  • [79] D. Bau, J.-Y. Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Freeman, and A. Torralba, “Gan dissection: Visualizing and understanding generative adversarial networks,” arXiv preprint arXiv:1811.10597, 2018.
  • [80] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in ECCV, pp. 818–833, Springer, 2014.
  • [81] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan, “Learning features by watching objects move,” in CVPR, vol. 2, 2017.
  • [82] G. Larsson, M. Maire, and G. Shakhnarovich, “Colorization as a proxy task for visual understanding,” in CVPR, 2017.
  • [83] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in NIPS, pp. 2672–2680, 2014.
  • [84] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in ICCV, 2017.
  • [85] C. Vondrick, H. Pirsiavash, and A. Torralba, “Generating videos with scene dynamics,” in NIPS, pp. 613–621, 2016.
  • [86] S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz, “Mocogan: Decomposing motion and content for video generation,” CVPR, 2018.
  • [87] U. Ahsan, R. Madhok, and I. Essa, “Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition,” arXiv preprint arXiv:1808.07507, 2018.
  • [88] C. Wei, L. Xie, X. Ren, Y. Xia, C. Su, J. Liu, Q. Tian, and A. L. Yuille, “Iterative reorganization with weak spatial constraints: Solving arbitrary jigsaw puzzles for unsupervised representation learning,” arXiv preprint arXiv:1812.00329, 2018.
  • [89] D. Kim, D. Cho, D. Yoo, and I. S. Kweon, “Learning image representations by completing damaged jigsaw puzzles,” arXiv preprint arXiv:1802.01880, 2018.
  • [90] D. Wei, J. Lim, A. Zisserman, and W. T. Freeman, “Learning and using the arrow of time,” in CVPR, pp. 8052–8060, 2018.
  • [91] I. Croitoru, S.-V. Bogolin, and M. Leordeanu, “Unsupervised learning from video to detect foreground objects in single images,” arXiv preprint arXiv:1703.10901, 2017.
  • [92] H. Jiang, G. Larsson, M. Maire Greg Shakhnarovich, and E. Learned-Miller, “Self-supervised relative depth learning for urban scene understanding,” in ECCV, pp. 19–35, 2018.
  • [93] R. Arandjelovic and A. Zisserman, “Look, listen and learn,” in ICCV, pp. 609–617, IEEE, 2017.
  • [94] P. Agrawal, J. Carreira, and J. Malik, “Learning to see by moving,” in ICCV, pp. 37–45, 2015.
  • [95] D. Jayaraman and K. Grauman, “Learning image representations tied to ego-motion,” in ICCV, pp. 1413–1421, 2015.
  • [96] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” IJCV, vol. 88, no. 2, pp. 303–338, 2010.
  • [97] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in CVPR, pp. 3213–3223, 2016.
  • [98] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Semantic understanding of scenes through the ade20k dataset,” arXiv preprint arXiv:1608.05442, 2016.
  • [99] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in ECCV, pp. 740–755, Springer, 2014.
  • [100] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in CVPR, pp. 779–788, 2016.
  • [101] J. Redmon and A. Farhadi, “Yolo9000: Better, faster, stronger,” in CVPR, 2017.
  • [102] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in ECCV, pp. 21–37, Springer, 2016.
  • [103] T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie, “Feature pyramid networks for object detection.,” in CVPR, vol. 1, p. 4, 2017.
  • [104] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” TPAMI, 2018.
  • [105] J. A. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural processing letters, vol. 9, no. 3, pp. 293–300, 1999.
  • [106] H. Kuehne, H. Jhuang, R. Stiefelhagen, and T. Serre, “Hmdb51: A large video database for human motion recognition,” in HPCSE, pp. 571–582, Springer, 2013.
  • [107] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene recognition using places database,” in NIPS, pp. 487–495, 2014.
  • [108] B. Zhou, A. Khosla, A. Lapedriza, A. Torralba, and A. Oliva, “Places: An image database for deep scene understanding,” arXiv preprint arXiv:1610.02055, 2016.
  • [109] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, “Semantic scene completion from a single depth image,” in CVPR, pp. 190–198, IEEE, 2017.
  • [110] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [111] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPSW, vol. 2011, p. 5, 2011.
  • [112] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” tech. rep., Citeseer, 2009.
  • [113] A. Coates, A. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in AISTATS, pp. 215–223, 2011.
  • [114] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li, “The new data and new challenges in multimedia research,” arXiv preprint arXiv:1503.01817, vol. 1, no. 8, 2015.
  • [115] J. McCormac, A. Handa, S. Leutenegger, and A. J. Davison, “Scenenet rgb-d: Can 5m synthetic images beat generic imagenet pre-training on indoor segmentation,” in ICCV, vol. 4, 2017.
  • [116] M. Monfort, B. Zhou, S. A. Bargal, T. Yan, A. Andonian, K. Ramakrishnan, L. Brown, Q. Fan, D. Gutfruend, C. Vondrick, et al., “Moments in time dataset: one million videos for event understanding,”
  • [117] J. F. Gemmeke, D. P. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter, “Audio set: An ontology and human-labeled dataset for audio events,” in ICASSP, pp. 776–780, IEEE, 2017.
  • [118] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in CVPR, pp. 3354–3361, IEEE, 2012.
  • [119] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache, “Learning visual features from large weakly supervised data,” in ECCV, pp. 67–84, 2016.
  • [120] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
  • [121] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” arXiv preprint arXiv:1701.07875, 2017.
  • [122] J. Donahue, P. Krähenbühl, and T. Darrell, “Adversarial feature learning,” arXiv preprint arXiv:1605.09782, 2016.
  • [123] T. Chen, X. Zhai, and N. Houlsby, “Self-supervised gan to counter forgetting,” arXiv preprint arXiv:1810.11598, 2018.
  • [124] G. Larsson, M. Maire, and G. Shakhnarovich, “Learning representations for automatic colorization,” in ECCV, pp. 577–593, Springer, 2016.
  • [125] S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Globally and Locally Consistent Image Completion,” SIGGRAPH, 2017.
  • [126] S. Jenni and P. Favaro, “Self-supervised feature learning by learning to spot artifacts,” arXiv preprint arXiv:1806.05024, 2018.
  • [127] R. Santa Cruz, B. Fernando, A. Cherian, and S. Gould, “Visual permutation learning,” TPAMI, 2018.
  • [128] J. Yang, D. Parikh, and D. Batra, “Joint unsupervised learning of deep representations and image clusters,” in CVPR, pp. 5147–5156, 2016.
  • [129] J. Xie, R. Girshick, and A. Farhadi, “Unsupervised deep embedding for clustering analysis,” in ICML, pp. 478–487, 2016.
  • [130] M. Noroozi, H. Pirsiavash, and P. Favaro, “Representation learning by learning to count,” in ICCV, 2017.
  • [131] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” science, vol. 313, no. 5786, pp. 504–507, 2006.
  • [132] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” in NIPS, pp. 2234–2242, 2016.
  • [133] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “Gans trained by a two time-scale update rule converge to a local nash equilibrium,” in NIPS, pp. 6626–6637, 2017.
  • [134] A. Brock, J. Donahue, and K. Simonyan, “Large scale gan training for high fidelity natural image synthesis,” arXiv preprint arXiv:1809.11096, 2018.
  • [135] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” arXiv preprint arXiv:1812.04948, 2018.
  • [136] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv preprint, 2016.
  • [137] R. Zhang, J.-Y. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros, “Real-time user-guided image colorization with learned deep priors,” arXiv preprint arXiv:1705.02999, 2017.
  • [138] S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification,” TOG, vol. 35, no. 4, p. 110, 2016.
  • [139] A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: a review,” ACM computing surveys (CSUR), vol. 31, no. 3, pp. 264–323, 1999.
  • [140] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in CVPR, vol. 1, pp. 886–893, IEEE, 2005.
  • [141] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek, “Image classification with the fisher vector: Theory and practice,” IJCV, vol. 105, no. 3, pp. 222–245, 2013.
  • [142] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “Airsim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics, 2017.
  • [143] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” in Proceedings of the 1st Annual Conference on Robot Learning, pp. 1–16, 2017.
  • [144] M. Saito, E. Matsumoto, and S. Saito, “Temporal generative adversarial nets with singular value clipping,” in ICCV, vol. 2, p. 5, 2017.
  • [145] C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy, “Tracking emerges by colorizing videos,” in ECCV, 2018.
  • [146] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo, “Convolutional lstm network: A machine learning approach for precipitation nowcasting,” in NIPS, pp. 802–810, 2015.
  • [147] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee, “Decomposing motion and content for natural video sequence prediction,” in ICLR, 2017.
  • [148] Z. Luo, B. Peng, D.-A. Huang, A. Alahi, and L. Fei-Fei, “Unsupervised learning of long-term motion dynamics for videos,” in CVPR, 2017.
  • [149] B. Brattoli, U. Büchler, A.-S. Wahl, M. E. Schwab, and B. Ommer, “Lstm self-supervision for detailed behavior analysis,” in CVPR, pp. 3747–3756, IEEE, 2017.
  • [150] D. Jayaraman and K. Grauman, “Slow and steady feature analysis: higher order temporal coherence in video,” in CVPR, pp. 3852–3861, 2016.
  • [151] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in ICCV, pp. 2758–2766, 2015.
  • [152] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “Flownet 2.0: Evolution of optical flow estimation with deep networks,” in CVPR, pp. 1647–1655, IEEE, 2017.
  • [153] S. Meister, J. Hur, and S. Roth, “Unflow: Unsupervised learning of optical flow with a bidirectional census loss,” in AAAI, 2018.
  • [154] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba, “Ambient sound provides supervision for visual learning,” in ECCV, pp. 801–816, Springer, 2016.
  • [155] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe, “Unsupervised learning of depth and ego-motion from video,” in CVPR, vol. 2, p. 7, 2017.
  • [156] Z. Yin and J. Shi, “Geonet: Unsupervised learning of dense depth, optical flow and camera pose,” in CVPR, vol. 2, 2018.
  • [157] Y. Zou, Z. Luo, and J.-B. Huang, “Df-net: Unsupervised joint learning of depth and flow using cross-task consistency,” in ECCV, pp. 38–55, Springer, 2018.
  • [158] G. Iyer, J. K. Murthy, G. Gupta, K. M. Krishna, and L. Paull, “Geometric consistency for self-supervised end-to-end visual odometry,” arXiv preprint arXiv:1804.03789, 2018.
  • [159] Y. Zhang, S. Khamis, C. Rhemann, J. Valentin, A. Kowdle, V. Tankovich, M. Schoenberg, S. Izadi, T. Funkhouser, and S. Fanello, “Activestereonet: End-to-end self-supervised learning for active stereo systems,” in ECCV, pp. 784–801, 2018.
  • [160] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Deep end2end voxel2voxel prediction,” in CVPRW, pp. 17–24, 2016.
  • [161] M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” in ICLR, 2016.
  • [162] F. A. Reda, G. Liu, K. J. Shih, R. Kirby, J. Barker, D. Tarjan, A. Tao, and B. Catanzaro, “Sdc-net: Video prediction using spatially-displaced convolution,” in ECCV, pp. 718–733, 2018.
  • [163] M. Babaeizadeh, C. Finn, D. Erhan, R. H. Campbell, and S. Levine, “Stochastic variational video prediction,” arXiv preprint arXiv:1710.11252, 2017.
  • [164] X. Liang, L. Lee, W. Dai, and E. P. Xing, “Dual motion gan for future-flow embedded video prediction,” in ICCV, vol. 1, 2017.
  • [165] C. Finn, I. Goodfellow, and S. Levine, “Unsupervised learning for physical interaction through video prediction,” in NIPS, pp. 64–72, 2016.
  • [166] P. Krähenbühl, “Free supervision from video games,” in CVPR, June 2018.
  • [167] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan, “Youtube-8m: A large-scale video classification benchmark,” arXiv preprint arXiv:1609.08675, 2016.
  • [168] L. Gomez, Y. Patel, M. Rusiñol, D. Karatzas, and C. Jawahar, “Self-supervised learning of visual features through embedding images into text topic spaces,” in CVPR, IEEE, 2017.