Self-Supervised Learning of Pretext-Invariant Representations

12/04/2019 · by Ishan Misra, et al.

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as "pearl") that learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state of the art on several popular benchmarks for self-supervised learning from images. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised learning of image representations with good invariance properties.


1 Introduction

Modern image-recognition systems learn image representations from large collections of images and corresponding semantic annotations. These annotations can be provided in the form of class labels [58], hashtags [41], bounding boxes [15, 39], etc. Pre-defined semantic annotations scale poorly to the long tail of visual concepts [66], which hampers further improvements in image recognition.

Self-supervised learning tries to address these limitations by learning image representations from the pixels themselves, without relying on pre-defined semantic annotations. Often, this is done via a pretext task that applies a transformation to the input image and requires the learner to predict properties of the transformation from the transformed image (see Figure 1). Examples of image transformations used include rotations [18], affine transformations [76, 49, 57, 31], and jigsaw transformations [46]. As the pretext task involves predicting a property of the image transformation, it encourages the construction of image representations that are covariant to the transformations. Although such covariance is beneficial for tasks such as predicting 3D correspondences [49, 31, 57], it is undesirable for most semantic recognition tasks: representations ought to be invariant under image transformations to be useful for image recognition [29, 13] because the transformations do not alter visual semantics.

Figure 1: Pretext-Invariant Representation Learning (PIRL). Many pretext tasks for self-supervised learning [18, 46, 76] involve transforming an image $\mathbf{I}$, computing a representation of the transformed image, and predicting properties of the transformation $t$ from that representation. As a result, the representation must covary with the transformation and may not contain much semantic information. By contrast, PIRL learns representations that are invariant to the transformation and retain semantic information.

Motivated by this observation, we propose a method that learns invariant representations rather than covariant ones. Instead of predicting properties of the image transformation, Pretext-Invariant Representation Learning (PIRL) constructs image representations that are similar to the representation of transformed versions of the same image and different from the representations of other images. We adapt the "Jigsaw" pretext task [46] to work with PIRL and find that the resulting invariant representations perform better than their covariant counterparts across a range of vision tasks. PIRL substantially outperforms all prior art in self-supervised learning from ImageNet (Figure 2) and from uncurated image data (Table 4). Interestingly, PIRL even outperforms supervised pre-training in learning image representations suitable for object detection (Tables 1 and 6).

2 PIRL: Pretext-Invariant Representation Learning

Figure 2: ImageNet classification with linear models. Single-crop top-1 accuracy on the ImageNet validation data as a function of the number of parameters in the model that produces the representation (“A” represents AlexNet). Pretext-Invariant Representation Learning (PIRL) sets a new state-of-the-art in this setting (red marker) and uses significantly smaller models (ResNet-50). See Section 3.2 for more details.

Our work focuses on pretext tasks for self-supervised learning in which a known image transformation is applied to the input image. For example, the "Jigsaw" task divides the image into nine patches and perturbs the image by randomly permuting the patches [46]. Prior work used Jigsaw as a pretext task by predicting the permutation from the perturbed input image. This requires the learner to construct a representation that is covariant to the perturbation. The same is true for a range of other pretext tasks that have recently been studied [18, 9, 44, 76]. In this work, we adapt the existing Jigsaw pretext task so that it encourages the image representations to be invariant to the image patch perturbation. While we focus on the Jigsaw pretext task in this paper, our approach is applicable to any pretext task that involves image transformations (see Section 4.3).

2.1 Overview of the Approach

Suppose we are given an image dataset, $\mathcal{D} = \{\mathbf{I}_1, \dots, \mathbf{I}_{|\mathcal{D}|}\}$, and a set of image transformations, $\mathcal{T}$. The set $\mathcal{T}$ may contain transformations such as a re-shuffling of patches in the image [46], image rotations [18], etc. We aim to train a convolutional network, $\phi_\theta(\cdot)$, with parameters $\theta$ that constructs image representations $\mathbf{v}_{\mathbf{I}} = \phi_\theta(\mathbf{I})$ that are invariant to image transformations $t \in \mathcal{T}$. We adopt an empirical risk minimization approach to learning the network parameters $\theta$. Specifically, we train the network by minimizing the empirical risk:

$$\ell_{\text{inv}}(\theta; \mathcal{D}) = \mathbb{E}_{t \sim p(\mathcal{T})}\left[\frac{1}{|\mathcal{D}|}\sum_{\mathbf{I} \in \mathcal{D}} L\big(\mathbf{v}_{\mathbf{I}}, \mathbf{v}_{\mathbf{I}^t}\big)\right] \qquad (1)$$

where $p(\mathcal{T})$ is some distribution over the transformations in $\mathcal{T}$, and $\mathbf{I}^t$ denotes image $\mathbf{I}$ after application of transformation $t$, that is, $\mathbf{I}^t = t(\mathbf{I})$. The function $L(\cdot, \cdot)$ is a loss function that measures the similarity between two image representations. Minimization of this loss encourages the network $\phi_\theta(\cdot)$ to produce the same representation for image $\mathbf{I}$ as for its transformed counterpart $\mathbf{I}^t$, i.e., to make representation $\mathbf{v}_{\mathbf{I}}$ invariant under transformation $t$.

We contrast our loss function to losses [18, 9, 44, 46, 76] that aim to learn image representations that are covariant to image transformations by minimizing:

$$\ell_{\text{co}}(\theta; \mathcal{D}) = \frac{1}{|\mathcal{D}|}\sum_{\mathbf{I} \in \mathcal{D}} \mathbb{E}_{t \sim p(\mathcal{T})}\big[L_{\text{co}}\big(\mathbf{v}_{\mathbf{I}^t}, z(t)\big)\big] \qquad (2)$$

where $z(t)$ is a function that measures some property of transformation $t$. Such losses encourage the network $\phi_\theta(\cdot)$ to learn image representations that contain information on transformation $t$, thereby encouraging it to maintain information that is not semantically relevant.

Loss function. We implement $L(\cdot, \cdot)$ using a contrastive loss function [22]. Specifically, we define a matching score, $s(\cdot, \cdot)$, that measures the similarity of two image representations and use this matching score in a noise contrastive estimator [21]. In our noise contrastive estimator (NCE), each "positive" sample has $N$ corresponding "negative" samples. The negative samples are obtained by computing features from other images, $\mathbf{I}' \neq \mathbf{I}$. The noise contrastive estimator models the probability of the binary event that $(\mathbf{I}, \mathbf{I}^t)$ originates from the data distribution as:

$$h\big(\mathbf{v}_{\mathbf{I}}, \mathbf{v}_{\mathbf{I}^t}\big) = \frac{\exp\big(s(\mathbf{v}_{\mathbf{I}}, \mathbf{v}_{\mathbf{I}^t}) / \tau\big)}{\exp\big(s(\mathbf{v}_{\mathbf{I}}, \mathbf{v}_{\mathbf{I}^t}) / \tau\big) + \sum_{\mathbf{I}' \in \mathcal{D}_N} \exp\big(s(\mathbf{v}_{\mathbf{I}^t}, \mathbf{v}_{\mathbf{I}'}) / \tau\big)} \qquad (3)$$

Herein, $\mathcal{D}_N$ is a set of $N$ negative samples that are drawn uniformly at random from dataset $\mathcal{D}$ excluding image $\mathbf{I}$, $\tau$ is a temperature parameter, and $s(\cdot, \cdot)$ is the cosine similarity between the representations.

In practice, we do not use the convolutional features $\mathbf{v}$ directly but apply two different "heads" to the features before computing the score $s(\cdot, \cdot)$. Specifically, we apply head $f(\cdot)$ on features $\mathbf{v}_{\mathbf{I}}$ of $\mathbf{I}$ and head $g(\cdot)$ on features $\mathbf{v}_{\mathbf{I}^t}$ of $\mathbf{I}^t$; see Figure 3 and Section 2.3. NCE then amounts to minimizing the following loss:

$$L_{\text{NCE}}\big(\mathbf{I}, \mathbf{I}^t\big) = -\log\Big[h\big(f(\mathbf{v}_{\mathbf{I}}), g(\mathbf{v}_{\mathbf{I}^t})\big)\Big] - \sum_{\mathbf{I}' \in \mathcal{D}_N} \log\Big[1 - h\big(g(\mathbf{v}_{\mathbf{I}^t}), f(\mathbf{v}_{\mathbf{I}'})\big)\Big] \qquad (4)$$

This loss encourages the representation of image $\mathbf{I}$ to be similar to that of its transformed counterpart $\mathbf{I}^t$, whilst also encouraging the representation of $\mathbf{I}^t$ to be dissimilar to that of other images $\mathbf{I}'$.
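To make Equations 3 and 4 concrete, the sketch below implements them in PyTorch under one plausible reading in which the positive pair and all negative pairs share the normalizer of Equation 3. The helper names (`nce_scores`, `nce_loss`), the temperature value, the feature dimensionality, and the number of negatives are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def nce_scores(anchor, positive, negatives, tau=0.07):
    """Equation 3: probability that a pair comes from the data distribution.

    anchor:    (D,)   e.g. f(v_I) or the memory representation m_I
    positive:  (D,)   e.g. g(v_{I^t})
    negatives: (N, D) features of other images I' (e.g. memory-bank entries)
    """
    # Cosine similarities s(., .) scaled by the temperature tau.
    pos_logit = F.cosine_similarity(anchor, positive, dim=0) / tau
    neg_logits = F.cosine_similarity(positive.unsqueeze(0), negatives, dim=1) / tau
    denom = pos_logit.exp() + neg_logits.exp().sum()
    return pos_logit.exp() / denom, neg_logits.exp() / denom  # h_pos (scalar), h_neg (N,)

def nce_loss(anchor, positive, negatives, tau=0.07):
    """Equation 4: -log h(positive pair) - sum over negatives of log(1 - h(negative pair))."""
    h_pos, h_neg = nce_scores(anchor, positive, negatives, tau)
    return -torch.log(h_pos) - torch.log1p(-h_neg).sum()

# Toy usage with 128-dimensional features (illustrative) and 16 negatives.
f_v, g_vt = torch.randn(128), torch.randn(128)
negatives = torch.randn(16, 128)
print(nce_loss(f_v, g_vt, negatives))
```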

Figure 3: Overview of PIRL. Pretext-Invariant Representation Learning (PIRL) aims to construct image representations that are invariant to the image transformations $t \in \mathcal{T}$. PIRL encourages the representations of the image, $\mathbf{I}$, and its transformed counterpart, $\mathbf{I}^t$, to be similar. It achieves this by minimizing a contrastive loss (see Section 2.1). Following [72], PIRL uses a memory bank, $\mathcal{M}$, of negative samples to be used in the contrastive learning. The memory bank contains a moving average of representations, $\mathbf{m}_{\mathbf{I}}$, for all images in the dataset (see Section 2.2).

2.2 Using a Memory Bank of Negative Samples

Prior work has found that it is important to use a large number of negatives in the NCE loss of Equation 4 [51, 72]. In a mini-batch SGD optimizer, it is difficult to obtain a large number of negatives without increasing the batch to an infeasibly large size. To address this problem, we follow [72] and use a memory bank of “cached” features. Concurrent work used a similar memory-bank approach [24].

The memory bank, $\mathcal{M}$, contains a feature representation $\mathbf{m}_{\mathbf{I}}$ for each image $\mathbf{I}$ in dataset $\mathcal{D}$. The representation $\mathbf{m}_{\mathbf{I}}$ is an exponential moving average of the feature representations $f(\mathbf{v}_{\mathbf{I}})$ that were computed in prior epochs. This allows us to replace negative samples, $f(\mathbf{v}_{\mathbf{I}'})$, by their memory bank representations, $\mathbf{m}_{\mathbf{I}'}$, in Equation 4 without having to increase the training batch size. We emphasize that the representations that are stored in the memory bank are all computed on the original images, $\mathbf{I}$, without the transformation $t$.
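The sketch below illustrates the bookkeeping described above: a bank with one entry per training image, updated as an exponential moving average of $f(\mathbf{v}_{\mathbf{I}})$ and used to draw negatives. The class name, the momentum value, and the feature dimensionality are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """One L2-normalized feature per dataset image (Section 2.2)."""

    def __init__(self, num_images, dim, momentum=0.5):
        # Random unit-norm initialization; `momentum` is an illustrative choice.
        self.bank = F.normalize(torch.randn(num_images, dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, indices, features):
        """m_I <- momentum * m_I + (1 - momentum) * f(v_I), then re-normalize."""
        blended = self.momentum * self.bank[indices] + \
                  (1.0 - self.momentum) * F.normalize(features, dim=1)
        self.bank[indices] = F.normalize(blended, dim=1)

    def sample_negatives(self, exclude, num):
        """Draw `num` memory entries uniformly at random, avoiding `exclude`."""
        candidates = torch.randperm(self.bank.size(0))[: num + exclude.numel()]
        keep = candidates[~torch.isin(candidates, exclude)][:num]
        return self.bank[keep]

# Toy usage (the paper keeps one entry per ImageNet training image).
bank = MemoryBank(num_images=10_000, dim=128)
feats = torch.randn(4, 128)
bank.update(indices=torch.tensor([0, 1, 2, 3]), features=feats)
negs = bank.sample_negatives(exclude=torch.tensor([0, 1, 2, 3]), num=16)
print(negs.shape)  # torch.Size([16, 128])
```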

Final loss function. A potential issue of the loss in Equation 4 is that it does not compare the representations of two untransformed images $\mathbf{I}$ and $\mathbf{I}'$. We address this issue by using a convex combination of two NCE loss functions in $L(\cdot, \cdot)$:

$$L\big(\mathbf{I}, \mathbf{I}^t\big) = \lambda\, L_{\text{NCE}}\big(\mathbf{m}_{\mathbf{I}}, g(\mathbf{v}_{\mathbf{I}^t})\big) + (1 - \lambda)\, L_{\text{NCE}}\big(\mathbf{m}_{\mathbf{I}}, f(\mathbf{v}_{\mathbf{I}})\big) \qquad (5)$$

Herein, the first term is simply the loss of Equation 4 but uses the memory representations $\mathbf{m}_{\mathbf{I}}$ and $\mathbf{m}_{\mathbf{I}'}$ instead of $f(\mathbf{v}_{\mathbf{I}})$ and $f(\mathbf{v}_{\mathbf{I}'})$, respectively. The second term does two things: (1) it encourages the representation $f(\mathbf{v}_{\mathbf{I}})$ to be similar to its memory representation $\mathbf{m}_{\mathbf{I}}$, thereby dampening the parameter updates; and (2) it encourages the representations $f(\mathbf{v}_{\mathbf{I}})$ and $\mathbf{m}_{\mathbf{I}'}$ to be dissimilar. We note that both the first and the second term use $\mathbf{m}_{\mathbf{I}'}$ instead of $f(\mathbf{v}_{\mathbf{I}'})$ from Equation 4. Setting $\lambda = 0$ in Equation 5 leads to the loss used in [72]. We study the effect of $\lambda$ on the learned representations in Section 4.
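Putting the pieces together, a sketch of the final loss in Equation 5 follows; `nce_loss` repeats the illustrative form given after Equation 4, both terms contrast against memory-bank entries of other images, and the default λ value mirrors the setting described in Section 2.3.

```python
import torch
import torch.nn.functional as F

def nce_loss(anchor, other, negatives, tau=0.07):
    """Binary NCE of Equation 4 with a shared Equation-3 normalizer (illustrative form)."""
    pos = F.cosine_similarity(anchor, other, dim=0) / tau
    neg = F.cosine_similarity(other.unsqueeze(0), negatives, dim=1) / tau
    denom = pos.exp() + neg.exp().sum()
    return -torch.log(pos.exp() / denom) - torch.log1p(-neg.exp() / denom).sum()

def pirl_loss(m_i, f_vi, g_vit, m_negatives, lam=0.5):
    """Equation 5: lam * L_NCE(m_I, g(v_{I^t})) + (1 - lam) * L_NCE(m_I, f(v_I))."""
    invariance_term = nce_loss(m_i, g_vit, m_negatives)   # compares I with I^t
    memory_term = nce_loss(m_i, f_vi, m_negatives)        # NPID-style term (lam = 0)
    return lam * invariance_term + (1.0 - lam) * memory_term

# Toy usage with 128-dimensional features (illustrative) and 16 memory negatives.
m_i, f_vi, g_vit = (F.normalize(torch.randn(128), dim=0) for _ in range(3))
m_negs = F.normalize(torch.randn(16, 128), dim=1)
print(pirl_loss(m_i, f_vi, g_vit, m_negs))
```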

2.3 Implementation Details

Although PIRL can be used with any pretext task that involves image transformations, we focus on the Jigsaw pretext task [46] in this paper. To demonstrate that PIRL is more generally applicable, we also experiment with the Rotation pretext task [18] and with a combination of both tasks in Section 4.3. Below, we describe the implementation details of PIRL with the Jigsaw pretext task.

Convolutional network. We use a ResNet-50 (R-50) network architecture in our experiments [25]. The network is used to compute image representations for both $\mathbf{I}$ and $\mathbf{I}^t$. These representations are obtained by applying the head $f(\cdot)$ or $g(\cdot)$ to features extracted from the network.

Specifically, we compute the representation of $\mathbf{I}$, $f(\mathbf{v}_{\mathbf{I}})$, by extracting res5 features, average pooling, and applying a linear projection to obtain the final representation.

To compute the representation of a transformed image $\mathbf{I}^t$, we closely follow [46, 19]. We: (1) extract nine patches from image $\mathbf{I}$, (2) compute an image representation for each patch separately by extracting activations from the res5 layer of the ResNet-50 and average pooling the activations, (3) apply a linear projection to obtain a patch representation, and (4) concatenate the patch representations in random order and apply a second linear projection on the result to obtain the final image representation, $g(\mathbf{v}_{\mathbf{I}^t})$. Our motivation for this design of $g(\cdot)$ is the desire to remain as close as possible to the covariant pretext task of [46, 18, 19]. This allows apples-to-apples comparisons between the covariant approach and our invariant approach.
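A minimal sketch of the head $g(\cdot)$ described in steps (1)-(4) is given below; the pooled res5 dimensionality (2048 for ResNet-50) is standard, while the projection size and module name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JigsawHead(nn.Module):
    """Maps nine pooled res5 patch features to a single representation g(v_{I^t})."""

    def __init__(self, res5_dim=2048, proj_dim=128):
        super().__init__()
        self.patch_proj = nn.Linear(res5_dim, proj_dim)      # step (3): per-patch projection
        self.final_proj = nn.Linear(9 * proj_dim, proj_dim)  # step (4): second projection

    def forward(self, patch_features):
        """patch_features: (batch, 9, res5_dim) average-pooled res5 features."""
        z = self.patch_proj(patch_features)                  # (batch, 9, proj_dim)
        order = torch.randperm(9)                            # concatenate in random order
        z = z[:, order, :].flatten(start_dim=1)              # (batch, 9 * proj_dim)
        return self.final_proj(z)

# Toy usage: pooled res5 features for 2 images x 9 patches.
head = JigsawHead()
print(head(torch.randn(2, 9, 2048)).shape)  # torch.Size([2, 128])
```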

Hyperparameters. We implement the memory bank as described in [72] and use the same hyperparameters for the memory bank. Specifically, we adopt the value of the temperature $\tau$ in Equation 3 and the weight used to compute the exponential moving averages in the memory bank from [72]. Unless stated otherwise, we use $\lambda = 0.5$ in Equation 5.

3 Experiments

Following common practice in self-supervised learning [78, 19], we evaluate the performance of PIRL in transfer-learning experiments. We perform experiments on a variety of datasets, focusing on object detection and image classification tasks. Our empirical evaluations cover: (1) a learning setting in which the parameters of the convolutional network are finetuned during transfer, thus evaluating the network "initialization" obtained using self-supervised learning, and (2) a learning setting in which the parameters of the network are fixed during transfer learning, thus using the network as a feature extractor. Code reproducing the results of our experiments will be published online.

Baselines. Our most important baseline is the Jigsaw ResNet-50 model of [19]. This baseline implements the covariant counterpart of our PIRL approach with the Jigsaw pretext task.

We also compare PIRL to a range of other self-supervised methods. An important comparison is to NPID [72]. NPID is a special case of PIRL: setting $\lambda = 0$ in Equation 5 leads to the loss function of NPID. We found it is possible to improve the original implementation of NPID by using more negative samples and training for more epochs (see Section 4). We refer to our improved version of NPID as NPID++. Comparisons between PIRL and NPID++ allow us to study the effect of the pretext-invariance that PIRL aims to achieve, i.e., the effect of using $\lambda > 0$ in Equation 5.

Pre-training data. To facilitate comparisons with prior work, we use the 1.28M images from the ImageNet [58] train split (without labels) to pre-train our models.

Training details. We train our models using mini-batch SGD with the cosine learning rate decay scheme [40]. We do not use data-augmentation approaches such as Fast AutoAugment [38] because they are the result of supervised-learning approaches. We provide a full overview of the hyperparameter settings that were used, including the learning rates, the number of training epochs, the batch size, and the number of negative samples used in Equation 3, in the supplemental material.
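As a small illustration of this training setup, the sketch below wires up mini-batch SGD with PyTorch's built-in cosine annealing schedule [40]; the learning rates, momentum, and epoch count are placeholders rather than the paper's settings.

```python
import torch

model = torch.nn.Linear(128, 128)  # stand-in for the ResNet-50 trained with the PIRL loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

epochs = 100  # placeholder
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs, eta_min=1e-4  # cosine decay from lr down to eta_min
)

for epoch in range(epochs):
    # ... one epoch of mini-batch SGD on the PIRL loss would go here ...
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())
```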

Transfer learning. Prior work suggests that the hyperparameters used in transfer learning can play an important role in the evaluation of pre-trained representations [78, 19, 33]. To facilitate fair comparisons with prior work, we closely follow the transfer-learning setup described in [19, 78].

Method Network AP^all AP^50 AP^75 ΔAP^75
Supervised R-50 52.6 81.1 57.4 =0.0
Jigsaw [19] R-50 48.9 75.1 52.9 -4.5
Rotation [19] R-50 46.3 72.5 49.3 -8.1
NPID++ [72] R-50 52.3 79.1 56.9 -0.5
PIRL (ours) R-50 54.0 80.7 59.7 +2.3

CPC-Big [26] R-101 – 70.6 – –
CPC-Huge [26] R-170 – 72.1 – –
MoCo [24] R-50 55.2 81.4 61.2 –
Table 1: Object detection on VOC07+12 using Faster R-CNN. Detection AP on the VOC07 test set after finetuning Faster R-CNN models (keeping BatchNorm fixed) with a ResNet-50 backbone pre-trained using self-supervised learning on ImageNet. Results for supervised ImageNet pre-training are presented for reference. Results for CPC and MoCo are adopted from the corresponding papers; MoCo finetunes BatchNorm. PIRL significantly outperforms supervised pre-training without extra pre-training data or changes in the network architecture. Additional results in Table 6.

3.1 Object Detection

Following prior work [19, 72], we perform object-detection experiments on the Pascal VOC dataset [15] using the VOC07+12 train split. We use the Faster R-CNN [56] C4 object-detection model implemented in Detectron2 [71] with a ResNet-50 (R-50) backbone. We pre-train the ResNet-50 using PIRL to initialize the detection model before finetuning it on the VOC training data. We use the same training schedule as [19] for all models finetuned on VOC and follow [19, 71] to keep the BatchNorm parameters fixed during finetuning. We evaluate object-detection performance in terms of AP^all, AP^50, and AP^75 [39].

The results of our detection experiments are presented in Table 1. The results demonstrate the strong performance of PIRL: it outperforms all alternative self-supervised learners in terms of all three AP measures. Compared to pre-training on the Jigsaw pretext task, PIRL achieves improvements of roughly 5 to 7 AP points across the three metrics. These results underscore the importance of learning invariant (rather than covariant) image representations. PIRL also outperforms NPID++, which demonstrates the benefits of learning pretext invariance.

Interestingly, PIRL even outperforms the supervised ImageNet-pretrained model in terms of the more conservative AP^all and AP^75 metrics. Similar to concurrent work [24], we find that a self-supervised learner can outperform supervised pre-training for object detection. We emphasize that PIRL achieves this result using the same backbone model, the same number of finetuning epochs, and the exact same pre-training data (but without the labels). This is a substantial improvement over prior self-supervised approaches, which obtain slightly worse performance than fully supervised baselines despite using orders of magnitude more curated training data [19] or much larger backbone models [26]. In Table 6, we show that PIRL also outperforms supervised pre-training when finetuning is done on the much smaller VOC07 train+val set. This suggests that PIRL learns image representations that are amenable to sample-efficient supervised learning.

Method Parameters Transfer Dataset
ImageNet VOC07 Places205 iNat.
ResNet-50 using evaluation setup of [19]
Supervised 25.6M 75.9 87.5 51.5 45.4
Colorization [19] 25.6M 39.6 55.6 37.5
Rotation [18] 25.6M 48.9 63.9 41.4 23.0
NPID++ [72] 25.6M 59.0 76.6 46.4 32.4
MoCo [24] 25.6M 60.6
Jigsaw [19] 25.6M 45.7 64.5 41.2 21.3
PIRL (ours) 25.6M 63.6 81.1 49.8 34.1

Different architecture or evaluation setup
NPID [72] 25.6M 54.0 45.5
BigBiGAN [12] 25.6M 56.6
AET [76] 61M 40.6 37.1
DeepCluster [6] 61M 39.8 37.5
Rot. [33] 61M 54.0 45.5
LA [80] 25.6M  60.2  50.2
CMC [64] 51M 64.1
CPC [51] 44.5M 48.7
CPC-Huge  [26] 305M 61.0
BigBiGAN-Big [12] 86M 61.3
AMDIM [4] 670M 68.1 55.1
Table 2: Image classification with linear models. Image-classification performance on four datasets using the setup of [19]. We train linear classifiers on image representations obtained by self-supervised learners that were pre-trained on ImageNet (without labels). We report the performance for the best-performing layer for each method. We measure mean average precision (mAP) on the VOC07 dataset and top-1 accuracy on all other datasets. Numbers for PIRL, NPID++, and Rotation were obtained by us; the other numbers were adopted from their respective papers. Some of the adopted numbers were measured using 10-crop evaluation. The best-performing self-supervised learner on each dataset is boldfaced.

3.2 Image Classification with Linear Models

Next, we assess the quality of image representations by training linear classifiers on fixed image representations. We follow the evaluation setup from [19] and measure the performance of such classifiers on four image-classification datasets: ImageNet [58], VOC07 [15], Places205 [79], and iNaturalist2018 [65]. These datasets involve diverse tasks such as object classification, scene recognition, and fine-grained recognition. Following [19], we evaluate representations extracted from all intermediate layers of the pre-trained network, and report the image-classification results for the best-performing layer in Table 2.

ImageNet results. The results on ImageNet highlight the benefits of learning invariant features: PIRL improves top-1 accuracy by nearly 18% compared to its covariant counterpart, Jigsaw (63.6% vs. 45.7%; see Table 2). PIRL achieves the highest single-crop top-1 accuracy of all self-supervised learners that use a single ResNet-50 model.

The benefits of pretext invariance are further highlighted by comparing PIRL with NPID. Our re-implementation of NPID (called NPID++) substantially outperforms the results reported in [72]. Specifically, NPID++ achieves a single-crop top-1 accuracy of 59.0%, which is higher than or on par with existing work that uses a single ResNet-50. Yet, PIRL substantially outperforms NPID++. We note that PIRL also outperforms concurrent work [24] in this setting.

Akin to prior approaches, the performance of PIRL improves with network size. For example, CMC [64] uses a combination of two ResNet-50 models and trains the linear classifier for longer to obtain 64.1% accuracy. We performed an experiment in which we did the same for PIRL and obtained a further improvement in top-1 accuracy; see "PIRL-ens." in Figure 2. To compare PIRL with larger models, we also performed experiments in which we followed [33, 75] by doubling the number of channels in ResNet-50; see "PIRL-c2x" in Figure 2. PIRL-c2x achieves a top-1 accuracy close to the 68.1% obtained by AMDIM [4], even though the AMDIM model has roughly seven times more parameters (670M vs. 98M).

Altogether, the results in Figure 2 demonstrate that PIRL outperforms all prior self-supervised learners on ImageNet in terms of the trade-off between model accuracy and size. Indeed, PIRL even outperforms most self-supervised learners that use much larger models [51, 26].

Results on other datasets. The results on the other image-classification datasets in Table 2 are in line with the results on ImageNet: PIRL substantially outperforms its covariant counterpart (Jigsaw). The performance of PIRL is within 2% of fully supervised representations on Places205, and it improves the previous best results of [19] on VOC07 by more than 16 mAP points. On the challenging iNaturalist dataset, which has over 8,000 classes, we obtain a gain of 11% in top-1 accuracy over the prior best result [18]. We observe that the NPID++ baseline performs well on these three datasets but is consistently outperformed by PIRL. Indeed, PIRL sets a new state of the art for self-supervised representations in this learning setting on the VOC07, Places205, and iNaturalist datasets.

3.3 Semi-Supervised Image Classification

We perform semi-supervised image classification experiments on ImageNet following the experimental setup of [26, 72, 75]. Specifically, we randomly select 1% and 10% of the ImageNet training data (with labels). We finetune our models on these training-data subsets following the procedure of [72]. Table 3 reports the top-5 accuracy of the resulting models on the ImageNet validation set.
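A small sketch of how such label subsets can be drawn is shown below; the dataset size, seed, and function name are placeholders for illustration.

```python
import random

def labeled_subset(num_images, fraction, seed=0):
    """Return a reproducible random subset of image indices to keep labels for."""
    rng = random.Random(seed)
    return rng.sample(range(num_images), int(round(fraction * num_images)))

# ImageNet train-set size used here purely for illustration (1.28M images).
subset_1pct = labeled_subset(1_281_167, 0.01)   # roughly 13K labeled images
subset_10pct = labeled_subset(1_281_167, 0.10)
print(len(subset_1pct), len(subset_10pct))
```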

The results further highlight the quality of the image representations learned by PIRL: finetuning the models on just the 1% subset (about 13,000 labeled images) leads to a top-5 accuracy of 57.2%. PIRL performs at least as well as S4L [75] and better than VAT [20], which are both methods specifically designed for semi-supervised learning. In line with earlier results, PIRL also outperforms Jigsaw and NPID++.

Data fraction    1% 10%
Method Backbone Top-5 Accuracy
Random initialization [72] R-50 22.0 59.0
NPID [72] R-50 39.2 77.4
Jigsaw [19] R-50 45.3 79.3
NPID++ [72] R-50 52.6 81.5
VAT + Ent Min. [45, 20] R-50v2 47.0 83.4
S4L Exemplar [75] R-50v2 47.0 83.7
S4L Rotation [75] R-50v2 53.4 83.8
PIRL (ours) R-50 57.2 83.8
Colorization [36] R-152 29.8 62.0
CPC-Largest  [26] R-170 and R-11 64.0 84.9
Table 3: Semi-supervised learning on ImageNet. Single-crop top-5 accuracy on the ImageNet validation set of self-supervised models that are finetuned on 1% and 10% of the ImageNet training data, following [72]. All numbers except for Jigsaw, NPID++ and PIRL are adopted from the corresponding papers. Best performance is boldfaced.

3.4 Pre-Training on Uncurated Image Data

Most representation learning methods are sensitive to the data distribution used during pre-training [30, 62, 41, 19]. To study how much changes in the data distribution impact PIRL, we pre-train models on uncurated images from the unlabeled YFCC dataset [63]. Following [7, 19], we randomly select a subset of 1 million images (YFCC-1M) from the 100 million images in YFCC. We pre-train PIRL ResNet-50 networks on YFCC-1M using the same procedure that was used for ImageNet pre-training. We evaluate using the setup in Section 3.2 by training linear classifiers on fixed image representations.

Table 4 reports the top-1 accuracy of the resulting classifiers. In line with prior results, PIRL outperforms competing self-supervised learners. In fact, PIRL even outperforms Jigsaw and DeeperCluster models that were trained on more data from the same distribution. Comparing pre-training on ImageNet (Table 2) with pre-training on YFCC-1M (Table 4) leads to a mixed set of observations. On ImageNet classification, pre-training (without labels) on ImageNet works substantially better than pre-training on YFCC-1M. In line with prior work [30, 19], however, pre-training on YFCC-1M leads to better representations for image classification on the Places205 dataset.

Method Dataset Transfer Dataset
ImageNet VOC07 Places205 iNat.

Jigsaw [19] YFCC1M 64.0 42.1
DeepCluster [6, 7] YFCC1M 34.1 63.9 35.4
PIRL (ours) YFCC1M 57.8 78.8 51.0 29.7

Jigsaw [19] YFCC100M 48.3 71.0 44.8
DeeperCluster [7] YFCC100M 45.6 73.0 42.1
Table 4: Pre-training on uncurated YFCC images. Top-1 accuracy or mAP (for VOC07) of linear image classifiers for four image-classification tasks, using various image representations. All numbers (except those for PIRL) are adopted from the corresponding papers. Deep(er)Cluster uses VGG-16 rather than ResNet-50. The best performance on each dataset is boldfaced. Top: Representations obtained by training ResNet-50 models on a randomly selected subset of one million images. Bottom: Representations learned from about 100 million YFCC images.

4 Analysis

We performed a set of experiments aimed at better understanding the properties of PIRL. To make it feasible to train the larger number of models needed for these experiments, we train the models we study in this section for fewer epochs and with fewer negatives than in Section 3. As a result, we obtain lower absolute performances. Apart from that, we did not change the experimental setup or any of the other hyperparameters. Throughout the section, we use the evaluation setup from Section 3.2 that trains linear classifiers on fixed image representations to measure the quality of image representations.

Figure 4: Invariance of PIRL representations. Distribution of distances between unit-norm image representations, $\mathbf{v}_{\mathbf{I}}$, and unit-norm representations of the transformed image, $\mathbf{v}_{\mathbf{I}^t}$. Distance distributions are shown for PIRL and Jigsaw representations.

4.1 Analyzing PIRL Representations

Does PIRL learn invariant representations?


PIRL was designed to learn representations that are invariant to image transformations $t \in \mathcal{T}$. We analyzed whether the learned representations actually have the desired invariance properties. Specifically, we normalize the representations to have unit norm and compute distances between the (normalized) representation of an image, $\mathbf{v}_{\mathbf{I}}$, and the (normalized) representation of its transformed version, $\mathbf{v}_{\mathbf{I}^t}$. We repeat this for all transforms $t \in \mathcal{T}$ and for a large set of images, and plot histograms of the distances thus obtained in Figure 4. The figure shows that, for PIRL, an image representation and the representation of a transformed version of that image are generally similar. This suggests that PIRL has learned representations that are invariant to the transformations. By contrast, the distances between Jigsaw representations have a much larger mean and variance, which suggests that Jigsaw representations covary with the image transformations that were applied.
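The sketch below shows one way to produce such a histogram of distances between normalized representations of images and of their transformed versions; the stand-in `model`, heads, and `transform` are placeholders, not the trained PIRL network.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def distance_histogram(model, head_f, head_g, transform, images, bins=50):
    v = F.normalize(head_f(model(images)), dim=1)              # unit-norm v_I
    vt = F.normalize(head_g(model(transform(images))), dim=1)  # unit-norm v_{I^t}
    dists = (v - vt).norm(dim=1)                               # per-image l2 distance
    return torch.histc(dists, bins=bins, min=0.0, max=2.0)     # unit vectors: distance <= 2

# Toy stand-ins: a linear "network", shared heads, and an additive-noise "transform".
model = torch.nn.Linear(3 * 32 * 32, 256)
head_f = head_g = torch.nn.Linear(256, 128)
images = torch.randn(64, 3 * 32 * 32)
transform = lambda x: x + 0.1 * torch.randn_like(x)
print(distance_histogram(model, head_f, head_g, transform, images))
```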

Which layer produces the best representations?


All prior experiments used PIRL representations that were extracted from the res5 layer and Jigsaw representations that were extracted from the res4 layer (which work better for Jigsaw). Figure 5 studies the quality of representations in earlier layers of the convolutional networks. The figure reveals that the quality of Jigsaw representations improves from the conv1 to the res4 layer but that their quality sharply decreases in the res5 layer. We surmise this happens because the res5 representations in the last layer of the network covary with the image transformation and are not encouraged to contain semantic information. By contrast, PIRL representations are invariant to image transformations, which allows them to focus on modeling semantic information. As a result, the best image representations are extracted from the res5 layer of PIRL-trained networks.
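For the per-layer comparison, intermediate features can be collected with forward hooks; the sketch below does this for a torchvision ResNet-50, whose module names (layer1-layer4) stand in for the res2-res5 blocks referenced above. The mapping and pooling choices are assumptions for illustration.

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()  # torchvision >= 0.13 API
layers = {"conv1": model.conv1, "res2": model.layer1, "res3": model.layer2,
          "res4": model.layer3, "res5": model.layer4}

features = {}
def save_to(name):
    def hook(module, inputs, output):
        # Average pool spatially so every layer yields one vector per image.
        features[name] = output.mean(dim=(2, 3))
    return hook

handles = [module.register_forward_hook(save_to(name)) for name, module in layers.items()]
with torch.no_grad():
    model(torch.randn(2, 3, 224, 224))
for handle in handles:
    handle.remove()

print({name: tuple(feat.shape) for name, feat in features.items()})
```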

Figure 5: Quality of PIRL representations per layer. Top-1 accuracy of linear models trained to predict ImageNet classes based on representations extracted from various layers in ResNet-50 trained using PIRL and Jigsaw.
Figure 6: Effect of varying the trade-off parameter $\lambda$. Top-1 accuracy of linear classifiers trained to predict ImageNet classes from PIRL representations as a function of the hyperparameter $\lambda$ in Equation 5.

4.2 Analyzing the PIRL Loss Function

What is the effect of $\lambda$ in the PIRL loss function?


The PIRL loss function in Equation 5 contains a hyperparameter $\lambda$ that trades off between two NCE losses. All prior experiments were performed with $\lambda = 0.5$. NPID(++) [72] is a special case of PIRL in which $\lambda = 0$, effectively removing the pretext-invariance term from the loss. At $\lambda = 1$, the network does not compare untransformed images at training time and updates to the memory bank are not dampened.

We study the effect of $\lambda$ on the quality of PIRL representations. As before, we measure representation quality by the top-1 accuracy of linear classifiers operating on fixed ImageNet representations. Figure 6 shows the results of these experiments. The results show that the performance of PIRL is quite sensitive to the setting of $\lambda$, and that the best performance is obtained by setting $\lambda \approx 0.5$.

Figure 7: Effect of varying the number of patch permutations in $\mathcal{T}$. Performance of linear image classification models trained on the VOC07 dataset in terms of mAP. Models are initialized by PIRL and Jigsaw, varying the number of image transformations, $|\mathcal{T}|$.

What is the effect of the number of image transforms?

Both in PIRL and Jigsaw, it is possible to vary the complexity of the task by varying the number of permutations of the nine image patches that are included in the set of image transformations, $\mathcal{T}$. Prior work on Jigsaw suggests that increasing the number of possible patch permutations leads to better performance [46, 19]. However, the largest value $|\mathcal{T}|$ can take is restricted in models trained to solve the Jigsaw task, because the number of learnable parameters in the output layer grows linearly with the number of patch permutations. This problem does not apply to PIRL because it never outputs the patch permutations, and thus has a fixed number of model parameters. As a result, PIRL can use all $9! = 362{,}880$ patch permutations in $\mathcal{T}$.

We study the quality of PIRL and Jigsaw as a function of the number of patch permutations included in $\mathcal{T}$. To facilitate comparison with [19], we measure quality in terms of the performance of linear models on image classification using the VOC07 dataset, following the same setup as in Section 3.2. The results of these experiments are presented in Figure 7. The results show that PIRL outperforms Jigsaw for all cardinalities of $\mathcal{T}$, but that PIRL particularly benefits from being able to use very large numbers of image transformations (i.e., large $|\mathcal{T}|$) during training.
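The sketch below contrasts the two regimes discussed above: Jigsaw-style prediction needs a fixed, enumerated subset of permutations (one output class per permutation), whereas PIRL can sample any of the 9! permutations because it never has to predict which one was applied. The subset size K is an illustrative value.

```python
import itertools
import random

PATCHES = list(range(9))

# Covariant Jigsaw setup: a fixed list of K permutations defines a K-way
# classification problem, so K is limited by the size of the output layer.
K = 1000  # illustrative value
fixed_permutations = list(itertools.islice(itertools.permutations(PATCHES), K))
perm_id = random.randrange(K)              # the label a Jigsaw model must predict
jigsaw_perm = fixed_permutations[perm_id]

# PIRL setup: any of the 9! = 362,880 permutations is fine; no label is needed.
pirl_perm = random.sample(PATCHES, k=9)    # uniform over all permutations

print(perm_id, jigsaw_perm, pirl_perm)
```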

What is the effect of the number of negative samples?

We study the effect of the number of negative samples, $N$, on the quality of the learned image representations. We measure the accuracy of linear ImageNet classifiers on fixed representations produced by PIRL as a function of the value of $N$ used in pre-training. The results of these experiments are presented in Figure 8. They suggest that increasing the number of negatives tends to have a positive influence on the quality of the image representations constructed by PIRL.

Figure 8: Effect of varying the number of negative samples. Top-1 accuracy of linear classifiers trained to perform ImageNet classification using PIRL representations as a function of the number of negative samples, $N$.

4.3 Generalizing PIRL to Other Pretext Tasks

Although we studied PIRL in the context of Jigsaw in this paper, PIRL can be used with any set of image transformations, $\mathcal{T}$. We performed an experiment evaluating the performance of PIRL using the Rotation pretext task [18]. We define $\mathcal{T}$ to contain the four image rotations by multiples of 90° (i.e., 0°, 90°, 180°, and 270°), and measure representation quality in terms of image-classification accuracy of linear models (see the supplemental material for details).

The results of these experiments are presented in Table 5 (top). In line with earlier results, models trained using PIRL (Rotation) outperform those trained using the Rotation pretext task of [18]. The performance gains obtained from learning a rotation-invariant representation are substantial, e.g., a gain of 11.3% in top-1 accuracy on ImageNet. We also note that PIRL (Rotation) outperforms NPID++ (see Table 2). In a second set of experiments, we combined the pretext image transforms from both the Jigsaw and Rotation tasks in the set of image transformations, $\mathcal{T}$. Specifically, we obtain $\mathbf{I}^t$ by first applying a rotation and then performing a Jigsaw transformation. The results of these experiments are shown in Table 5 (bottom). The results demonstrate that combining image transforms from multiple pretext tasks can further improve image representations.
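A sketch of composing the two transformation sets as described above (rotate by a random multiple of 90 degrees, then shuffle a 3x3 grid of patches) is shown below; image and patch sizes are illustrative.

```python
import random
import torch

def rotate_90k(img):
    """Rotate a (C, H, W) image tensor by a random multiple of 90 degrees."""
    return torch.rot90(img, k=random.randrange(4), dims=(1, 2))

def jigsaw_shuffle(img, grid=3):
    """Cut a (C, H, W) image into grid x grid patches and shuffle them."""
    c, h, w = img.shape
    ph, pw = h // grid, w // grid
    patches = [img[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    random.shuffle(patches)
    return torch.stack(patches)            # (grid * grid, C, ph, pw)

def rotation_plus_jigsaw(img):
    """First apply a rotation, then the Jigsaw patch shuffle."""
    return jigsaw_shuffle(rotate_90k(img))

print(rotation_plus_jigsaw(torch.randn(3, 96, 96)).shape)  # torch.Size([9, 3, 32, 32])
```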

5 Related Work

Our study is related to prior work that tries to learn characteristics of the image distribution without considering a corresponding (image-conditional) label distribution. A variety of work has studied reconstructing images from a small, intermediate representation, e.g., using sparse coding [50], adversarial training [11, 43, 12], autoencoders [42, 55, 67], or probabilistic versions thereof [59].

More recently, interest has shifted to specifying pretext tasks [9] that require modeling a more limited set of properties of the data distribution. For video data, these pretext tasks learn representations by ordering video frames [44, 16, 37, 70, 1, 74, 32], tracking [68, 54], or using cross-modal signals like audio [53, 3, 2, 34, 17, 52].

Our work focuses on image-based pretext tasks. Prior pretext tasks include image colorization [8, 28, 35, 36, 77, 78], orientation prediction [18], affine transform prediction [76], predicting contextual image patches [9], re-ordering image patches [19, 46, 48, 5], counting visual primitives [47], or their combinations [10]. In contrast, our work learns image representations that are invariant to the image transformations rather than covariant with them.

PIRL is related to approaches that learn invariant image representations via contrastive learning [72, 68, 27, 14, 60], clustering [6, 7, 48, 69], or maximizing mutual information [29, 27, 4]. PIRL is most similar to methods that learn representations that are invariant under standard data augmentation [27, 4, 72, 29, 13, 73]. PIRL learns representations that are invariant to both the data augmentation and to the pretext image transformations.

Method Params Transfer Dataset
ImageNet VOC07 Places205 iNat.
Rotation [18] 25.6M 48.9 63.9 41.4 23.0
PIRL (Rotation; ours) 25.6M 60.2 77.1 47.6 31.2
Gain of PIRL – +11.3 +13.2 +6.2 +8.2
Combining pretext tasks using PIRL
PIRL (Jigsaw; ours) 25.6M 62.2 79.8 48.5 31.2
PIRL (Rotation + Jigsaw; ours) 25.6M 63.1 80.3 49.7 33.6
Table 5: Using PIRL with (combinations of) different pretext tasks. Top-1 accuracy / mAP of linear image classifiers trained on PIRL image representations. Top: Performance of PIRL used in combination with the Rotation pretext task [18]. Bottom: Performance of PIRL using a combination of multiple pretext tasks.

Finally, PIRL is also related to approaches that use a contrastive loss [22] in predictive learning [24, 26, 51, 64, 23, 61]. These prior approaches predict missing parts of the data, e.g., future frames in videos [51, 23], or operate on multiple views [64]. In contrast to those approaches, PIRL learns invariances rather than predicting missing data.

6 Discussion and Conclusion

We studied Pretext-Invariant Representation Learning (PIRL) for learning representations that are invariant to image transformations applied in self-supervised pretext tasks. The rationale behind PIRL is that invariance to image transformations maintains semantic information in the representation. We obtain state-of-the-art results on multiple benchmarks for self-supervised learning in image classification and object detection. PIRL even outperforms supervised ImageNet pre-training on object detection.

In this paper, we used PIRL with the Jigsaw and Rotation image transformations. In future work, we aim to extend PIRL to richer sets of transformations. We also plan to investigate combinations of PIRL with clustering-based approaches [6, 7]. Like PIRL, those approaches use inter-image statistics, but they do so in a different way. A combination of the two approaches may lead to even better image representations.

Acknowledgments:

We are grateful to Rob Fergus and Andrea Vedaldi for their encouragement and feedback on early versions of the manuscript; Yann LeCun for helpful discussions; Aaron Adcock, Naman Goyal, Priya Goyal, and Myle Ott for their help with the code development for this research; and Rohit Girdhar and Ross Girshick for feedback on the manuscript. We thank Yuxin Wu and Kaiming He for help with Detectron2.

References

Supplemental Material

Appendix A Training Architecture and Hyperparameters

Base Architecture for PIRL

PIRL models in Sections 3 and 4 of the main paper are standard ResNet-50 [25] models with 25.6 million parameters. Figure 2 and Section 3.2 also use a larger model, 'PIRL-c2x', which has double the number of channels in each 'stage' of the ResNet, e.g., 128 channels in the first convolutional layer, 4096 channels in the final res5 layer, etc., and a total of 98 million parameters. The heads $f(\cdot)$ and $g(\cdot)$ are linear layers that are used only during the pre-training stage (Figure 3). When evaluating the model using transfer learning (Sections 3 and 4 of the main paper), the heads $f(\cdot)$ and $g(\cdot)$ are removed.

Training Hyperparameters for Section 3

PIRL models in Section 3 are trained on the ImageNet training set (1.28 million images). We use a batch size of 32 images per GPU and a total of 32 GPUs to train each model. We optimize the models using mini-batch SGD with the cosine learning rate decay scheme [40], momentum, and weight decay. We use a fixed total number of negatives to compute the NCE loss (Equation 5); the negatives are sampled randomly for each data point in the batch.

Training Hyperparameters for Section 4

The hyperparameters used are exactly the same as in Section 3, except that the models are trained with fewer negatives and for fewer epochs. This results in lower absolute performance, but the main focus of Section 4 is to analyze PIRL.

Common Data Pre-processing

We use data augmentations as implemented in PyTorch and the Python Imaging Library. These data augmentations are used for all methods, including PIRL and NPID++. We do not use supervised augmentation policies like Fast AutoAugment. Our 'geometric' data pre-processing involves extracting a randomly resized crop from the image and horizontally flipping it. We follow this with 'photometric' pre-processing that alters the RGB color values by applying random color jitter, contrast, brightness, saturation, sharpness, and equalize transforms to the image.

A.1 Details for PIRL with Jigsaw

Data pre-processing

We base our Jigsaw implementation on [19, 33]. To construct $\mathbf{I}^t$, we extract a random resized crop that occupies at least 60% of the image. We resize this crop, divide it into a 3×3 grid, and extract a smaller patch from each of the nine grid cells. We apply photometric data augmentation (random color jitter, hue, contrast, saturation) to each patch independently and finally obtain the nine patches that constitute $\mathbf{I}^t$.

The image $\mathbf{I}$ is obtained by applying standard data augmentation (flip, random resized crop, color jitter, hue, contrast, saturation) to the image from the dataset.
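The sketch below assembles this pre-processing with torchvision transforms: a random resized crop covering at least 60% of the image, a 3x3 grid of patches, and independent photometric jitter per patch. The crop size, patch size, and jitter strengths are illustrative assumptions.

```python
import torch
from torchvision import transforms

crop = transforms.RandomResizedCrop(size=255, scale=(0.6, 1.0))
patch_jitter = transforms.ColorJitter(brightness=0.4, contrast=0.4,
                                      saturation=0.4, hue=0.1)

def jigsaw_preprocess(img, grid=3):
    """img: (3, H, W) float tensor in [0, 1] -> (9, 3, ph, pw) jittered, shuffled patches."""
    img = crop(img)
    _, h, w = img.shape
    ph, pw = h // grid, w // grid
    patches = [patch_jitter(img[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw])
               for i in range(grid) for j in range(grid)]
    order = torch.randperm(len(patches))
    return torch.stack([patches[i] for i in order])

print(jigsaw_preprocess(torch.rand(3, 500, 400)).shape)  # torch.Size([9, 3, 85, 85])
```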

Architecture

The nine patches of $\mathbf{I}^t$ are individually input to the network to get their res5 features, which are average pooled to obtain a single 2048-dimensional vector for each patch. We then apply a linear projection to these features to obtain a lower-dimensional vector for each patch. These patch features are concatenated and input to another single linear layer to obtain the final feature $g(\mathbf{v}_{\mathbf{I}^t})$.

We feed forward the image $\mathbf{I}$ and average pool its res5 feature to obtain a 2048-dimensional vector for the image. We then apply a single linear projection to obtain the final feature $f(\mathbf{v}_{\mathbf{I}})$.

A.2 Details for PIRL with Rotation

Data pre-processing

We base our Rotation implementation on [18, 33]. To construct $\mathbf{I}^t$, we use the standard data augmentation described earlier (geometric + photometric), followed by a random rotation from $\{0°, 90°, 180°, 270°\}$.

The image $\mathbf{I}$ is obtained by applying standard data augmentation (flip, random resized crop, color jitter, hue, contrast, saturation) to the image from the dataset.

Architecture

We feed forward the image $\mathbf{I}^t$, average pool the res5 feature, and apply a single linear layer to obtain the final feature $g(\mathbf{v}_{\mathbf{I}^t})$ for $\mathbf{I}^t$.

We feed forward the image $\mathbf{I}$ and average pool its res5 feature to obtain a 2048-dimensional vector for the image. We then apply a single linear projection to obtain the final feature $f(\mathbf{v}_{\mathbf{I}})$.

Appendix B Object Detection

B.1 VOC07 train+val set for detection

In Section 3.1 of the main paper, we presented object detection results using the VOC07+12 training set, which has 16K images. In this section, we use the smaller VOC07 train+val set (5K images) for finetuning the Faster R-CNN C4 detection models. All models are finetuned using the Detectron2 [71] codebase and hyperparameters from [19]. We report the detection AP in Table 6. We see that the PIRL model outperforms the ImageNet supervised pretrained models on the stricter AP^all and AP^75 metrics without extra pretraining data or changes to the network architecture.

Detection hyperparameters: We use a batch size of 2 images per GPU and a total of 8 GPUs, and finetune the models with a step schedule in which the learning rate is dropped once during training. The supervised and Jigsaw baseline models use a different initial learning rate and linear warmup schedule than the NPID++ and PIRL models. We keep the BatchNorm parameters of all models fixed during the detection finetuning stage.

Method Network AP^all AP^50 AP^75 ΔAP^75
Supervised R-50 43.8 74.5 45.9 =0.0
Jigsaw [19] R-50 37.7 64.9 38.0 -7.9
PIRL (ours) R-50 44.7 73.4 47.0 +1.1
NPID [72] R-50 – 65.4 – –
LA [80] R-50 – 69.1 – –
Multi-task [10] R-101 – 70.5 – –
Table 6: Object detection on VOC07 using Faster R-CNN. Detection AP on the VOC07 test set after finetuning Faster R-CNN models with a ResNet-50 backbone pre-trained using self-supervised learning on ImageNet. Results for supervised ImageNet pre-training are presented for reference. All methods are finetuned using the images from the VOC07 train+val set. Numbers for NPID, LA, and Multi-task are adopted from the corresponding papers. We see that PIRL outperforms supervised pre-training without extra pre-training data or changes in the network architecture.

B.2 VOC07+12 train set for detection

In Table 1, we use the VOC07+12 training split and the VOC07 test set as used in prior work [51, 19].

Detection hyperparameters: We use a batch size of 2 images per GPU and a total of 8 GPUs, and finetune the models with a step schedule in which the learning rate is dropped once during training. The supervised and Jigsaw baseline models use a different initial learning rate and linear warmup schedule than the NPID++ and PIRL models. We keep the BatchNorm parameters of all models fixed during the detection finetuning stage.

Appendix C Linear Models for Transfer

We train linear models on representations from the intermediate layers of a ResNet-50 model following the procedure outlined in [19]; we briefly outline the hyperparameters from their work. The features from each of the layers are average pooled so that their dimensionality is roughly comparable across layers. The linear model is trained with mini-batch SGD with momentum and weight decay, and the learning rate is decayed at two equally spaced intervals. We train the linear models on ImageNet (1.28M training images), Places205 (2.4M training images), and iNaturalist-2018 (437K training images), choosing the number of epochs per dataset such that the number of parameter updates is roughly constant across all these datasets. We report the center-crop top-1 accuracy for the ImageNet, Places205, and iNaturalist-2018 datasets in Table 2.
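A minimal sketch of this linear-evaluation protocol is given below: freeze the pre-trained backbone, extract pooled features, and train only a linear classifier with SGD. The backbone stand-in, feature dimensionality, and hyperparameter values are placeholders.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def extract_features(backbone, images):
    backbone.eval()
    return backbone(images)  # pooled features from the chosen layer

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2048))  # stand-in network
classifier = nn.Linear(2048, 1000)  # e.g. 1000 ImageNet classes

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative step on random data; in practice this loops over the dataset
# with the learning rate decayed at two equally spaced intervals.
images = torch.randn(32, 3, 32, 32)
labels = torch.randint(0, 1000, (32,))
loss = criterion(classifier(extract_features(backbone, images)), labels)
loss.backward()
optimizer.step()
print(float(loss))
```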

For training linear models on VOC07, we train linear SVMs following the procedure in [19] and report mean average Precision (mAP).

ImageNet Places205
Method conv1 res2 res3 res4 res5 conv1 res2 res3 res4 res5
Random 9.6 13.7 12.0 8.0 5.6 12.9 16.6 15.5 11.6 9.0
ImageNet Supervised 11.6 33.3 48.7 67.9 75.5 14.8 32.6 42.1 50.8 51.5
Places205 Supervised 13.2 31.7 46.0 58.2 51.7 16.7 32.3 43.2 54.7 62.3

Jigsaw [19] 12.4 28.0 39.9 45.7 34.2 15.1 28.8 36.8 41.2 34.4
Coloriz. [19] 10.2 24.1 31.4 39.6 35.2 18.1 28.4 30.2 31.3 30.4
NPID++ [72] 6.6 27.4 38.5 53.1 59.0 9.7 26.2 34.9 43.7 46.4
PIRL (Ours) 6.2 30.0 40.1 56.6 63.6 8.9 29.0 35.8 45.3 49.8

Different Evaluation Setup
Kolesnikov et al. [33] 47.7
NPID [72] 15.3 18.8 24.9 40.6 54.0 18.1 22.3 29.7 42.1 45.5

Table 7: ResNet-50 linear evaluation on ImageNet and Places205 datasets: We report the performance of all the layers following the evaluation protocol in [19]. All numbers except NPID++ and Ours are from their respective papers.

Per-layer results

The results of all these models are in Table 7.