Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

06/17/2020 · Mathilde Caron, et al.

Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large number of explicit pairwise feature comparisons, which is computationally challenging. In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring the computation of pairwise comparisons. Specifically, our method simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning. Simply put, we use a swapped prediction mechanism where we predict the code of a view from the representation of another view. Our method can be trained with large and small batches and can scale to unlimited amounts of data. Compared to previous contrastive methods, our method is more memory efficient since it does not require a large memory bank or a special momentum network. In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much. We validate our findings by achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised pretraining on all the considered transfer tasks.


1 Introduction

Unsupervised visual representation learning, or self-supervised learning, aims at obtaining features without using manual annotations and is rapidly closing the performance gap with supervised pretraining in computer vision Chen et al. (2020a); He et al. (2019a); Misra and van der Maaten (2019). Many recent state-of-the-art methods build upon the instance discrimination task that considers each image of the dataset (or “instance”) and its transformations as a separate class Dosovitskiy et al. (2016). This task yields representations that are able to discriminate between different images, while achieving some invariance to image transformations. Recent self-supervised methods that use instance discrimination rely on a combination of two elements: (i) a contrastive loss Hadsell et al. (2006) and (ii) a set of image transformations. The contrastive loss removes the notion of instance classes by directly comparing image features, while the image transformations define the invariances encoded in the features. Both elements are essential to the quality of the resulting networks Misra and van der Maaten (2019); Chen et al. (2020a), and our work improves upon both the objective function and the transformations.

The contrastive loss explicitly compares pairs of image representations to push away representations from different images while pulling together those from transformations, or views, of the same image. Since computing all the pairwise comparisons on a large dataset is not practical, most implementations approximate the loss by reducing the number of comparisons to random subsets of images during training Chen et al. (2020a); He et al. (2019a); Wu et al. (2018). An alternative to approximating the loss is to approximate the task—that is, to relax the instance discrimination problem. For example, clustering-based methods discriminate between groups of images with similar features instead of individual images Caron et al. (2018). The objective in clustering is tractable, but it does not scale well with the dataset, as it requires a pass over the entire dataset to form image “codes” (i.e., cluster assignments) that are used as targets during training. In this work, we use a different paradigm and propose to compute the codes online while enforcing consistency between codes obtained from views of the same image. Comparing cluster assignments allows us to contrast different image views while not relying on explicit pairwise feature comparisons. Specifically, we propose a simple “swapped” prediction problem where we predict the code of a view from the representation of another view. We learn features by Swapping Assignments between multiple Views of the same image (SwAV). The features and the codes are learned online, allowing our method to scale to potentially unlimited amounts of data. In addition, SwAV works with small and large batch sizes and does not need a large memory bank Wu et al. (2018) or a momentum encoder He et al. (2019a).

Besides our online clustering-based method, we also propose an improvement to the image transformations. Most contrastive methods compare one pair of transformations per image, even though there is evidence that comparing more views during training improves the resulting model Misra and van der Maaten (2019). In this work, we propose multi-crop, a strategy that uses smaller-sized images to increase the number of views while not increasing the memory or computational requirements during training. We also observe that mapping small parts of a scene to more global views significantly boosts the performance. Directly working with downsized images introduces a bias in the features Touvron et al. (2019), which can be avoided by using a mix of different sizes. Our strategy is simple, yet effective, and can be applied to many self-supervised methods with a consistent gain in performance.

We validate our contributions by evaluating our method on several standard self-supervised benchmarks. In particular, on the ImageNet linear evaluation protocol, we reach 75.3% top-1 accuracy with a standard ResNet-50, and 78.5% with a wider model. We also show that our multi-crop strategy is general and improves the performance of different self-supervised methods, namely SimCLR Chen et al. (2020a), DeepCluster Caron et al. (2018), and SeLa Asano et al. (2020), by several points of top-1 accuracy on ImageNet. Overall, we make the following contributions:


  • We propose a scalable online clustering loss that improves performance on ImageNet and works in both large and small batch settings without a large memory bank or a momentum encoder.

  • We introduce the multi-crop strategy to increase the number of views of an image with no computational or memory overhead. We observe a consistent improvement on ImageNet with this strategy across several self-supervised methods.

  • Combining both technical contributions into a single model, we improve the performance of self-supervised learning by +4.2% top-1 accuracy on ImageNet with a standard ResNet-50 and outperform supervised ImageNet pretraining on multiple downstream tasks. This is the first method to do so without finetuning the features, i.e., only with a linear classifier on top of frozen features.

2 Related Work

Instance and contrastive learning.

Instance-level classification considers each image in a dataset as its own class Bojanowski and Joulin (2017); Dosovitskiy et al. (2016); Wu et al. (2018). Dosovitskiy et al. (2016) assign a class explicitly to each image and learn a linear classifier with as many classes as images in the dataset. As this approach becomes quickly intractable, Wu et al. (2018) mitigate this issue by replacing the classifier with a memory bank that stores previously-computed representations. They rely on noise contrastive estimation Gutmann and Hyvärinen (2010) to compare instances, which is a special form of contrastive learning Hjelm et al. (2019); Oord et al. (2018). He et al. (2019a) improve the training of contrastive methods by storing representations from a momentum encoder instead of the trained network. More recently, Chen et al. (2020a) show that the memory bank can be entirely replaced with the elements from the same batch if the batch is large enough. In contrast to this line of work, we avoid comparing every pair of images by mapping the image features to a set of trainable prototype vectors.

Clustering for deep representation learning.

Our work is also related to clustering-based methods Asano et al. (2020); Bautista et al. (2016); Caron et al. (2018, 2019); Huang et al. (2019); Xie et al. (2016); Yang et al. (2016); Zhuang et al. (2019); Gidaris et al. (2020); Yan et al. (2020). Caron et al. (2018) show that k-means assignments can be used as pseudo-labels to learn visual representations. This method scales to large uncurated datasets and can be used for pre-training of supervised networks Caron et al. (2019). However, their formulation is not principled and, recently, Asano et al. (2020) showed how to cast the pseudo-label assignment problem as an instance of the optimal transport problem. We consider a similar formulation to map representations to prototype vectors, but unlike Asano et al. (2020) we keep the soft assignment produced by the Sinkhorn-Knopp algorithm Cuturi (2013) instead of approximating it into a hard assignment. Besides, unlike Caron et al. (2018, 2019) and Asano et al. (2020), we obtain online assignments, which allows our method to scale gracefully to any dataset size.

Handcrafted pretext tasks.

Many self-supervised methods manipulate the input data to extract a supervised signal in the form of a pretext task Doersch et al. (2015); Agrawal et al. (2015); Jenni and Favaro (2018); Kim et al. (2018); Larsson et al. (2016); Mahendran et al. (2018); Misra et al. (2016); Pathak et al. (2017, 2016); Wang and Gupta (2015); Wang et al. (2017); Zhang et al. (2017). We refer the reader to Jing and Tian (2019) for an exhaustive and detailed review of this literature. Of particular interest, Misra and van der Maaten (2019) propose to encode the jigsaw puzzle task Noroozi and Favaro (2016) as an invariant for contrastive learning. Jigsaw tiles are non-overlapping crops with small resolution that cover only part of the entire image area. In contrast, our multi-crop strategy consists of simply sampling multiple random crops with two different sizes: a standard size and a smaller one.

3 Method

Figure 1: Contrastive instance learning (left) vs. SwAV (right). In contrastive learning methods applied to instance classification, the features from different transformations of the same images are compared directly to each other. In SwAV, we first obtain “codes” by assigning features to prototype vectors. We then solve a “swapped” prediction problem wherein the codes obtained from one data augmented view are predicted using the other view. Thus, SwAV does not directly compare image features. Prototype vectors are learned along with the ConvNet parameters by backpropagation.

Our goal is to learn visual features in an online fashion without supervision. To that end, we propose an online clustering-based self-supervised method. Typical clustering-based methods Asano et al. (2020); Caron et al. (2018) are offline in the sense that they alternate between a cluster assignment step, where image features of the entire dataset are clustered, and a training step, where the cluster assignments (or “codes”) are predicted for different image views. Unfortunately, these methods are not suitable for online learning as they require multiple passes over the dataset to compute the image features necessary for clustering. In this section, we describe an alternative where we enforce consistency between codes from different augmentations of the same image. This solution is inspired by contrastive instance learning Wu et al. (2018), as we do not consider the codes as a target, but only enforce a consistent mapping between views of the same image. Our method can be interpreted as a way of contrasting between multiple image views by comparing their cluster assignments instead of their features.

More precisely, we compute a code from an augmented version of the image and predict this code from other augmented versions of the same image. Given two image features z_t and z_s from two different augmentations of the same image, we compute their codes q_t and q_s by matching these features to a set of K prototypes {c_1, …, c_K}. We then set up a “swapped” prediction problem with the following loss function:

L(z_t, z_s) = ℓ(z_t, q_s) + ℓ(z_s, q_t),     (1)

where the function ℓ(z, q) measures the fit between a feature z and a code q, as detailed later. Intuitively, our method compares the features z_t and z_s using the intermediate codes q_t and q_s. If these two features capture the same information, it should be possible to predict the code from the other feature. A similar comparison appears in contrastive learning where features are compared directly Wu et al. (2018). In fig. 1, we illustrate the relation between contrastive learning and our method.

3.1 Online clustering

Each image x_n is transformed into an augmented view x_nt by applying a transformation t sampled from the set T of image transformations. The augmented view is mapped to a vector representation by applying a non-linear mapping f_θ to x_nt. The feature is then projected to the unit sphere, i.e., z_nt = f_θ(x_nt) / ‖f_θ(x_nt)‖₂. We then compute a code q_nt from this feature by mapping z_nt to the set of K trainable prototype vectors, {c_1, …, c_K}. We denote by C the matrix whose columns are the c_1, …, c_K. We now describe how to compute these codes and update the prototypes online.

Swapped prediction problem.

The loss function in Eq. (1) has two terms that set up the “swapped” prediction problem of predicting the code q_t from the feature z_s, and q_s from z_t. Each term represents the cross entropy loss between the code and the probability obtained by taking a softmax of the dot products of z and all prototypes in C, i.e.,

ℓ(z_t, q_s) = − Σ_k q_s^(k) log p_t^(k),   where   p_t^(k) = exp(z_t⊤ c_k / τ) / Σ_k′ exp(z_t⊤ c_k′ / τ),     (2)

where τ is a temperature parameter Wu et al. (2018). Taking this loss over all the images and all pairs of data augmentations yields the overall training objective for the swapped prediction problem. This loss function is jointly minimized with respect to the prototypes C and the parameters θ of the image encoder f_θ used to produce the features z_nt.
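To make the swapped objective concrete, below is a minimal PyTorch-style sketch of Eq. (1)–(2) for one pair of views. It assumes ℓ2-normalized features z_t, z_s of shape (B, D), soft codes q_t, q_s of shape (B, K) coming from the assignment step described next, and a prototype matrix C of shape (D, K); the temperature value and all names are illustrative, not taken verbatim from our released code.

import torch
import torch.nn.functional as F

def swapped_prediction_loss(z_t, z_s, q_t, q_s, C, temp=0.1):
    # p^(k): softmax over the dot products between a feature and all prototypes (Eq. 2)
    log_p_t = F.log_softmax(z_t @ C / temp, dim=1)  # (B, K)
    log_p_s = F.log_softmax(z_s @ C / temp, dim=1)  # (B, K)
    # l(z_t, q_s) + l(z_s, q_t): cross entropy between codes and predictions (Eq. 1)
    loss_t = -(q_s * log_p_t).sum(dim=1).mean()
    loss_s = -(q_t * log_p_s).sum(dim=1).mean()
    return loss_t + loss_s

The codes q_t and q_s act as targets here; how they are computed online is described below.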

Computing codes online.

In order to make our method online, we compute the codes using only the image features within a batch. We compute codes using the prototypes C such that all the examples in a batch are equally partitioned by the prototypes. This equipartition constraint ensures that the codes for different images in a batch are distinct, thus preventing the trivial solution where every image has the same code. Given B feature vectors Z = [z_1, …, z_B], we are interested in mapping them to the prototypes C = [c_1, …, c_K]. We denote this mapping or codes by Q = [q_1, …, q_B], and optimize Q to maximize the similarity between the features and the prototypes, i.e.,

max_{Q ∈ 𝒬} Tr(Q⊤ C⊤ Z) + ε H(Q),     (3)

where H(Q) = − Σ_ij Q_ij log Q_ij is the entropy function, and ε is a parameter that controls the smoothness of the mapping. Asano et al. (2020) enforce an equal partition by constraining the matrix Q to belong to the transportation polytope. They work on the full dataset, and we propose to adapt their solution to work on minibatches by restricting the transportation polytope to the minibatch:

𝒬 = { Q ∈ ℝ₊^{K×B} | Q 1_B = (1/K) 1_K,  Q⊤ 1_K = (1/B) 1_B },     (4)

where 1_K denotes the vector of ones in dimension K. These constraints enforce that on average each prototype is selected at least B/K times in the batch.

Once a continuous solution Q* to Prob. (3) is found, a discrete code can be obtained by using a rounding procedure Asano et al. (2020). Empirically, we found that discrete codes work well when computing codes in an offline manner on the full dataset as in Asano et al. (2020). However, in the online setting where we use only minibatches, using the discrete codes performs worse than using the continuous codes. An explanation is that the rounding needed to obtain discrete codes is a more aggressive optimization step than gradient updates. While it makes the model converge rapidly, it leads to a worse solution. We thus preserve the soft code instead of rounding it. These soft codes Q* are the solution of Prob. (3) over the set 𝒬 and take the form of a normalized exponential matrix Cuturi (2013):

Q* = Diag(u) exp(C⊤ Z / ε) Diag(v),     (5)

where u and v are renormalization vectors in ℝ^K and ℝ^B respectively. The renormalization vectors are computed using a small number of matrix multiplications with the iterative Sinkhorn-Knopp algorithm Cuturi (2013). In practice, we observe that using only 3 iterations is fast and sufficient to obtain good performance. Indeed, this algorithm can be efficiently implemented on GPU, and the alignment of the batch features to the prototype codes takes only a few milliseconds in our experiments, see § 4.
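A compact sketch of this online code computation is given below; it mirrors the sinkhorn pseudo-code in Appendix A.1 (the value of eps and the 3 iterations follow that pseudo-code) and returns one soft code per sample, normalized to sum to 1.

import torch

@torch.no_grad()
def compute_codes(scores, eps=0.05, niters=3):
    # scores: prototype scores C^T z for a batch, shape (B, K)
    Q = torch.exp(scores / eps).T          # (K, B), the exp(C^T Z / eps) term of Eq. (5)
    Q /= Q.sum()                           # start from a joint probability matrix
    K, B = Q.shape
    for _ in range(niters):                # alternate row / column renormalization (u and v)
        Q /= Q.sum(dim=1, keepdim=True)    # each prototype receives 1/K of the mass
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)    # each sample contributes 1/B of the mass
        Q /= B
    return (Q * B).T                       # (B, K): soft code per sample, rows sum to 1

The resulting q_t and q_s can be plugged directly into the swapped loss sketched above.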

Working with small batches.

When the number B of batch features is too small compared to the number of prototypes K, it is impossible to equally partition the batch into the K prototypes. Therefore, when working with small batches, we use features from the previous batches to augment the size of Z in Prob. (3). Then, we only use the codes of the batch features in our training loss. In practice, we store around 3K features, i.e., in the same range as the number of code vectors. This means that we only keep features from the last 15 batches with a batch size of 256, while contrastive methods typically need to store the last 65K instances obtained from the last 250 batches He et al. (2019a).
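A hedged sketch of this feature queue is shown below: past features are concatenated to the current batch only inside the assignment step, and only the codes of the current batch enter the loss. The queue length and feature dimension are illustrative placeholders, and compute_codes refers to the sketch above.

import torch

class FeatureQueue:
    # FIFO buffer of past (detached) features, used only when computing codes
    def __init__(self, length=3840, dim=128):
        self.bank = torch.zeros(length, dim)

    @torch.no_grad()
    def enqueue(self, z):
        b = z.size(0)
        self.bank = torch.roll(self.bank, shifts=b, dims=0)
        self.bank[:b] = z.detach()

# inside a training step (illustrative; in practice the queue is only used once
# it has been filled, see § A.6):
# scores_all = torch.cat([z_t, queue.bank]) @ C     # batch + queued features
# q_all = compute_codes(scores_all)                 # equipartition over the larger set
# q_t = q_all[: z_t.size(0)]                        # keep only codes of the current batch
# queue.enqueue(z_t)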

3.2 Multi-crop: Augmenting views with smaller images

As noted in prior works Chen et al. (2020a); Misra and van der Maaten (2019), comparing random crops of an image plays a central role by capturing information in terms of relations between parts of a scene or an object. Unfortunately, increasing the number of crops or “views” quadratically increases the memory and compute requirements. We propose a multi-crop strategy where we use two standard resolution crops and sample V additional low resolution crops that cover only small parts of the image. Using low resolution images ensures only a small increase in the compute cost. Specifically, we generalize the loss of Eq. (1):

L(z_{t_1}, z_{t_2}, …, z_{t_{V+2}}) = Σ_{i∈{1,2}} Σ_{v≠i} ℓ(z_{t_v}, q_{t_i}).     (6)

Note that we compute codes using only the two full resolution crops. Indeed, computing codes for all crops increases the computational time, and we observe in practice that it also alters the transfer performance of the resulting network. An explanation is that using only partial information (small crops cover only a small area of the image) degrades the assignment quality. Figure 3 shows that multi-crop improves the performance of several self-supervised methods and is a promising augmentation strategy.
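Below is a sketch of the multi-crop loss of Eq. (6), reusing compute_codes from the sketch in § 3.1. It assumes the first two entries of the feature list are the full-resolution views; averaging the terms instead of summing them is a normalization choice made for this sketch, not a detail taken from the text above.

import torch
import torch.nn.functional as F

def multicrop_swav_loss(features, C, temp=0.1, eps=0.05):
    # features: list of V+2 tensors of shape (B, D); features[0], features[1] are full-resolution views
    scores = [z @ C for z in features]                # prototype scores for every crop
    with torch.no_grad():
        codes = {i: compute_codes(scores[i], eps) for i in (0, 1)}  # codes from full-res crops only
    loss, n_terms = 0.0, 0
    for i in (0, 1):                                  # view providing the code
        for v, s in enumerate(scores):                # views predicting that code
            if v == i:
                continue
            loss = loss - (codes[i] * F.log_softmax(s / temp, dim=1)).sum(dim=1).mean()
            n_terms += 1
    return loss / n_terms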

Method Arch. Param. Top1
Supervised R50 24 76.5
Colorization Zhang et al. (2016) R50 24 39.6
Jigsaw Noroozi and Favaro (2016) R50 24 45.7
NPID Wu et al. (2018) R50 24 54.0
BigBiGAN Donahue and Simonyan (2019) R50 24 56.6
LA Zhuang et al. (2019) R50 24 58.8
NPID++ Misra and van der Maaten (2019) R50 24 59.0
MoCo He et al. (2019a) R50 24 60.6
SeLa Asano et al. (2020) R50 24 61.5
PIRL Misra and van der Maaten (2019) R50 24 63.6
CPC v2 Hénaff et al. (2019) R50 24 63.8
PCL Li et al. (2020) R50 24 65.9
SimCLR Chen et al. (2020a) R50 24 70.0
MoCov2 Chen et al. (2020b) R50 24 71.1
SwAV R50 24 75.3
Figure 2: Linear classification on ImageNet. Top-1 accuracy for linear models trained on frozen features from different self-supervised methods. (left) Performance with a standard ResNet-50. (right) Performance as we multiply the width of a ResNet-50 by factors of 2, 4, and 5.

4 Main Results

We analyze the features learned by SwAV by transfer learning on multiple datasets. We implement in SwAV the improvements used in SimCLR, i.e., LARS You et al. (2017), cosine learning rate Loshchilov and Hutter (2016); Misra and van der Maaten (2019) and the MLP projection head Chen et al. (2020a). We provide the full details and hyperparameters for pretraining and transfer learning in the Appendix.

4.1 Evaluating the unsupervised features on ImageNet

We evaluate the features of a ResNet-50 He et al. (2016) trained with SwAV on ImageNet with two experiments: linear classification on frozen features and semi-supervised learning by finetuning with few labels. When using frozen features (fig. 2 left), SwAV outperforms the state of the art by +4.2% top-1 accuracy and is only 1.2% below the performance of a fully supervised model. Note that we train SwAV for 800 epochs with large batches (4096). We refer to fig. 3 for results with shorter trainings and to table 3 for experiments with small batches. On semi-supervised learning (table 1), SwAV outperforms other self-supervised methods and is on par with state-of-the-art semi-supervised models Sohn et al. (2020), despite the fact that SwAV is not specifically designed for semi-supervised learning.

1% labels 10% labels
Method Top-1 Top-5 Top-1 Top-5
Supervised 25.4 48.4 56.4 80.4
Methods using label-propagation:
UDA Xie et al. (2020) - - *68.8* *88.5*
FixMatch Sohn et al. (2020) - - *71.5* *89.1*
Methods using self-supervision only:
PIRL Misra and van der Maaten (2019) 30.7 57.2 60.4 83.8
PCL Li et al. (2020) - 75.6 - 86.2
SimCLR Chen et al. (2020a) 48.3 75.5 65.6 87.8
SwAV 53.9 78.5 70.2 89.9
Table 1: Semi-supervised learning on ImageNet with a ResNet-50. We finetune the model with 1% and 10% of the labels and report top-1 and top-5 accuracies. *: uses RandAugment Cubuk et al. (2019).

Variants of ResNet-50. Figure 2 (right) shows the performance of multiple variants of ResNet-50 with different widths Kolesnikov et al. (2019). The performance of our model increases with the width of the model and follows a trend similar to the one obtained with supervised learning. When compared with concurrent work like SimCLR, we see that SwAV reduces the difference with supervised models even further. Indeed, for large architectures, our method further shrinks the gap with supervised training.

Linear Classification Object Detection
Places205 VOC07 iNat18 VOC07+12 (Faster R-CNN) COCO (DETR)
Supervised 53.2 87.5 46.7 81.3 40.8
SwAV 56.7 88.9 48.6 82.6 42.1
Table 2: Transfer learning on downstream tasks. Comparison between features from ResNet-50 trained on ImageNet with SwAV or supervised learning. We consider two settings. (1) Linear classification on top of frozen features. We report top-1 accuracy on all datasets except VOC07 where we report mAP. (2) Object detection with finetuned features on VOC07+12 trainval using Faster R-CNN Ren et al. (2015) and on COCO Lin et al. (2014) using DETR Carion et al. (2020). We report the most standard detection metrics for these datasets: AP50 on VOC07+12 and AP on COCO.

4.2 Transferring unsupervised features to downstream tasks

We test the generalization of ResNet-50 features trained with SwAV on ImageNet (without labels) by transferring to several downstream vision tasks. In table 2, we compare the performance of SwAV features with ImageNet supervised pretraining. First, we report the linear classification performance on the Places205 Zhou et al. (2014), VOC07 Everingham et al. (2010), and iNaturalist2018 Van Horn et al. (2018) datasets. Our method outperforms supervised features on all three datasets. Note that SwAV is the first self-supervised method to surpass ImageNet supervised features on these datasets. Second, we report network finetuning on object detection on VOC07+12 using Faster R-CNN Ren et al. (2015) and on COCO Lin et al. (2014) with DETR Carion et al. (2020). DETR is a recent object detection framework that reaches competitive performance with Faster R-CNN while being conceptually simpler and trainable end-to-end. We use DETR because, unlike with Faster R-CNN He et al. (2019b), using a pretrained backbone in this framework is crucial to obtain good results compared to training from scratch Carion et al. (2020). In table 2, we show that SwAV outperforms the supervised pretrained model on both the VOC07+12 and COCO datasets. Note that this is in line with previous works that also show that self-supervision can outperform supervised pretraining on object detection Misra and van der Maaten (2019); He et al. (2019a); Gidaris et al. (2020). We report more detection evaluation metrics and results from other self-supervised methods in the Appendix. Overall, our SwAV ResNet-50 model surpasses supervised ImageNet pretraining on all the considered transfer tasks and datasets. We will release this model so other researchers might also benefit by replacing the ImageNet supervised network with our model.

4.3 Training with small batches

We train SwAV with small batches of 256 images on 4 GPUs and compare with MoCov2 and SimCLR trained in the same setup. In table 3, we see that SwAV maintains state-of-the-art performance even when trained in the small batch setting. Note that SwAV only stores a small queue of around 3K features. In comparison, to obtain good performance, MoCov2 needs to store 65K features while keeping an additional momentum encoder network. When SwAV is trained with multi-crop, its running time is higher than that of SimCLR with 2 crops and it is somewhat slower than MoCov2, due to the additional back-propagation on the extra crops Chen et al. (2020b). However, as shown in table 3, SwAV learns much faster and reaches higher performance in fewer epochs than MoCov2. Increasing the resolution and the number of epochs further improves SwAV, with a small number of stored features and no momentum encoder.

Method Mom. Encoder Stored Features multi-crop epoch batch Top-1
SimCLR
MoCov2
MoCov2
SwAV +
SwAV +
SwAV +
Table 3: Training in the small batch setting. Top-1 accuracy on ImageNet with a linear classifier trained on top of frozen features from a ResNet-50. All methods are trained with a batch size of 256. We also report the number of stored features, the type of cropping used, and the number of epochs.

5 Ablation Study

Applying the multi-crop strategy to different methods.

In fig. 3 (left), we report the impact of applying our multi-crop strategy on the performance of a selection of other methods. Besides SwAV, we consider supervised learning, SimCLR and two clustering-based models, DeepCluster-v2 and SeLa-v2. The last two are obtained by applying the improvements of SimCLR to DeepCluster Caron et al. (2018) and SeLa Asano et al. (2020) (see details in the Appendix). We see that the multi-crop strategy consistently improves the performance of all the considered self-supervised methods by a significant margin of top-1 accuracy. Interestingly, multi-crop seems to benefit clustering-based methods more than contrastive methods. We note that multi-crop does not improve the supervised model.

Figure 3 (left) also allows us to compare clustering-based and contrastive instance methods. First, we observe that SwAV and DeepCluster-v2 outperform SimCLR both with and without multi-crop. This suggests the learning potential of clustering-based methods over instance classification. Finally, we see that SwAV performs on par with offline clustering-based approaches that use the entire dataset to learn prototypes and codes.

Figure 3 (left table): Top-1 accuracy of Supervised, SimCLR, SeLa-v2, DeepCluster-v2 and SwAV models trained with 2x224 crops vs. 2x160+4x96 multi-crop.
Figure 3: Top-1 accuracy on ImageNet with a linear classifier trained on top of frozen features from a ResNet-50. (left) Impact of multi-crop and comparison between clustering-based and contrastive instance methods; all self-supervised methods are trained with the same schedule for a fair comparison. (right) Performance as a function of training epochs. We compare SwAV models trained with different numbers of epochs and report their running time based on our implementation.
Impact of longer training.

In fig. 3 (right), we show the impact of the number of training epochs on performance for SwAV with multi-crop. We train separate models for 100, 200, 400, and 800 epochs and report the top-1 accuracy on ImageNet using the linear classification evaluation. We train each ResNet-50 on 64 V100 16GB GPUs with a batch size of 4096. While SwAV benefits from longer training, it already achieves strong performance after 100 epochs, i.e., in roughly 6h15.

Unsupervised pretraining on a large uncurated dataset.

We test if SwAV can serve as a pretraining method for supervised learning and also check its robustness on uncurated pretraining data. We pretrain SwAV on an uncurated dataset of 1 billion random public non-EU images from Instagram. In fig. 4 (left), we measure the performance of ResNet-50 models when transferring to ImageNet with frozen or finetuned features. We report the results from He et al. (2019a) but note that their setting is different: they use a curated set of Instagram images, filtered by hashtags similar to ImageNet labels Mahajan et al. (2018). We compare SwAV with a randomly initialized network and with a network pretrained on the same data using SimCLR. We observe that SwAV maintains a similar gain over SimCLR as when pretrained on ImageNet (fig. 2), showing that our improvements do not depend on the data distribution. We also see that pretraining with SwAV on random images significantly improves over training from scratch on ImageNet Caron et al. (2019); He et al. (2019a). In fig. 4 (right), we explore the limits of pretraining as we increase the model capacity. We consider the variants of the ResNeXt architecture Xie et al. (2017) as in Mahajan et al. (2018). We compare SwAV with supervised models trained from scratch on ImageNet. For all models, SwAV outperforms training from scratch by a significant margin, showing that it can take advantage of the increased model capacity. For reference, we also include the results from Mahajan et al. (2018) obtained with a weakly-supervised model pretrained by predicting hashtags filtered to be similar to ImageNet classes. Interestingly, SwAV performance is strong when compared to this topline despite not using any form of supervision or filtering of the data.

Method Frozen Finetuned
Random 15.0 76.5
MoCo - *77.3*
SimCLR 60.4 77.2
SwAV 66.5 77.8
Figure 4: Pretraining on uncurated data. Top-1 accuracy on ImageNet for pretrained models on an uncurated set of 1B random Instagram images. (left) We compare ResNet-50 pretrained with either SimCLR or SwAV on two downstream tasks: linear classification on frozen features or finetuned features. (right) Performance of finetuned models as we increase the capacity of a ResNext following Mahajan et al. (2018). The capacity is provided in billions of Mult-Add operations.
*: pretrained on a curated set of 1B Instagram images filtered with hashtags similar to ImageNet classes.

6 Discussion

Self-supervised learning is progressing rapidly compared to supervised learning, even surpassing it on transfer learning, despite the fact that the current experimental settings were designed for supervised learning. In particular, architectures have been designed for supervised tasks, and it is not clear whether the same models would emerge from exploring architectures with no supervision. Several recent works have shown that exploring architectures with search Liu et al. (2020) or pruning Caron et al. (2020) is possible without supervision, and we plan to evaluate the ability of our method to guide such model explorations.

Acknowledgement.

We thank Nicolas Carion, Kaiming He, Herve Jegou, Benjamin Lefaudeux, Thomas Lucas, Francisco Massa, Sergey Zagoruyko, and the rest of Thoth and FAIR teams for their help and fruitful discussions. Julien Mairal was funded by the ERC grant number 714381 (SOLARIS project) and by ANR 3IA MIAI@Grenoble Alpes (ANR-19-P3IA-0003).

References

  • P. Agrawal, J. Carreira, and J. Malik (2015) Learning to see by moving. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
  • Y. M. Asano, C. Rupprecht, and A. Vedaldi (2020) Self-labelling via simultaneous clustering and representation learning. International Conference on Learning Representations (ICLR). Cited by: §1, §2, Figure 2, §3.1, §3.1, §3, §5, §D.2, §D.2, §D.2, §D.
  • P. Bachman, R. D. Hjelm, and W. Buchwalter (2019) Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: Table 5.
  • M. A. Bautista, A. Sanakoyeu, E. Tikhoncheva, and B. Ommer (2016) Cliquecnn: deep unsupervised exemplar learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §2.
  • P. Bojanowski and A. Joulin (2017) Unsupervised learning by predicting noise. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2, §C.2.
  • N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko (2020) End-to-end object detection with transformers. arXiv preprint arXiv:2005.12872. Cited by: §4.2, Table 2, §A.5, §B.3, §B.4, Table 6, Table 8.
  • M. Caron, P. Bojanowski, A. Joulin, and M. Douze (2018) Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1, §1, §2, §3, §5, §C.2, §D.2, §D.
  • M. Caron, P. Bojanowski, J. Mairal, and A. Joulin (2019) Unsupervised pre-training of image features on non-curated data. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2, §5.
  • M. Caron, A. Morcos, P. Bojanowski, J. Mairal, and A. Joulin (2020) Pruning convolutional neural networks with self-supervision. arXiv preprint arXiv:2001.03554. Cited by: §6.
  • T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020a) A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709. Cited by: §1, §1, §1, §2, Figure 2, §3.2, Table 1, §4, §A.1, §A.2, Table 5, Table 6.
  • X. Chen, H. Fan, R. Girshick, and K. He (2020b) Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297. Cited by: Figure 2, §4.3, §A.1, §A.5, Table 7.
  • E. D. Cubuk, B. Zoph, J. Shlens, and Q. V. Le (2019) RandAugment: practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719. Cited by: Table 1.
  • M. Cuturi (2013) Sinkhorn distances: lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §2, §3.1, §C.4.
  • C. Doersch, A. Gupta, and A. A. Efros (2015) Unsupervised visual representation learning by context prediction. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
  • J. Donahue and K. Simonyan (2019) Large scale adversarial representation learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: Figure 2, Table 5.
  • A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller, and T. Brox (2016) Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence 38 (9), pp. 1734–1747. Cited by: §1, §2.
  • M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §4.2.
  • R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin (2008) LIBLINEAR: a library for large linear classification. Journal of machine learning research. Cited by: §A.5.
  • S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord (2020) Learning representations by predicting bags of visual words. arXiv preprint arXiv:2002.12247. Cited by: §2, §4.2, Table 6, Table 7.
  • S. Gidaris, P. Singh, and N. Komodakis (2018) Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), Cited by: Table 5.
  • P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He (2017) Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677. Cited by: §A.1.
  • M. Gutmann and A. Hyvärinen (2010) Noise-contrastive estimation: a new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Cited by: §2.
  • R. Hadsell, S. Chopra, and Y. LeCun (2006) Dimensionality reduction by learning an invariant mapping. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2019a) Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722. Cited by: §1, §1, §2, Figure 2, §3.1, §4.2, §5, §A.5, Table 5, Table 6, Table 7.
  • K. He, R. Girshick, and P. Dollár (2019b) Rethinking imagenet pre-training. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §4.2, §A.5.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
  • O. J. Hénaff, A. Razavi, C. Doersch, S. Eslami, and A. v. d. Oord (2019) Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272. Cited by: Figure 2, Table 5.
  • R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio (2019) Learning deep representations by mutual information estimation and maximization. International Conference on Learning Representations (ICLR). Cited by: §2.
  • J. Huang, Q. Dong, and S. Gong (2019) Unsupervised deep learning by neighbourhood discovery. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2.
  • S. Jenni and P. Favaro (2018) Self-supervised feature learning by learning to spot artifacts. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • L. Jing and Y. Tian (2019) Self-supervised visual feature learning with deep neural networks: a survey. arXiv preprint arXiv:1902.06162. Cited by: §2.
  • D. Kim, D. Cho, D. Yoo, and I. S. Kweon (2018) Learning image representations by completing damaged jigsaw puzzles. In Winter Conference on Applications of Computer Vision (WACV), Cited by: §2.
  • A. Kolesnikov, X. Zhai, and L. Beyer (2019) Revisiting self-supervised visual representation learning. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
  • G. Larsson, M. Maire, and G. Shakhnarovich (2016) Learning representations for automatic colorization. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.
  • J. Li, P. Zhou, C. Xiong, R. Socher, and S. C. Hoi (2020) Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966. Cited by: Figure 2, Table 1, §A.1, §A.4, Table 10, Table 6.
  • T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §4.2, Table 2, §A.5, Table 6.
  • C. Liu, P. Dollár, K. He, R. Girshick, A. Yuille, and S. Xie (2020) Are labels necessary for neural architecture search?. arXiv preprint arXiv:2003.12056. Cited by: §6.
  • I. Loshchilov and F. Hutter (2016) SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Cited by: §4, §A.1, §A.6.
  • D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten (2018) Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: Figure 4, §5.
  • A. Mahendran, J. Thewlis, and A. Vedaldi (2018) Cross pixel optical flow similarity for self-supervised learning. arXiv preprint arXiv:1807.05636. Cited by: §2.
  • P. Micikevicius, S. Narang, J. Alben, G. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, et al. (2017) Mixed precision training. arXiv preprint arXiv:1710.03740. Cited by: §A.1.
  • I. Misra and L. van der Maaten (2019) Self-supervised learning of pretext-invariant representations. arXiv preprint arXiv:1912.01991. Cited by: §1, §1, §2, Figure 2, §3.2, §4.2, Table 1, §4, §A.1, §A.5, Table 6, Table 7.
  • I. Misra, C. L. Zitnick, and M. Hebert (2016) Shuffle and learn: unsupervised learning using temporal order verification. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.
  • M. Noroozi and P. Favaro (2016) Unsupervised learning of visual representations by solving jigsaw puzzles. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2, Figure 2.
  • A. v. d. Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Cited by: §2.
  • D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan (2017) Learning features by watching objects move. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §4.2, Table 2, §A.5, §B.3, §B.4, Table 6, Table 7.
  • K. Sohn, D. Berthelot, C. Li, Z. Zhang, N. Carlini, E. D. Cubuk, A. Kurakin, H. Zhang, and C. Raffel (2020) Fixmatch: simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685. Cited by: §4.1, Table 1.
  • Y. Tian, D. Krishnan, and P. Isola (2019) Contrastive multiview coding. arXiv preprint arXiv:1906.05849. Cited by: Table 5.
  • H. Touvron, A. Vedaldi, M. Douze, and H. Jégou (2019) Fixing the train-test resolution discrepancy. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
  • G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie (2018) The inaturalist species classification and detection dataset. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2.
  • X. Wang and A. Gupta (2015) Unsupervised learning of visual representations using videos. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
  • X. Wang, K. He, and A. Gupta (2017) Transitive invariance for self-supervised visual representation learning. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
  • Y. Wu, A. Kirillov, F. Massa, W. Lo, and R. Girshick (2019) Detectron2. Note: https://github.com/facebookresearch/detectron2 Cited by: §A.5.
  • Z. Wu, Y. Xiong, S. X. Yu, and D. Lin (2018) Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2, Figure 2, §3.1, §3, §3, §B.6, §D.2, Table 10.
  • J. Xie, R. Girshick, and A. Farhadi (2016) Unsupervised deep embedding for clustering analysis. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2.
  • Q. Xie, Z. D. Dai, E. Hovy, M. Luong, and Q. V. Le (2020) Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. Cited by: Table 1.
  • S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He (2017) Aggregated residual transformations for deep neural networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §5.
  • X. Yan, I. Misra, A. Gupta, D. Ghadiyaram, and D. Mahajan (2020) ClusterFit: improving generalization of visual representations. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • J. Yang, D. Parikh, and D. Batra (2016) Joint unsupervised learning of deep representations and image clusters. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • Y. You, I. Gitman, and B. Ginsburg (2017) Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888. Cited by: §4, §A.1, §A.6.
  • R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: Figure 2.
  • R. Zhang, P. Isola, and A. A. Efros (2017) Split-brain autoencoders: unsupervised learning by cross-channel prediction. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva (2014) Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §4.2.
  • C. Zhuang, A. L. Zhai, and D. Yamins (2019) Local aggregation for unsupervised learning of visual embeddings. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2, Figure 2, §B.6, Table 10.

Appendix

A Implementation Details

In this section, we provide the details and hyperparameters for SwAV pretraining and transfer learning. We plan to open-source our code in order to facilitate the reproduction of our work.

a.1 Implementation details of SwAV training

First, we provide pseudo-code for the SwAV training loop using two crops, in PyTorch style:



# C: prototypes (DxK)
# model: convnet + projection head
# temp: temperature

for x in loader: # load a batch x with B samples
     x_t = t(x) # t is a random augmentation
     x_s = s(x) # s is another random augmentation

     z = model(cat(x_t, x_s)) # embeddings: 2BxD

     scores = mm(z, C) # prototype scores: 2BxK
     scores_t = scores[:B]
     scores_s = scores[B:]

     # compute assignments (no gradient through the codes)
     with torch.no_grad():
          q_t = sinkhorn(scores_t)
          q_s = sinkhorn(scores_s)

     # convert scores to probabilities
     p_t = Softmax(scores_t / temp)
     p_s = Softmax(scores_s / temp)

     # swap prediction problem
     loss = - 0.5 * mean(q_t * log(p_s) + q_s * log(p_t))

     # SGD update: network and prototypes
     loss.backward()
     update(model.params)
     update(C)

     # normalize prototypes
     with torch.no_grad():
          C = normalize(C, dim=0, p=2)

# Sinkhorn-Knopp: soft assignment of B samples to K prototypes
def sinkhorn(scores, eps=0.05, niters=3):
     Q = exp(scores / eps).T                    # K x B
     Q /= sum(Q)                                # normalize into a joint probability matrix
     K, B = Q.shape
     u, r, c = zeros(K), ones(K) / K, ones(B) / B   # target marginals: equipartition
     for _ in range(niters):
          u = sum(Q, dim=1)
          Q *= (r / u).unsqueeze(1)             # normalize rows to the prototype marginal r
          Q *= (c / sum(Q, dim=0)).unsqueeze(0) # normalize columns to the sample marginal c
     return (Q / sum(Q, dim=0, keepdim=True)).T # B x K: one code per sample, summing to 1

Most of our training hyperparameters are directly taken from the SimCLR work Chen et al. (2020a). We train SwAV with stochastic gradient descent using large batches of 4096 different instances. We distribute the batches over 64 V100 16Gb GPUs, resulting in each GPU treating 64 instances. The temperature parameter τ is set to 0.1 and the Sinkhorn regularization parameter ε is set to 0.05 for all runs. We use weight decay, the LARS optimizer You et al. (2017) and a large learning rate which is linearly ramped up during the first epochs of training. After warmup, we use the cosine learning rate decay Loshchilov and Hutter (2016); Misra and van der Maaten (2019) down to a small final value. To help the very beginning of the optimization, we freeze the prototypes during the first epoch of training. We synchronize batch-normalization layers across GPUs using the optimized implementation with CUDA kernels from apex (github.com/NVIDIA/apex). We also use the apex library for training with mixed precision Micikevicius et al. (2017). Overall, thanks to these training optimizations (mixed precision, kernel batch-normalization and use of large batches Goyal et al. (2017)), 800 epochs of training for our best SwAV model take approximately 50 hours (see table 4). Similarly to previous works Chen et al. (2020a, b); Li et al. (2020), we use a projection head on top of the convnet features that consists of a 2-layer multi-layer perceptron (MLP) projecting the convnet output to a 128-D space.
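For concreteness, a sketch of this architecture (trunk, projection head and prototype layer) is given below; the hidden width of the MLP, the use of batch normalization inside the head, and the number of prototypes are illustrative assumptions for this sketch rather than values stated in the text above.

import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SwAVModel(nn.Module):
    # ResNet-50 trunk + 2-layer MLP projection head + prototype matrix C (sketch)
    def __init__(self, out_dim=128, hidden_dim=2048, n_prototypes=3000):
        super().__init__()
        trunk = torchvision.models.resnet50()
        trunk.fc = nn.Identity()                 # keep the 2048-D pooled features
        self.trunk = trunk
        self.projection = nn.Sequential(
            nn.Linear(2048, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )
        self.prototypes = nn.Linear(out_dim, n_prototypes, bias=False)  # columns are the c_k

    def forward(self, x):
        z = F.normalize(self.projection(self.trunk(x)), dim=1, p=2)  # project to the unit sphere
        return z, self.prototypes(z)             # features and prototype scores z^T c_k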

Note that SwAV is more suitable for a multi-node distributed implementation than contrastive approaches like SimCLR or MoCo. The latter methods require sharing the feature matrix across all GPUs at every batch, which might become a bottleneck when distributing across many GPUs. In contrast, SwAV only requires sharing matrix normalization statistics (sums of rows and columns) during the Sinkhorn algorithm.
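A hedged sketch of this distributed variant is given below: each worker holds the scores of its local batch slice and only the row sums (and the global normalization constant) are exchanged with all_reduce. It assumes torch.distributed has been initialized; the exact structure of our implementation may differ.

import torch
import torch.distributed as dist

@torch.no_grad()
def distributed_sinkhorn(scores, eps=0.05, niters=3):
    # scores: local prototype scores, shape (local_B, K); the full batch is sharded across workers
    Q = torch.exp(scores / eps).T                  # (K, local_B)
    B = Q.size(1) * dist.get_world_size()          # total number of samples across GPUs
    K = Q.size(0)

    total = Q.sum()
    dist.all_reduce(total)                         # share the global normalization constant
    Q /= total

    for _ in range(niters):
        row_sum = Q.sum(dim=1, keepdim=True)
        dist.all_reduce(row_sum)                   # prototype rows span all workers: share their sums
        Q /= row_sum
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)            # sample columns are local to each worker
        Q /= B
    return (Q * B).T                               # local soft codes, one per local sample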

a.2 Data augmentation used in SwAV

We obtain two different views from an image by performing crops of random sizes and aspect ratios. Specifically, we use the RandomResizedCrop method from the torchvision.transforms module of PyTorch with the following scaling parameters: s=(0.14, 1). Note that we sample crops in a narrower range of scales compared to the default RandomResizedCrop parameters. Then, we resize both full resolution views to 224x224 pixels, unless specified otherwise (we use 160x160 resolution in some of our experiments). Besides, we obtain additional views by cropping small parts of the image. To do so, we use the following RandomResizedCrop parameters: s=(0.05, 0.14). We resize the resulting crops to 96x96 resolution. Note that we always deal with resolutions that are divisible by 32 to avoid roundings in the ResNet-50 pooling layers. Finally, we apply random horizontal flips, color distortion and Gaussian blur to each resulting crop, exactly following the SimCLR implementation Chen et al. (2020a). An illustration of our multi-crop augmentation strategy can be viewed in fig. 5.
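A sketch of this augmentation pipeline with torchvision is shown below. The crop scales and the 224 / 96 resolutions follow the description above; the color-distortion strengths, the blur parameters and the number of small crops are illustrative placeholders borrowed from a SimCLR-style recipe rather than exact values.

from torchvision import transforms

def color_distortion(s=1.0):
    # SimCLR-style color distortion (jitter strengths are illustrative)
    jitter = transforms.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)
    return transforms.Compose([
        transforms.RandomApply([jitter], p=0.8),
        transforms.RandomGrayscale(p=0.2),
    ])

def multi_crop(img, n_small=4):
    common = [
        transforms.RandomHorizontalFlip(p=0.5),
        color_distortion(),
        transforms.RandomApply([transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=0.5),
        transforms.ToTensor(),
    ]
    global_view = transforms.Compose([transforms.RandomResizedCrop(224, scale=(0.14, 1.0))] + common)
    small_view = transforms.Compose([transforms.RandomResizedCrop(96, scale=(0.05, 0.14))] + common)
    # returns 2 global views + n_small low-resolution views of the same PIL image
    return [global_view(img) for _ in range(2)] + [small_view(img) for _ in range(n_small)]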

Figure 5: Multi-crop: the image is transformed into V+2 views: two global views and V small-resolution zoomed views.

a.3 Implementation details of linear classification on ImageNet with ResNet-50

We obtain 75.3% top-1 accuracy on ImageNet by training a linear classifier on top of the frozen final representations (2048-D) of a ResNet-50 trained with SwAV. This linear layer is trained with stochastic gradient descent using a cosine learning rate decay and weight decay. We use standard data augmentations, i.e., crops of random sizes and aspect ratios (default parameters of RandomResizedCrop) and random horizontal flips.
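A minimal sketch of this linear evaluation protocol is given below: a single linear layer trained on frozen 2048-D ResNet-50 features. Optimizer settings and the training loop structure are placeholders rather than the exact values used for our results.

import torch
import torch.nn as nn

def linear_eval_step(backbone, classifier, optimizer, images, labels):
    backbone.eval()                       # keep the backbone (and its BatchNorm statistics) frozen
    with torch.no_grad():
        feats = backbone(images)          # frozen 2048-D global-average-pooled features
    logits = classifier(feats)            # classifier = nn.Linear(2048, 1000)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()                       # gradients only flow through the linear layer
    optimizer.step()
    return loss.item()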

a.4 Implementation details of semi-supervised learning (finetuning with 1% or 10% labels)

We finetune a ResNet-50 pretrained with SwAV using either 1% or 10% of the ImageNet labeled images. We use the 1% and 10% splits specified in the official code release of SimCLR. We mostly follow the hyperparameters from PCL Li et al. (2020): we train for a small number of epochs, use distinct learning rates for the convnet weights and the final linear layer, and decay both learning rates twice during finetuning. We do not apply any weight decay during finetuning. The learning rates for the trunk and for the final layer are chosen separately for the 1% and 10% settings.

a.5 Implementation details of transfer learning on downstream tasks

Linear classifiers. We mostly follow PIRL Misra and van der Maaten (2019) for training linear models on top of representations given by a ResNet-50 pretrained with SwAV. On VOC07, all images are resized to 256 pixels along the shorter side, before taking a center crop. Then, we train a linear SVM with LIBLINEAR Fan et al. (2008) on top of the corresponding global average pooled final representations (2048-D). For linear evaluation on the other datasets (Places205 and iNat18), we train linear models with stochastic gradient descent with momentum, using a learning rate that is reduced three times at equally spaced intervals, and weight decay. We train the linear models for a fixed number of epochs on Places205 and on iNat18, and report the top-1 accuracy computed using the center crop on the validation set.

Object Detection on VOC07+12. We use a Faster R-CNN Ren et al. (2015) model as implemented in Detectron2 Wu et al. (2019) and follow the finetuning protocol from He et al. (2019a), making the following changes to the hyperparameters: we use a different initial learning rate, which is warmed up (WARMUP_FACTOR flag in Detectron2) over the first iterations. Other training hyperparameters are kept exactly the same as in He et al. (2019a), i.e., the same batch size across GPUs, the same training schedule on VOC07+12 trainval with a stepwise learning rate decay, using SyncBatchNorm to finetune BatchNorm parameters, and adding an extra BatchNorm layer after the res5 layer (Res5ROIHeadsExtraNorm head in Detectron2). We report results on the VOC07 test set averaged over independent runs.

Object Detection on COCO. We test the generalization of our ResNet-50 features trained on ImageNet with SwAV by transferring them to object detection on the COCO dataset Lin et al. (2014) with the DETR framework Carion et al. (2020). DETR is a recent object detection framework that relies on a transformer encoder-decoder architecture. It reaches competitive performance with Faster R-CNN while being conceptually simpler and trainable end-to-end. Interestingly, unlike other frameworks He et al. (2019b), current results with DETR have shown that using a pretrained backbone is crucial to obtain good results compared to training from scratch. Therefore, we investigate whether we can boost DETR performance by using features pretrained on ImageNet with SwAV instead of standard supervised features. We also evaluate features from MoCov2 Chen et al. (2020b) pretraining. We train DETR with AdamW, using a fixed learning rate for the transformer and weight decay. We select for each method the best backbone learning rate among three candidate values and decay the learning rates once during training.

a.6 Implementation details of training with small batches of 256 images

We start using a queue composed of the feature representations from previous batches only after the first few epochs of training. Indeed, we find that using the queue from the very start disturbs the convergence of the model, since the network changes a lot from one iteration to the next during the first epochs. We simulate large batches by storing the feature vectors of the last batches, i.e., around 3K vectors of dimension 128 (see § 3.1). We use weight decay, the LARS optimizer You et al. (2017) and the cosine learning rate decay Loshchilov and Hutter (2016).

a.7 Implementation details of ablation studies

In our ablation studies (for example, the results reported in fig. 3 of the main paper), we choose to follow closely the data augmentation used in the concurrent work SimCLR. This allows a fair comparison and, importantly, isolates the effect of our contributions. In practice, it means that we use the default parameters of the random crop method (RandomResizedCrop), s=(0.08, 1) instead of s=(0.14, 1), when sampling the two large resolution views.

B Additional Results

b.1 Running times

In table 4, we report compute and GPU memory requirements based on our implementation for different settings. As described in § A.1, we train each method on 64 V100 16GB GPUs with a batch size of 4096, using mixed precision and the apex optimized version of synchronized batch-normalization layers. We report results with ResNet-50 for all methods. In fig. 6, we report SwAV performance for different training lengths measured in hours based on our implementation. We observe that after only a few hours of training, SwAV already outperforms a fully trained SimCLR model by a large margin. If we train SwAV for longer, the performance gap between the two methods increases even more.

Method multi-crop time / 100 epochs peak memory / GPU
SimCLR 2x224 4h00 8.6G
SwAV 2x224 4h09 8.6G
SwAV 2x160 + 4x96 4h50 8.5G
SwAV 2x224 + 6x96 6h15 12.8G
Table 4: Computational cost. We report time and GPU memory requirements based on our implementation for different models; times are given per 100 epochs of training.
Figure 6: Influence of longer training. Top-1 ImageNet accuracy for linear models trained on frozen features. We report SwAV performance for different training lengths measured in hours based on our implementation. We train each ResNet-50 model on 64 V100 16GB GPUs with a batch size of 4096 (see § A.1 for implementation details).

b.2 Larger architectures

In table 5, we show results when training SwAV on large architectures. We observe that SwAV benefits from training on large architectures, and we plan to explore this direction to further boost self-supervised methods.

| Method | Arch. | Param. (M) | Top-1 |
| --- | --- | --- | --- |
| Supervised | EffNet-B7 | 66 | 84.4 |
| Rotation Gidaris et al. (2018) | RevNet50-4w | 86 | 55.4 |
| BigBiGAN Donahue and Simonyan (2019) | RevNet50-4w | 86 | 61.3 |
| AMDIM Bachman et al. (2019) | Custom-RN | 626 | 68.1 |
| CMC Tian et al. (2019) | R50-w2 | 188 | 68.4 |
| MoCo He et al. (2019a) | R50-w4 | 375 | 68.6 |
| CPC v2 Hénaff et al. (2019) | R161 | 305 | 71.5 |
| SimCLR Chen et al. (2020a) | R50-w4 | 375 | 76.8 |
| SwAV | R50-w4 | 375 | 77.9 |
| SwAV | R50-w5 | 586 | 78.5 |
Table 5: Large architectures. Top-1 accuracy of linear models trained on frozen features from different self-supervised methods with large architectures. Parameter counts are given in millions.

b.3 Transferring unsupervised features to downstream tasks

In table 6, we expand the results from the main paper by providing numbers from previously and concurrently published self-supervised methods. In the left panel of table 6, we show performance after training a linear classifier on top of frozen representations on different datasets, while in the right panel we evaluate the features by finetuning a ResNet-50 on object detection with Faster R-CNN Ren et al. (2015) and DETR Carion et al. (2020). Overall, we observe in table 6 that SwAV is the first self-supervised method to outperform the ImageNet supervised backbone on all the considered transfer tasks and datasets. Other self-supervised learners are able to surpass the supervised counterpart, but only on one type of transfer (for example, object detection with finetuning for MoCo and PIRL). We will release this model so that other researchers can also benefit from replacing the ImageNet supervised network with our model.

Linear Classification Object Detection
Places205 VOC07 iNat18 VOC07+12 (Faster R-CNN) COCO (DETR)
Supervised
RotNet Gidaris et al. (2020) - - -
NPID++ Misra and van der Maaten (2019) -
MoCo He et al. (2019a) -
PIRL Misra and van der Maaten (2019) -
PCL Li et al. (2020) - - -
BoWNet Gidaris et al. (2020) - -
SimCLR Chen et al. (2020a) - -
MoCov2 He et al. (2019a)
SwAV
Table 6: Transfer learning on downstream tasks. Comparison between features from ResNet-50 trained on ImageNet with SwAV or with supervised learning. We also report numbers from other self-supervised methods, including numbers we obtained by running other methods ourselves. We consider two settings. (1) Linear classification on top of frozen features: we report top-1 accuracy on the Places205 and iNat18 datasets and mAP on VOC07. (2) Object detection with finetuned features on VOC07+12 trainval using Faster R-CNN Ren et al. (2015) and on COCO Lin et al. (2014) using DETR Carion et al. (2020). In this table, we report the most standard detection metrics for these datasets: AP50 on VOC07+12 and AP on COCO.
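As a reminder of the linear evaluation protocol used in the left panel, the sketch below trains a linear classifier on frozen features; the feature dimension of 2048 corresponds to a ResNet-50 backbone whose output is assumed to be the global-average-pooled features, and the schedule and hyperparameter values are placeholders.

```python
import torch
import torch.nn as nn

def linear_probe(backbone, train_loader, num_classes, feat_dim=2048,
                 epochs=28, lr=0.3, device="cuda"):
    """Train a linear classifier on top of frozen backbone features (sketch)."""
    backbone.eval().to(device)              # frozen backbone: no BN updates, no gradients
    for p in backbone.parameters():
        p.requires_grad = False

    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9, weight_decay=1e-6)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                feats = backbone(images)    # assumed to return pooled features
            loss = criterion(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```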

b.4 More detection metrics for object detection

In table 7 and table 8, we evaluate the features by finetuning a ResNet-50 on object detection with Faster R-CNN Ren et al. (2015) and DETR Carion et al. (2020), and report more detection metrics than in table 6. We observe in table 7 and table 8 that SwAV outperforms the ImageNet supervised pretrained model on all the detection evaluation metrics. Note that the MoCov2 backbone performs particularly well on the object detection benchmark, and even outperforms SwAV features on some detection metrics. However, as shown in table 6, this backbone is not competitive with the supervised features when evaluated on classification tasks without finetuning.

| Method | AP | AP50 | AP75 |
| --- | --- | --- | --- |
| Supervised | 53.5 | 81.3 | 58.8 |
| Random | 28.1 | 52.5 | 26.2 |
| NPID++ Misra and van der Maaten (2019) | 52.3 | 79.1 | 56.9 |
| PIRL Misra and van der Maaten (2019) | 54.0 | 80.7 | 59.7 |
| BoWNet Gidaris et al. (2020) | 55.8 | 81.3 | 61.1 |
| MoCov1 He et al. (2019a) | 55.9 | 81.5 | 62.6 |
| MoCov2 Chen et al. (2020b) | 57.4 | 82.5 | 64.0 |
| SwAV | 56.1 | 82.6 | 62.7 |
Table 7: More detection metrics for object detection on VOC07+12 with finetuned features using Faster R-CNN Ren et al. (2015).
| Method | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
| --- | --- | --- | --- | --- | --- | --- |
| ImageNet labels | 40.8 | 61.2 | 42.9 | 20.1 | 44.5 | 60.3 |
| MoCo-v2 | 42.0 | 62.7 | 44.4 | 20.8 | 45.6 | 60.9 |
| SwAV | 42.1 | 63.1 | 44.5 | 19.7 | 46.3 | 60.9 |
Table 8: More detection metrics for object detection on COCO with finetuned features using DETR Carion et al. (2020).

b.5 Low-Shot learning on ImageNet for SwAV pretrained on Instagram data

We now test whether SwAV pretrained on Instagram data can serve as a pretraining method for low-shot learning on ImageNet. We report in table 9 results obtained when finetuning Instagram SwAV features with only a few labels per ImageNet category. We observe that using pretrained features from Instagram considerably improves performance compared to training from scratch.

| # examples per class | 13 (top-1) | 13 (top-5) | 128 (top-1) | 128 (top-5) |
| --- | --- | --- | --- | --- |
| No pretraining | 25.4 | 48.4 | 56.4 | 80.4 |
| SwAV IG-1B | 38.2 | 67.1 | 64.7 | 87.2 |
Table 9: Low-shot learning on ImageNet. Top-1 and top-5 accuracies when training with 13 or 128 examples per category.

b.6 Image classification with KNN classifiers on ImageNet

Following the protocols of previous work Wu et al. (2018); Zhuang et al. (2019), we evaluate the quality of our unsupervised features with K-nearest neighbor (KNN) classifiers on ImageNet. We compute features from the network outputs for center crops of the training and test images. We report results with 20 and 200 nearest neighbors in table 10. We outperform the current state of the art for this evaluation. Interestingly, we also observe that using fewer nearest neighbors actually boosts the performance of our model.

| Method | 20-NN | 200-NN |
| --- | --- | --- |
| NPID Wu et al. (2018) | - | 46.5 |
| LA Zhuang et al. (2019) | - | 49.4 |
| PCL Li et al. (2020) | 54.5 | - |
| SwAV | 59.2 | 55.8 |
Table 10: KNN classifiers on ImageNet. We report top-1 accuracy with 20 and 200 nearest neighbors.
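A minimal sketch of this evaluation is given below, assuming the train and test features have already been extracted from center crops and L2-normalized; it uses cosine similarity and a plain majority vote, omitting the similarity weighting used in some prior protocols.

```python
import torch

@torch.no_grad()
def knn_classify(train_feats, train_labels, test_feats, k=20, chunk=1024):
    """Majority-vote k-NN with cosine similarity on L2-normalized features (sketch)."""
    preds = []
    for start in range(0, test_feats.shape[0], chunk):
        sims = test_feats[start:start + chunk] @ train_feats.t()   # cosine similarities
        _, idx = sims.topk(k, dim=1)                               # indices of k nearest train samples
        neighbor_labels = train_labels[idx]                        # shape (chunk, k)
        preds.append(torch.mode(neighbor_labels, dim=1).values)    # majority vote among neighbors
    return torch.cat(preds)
```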

C Ablation Studies on Clustering

c.1 Number of prototypes

In table 11, we evaluate the influence of the number of prototypes used in SwAV. We train ResNet-50 with SwAV in the ablation study setting and evaluate the performance by training a linear classifier on top of the frozen final representations. We observe in table 11 that varying the number of prototypes by an order of magnitude (3k-100k) does not affect the performance much (at most 0.3% on ImageNet). This suggests that the number of prototypes has little influence as long as there are "enough" of them. Throughout the paper, we train SwAV with 3,000 prototypes. We find that using more prototypes increases the computational time, both in the Sinkhorn algorithm and during back-propagation, for an overall negligible gain in performance.

| Number of prototypes | 300 | 1000 | 3000 | 10000 | 30000 | 100000 |
| --- | --- | --- | --- | --- | --- | --- |
| Top-1 | 72.8 | 73.6 | 73.9 | 74.1 | 73.8 | 73.8 |
Table 11: Impact of the number of prototypes. Top-1 ImageNet accuracy for linear models trained on frozen features.

c.2 Learning the prototypes

We investigate the impact of learning the prototypes compared to using fixed random prototypes. Assigning features to fixed random targets has been explored in NAT Bojanowski and Joulin (2017). However, unlike SwAV, NAT uses one target per instance in the dataset, and the assignment is hard and performed with the Hungarian algorithm. In table 12 (left), we observe that learning the prototypes improves SwAV from 73.1 to 73.9, which shows the effect of adapting the prototypes to the dataset distribution.

Overall, these results suggest that our framework learns from a different signal than "offline" approaches that attribute a pseudo-label to each instance while considering the full dataset and then predict these labels (like DeepCluster Caron et al. (2018), for example). Indeed, the prototypes in SwAV are not strongly encouraged to be categorical, and random fixed prototypes work almost as well. Rather, they help contrast different image views without relying on pairwise comparisons with many negative samples. This might explain why the number of prototypes does not impact the performance significantly.
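To make the learned-versus-fixed comparison concrete, the prototypes can be viewed as a bias-free linear layer whose unit-norm rows are the prototype vectors; freezing its weights yields the random fixed variant. The sketch below follows this view, with the feature dimension and number of prototypes as illustrative defaults.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Prototypes(nn.Module):
    """Bias-free linear layer whose rows act as prototype vectors (sketch)."""

    def __init__(self, feat_dim=128, num_prototypes=3000, learnable=True):
        super().__init__()
        self.prototypes = nn.Linear(feat_dim, num_prototypes, bias=False)
        if not learnable:
            # Random fixed prototypes: weights are frozen at initialization.
            for p in self.prototypes.parameters():
                p.requires_grad = False

    def forward(self, z):
        # Keep features and prototypes unit-norm so the scores are cosine similarities.
        with torch.no_grad():
            self.prototypes.weight.copy_(F.normalize(self.prototypes.weight, dim=1))
        return self.prototypes(F.normalize(z, dim=1))
```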

(left)
| Prototypes | Learned | Fixed |
| --- | --- | --- |
| Top-1 | 73.9 | 73.1 |

(right)
| Assignment | Soft | Hard |
| --- | --- | --- |
| Top-1 | 73.9 | 73.3 |
Table 12: Ablation studies on clustering. Top-1 ImageNet accuracy for linear models trained on frozen features. (left) Impact of learning the prototypes. (right) Hard versus soft assignments.
Figure 7: Hard versus soft assignments. We report the training loss for SwAV models trained with either soft or hard assignments. The models are trained in the ablation study setting.

c.3 Hard versus soft assignments

In table 12 (right), we evaluate the impact of using hard assignments instead of the default soft assignments in SwAV. We train the models in the ablation study setting and evaluate the performance by training a linear classifier on top of the frozen final representations. We also report the training losses in fig. 7. We observe that using hard assignments performs worse than using soft assignments. An explanation is that the rounding needed to obtain discrete codes is a more aggressive optimization step than a gradient update. While it makes the model converge rapidly (see fig. 7), it leads to a worse solution.

| Sinkhorn iterations | 1 | 3 | 10 | 30 |
| --- | --- | --- | --- | --- |
| Top-1 | fail | 73.9 | 73.8 | 73.7 |
Table 13: Impact of the number of iterations in the Sinkhorn algorithm. Top-1 ImageNet accuracy for linear models trained on frozen features.

c.4 Impact of the number of iterations in Sinkhorn algorithm

In table 13, we investigate the impact of the number of normalization steps performed in the Sinkhorn-Knopp algorithm Cuturi (2013) on the performance of SwAV. We observe that using only 3 iterations is enough for the model to converge; with a single iteration, the loss fails to converge. We also observe that using more iterations slightly degrades the transfer performance of the model. We conjecture that this happens for the same reason that rounding the codes to discrete values deteriorates the quality of our model: the codes converge too rapidly.
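For reference, the sketch below shows the kind of Sinkhorn-Knopp normalization used to produce the codes, together with the rounding used in the hard-assignment ablation; `epsilon` and `n_iters` are placeholders, and the function is a simplified single-GPU version.

```python
import torch

@torch.no_grad()
def sinkhorn_codes(scores, n_iters=3, epsilon=0.05, hard=False):
    """Equipartitioned soft codes via Sinkhorn-Knopp iterations (sketch).

    `scores` has shape (batch, num_prototypes).
    """
    q = torch.exp(scores / epsilon).t()      # shape (num_prototypes, batch)
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)      # rows: prototypes used equally often
        q /= K
        q /= q.sum(dim=0, keepdim=True)      # columns: one unit of mass per sample
        q /= B
    q = (q * B).t()                          # each row of the result sums to 1
    if hard:
        # Rounding used in the hard-assignment ablation: one-hot codes.
        q = torch.nn.functional.one_hot(q.argmax(dim=1), num_classes=K).float()
    return q
```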

D Details on Clustering-Based methods: DeepCluster-v2 and SeLa-v2

In this section, we provide details on our improved implementations of the clustering-based approaches DeepCluster-v2 and SeLa-v2 compared to the corresponding original publications Caron et al. (2018); Asano et al. (2020). These two methods follow the same pipeline: they alternate between pseudo-label generation ("assignment phase") and training the network with a classification loss supervised by these pseudo-labels ("training phase").

d.1 Training phase

During the training phase, both methods minimize the multinomial logistic loss of the pseudo-label classification problem:

(7)

The pseudo-labels are kept fixed during training and updated for the entire dataset once per epoch during the assignment phase.
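For reference, a generic form of such a pseudo-label classification loss can be written as below; the notation (z_n for the feature of image x_n, c_k for the classifier weight or prototype of pseudo-class k, y_n for the pseudo-label of x_n, and tau for a temperature) is ours and is only meant as a sketch of the objective.

```latex
\min_{\theta,\;\{c_k\}} \; -\frac{1}{N}\sum_{n=1}^{N}
\log \frac{\exp\!\left( z_n^\top c_{y_n} / \tau \right)}
          {\sum_{k} \exp\!\left( z_n^\top c_{k} / \tau \right)},
\qquad z_n = f_\theta(x_n).
```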

Training phase in DeepCluster-v2.

In the original DeepCluster work, both the classification head and the convnet weights are trained to classify the images into their corresponding pseudo-labels between two assignments. Intuitively, this classification head is optimized to represent prototypes for the different pseudo-classes. However, since there is no mapping between two consecutive assignments, the classification head learned during one assignment becomes irrelevant for the following one. Thus, this classification head needs to be reset at each new assignment, which considerably disrupts the convnet training. For this reason, we propose to simply use as classification head the centroids given by k-means clustering (Eq. 10). Overall, during training, DeepCluster-v2 optimizes the following problem with mini-batch SGD:

(8)
Training phase in SeLa-v2.

In the SeLa work, the prototypes are learned with stochastic gradient descent during the training phase. Overall, during training, SeLa-v2 optimizes the following problem:

(9)

d.2 Assignment phase

The purpose of the assignment phase is to provide an assignment for each instance of the dataset. For both methods, this implies having access to feature representations for the entire dataset. Both original works Caron et al. (2018); Asano et al. (2020) regularly perform a forward pass on the whole dataset to get these features. Using the original implementation, if assignments are updated at each epoch, then the assignment phase represents one third of the total training time. Therefore, in order to speed up training, we choose to use the features computed during the previous epoch instead of dedicating forward passes to the assignments. This is similar to the memory bank introduced by Wu et al. (2018), but without momentum.
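A minimal sketch of this feature reuse is shown below: each sample's feature is written into a bank indexed by its dataset index during the training epoch, and the bank is then clustered at the next assignment phase. Names and storage choices are illustrative.

```python
import torch

class EpochFeatureBank:
    """Store one feature per dataset sample during the epoch, so that the next
    assignment phase can reuse them instead of an extra forward pass (sketch)."""

    def __init__(self, dataset_size, feat_dim):
        self.bank = torch.zeros(dataset_size, feat_dim)

    @torch.no_grad()
    def update(self, indices, feats):
        # `indices` are the dataset indices of the samples in the current batch.
        self.bank[indices] = feats.detach().cpu()

    def features(self):
        # Features from the previous epoch, clustered during the assignment phase.
        return self.bank
```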

Assignment phase in DeepCluster-v2.

DeepCluster-v2 uses spherical k-means to get pseudo-labels. In particular, pseudo-labels are obtained by minimizing the following problem:

(10)

where the features and the columns of the centroid matrix are L2-normalized. The original DeepCluster work uses tricks such as cluster re-assignment and balanced batch sampling to avoid trivial solutions, but we found these unnecessary and did not observe collapsing during our trainings. As noted by Asano et al., this is because the assignment and training phases are well separated.
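For illustration, a simplified version of spherical k-means on L2-normalized features is sketched below; the initialization, number of iterations and handling of empty clusters are simplified compared to a production implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def spherical_kmeans(feats, num_clusters, n_iters=10):
    """Spherical k-means on L2-normalized features (sketch).

    Returns (assignments, centroids); the iteration count is a placeholder.
    """
    feats = F.normalize(feats, dim=1)
    # Initialize centroids with randomly chosen samples.
    centroids = feats[torch.randperm(feats.shape[0])[:num_clusters]].clone()
    for _ in range(n_iters):
        # Assign each feature to its most similar centroid (cosine similarity).
        assignments = (feats @ centroids.t()).argmax(dim=1)
        # Update each centroid as the re-normalized mean of its members.
        for k in range(num_clusters):
            members = feats[assignments == k]
            if len(members) > 0:
                centroids[k] = F.normalize(members.mean(dim=0), dim=0)
    return assignments, centroids
```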

Assignment phase in SeLa-v2.

Unlike DeepCluster, SeLa uses the same loss during the training and assignment phases. In particular, we use the Sinkhorn-Knopp algorithm to optimize the following assignment problem (see details and derivations in the original SeLa paper Asano et al. (2020)):

(11)
Implementation details

We use the same hyperparameters as SwAV to train SeLa-v2 and DeepCluster-v2; these are described in § A. Asano et al. (2020) have shown that multi-clustering boosts the performance of clustering-based approaches, so we use several sets of prototypes when training SeLa-v2 and DeepCluster-v2. Note that, unlike online methods (such as SwAV, SimCLR and MoCo), the clustering approaches SeLa-v2 and DeepCluster-v2 can be implemented with only a single crop per image per batch. The major limitation of SeLa-v2 and DeepCluster-v2 is that these methods are not online, and therefore scaling them to very large datasets is not possible without major adjustments.