Unsupervised visual representation learning, or self-supervised learning, aims at obtaining features without using manual annotations and is rapidly closing the performance gap with supervised pretraining in computer vision Chen et al. (2020a); He et al. (2019a); Misra and van der Maaten (2019). Many recent state-of-the-art methods build upon the instance discrimination task that considers each image of the dataset (or “instance”) and its transformations as a separate class Dosovitskiy et al. (2016). This task yields representations that discriminate between different images while achieving some invariance to image transformations. Recent self-supervised methods that use instance discrimination rely on a combination of two elements: (i) a contrastive loss Hadsell et al. (2006) and (ii) a set of image transformations. The contrastive loss removes the notion of instance classes by directly comparing image features, while the image transformations define the invariances encoded in the features. Both elements are essential to the quality of the resulting networks Misra and van der Maaten (2019); Chen et al. (2020a), and our work improves upon both the objective function and the transformations.
The contrastive loss explicitly compares pairs of image representations to push away representations from different images while pulling together those from transformations, or views, of the same image. Since computing all the pairwise comparisons on a large dataset is not practical, most implementations approximate the loss by reducing the number of comparisons to random subsets of images during training Chen et al. (2020a); He et al. (2019a); Wu et al. (2018). An alternative to approximating the loss is to approximate the task, that is, to relax the instance discrimination problem. For example, clustering-based methods discriminate between groups of images with similar features instead of individual images Caron et al. (2018). The objective in clustering is tractable, but it does not scale well with the dataset as it requires a pass over the entire dataset to form image “codes” (i.e., cluster assignments) that are used as targets during training. In this work, we use a different paradigm and propose to compute the codes online while enforcing consistency between codes obtained from views of the same image. Comparing cluster assignments makes it possible to contrast different image views without relying on explicit pairwise feature comparisons. Specifically, we propose a simple “swapped” prediction problem where we predict the code of a view from the representation of another view. We learn features by Swapping Assignments between multiple Views of the same image (SwAV). The features and the codes are learned online, allowing our method to scale to potentially unlimited amounts of data. In addition, SwAV works with small and large batch sizes and does not need a large memory bank Wu et al. (2018) or a momentum encoder He et al. (2019a).
Besides our online clustering-based method, we also propose an improvement to the image transformations. Most contrastive methods compare one pair of transformations per image, even though there is evidence that comparing more views during training improves the resulting model Misra and van der Maaten (2019). In this work, we propose a multi-crop strategy that uses smaller-sized images to increase the number of views while not increasing the memory or computational requirements during training. We also observe that mapping small parts of a scene to more global views significantly boosts the performance. Directly working with downsized images introduces a bias in the features Touvron et al. (2019), which can be avoided by using a mix of different sizes. Our strategy is simple, yet effective, and can be applied to many self-supervised methods with a consistent gain in performance.
We validate our contributions by evaluating our method on several standard self-supervised benchmarks. In particular, on the ImageNet linear evaluation protocol, we reach 75.3% top-1 accuracy with a standard ResNet-50, and 78.5% with a wider model. We also show that our multi-crop strategy is general, and improves the performance of different self-supervised methods, namely SimCLR Chen et al. (2020a), DeepCluster Caron et al. (2018), and SeLa Asano et al. (2020), by between 2% and 4% top-1 accuracy on ImageNet. Overall, we make the following contributions:
We propose a scalable online clustering loss that improves performance by +2% on ImageNet and works in both large and small batch settings without a large memory bank or a momentum encoder.
We introduce the multi-crop strategy to increase the number of views of an image with no computational or memory overhead. We observe a consistent improvement of between 2% and 4% on ImageNet with this strategy on several self-supervised methods.
Combining both technical contributions into a single model, we improve the performance of self-supervised methods by +4.2% on ImageNet with a standard ResNet-50 and outperform supervised ImageNet pretraining on multiple downstream tasks. This is the first method to do so without finetuning the features, i.e., only with a linear classifier on top of frozen features.
2 Related Work
Instance and contrastive learning.
Instance-level classification considers each image in a dataset as its own class Bojanowski and Joulin (2017); Dosovitskiy et al. (2016); Wu et al. (2018). Dosovitskiy et al. (2016) assign a class explicitly to each image and learn a linear classifier with as many classes as images in the dataset. As this approach quickly becomes intractable, Wu et al. (2018) mitigate this issue by replacing the classifier with a memory bank that stores previously-computed representations. They rely on noise contrastive estimation Gutmann and Hyvärinen (2010) to compare instances, which is a special form of contrastive learning Hjelm et al. (2019); Oord et al. (2018). He et al. (2019a) improve the training of contrastive methods by storing representations from a momentum encoder instead of the trained network. More recently, Chen et al. (2020a) show that the memory bank can be entirely replaced with the elements from the same batch if the batch is large enough. In contrast to this line of work, we avoid comparing every pair of images by mapping the image features to a set of trainable prototype vectors.
Clustering for deep representation learning.
Our work is also related to clustering-based methods Asano et al. (2020); Bautista et al. (2016); Caron et al. (2018, 2019); Huang et al. (2019); Xie et al. (2016); Yang et al. (2016); Zhuang et al. (2019); Gidaris et al. (2020); Yan et al. (2020). Caron et al. (2018) show that k-means assignments can be used as pseudo-labels to learn visual representations. This method scales to large uncurated datasets and can be used for pre-training of supervised networks Caron et al. (2019). However, their formulation is not principled, and recently Asano et al. (2020) showed how to cast the pseudo-label assignment problem as an instance of the optimal transport problem. We consider a similar formulation to map representations to prototype vectors, but unlike Asano et al. (2020) we keep the soft assignment produced by the Sinkhorn-Knopp algorithm Cuturi (2013) instead of approximating it into a hard assignment. Besides, unlike Caron et al. (2018, 2019) and Asano et al. (2020), we obtain online assignments, which allows our method to scale gracefully to any dataset size.
Handcrafted pretext tasks.
Many self-supervised methods manipulate the input data to extract a supervised signal in the form of a pretext task Doersch et al. (2015); Agrawal et al. (2015); Jenni and Favaro (2018); Kim et al. (2018); Larsson et al. (2016); Mahendran et al. (2018); Misra et al. (2016); Pathak et al. (2017, 2016); Wang and Gupta (2015); Wang et al. (2017); Zhang et al. (2017). We refer the reader to Jing and Tian (2019) for an exhaustive and detailed review of this literature. Of particular interest, Misra and van der Maaten (2019) propose to encode the jigsaw puzzle task Noroozi and Favaro (2016) as an invariant for contrastive learning. Jigsaw tiles are non-overlapping crops with small resolution that cover only part of the entire image area. In contrast, our multi-crop strategy consists of simply sampling multiple random crops with two different sizes: a standard size and a smaller one.
Figure 1: Contrastive instance learning vs. Swapping Assignments between Views (Ours).

3 Method
Our goal is to learn visual features in an online fashion without supervision. To that end, we propose an online clustering-based self-supervised method. Typical clustering-based methods Asano et al. (2020); Caron et al. (2018) are offline in the sense that they alternate between a cluster assignment step, where image features of the entire dataset are clustered, and a training step, where the cluster assignments (or “codes”) are predicted for different image views. Unfortunately, these methods are not suitable for online learning as they require multiple passes over the dataset to compute the image features necessary for clustering. In this section, we describe an alternative where we enforce consistency between codes from different augmentations of the same image. This solution is inspired by contrastive instance learning Wu et al. (2018), as we do not consider the codes as a target, but only enforce consistent mapping between views of the same image. Our method can be interpreted as a way of contrasting between multiple image views by comparing their cluster assignments instead of their features.
More precisely, we compute a code from an augmented version of the image and predict this code from other augmented versions of the same image. Given two image features $z_t$ and $z_s$ from two different augmentations of the same image, we compute their codes $q_t$ and $q_s$ by matching these features to a set of $K$ prototypes $\{c_1, \dots, c_K\}$. We then set up a “swapped” prediction problem with the following loss function:

$$L(z_t, z_s) = \ell(z_t, q_s) + \ell(z_s, q_t), \tag{1}$$
where the function $\ell(z, q)$ measures the fit between a feature $z$ and a code $q$, as detailed later. Intuitively, our method compares the features $z_t$ and $z_s$ using the intermediate codes $q_t$ and $q_s$. If these two features capture the same information, it should be possible to predict the code from the other feature. A similar comparison appears in contrastive learning where features are compared directly Wu et al. (2018). In fig. 1, we illustrate the relation between contrastive learning and our method.
3.1 Online clustering
Each image $x_n$ is transformed into an augmented view $x_{nt}$ by applying a transformation $t$ sampled from the set $\mathcal{T}$ of image transformations. The augmented view is mapped to a vector representation by applying a non-linear mapping $f_\theta$ to $x_{nt}$. The feature is then projected to the unit sphere, i.e., $z_{nt} = f_\theta(x_{nt}) / \lVert f_\theta(x_{nt}) \rVert_2$. We then compute a code $q_{nt}$ from this feature by mapping $z_{nt}$ to a set of $K$ trainable prototype vectors, $\{c_1, \dots, c_K\}$. We denote by $C$ the matrix whose columns are the $c_1, \dots, c_K$. We now describe how to compute these codes and update the prototypes online.
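The projection and prototype mapping just described can be sketched in a few lines of NumPy. The encoder output is stubbed with a random vector, and the dimensions below are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 128, 10  # illustrative feature dimension and number of prototypes

# Stand-in for the encoder output on one augmented view.
feature = rng.normal(size=D)

# Project the feature to the unit sphere: z = f(x) / ||f(x)||_2.
z = feature / np.linalg.norm(feature)

# Trainable prototype matrix whose columns are the K prototype vectors.
C = rng.normal(size=(D, K))

# Dot products of z with every prototype: used by the softmax predictions
# below and, batched over a minibatch, by the online code computation.
scores = z @ C
```

In a real model the prototypes would be a trainable layer updated by back-propagation; here they are frozen random vectors for illustration.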
Swapped prediction problem.
The loss function in Eq. (1) has two terms that set up the “swapped” prediction problem of predicting the code $q_t$ from the feature $z_s$, and $q_s$ from $z_t$. Each term represents the cross entropy loss between the code and the probability obtained by taking a softmax of the dot products of $z_i$ and all prototypes in $C$, i.e.,

$$\ell(z_t, q_s) = -\sum_{k} q_s^{(k)} \log p_t^{(k)}, \quad \text{where} \quad p_t^{(k)} = \frac{\exp\left(\frac{1}{\tau} z_t^\top c_k\right)}{\sum_{k'} \exp\left(\frac{1}{\tau} z_t^\top c_{k'}\right)}, \tag{2}$$
where $\tau$ is a temperature parameter Wu et al. (2018). Taking this loss over all the images and pairs of data augmentations leads to the following loss function for the swapped prediction problem:

$$-\frac{1}{N} \sum_{n=1}^{N} \sum_{s,t \sim \mathcal{T}} \left[ \frac{1}{\tau} z_{nt}^\top C q_{ns} + \frac{1}{\tau} z_{ns}^\top C q_{nt} - \log \sum_{k=1}^{K} \exp\left(\frac{z_{nt}^\top c_k}{\tau}\right) - \log \sum_{k=1}^{K} \exp\left(\frac{z_{ns}^\top c_k}{\tau}\right) \right].$$
This loss function is jointly minimized with respect to the prototypes $C$ and the parameters $\theta$ of the image encoder $f_\theta$ used to produce the features $z_{nt}$.
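As a concrete reading of the swapped loss of Eq. (1), here is a minimal NumPy sketch for a single image pair. The features, codes, prototypes, and temperature value are random or illustrative stand-ins, not the paper's trained quantities.

```python
import numpy as np

def subloss(z, q, C, tau=0.1):
    """Cross-entropy l(z, q) between a code q and the softmax prediction
    obtained from the dot products of z with all prototypes in C."""
    logits = z @ C / tau
    # Numerically stable log-softmax: logits - logsumexp(logits).
    logp = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
    return -np.sum(q * logp)

def swapped_loss(z_t, z_s, q_t, q_s, C, tau=0.1):
    """L(z_t, z_s) = l(z_t, q_s) + l(z_s, q_t), as in Eq. (1)."""
    return subloss(z_t, q_s, C, tau) + subloss(z_s, q_t, C, tau)

rng = np.random.default_rng(0)
D, K = 16, 8
C = rng.normal(size=(D, K))
# Two unit-norm features from two views of the same image (random stand-ins).
z_t, z_s = (v / np.linalg.norm(v) for v in rng.normal(size=(2, D)))
# Random soft codes standing in for the Sinkhorn-Knopp output.
q_t, q_s = (q / q.sum() for q in rng.uniform(size=(2, K)))
loss = swapped_loss(z_t, z_s, q_t, q_s, C)
```

By construction the loss is symmetric in the two views: swapping the roles of $(z_t, q_t)$ and $(z_s, q_s)$ leaves it unchanged.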
Computing codes online.
In order to make our method online, we compute the codes using only the image features within a batch. We compute codes using the prototypes $C$ such that all the examples in a batch are equally partitioned by the prototypes. This equipartition constraint ensures that the codes for different images in a batch are distinct, thus preventing the trivial solution where every image has the same code. Given $B$ feature vectors $Z = [z_1, \dots, z_B]$, we are interested in mapping them to the prototypes $C = [c_1, \dots, c_K]$. We denote this mapping or codes by $Q = [q_1, \dots, q_B]$, and optimize $Q$ to maximize the similarity between the features and the prototypes, i.e.,

$$\max_{Q \in \mathcal{Q}} \operatorname{Tr}\left(Q^\top C^\top Z\right) + \varepsilon H(Q), \tag{3}$$
where $H$ is the entropy function, $H(Q) = -\sum_{ij} Q_{ij} \log Q_{ij}$, and $\varepsilon$ is a parameter that controls the smoothness of the mapping. Asano et al. (2020) enforce an equal partition by constraining the matrix $Q$ to belong to the transportation polytope. They work on the full dataset, and we propose to adapt their solution to work on minibatches by restricting the transportation polytope to the minibatch:

$$\mathcal{Q} = \left\{ Q \in \mathbb{R}_{+}^{K \times B} \;\middle|\; Q \mathbf{1}_B = \frac{1}{K}\mathbf{1}_K, \; Q^\top \mathbf{1}_K = \frac{1}{B}\mathbf{1}_B \right\},$$
where $\mathbf{1}_K$ denotes the vector of ones in dimension $K$. These constraints enforce that on average each prototype is selected at least $\frac{B}{K}$ times in the batch.
Once a continuous solution $Q^*$ to Prob. (3) is found, a discrete code can be obtained by using a rounding procedure Asano et al. (2020). Empirically, we found that discrete codes work well when computing codes in an offline manner on the full dataset as in Asano et al. (2020). However, in the online setting where we use only minibatches, using the discrete codes performs worse than using the continuous codes. An explanation is that the rounding needed to obtain discrete codes is a more aggressive optimization step than gradient updates. While it makes the model converge rapidly, it leads to a worse solution. We thus preserve the soft code $Q^*$ instead of rounding it. These soft codes are the solution of Prob. (3) over the set $\mathcal{Q}$ and take the form of a normalized exponential matrix Cuturi (2013):

$$Q^* = \operatorname{Diag}(u) \exp\left(\frac{C^\top Z}{\varepsilon}\right) \operatorname{Diag}(v),$$
where $u$ and $v$ are renormalization vectors in $\mathbb{R}^K$ and $\mathbb{R}^B$ respectively. The renormalization vectors are computed using a small number of matrix multiplications using the iterative Sinkhorn-Knopp algorithm Cuturi (2013). In practice, we observe that using only 3 iterations is fast and sufficient to obtain good performance. Indeed, this algorithm can be efficiently implemented on GPU, and the alignment of 4K features to 3K codes takes 35ms in our experiments, see § 4.
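The Sinkhorn-Knopp normalization can be sketched as alternating row and column rescalings of the exponentiated score matrix. This is a plain NumPy transcription under illustrative sizes and smoothness parameter, not the paper's GPU implementation.

```python
import numpy as np

def sinkhorn(scores, eps=0.05, n_iters=3):
    """Soft codes Q from prototype scores via Sinkhorn-Knopp iterations.

    scores: (K, B) matrix of dot products between prototypes and features.
    Returns Q in the minibatch transportation polytope: rows sum to 1/K
    (equipartition over prototypes) and columns sum to 1/B."""
    K, B = scores.shape
    Q = np.exp(scores / eps)
    Q /= Q.sum()
    for _ in range(n_iters):
        # Normalize rows: each prototype receives 1/K of the total mass.
        Q /= Q.sum(axis=1, keepdims=True) * K
        # Normalize columns: each sample's code sums to 1/B.
        Q /= Q.sum(axis=0, keepdims=True) * B
    return Q

rng = np.random.default_rng(0)
K, B = 8, 32                      # illustrative sizes
Z = rng.normal(size=(16, B)); Z /= np.linalg.norm(Z, axis=0)
C = rng.normal(size=(16, K)); C /= np.linalg.norm(C, axis=0)
Q = sinkhorn(C.T @ Z)             # 3 iterations, as used in practice
```

After the final column normalization the column constraint holds exactly, while the row (equipartition) constraint is only approximately satisfied; more iterations tighten it.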
Working with small batches.
When the number of batch features $B$ is too small compared to the number of prototypes $K$, it is impossible to equally partition the batch into the prototypes. Therefore, when working with small batches, we use features from the previous batches to augment the size of $Z$ in Prob. (3). Then, we only use the codes of the batch features in our training loss. In practice, we store around 3K features, i.e., in the same range as the number of code vectors. This means that we only keep features from the last 15 batches with a batch size of 256, while contrastive methods typically need to store the last 65K instances obtained from the last 250 batches He et al. (2019a).
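A minimal sketch of such a feature queue, assuming features are stored column-wise. The class name, sizes, and per-feature FIFO granularity are our own illustration, not the paper's implementation; the point is that stored features only pad the code computation, while the training loss uses the current batch's codes.

```python
import numpy as np
from collections import deque

class FeatureQueue:
    """FIFO store of past features used to augment small batches before
    computing codes; only the codes of the current batch enter the loss."""

    def __init__(self, max_features):
        self.store = deque(maxlen=max_features)  # oldest features drop first

    def augment(self, Z):
        """Return [stored | Z] along the batch axis and enqueue Z's columns.

        Z: (D, B) current batch features. After running Sinkhorn on the
        result, one would keep only the codes of the last B columns."""
        stored = list(self.store)
        out = np.concatenate(stored + [Z], axis=1) if stored else Z
        for col in Z.T:
            self.store.append(col[:, None])
        return out

rng = np.random.default_rng(0)
queue = FeatureQueue(max_features=6)
first = queue.augment(rng.normal(size=(4, 4)))   # queue was empty
second = queue.augment(rng.normal(size=(4, 4)))  # 4 stored + 4 new columns
third = queue.augment(rng.normal(size=(4, 4)))   # capped at 6 stored + 4 new
```

The `maxlen` of the deque plays the role of the ~3K-feature budget: once full, enqueueing a new feature silently evicts the oldest one.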
3.2 Multi-crop: Augmenting views with smaller images
As noted in prior works Chen et al. (2020a); Misra and van der Maaten (2019), comparing random crops of an image plays a central role by capturing information in terms of relations between parts of a scene or an object. Unfortunately, increasing the number of crops or “views” quadratically increases the memory and compute requirements. We propose a multi-crop strategy where we use two standard resolution crops and sample $V$ additional low resolution crops that cover only small parts of the image. Using low resolution images ensures only a small increase in the compute cost. Specifically, we generalize the loss of Eq. (1):

$$L(z_{t_1}, z_{t_2}, \dots, z_{t_{V+2}}) = \sum_{i \in \{1, 2\}} \sum_{\substack{v = 1 \\ v \neq i}}^{V+2} \ell\left(z_{t_v}, q_{t_i}\right).$$
Note that we compute codes using only the full resolution crops. Indeed, computing codes for all crops increases the computational time, and we observe in practice that it also alters the transfer performance of the resulting network. An explanation is that using only partial information (small crops cover only a small area of the image) degrades the assignment quality. Figure 3 shows that multi-crop improves the performance of several self-supervised methods and is a promising augmentation strategy.
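The generalized loss above can be sketched as a double loop: only the first two (full-resolution) views supply codes, and every other view predicts them. Features, codes, and prototypes are random stand-ins here, with illustrative sizes.

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax.
    return x - x.max() - np.log(np.exp(x - x.max()).sum())

def multicrop_loss(Z_views, codes, C, tau=0.1):
    """Generalized swapped loss: every view v predicts the codes of the
    two full-resolution views i in {1, 2}, skipping its own code.

    Z_views: list of V+2 unit features (full-resolution crops first).
    codes: the 2 soft codes computed from the full-resolution crops."""
    loss = 0.0
    for i in (0, 1):                      # code-producing views
        for v, z in enumerate(Z_views):
            if v == i:
                continue                  # a view never predicts its own code
            loss += -np.sum(codes[i] * log_softmax(z @ C / tau))
    return loss

rng = np.random.default_rng(0)
D, K, V = 16, 8, 4                        # V extra low-resolution crops
C = rng.normal(size=(D, K))
Z_views = [v / np.linalg.norm(v) for v in rng.normal(size=(V + 2, D))]
codes = [q / q.sum() for q in rng.uniform(size=(2, K))]
loss = multicrop_loss(Z_views, codes, C)  # sums 2 * (V + 1) cross-entropy terms
```

Since the small crops never produce codes, the (expensive) Sinkhorn step runs on 2 views regardless of $V$, which is why the extra views add little overhead.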
4 Main Results
We analyze the features learned by SwAV by transfer learning on multiple datasets. We implement in SwAV the improvements used in SimCLR, i.e., LARS You et al. (2017), cosine learning rate Loshchilov and Hutter (2016); Misra and van der Maaten (2019), and the MLP projection head Chen et al. (2020a). We provide the full details and hyperparameters for pretraining and transfer learning in the Appendix.
4.1 Evaluating the unsupervised features on ImageNet
We evaluate the features of a ResNet-50 He et al. (2016) trained with SwAV on ImageNet by two experiments: linear classification on frozen features and semi-supervised learning by finetuning with few labels. When using frozen features (fig. 2 left), SwAV outperforms the state of the art by +4.2% top-1 accuracy and is only 1.2% below the performance of a fully supervised model. Note that we train SwAV during 800 epochs with large batches (4096 images). We refer to fig. 3 for results with shorter trainings and to table 3 for experiments with small batches. On semi-supervised learning (table 1), SwAV outperforms other self-supervised methods and is on par with state-of-the-art semi-supervised models Sohn et al. (2020), despite the fact that SwAV is not specifically designed for semi-supervised learning.
Table 1: Semi-supervised learning on ImageNet with a ResNet-50.

| Method | Top-1 (1% labels) | Top-5 (1% labels) | Top-1 (10% labels) | Top-5 (10% labels) |
| --- | --- | --- | --- | --- |
| *Methods using label-propagation* | | | | |
| UDA Xie et al. (2020) | - | - | 68.8* | 88.5* |
| FixMatch Sohn et al. (2020) | - | - | 71.5* | 89.1* |
| *Methods using self-supervision only* | | | | |
| PIRL Misra and van der Maaten (2019) | 30.7 | 57.2 | 60.4 | 83.8 |
| PCL Li et al. (2020) | - | 75.6 | - | 86.2 |
| SimCLR Chen et al. (2020a) | 48.3 | 75.5 | 65.6 | 87.8 |
Variants of ResNet-50. Figure 2 (right) shows the performance of multiple variants of ResNet-50 with different widths Kolesnikov et al. (2019). The performance of our model increases with the width of the model, and follows a similar trend to the one obtained with supervised learning. When compared with concurrent work like SimCLR, we see that SwAV reduces the difference with supervised models even further. Indeed, for large architectures, our method significantly shrinks the gap with supervised training.
Table 2: Transfer learning on downstream tasks. Linear classification on Places205, VOC07, and iNat18; object detection on VOC07+12 (Faster R-CNN) and COCO (DETR).
4.2 Transferring unsupervised features to downstream tasks
We test the generalization of ResNet-50 features trained with SwAV on ImageNet (without labels) by transferring to several downstream vision tasks. In table 2, we compare the performance of SwAV features with ImageNet supervised pretraining. First, we report the linear classification performance on the Places205 Zhou et al. (2014), VOC07 Everingham et al. (2010), and iNaturalist2018 Van Horn et al. (2018) datasets. Our method outperforms supervised features on all three datasets. Note that SwAV is the first self-supervised method to surpass ImageNet supervised features on these datasets. Second, we report network finetuning on object detection on VOC07+12 using Faster R-CNN Ren et al. (2015) and on COCO Lin et al. (2014) with DETR Carion et al. (2020). DETR is a recent object detection framework that reaches competitive performance with Faster R-CNN while being conceptually simpler and trainable end-to-end. We use DETR because, unlike Faster R-CNN He et al. (2019b), using a pretrained backbone in this framework is crucial to obtain good results compared to training from scratch Carion et al. (2020). In table 2, we show that SwAV outperforms the supervised pretrained model on both VOC07+12 and COCO datasets. Note that this is in line with previous works that also show that self-supervision can outperform supervised pretraining on object detection Misra and van der Maaten (2019); He et al. (2019a); Gidaris et al. (2020). We report more detection evaluation metrics and results from other self-supervised methods in the Appendix. Overall, our SwAV ResNet-50 model surpasses supervised ImageNet pretraining on all the considered transfer tasks and datasets. We will release this model so other researchers might also benefit by replacing the ImageNet supervised network with our model.
4.3 Training with small batches
We train SwAV with small batches of 256 images on 4 GPUs and compare with MoCov2 and SimCLR trained in the same setup. In table 3, we see that SwAV maintains state-of-the-art performance even when trained in the small batch setting. Note that SwAV only stores a queue of 3K features. In comparison, to obtain good performance, MoCov2 needs to store 65K features while keeping an additional momentum encoder network. When SwAV is trained with multi-crop, its running time is higher than that of SimCLR with 2 crops and it is slower than MoCov2 due to the additional back-propagation Chen et al. (2020b). However, as shown in table 3, SwAV learns much faster and reaches higher performance in fewer epochs: 72.0% after 200 epochs, while MoCov2 needs 800 epochs to achieve 71.1%. Increasing the resolution and the number of epochs, SwAV reaches even higher accuracy with a small number of stored features and no momentum encoder.
Table 3: Training in the small batch setting. Columns: Method | Mom. Encoder | Stored Features | multi-crop | epochs | batch | Top-1.
5 Ablation Study
Applying the multi-crop strategy to different methods.
In fig. 3 (left), we report the impact of applying our multi-crop strategy on the performance of a selection of other methods. Besides SwAV, we consider supervised learning, SimCLR, and two clustering-based models, DeepCluster-v2 and SeLa-v2. The last two are obtained by applying the improvements of SimCLR to DeepCluster Caron et al. (2018) and SeLa Asano et al. (2020) (see details in the Appendix). We see that the multi-crop strategy consistently improves the performance of all the considered methods by a significant margin in top-1 accuracy. Interestingly, multi-crop seems to benefit clustering-based methods more than contrastive methods. We note that multi-crop does not improve the supervised model.
Figure 3 (left) also allows us to compare clustering-based and contrastive instance methods. First, we observe that SwAV and DeepCluster-v2 outperform SimCLR, both with and without multi-crop. This suggests the learning potential of clustering-based methods over instance classification. Finally, we see that SwAV performs on par with offline clustering-based approaches that use the entire dataset to learn prototypes and codes.
Impact of longer training.
In fig. 3 (right), we show the impact of the number of training epochs on performance for SwAV with multi-crop. We train separate models for increasing numbers of epochs and report the top-1 accuracy on ImageNet using the linear classification evaluation. We train each ResNet-50 on 64 V100 16GB GPUs with a batch size of 4096. While SwAV benefits from longer training, it already achieves strong performance after 100 epochs, i.e., 72.1% top-1 in 6h15.
Unsupervised pretraining on a large uncurated dataset.
We test if SwAV can serve as a pretraining method for supervised learning and also check its robustness on uncurated pretraining data. We pretrain SwAV on an uncurated dataset of 1 billion random public non-EU images from Instagram. In fig. 4 (left), we measure the performance of ResNet-50 models when transferring to ImageNet with frozen or finetuned features. We report the results from He et al. (2019a) but note that their setting is different: they use a curated set of Instagram images, filtered by hashtags similar to ImageNet labels Mahajan et al. (2018). We compare SwAV with a randomly initialized network and with a network pretrained on the same data using SimCLR. We observe that SwAV maintains a similar gain over SimCLR as when pretrained on ImageNet (fig. 2), showing that our improvements do not depend on the data distribution. We also see that pretraining with SwAV on random images significantly improves over training from scratch on ImageNet Caron et al. (2019); He et al. (2019a). In fig. 4 (right), we explore the limits of pretraining as we increase the model capacity. We consider variants of the ResNeXt architecture Xie et al. (2017) as in Mahajan et al. (2018). We compare SwAV with supervised models trained from scratch on ImageNet. For all models, SwAV outperforms training from scratch by a significant margin, showing that it can take advantage of the increased model capacity. For reference, we also include the results from Mahajan et al. (2018) obtained with a weakly-supervised model pretrained by predicting hashtags filtered to be similar to ImageNet classes. Interestingly, SwAV performance is strong when compared to this topline despite not using any form of supervision or filtering of the data.
6 Discussion

Self-supervised learning is rapidly closing the gap with supervised learning, and even surpasses it on transfer learning, even though the current experimental settings are designed for supervised learning. In particular, architectures have been designed for supervised tasks, and it is not clear whether the same models would emerge from exploring architectures with no supervision. Several recent works have shown that exploring architectures with search Liu et al. (2020) or pruning Caron et al. (2020) is possible without supervision, and we plan to evaluate the ability of our method to guide model explorations.
Acknowledgments

We thank Nicolas Carion, Kaiming He, Herve Jegou, Benjamin Lefaudeux, Thomas Lucas, Francisco Massa, Sergey Zagoruyko, and the rest of the Thoth and FAIR teams for their help and fruitful discussions. Julien Mairal was funded by the ERC grant number 714381 (SOLARIS project) and by ANR 3IA MIAI@Grenoble Alpes (ANR-19-P3IA-0003).
References

- Learning to see by moving. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
- Self-labelling via simultaneous clustering and representation learning. International Conference on Learning Representations (ICLR). Cited by: §1, §2, Figure 2, §3.1, §3.1, §3, §5, §D.2, §D.2, §D.2, §D.
- Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: Table 5.
- Cliquecnn: deep unsupervised exemplar learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §2.
- Unsupervised learning by predicting noise. Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2, §C.2.
- End-to-end object detection with transformers. arXiv preprint arXiv:2005.12872. Cited by: §4.2, Table 2, §A.5, §B.3, §B.4, Table 6, Table 8.
- Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1, §1, §2, §3, §5, §C.2, §D.2, §D.
- Unsupervised pre-training of image features on non-curated data. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2, §5.
- Pruning convolutional neural networks with self-supervision. arXiv preprint arXiv:2001.03554. Cited by: §6.
- A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709. Cited by: §1, §1, §1, §2, Figure 2, §3.2, Table 1, §4, §A.1, §A.2, Table 5, Table 6.
- Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297. Cited by: Figure 2, §4.3, §A.1, §A.5, Table 7.
- RandAugment: practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719. Cited by: Table 1.
- Sinkhorn distances: lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §2, §3.1, §C.4.
- Unsupervised visual representation learning by context prediction. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
- Large scale adversarial representation learning. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: Figure 2, Table 5.
- Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence 38 (9), pp. 1734–1747. Cited by: §1, §2.
- The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §4.2.
- LIBLINEAR: a library for large linear classification. Journal of machine learning research. Cited by: §A.5.
- Learning representations by predicting bags of visual words. arXiv preprint arXiv:2002.12247. Cited by: §2, §4.2, Table 6, Table 7.
- Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), Cited by: Table 5.
- Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677. Cited by: §A.1.
- Noise-contrastive estimation: a new estimation principle for unnormalized statistical models. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Cited by: §2.
- Dimensionality reduction by learning an invariant mapping. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
- Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722. Cited by: §1, §1, §2, Figure 2, §3.1, §4.2, §5, §A.5, Table 5, Table 6, Table 7.
- Rethinking imagenet pre-training. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §4.2, §A.5.
- Deep residual learning for image recognition. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
- Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272. Cited by: Figure 2, Table 5.
- Learning deep representations by mutual information estimation and maximization. International Conference on Learning Representations (ICLR). Cited by: §2.
- Unsupervised deep learning by neighbourhood discovery. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2.
- Self-supervised feature learning by learning to spot artifacts. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
- Self-supervised visual feature learning with deep neural networks: a survey. arXiv preprint arXiv:1902.06162. Cited by: §2.
- Learning image representations by completing damaged jigsaw puzzles. In Winter Conference on Applications of Computer Vision (WACV), Cited by: §2.
- Revisiting self-supervised visual representation learning. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.1.
- Learning representations for automatic colorization. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.
- Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966. Cited by: Figure 2, Table 1, §A.1, §A.4, Table 10, Table 6.
- Microsoft coco: common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §4.2, Table 2, §A.5, Table 6.
- Are labels necessary for neural architecture search?. arXiv preprint arXiv:2003.12056. Cited by: §6.
- Sgdr: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Cited by: §4, §A.1, §A.6.
- Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: Figure 4, §5.
- Cross pixel optical flow similarity for self-supervised learning. arXiv preprint arXiv:1807.05636. Cited by: §2.
- Mixed precision training. arXiv preprint arXiv:1710.03740. Cited by: §A.1.
- Self-supervised learning of pretext-invariant representations. arXiv preprint arXiv:1912.01991. Cited by: §1, §1, §2, Figure 2, §3.2, §4.2, Table 1, §4, §A.1, §A.5, Table 6, Table 7.
- Shuffle and learn: unsupervised learning using temporal order verification. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2.
- Unsupervised learning of visual representations by solving jigsaw puzzles. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §2, Figure 2.
- Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Cited by: §2.
- Learning features by watching objects move. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
- Context encoders: feature learning by inpainting. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
- Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §4.2, Table 2, §A.5, §B.3, §B.4, Table 6, Table 7.
- Fixmatch: simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685. Cited by: §4.1, Table 1.
- Contrastive multiview coding. arXiv preprint arXiv:1906.05849. Cited by: Table 5.
- Fixing the train-test resolution discrepancy. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1.
- The inaturalist species classification and detection dataset. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §4.2.
- Unsupervised learning of visual representations using videos. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
- Transitive invariance for self-supervised visual representation learning. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2.
- Detectron2. Note: https://github.com/facebookresearch/detectron2 Cited by: §A.5.
- Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1, §2, Figure 2, §3.1, §3, §3, §B.6, §D.2, Table 10.
- Unsupervised deep embedding for clustering analysis. In Proceedings of the International Conference on Machine Learning (ICML), Cited by: §2.
- Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. Cited by: Table 1.
- Aggregated residual transformations for deep neural networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §5.
- ClusterFit: improving generalization of visual representations. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
- Joint unsupervised learning of deep representations and image clusters. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
- Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888. Cited by: §4, §A.1, §A.6.
- Colorful image colorization. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: Figure 2.
- Split-brain autoencoders: unsupervised learning by cross-channel prediction. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
- . In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §4.2.
- Local aggregation for unsupervised learning of visual embeddings. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2, Figure 2, §B.6, Table 10.
A Implementation Details
In this section, we provide the details and hyperparameters for SwAV pretraining and transfer learning. We plan to open-source our code to facilitate the reproduction of our work.
A.1 Implementation details of SwAV training
First, we provide pseudo-code for the SwAV training loop using two crops, in PyTorch style:
# C: prototypes (DxK)
# model: convnet + projection head
# temp: temperature
for x in loader:  # load a batch x with B samples
    x_t = t(x)  # t is a random augmentation
    x_s = s(x)  # s is another random augmentation
    z = model(cat(x_t, x_s))  # embeddings: 2BxD
    scores = mm(z, C)  # prototype scores: 2BxK
    scores_t = scores[:B]
    scores_s = scores[B:]
    # compute assignments
    q_t = sinkhorn(scores_t)
    q_s = sinkhorn(scores_s)
    # convert scores to probabilities
    p_t = Softmax(scores_t / temp)
    p_s = Softmax(scores_s / temp)
    # swap prediction problem
    loss = - 0.5 * mean(q_t * log(p_s) + q_s * log(p_t))
    # SGD update: network and prototypes
    loss.backward()
    update(model.params)
    update(C)
    # normalize prototypes
    C = normalize(C, dim=0, p=2)

# Sinkhorn-Knopp
def sinkhorn(scores, eps=0.05, niters=3):
    Q = exp(scores / eps).T
    Q /= sum(Q)
    K, B = Q.shape
    u, r, c = zeros(K), ones(K) / K, ones(B) / B
    for _ in range(niters):
        u = sum(Q, dim=1)
        Q *= (r / u).unsqueeze(1)
        Q *= (c / sum(Q, dim=0)).unsqueeze(0)
    return (Q / sum(Q, dim=0, keepdim=True)).T
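As a sanity check, the sinkhorn routine above can be written as self-contained NumPy (a sketch; names and the toy input are ours, not part of the original implementation):

```python
import numpy as np

def sinkhorn(scores, eps=0.05, niters=3):
    # scores: B x K prototype scores; returns B x K soft assignments
    Q = np.exp(scores / eps).T                 # K x B
    Q /= Q.sum()
    K, B = Q.shape
    r, c = np.ones(K) / K, np.ones(B) / B      # uniform marginals
    for _ in range(niters):
        u = Q.sum(axis=1)
        Q *= (r / u)[:, None]                  # match row marginals (prototypes)
        Q *= (c / Q.sum(axis=0))[None, :]      # match column marginals (samples)
    return (Q / Q.sum(axis=0, keepdims=True)).T  # B x K, each row sums to 1

rng = np.random.default_rng(0)
q = sinkhorn(rng.normal(size=(8, 4)))
print(q.shape)  # (8, 4)
```

Each row of the output is a probability vector over prototypes, which is what the swapped prediction loss consumes.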
Most of our training hyperparameters are directly taken from the SimCLR work Chen et al. (2020a). We train SwAV with stochastic gradient descent using large batches of different instances. We distribute the batches over V100 16GB GPUs, so that each GPU processes an equal share of the batch. The temperature parameter and the Sinkhorn regularization parameter are kept fixed for all runs (they correspond to temp and eps in the pseudo-code above). We use weight decay, the LARS optimizer You et al. (2017), and a learning rate that is linearly ramped up during the first epochs. After warmup, we use the cosine learning rate decay Loshchilov and Hutter (2016); Misra and van der Maaten (2019). To help the very beginning of the optimization, we freeze the prototypes during the first epoch of training. We synchronize batch-normalization layers across GPUs using the optimized implementation with kernels through the CUDA/C-v2 extension from apex (github.com/NVIDIA/apex). We also use the apex library for training with mixed precision Micikevicius et al. (2017). Overall, thanks to these training optimizations (mixed precision, kernel batch-normalization and use of large batches Goyal et al. (2017)), training our best SwAV model takes approximately the time reported in table 4. Similarly to previous works Chen et al. (2020a, b); Li et al. (2020), we use a projection head on top of the convnet features that consists of a multi-layer perceptron (MLP) projecting the convnet output to a lower-dimensional space.
Note that SwAV is more suitable for a multi-node distributed implementation than contrastive approaches such as SimCLR or MoCo. These methods require sharing the feature matrix across all GPUs at every batch, which can become a bottleneck when distributing across many GPUs. In contrast, SwAV only requires sharing matrix normalization statistics (sums of rows and columns) during the Sinkhorn algorithm.
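The following single-process sketch (our names and toy data; on real hardware the two marked reductions would be all_reduce calls) illustrates why only these sums need to be exchanged: running Sinkhorn on per-GPU shards while reducing the scalar total and the per-row sums gives the same result as running it on the concatenated scores.

```python
import numpy as np

def sinkhorn(scores, eps=0.05, niters=3):
    # reference single-matrix version
    Q = np.exp(scores / eps).T
    Q /= Q.sum()
    K, B = Q.shape
    r, c = np.ones(K) / K, np.ones(B) / B
    for _ in range(niters):
        Q *= (r / Q.sum(axis=1))[:, None]
        Q *= (c / Q.sum(axis=0))[None, :]
    return (Q / Q.sum(axis=0, keepdims=True)).T

def sharded_sinkhorn(shards, eps=0.05, niters=3):
    # shards: per-"GPU" score matrices (B_i x K)
    Qs = [np.exp(s / eps).T for s in shards]
    total = sum(Q.sum() for Q in Qs)             # all_reduce: one scalar
    Qs = [Q / total for Q in Qs]
    K = Qs[0].shape[0]
    B = sum(Q.shape[1] for Q in Qs)
    r, c = np.ones(K) / K, 1.0 / B
    for _ in range(niters):
        u = sum(Q.sum(axis=1) for Q in Qs)       # all_reduce: K row sums
        Qs = [Q * (r / u)[:, None] for Q in Qs]
        Qs = [Q * (c / Q.sum(axis=0))[None, :] for Q in Qs]  # columns stay local
    return [(Q / Q.sum(axis=0, keepdims=True)).T for Q in Qs]

rng = np.random.default_rng(0)
a, b = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
full = sinkhorn(np.concatenate([a, b]))
sharded = np.concatenate(sharded_sinkhorn([a, b]))
print(np.allclose(full, sharded))  # True
```

Column normalization is purely local because each sample (column) lives on exactly one GPU; only the row statistics couple the shards.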
A.2 Data augmentation used in SwAV
We obtain two different views from an image by performing crops of random sizes and aspect ratios. Specifically, we use the RandomResizedCrop method from the torchvision.transforms module of PyTorch with the following scaling parameters: s=(0.14, 1). Note that we sample crops in a narrower range of scale than the default RandomResizedCrop parameters. Then, we resize both full-resolution views to the same fixed resolution, unless specified otherwise (we use different resolutions in some of our experiments). Besides, we obtain additional views by cropping small parts of the image. To do so, we use the following RandomResizedCrop parameters: s=(0.05, 0.14). We resize the resulting crops to a smaller resolution. Note that we always deal with resolutions that are divisible by the network stride, to avoid roundings in the ResNet-50 pooling layers. Finally, we apply random horizontal flips, color distortion and Gaussian blur to each resulting crop, exactly following the SimCLR implementation Chen et al. (2020a). An illustration of our multi-crop augmentation strategy can be viewed in fig. 5.
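The scale parameter s controls the area of the sampled crop relative to the image. A minimal sketch of this sampling logic (a simplified re-implementation; torchvision's actual RandomResizedCrop differs in details):

```python
import math, random

def sample_crop(width, height, scale=(0.14, 1.0), ratio=(3 / 4, 4 / 3), rng=random):
    # Draw a target area fraction and aspect ratio, retry if the crop
    # does not fit inside the image (simplified torchvision-style logic).
    area = width * height
    for _ in range(10):
        target_area = rng.uniform(*scale) * area
        log_ratio = (math.log(ratio[0]), math.log(ratio[1]))
        aspect = math.exp(rng.uniform(*log_ratio))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            x = rng.randint(0, width - w)
            y = rng.randint(0, height - h)
            return x, y, w, h
    s = min(width, height)
    return 0, 0, s, s  # fallback: largest square crop

# a "global" view and a smaller view, as in multi-crop
random.seed(0)
big = sample_crop(640, 480, scale=(0.14, 1.0))
small = sample_crop(640, 480, scale=(0.05, 0.14))
```

The two scale ranges guarantee that the additional views cover a strictly smaller portion of the image than the two global views.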
A.3 Implementation details of linear classification on ImageNet with ResNet-50
We obtain top-1 accuracy on ImageNet by training a linear classifier on top of frozen final representations of a ResNet-50 trained with SwAV. This linear layer is trained with cosine learning rate decay and weight decay. We use standard data augmentations, i.e., cropping of random sizes and aspect ratios (default parameters of RandomResizedCrop) and random horizontal flips.
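As an illustration of the protocol (a toy NumPy stand-in, not our actual ImageNet setup), the backbone features are treated as fixed and only a linear softmax classifier is trained on top of them:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for frozen backbone features: 3 classes, linearly separable labels
X = rng.normal(size=(300, 16))
W_true = rng.normal(size=(16, 3))
y = (X @ W_true).argmax(axis=1)

W = np.zeros((16, 3))  # only the linear head is trained; features stay frozen
lr = 0.5
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(3)[y]
    W -= lr * X.T @ (p - onehot) / len(X)  # gradient of the cross-entropy loss

acc = ((X @ W).argmax(axis=1) == y).mean()
```

Because the backbone never receives gradients, this evaluation measures the quality of the frozen representation alone.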
A.4 Implementation details of semi-supervised learning (finetuning with 1% or 10% labels)
We finetune a ResNet-50 pretrained with SwAV using either 1% or 10% of ImageNet labeled images. We use the 1% and 10% splits specified in the official code release of SimCLR. We mostly follow hyperparameters from PCL Li et al. (2020): we use distinct learning rates for the convnet trunk and the final linear layer, in both the 1% and 10% settings, and we decay the learning rates twice during training. We do not apply any weight decay during finetuning.
A.5 Implementation details of transfer learning on downstream tasks
Linear classifiers. We mostly follow PIRL Misra and van der Maaten (2019) for training linear models on top of representations given by a ResNet-50 pretrained with SwAV. On VOC07, all images are resized to 256 pixels along the shorter side before taking a center crop. Then, we train a linear SVM with LIBLINEAR Fan et al. (2008) on top of the corresponding global average pooled final representations. For linear evaluation on the other datasets (Places205 and iNat18), we train linear models with stochastic gradient descent using momentum and weight decay, with a learning rate that is reduced three times at equally spaced intervals. We report the top-1 accuracy computed using the center crop on the validation set.
Object Detection on VOC07+12. We use a Faster R-CNN Ren et al. (2015) model as implemented in Detectron2 Wu et al. (2019) and follow the finetuning protocol from He et al. He et al. (2019a), with the following change to the hyperparameters: the initial learning rate is warmed up with a slope (WARMUP_FACTOR flag in Detectron2) over the first iterations. Other training hyperparameters are kept exactly the same as in He et al. He et al. (2019a), i.e., the batch size is split across GPUs, training runs on VOC07+12 trainval with a stepwise learning rate schedule, SyncBatchNorm is used to finetune BatchNorm parameters, and an extra BatchNorm layer is added after the res5 layer (Res5ROIHeadsExtraNorm head in Detectron2). We report results on the VOC07 test set averaged over independent runs.
Object Detection on COCO. We test the generalization of our ResNet-50 features trained on ImageNet with SwAV by transferring them to object detection on the COCO dataset Lin et al. (2014) with the DETR framework Carion et al. (2020). DETR is a recent object detection framework that relies on a transformer encoder-decoder architecture. It reaches competitive performance with Faster R-CNN while being conceptually simpler and trainable end-to-end. Interestingly, unlike in other frameworks He et al. (2019b), current results with DETR have shown that using a pretrained backbone is crucial to obtaining good results compared to training from scratch. Therefore, we investigate whether we can boost DETR performance by using features pretrained on ImageNet with SwAV instead of standard supervised features. We also evaluate features from MoCov2 Chen et al. (2020b) pretraining. We train DETR with AdamW; we use a separate learning rate for the transformer and apply weight decay. For each method, we select the best learning rate for the backbone among three values, and decay the learning rates once during training.
A.6 Implementation details of training with small batches of 256 images
We start using a queue composed of the feature representations from previous batches after some epochs of training. Indeed, we find that using the queue from the very beginning disturbs the convergence of the model, since the network changes a lot from one iteration to the next during the first epochs. We simulate large batches by storing the feature representations from the last batches. We use weight decay, the LARS optimizer You et al. (2017), and the cosine learning rate decay Loshchilov and Hutter (2016).
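A minimal sketch of such a queue (class name and sizes are illustrative): stored features enlarge the score matrix fed to the Sinkhorn assignment, while gradients still flow only through the current batch.

```python
from collections import deque
import numpy as np

class FeatureQueue:
    # Stores embeddings (no gradients) from the last `n_batches` batches so the
    # Sinkhorn assignment sees more samples than the current small batch.
    def __init__(self, n_batches=3):
        self.batches = deque(maxlen=n_batches)

    def push(self, z):
        self.batches.append(z.copy())

    def augmented_scores(self, z, C):
        # scores for queued features + current batch against prototypes C (D x K)
        zs = list(self.batches) + [z]
        return np.concatenate(zs) @ C

rng = np.random.default_rng(0)
C = rng.normal(size=(32, 10))          # D x K prototypes
queue = FeatureQueue(n_batches=3)
for _ in range(5):                     # older batches are evicted automatically
    queue.push(rng.normal(size=(8, 32)))
scores = queue.augmented_scores(rng.normal(size=(8, 32)), C)
print(scores.shape)  # (32, 10): 3 stored batches of 8, plus the current 8
```

Only the assignments computed from the enlarged score matrix are used; the loss itself is still evaluated on the current batch.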
A.7 Implementation details of ablation studies
In our ablation studies (for example, the ablation tables of the main paper), we choose to closely follow the data augmentation used in the concurrent work SimCLR. This allows a fair comparison and, importantly, isolates the effect of our contributions. In practice, this means that we use the default parameters of the random crop method (RandomResizedCrop), s=(0.08, 1) instead of s=(0.14, 1), when sampling the two large-resolution views.
B Additional Results
B.1 Running times
In table 4, we report compute and GPU memory requirements based on our implementation for different settings. As described in § A.1, we train each method on V100 16GB GPUs, using mixed precision and the apex-optimized version of synchronized batch-normalization layers. We report results with ResNet-50 for all methods. In fig. 6, we report SwAV performance for different training lengths, measured in hours based on our implementation. We observe that after only a few hours of training, SwAV outperforms a fully trained SimCLR by a large margin. If we train SwAV for longer, the performance gap between the two methods increases even more.
| Method | multi-crop | time / 100 epochs | peak memory / GPU |
B.2 Larger architectures
In table 5, we show results when training SwAV on larger architectures. We observe that SwAV benefits from training on larger architectures, and we plan to explore this direction further to boost self-supervised methods.
| Method | Architecture | Param. (M) | Top-1 |
|---|---|---|---|
| Rotation Gidaris et al. (2018) | RevNet50-4w | 86 | 55.4 |
| BigBiGAN Donahue and Simonyan (2019) | RevNet50-4w | 86 | 61.3 |
| AMDIM Bachman et al. (2019) | Custom-RN | 626 | 68.1 |
| CMC Tian et al. (2019) | R50-w2 | 188 | 68.4 |
| MoCo He et al. (2019a) | R50-w4 | 375 | 68.6 |
| CPC v2 Hénaff et al. (2019) | R161 | 305 | 71.5 |
| SimCLR Chen et al. (2020a) | R50-w4 | 375 | 76.8 |
B.3 Transferring unsupervised features to downstream tasks
In table 6, we expand results from the main paper by providing numbers from previously and concurrently published self-supervised methods. In the left panel of table 6, we show performance after training a linear classifier on top of frozen representations on different datasets, while in the right panel we evaluate the features by finetuning a ResNet-50 on object detection with Faster R-CNN Ren et al. (2015) and DETR Carion et al. (2020). Overall, we observe in table 6 that SwAV is the first self-supervised method to outperform the ImageNet supervised backbone on all the considered transfer tasks and datasets. Other self-supervised learners can surpass the supervised counterpart, but only for one type of transfer (e.g., object detection with finetuning for MoCo/PIRL). We will release this model so that other researchers may also benefit by replacing the ImageNet supervised network with ours.
Table 6 reports linear classification (Places205, VOC07, iNat18) and object detection (VOC07+12 with Faster R-CNN, COCO with DETR) for RotNet Gidaris et al. (2020), NPID++ Misra and van der Maaten (2019), MoCo He et al. (2019a), PIRL Misra and van der Maaten (2019), PCL Li et al. (2020), BoWNet Gidaris et al. (2020), SimCLR Chen et al. (2020a) and MoCov2 He et al. (2019a); numeric entries omitted.
B.4 More detection metrics for object detection
In table 7 and table 8, we evaluate the features by finetuning a ResNet-50 on object detection with Faster R-CNN Ren et al. (2015) and DETR Carion et al. (2020), and report more detection metrics than in table 6. We observe in tables 7 and 8 that SwAV outperforms the ImageNet supervised pretrained model on all the detection evaluation metrics. Note that the MoCov2 backbone performs particularly well on the object detection benchmark, and even outperforms SwAV features on some detection metrics. However, as shown in table 6, this backbone is not competitive with the supervised features when evaluating on classification tasks without finetuning.
| Method | AP | AP50 | AP75 |
|---|---|---|---|
| NPID++ Misra and van der Maaten (2019) | 52.3 | 79.1 | 56.9 |
| PIRL Misra and van der Maaten (2019) | 54.0 | 80.7 | 59.7 |
| BoWNet Gidaris et al. (2020) | 55.8 | 81.3 | 61.1 |
| MoCov1 He et al. (2019a) | 55.9 | 81.5 | 62.6 |
| MoCov2 Chen et al. (2020b) | 57.4 | 82.5 | 64.0 |
B.5 Low-shot learning on ImageNet for SwAV pretrained on Instagram data
We now test whether SwAV pretrained on Instagram data can serve as a pretraining method for low-shot learning on ImageNet. In table 8, we report results when finetuning Instagram SwAV features with only a few labels per ImageNet category. We observe that using pretrained features from Instagram considerably improves the performance compared to training from scratch.
| # examples per class | 13 | 128 |
B.6 Image classification with KNN classifiers on ImageNet
We evaluate the quality of our unsupervised features with K-nearest neighbor (KNN) classifiers on ImageNet. We obtain features from the network outputs for center crops of training and test images. We report results with different numbers of nearest neighbors in table 10. We outperform the current state of the art for this evaluation. Interestingly, we also observe that using fewer nearest neighbors actually boosts the performance of our model.
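A sketch of a similarity-weighted KNN evaluation in the style of Wu et al. (2018) (the values of k and the temperature here are illustrative, as is the toy clustered data):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=20, temp=0.07):
    # Weighted KNN on L2-normalized features (cosine similarity).
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sim = test @ train.T                       # cosine similarities
    idx = np.argsort(-sim, axis=1)[:, :k]      # k nearest neighbors
    n_classes = train_labels.max() + 1
    preds = []
    for row, neigh in zip(sim, idx):
        w = np.exp(row[neigh] / temp)          # similarity-weighted votes
        votes = np.zeros(n_classes)
        for j, nb in enumerate(neigh):
            votes[train_labels[nb]] += w[j]
        preds.append(votes.argmax())
    return np.array(preds)

rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 8)) * 3
X = np.concatenate([c + rng.normal(size=(50, 8)) for c in centers])
y = np.repeat(np.arange(3), 50)
acc = (knn_predict(X, y, X, k=5) == y).mean()
```

Unlike linear evaluation, this protocol has no trainable parameters at all, so it probes the raw geometry of the feature space.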
C Ablation Studies on Clustering
C.1 Number of prototypes
In table 11, we evaluate the influence of the number of prototypes used in SwAV. We train ResNet-50 with SwAV in the ablation-study setting and evaluate the performance by training a linear classifier on top of frozen final representations. We observe in table 11 that varying the number of prototypes by an order of magnitude (3k-100k) does not significantly affect the performance on ImageNet. This suggests that the number of prototypes has little influence as long as there are "enough". Throughout the paper, we train SwAV with a fixed number of prototypes. We find that using more prototypes increases the computational time of both the Sinkhorn algorithm and back-propagation, for an overall negligible gain in performance.
| Number of prototypes | 300 | 1000 | 3000 | 10000 | 30000 | 100000 |
C.2 Learning the prototypes
We investigate the impact of learning the prototypes compared to using fixed random prototypes. Assigning features to fixed random targets has been explored in NAT Bojanowski and Joulin (2017). However, unlike SwAV, NAT uses one target per instance in the dataset, and the assignment is hard and computed with the Hungarian algorithm. In table 12 (left), we observe that learning the prototypes improves SwAV, which shows the effect of adapting the prototypes to the dataset distribution.
Overall, these results suggest that our framework learns from a signal different from that of "offline" approaches, which attribute a pseudo-label to each instance while considering the full dataset and then predict these labels (like DeepCluster Caron et al. (2018), for example). Indeed, the prototypes in SwAV are not strongly encouraged to be categorical, and random fixed prototypes work almost as well. Rather, they help contrast different image views without relying on pairwise comparisons with many negative samples. This might explain why the number of prototypes does not impact the performance significantly.
C.3 Hard versus soft assignments
In table 12 (right), we evaluate the impact of using hard assignments instead of the default soft assignments in SwAV. We train the models in the ablation-study setting and evaluate the performance by training a linear classifier on top of frozen final representations. We also report the training losses in fig. 7. We observe that using hard assignments performs worse than using soft assignments. An explanation is that the rounding needed to obtain discrete codes is a more aggressive optimization step than a gradient update. While it makes the model converge rapidly (see fig. 7), it leads to a worse solution.
C.4 Impact of the number of iterations in Sinkhorn algorithm
In table 13, we investigate the impact of the number of normalization steps performed in the Sinkhorn-Knopp algorithm Cuturi (2013) on the performance of SwAV. We observe that a small number of iterations is enough for the model to converge; with fewer iterations, the loss fails to converge. We also observe that using more iterations slightly alters the transfer performance of the model. We conjecture that this happens for the same reason that rounding codes to discrete values deteriorates the quality of our model: the codes converge too rapidly.
D Details on Clustering-Based Methods: DeepCluster-v2 and SeLa-v2
In this section, we provide details on our improved implementations of the clustering-based approaches DeepCluster-v2 and SeLa-v2, compared to their original publications Caron et al. (2018); Asano et al. (2020). These two methods follow the same pipeline: they alternate between pseudo-label generation ("assignment phase") and training the network with a classification loss supervised by these pseudo-labels ("training phase").
D.1 Training phase
During the training phase, both methods minimize the multinomial logistic loss of the pseudo-label classification problem:
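With features $z_n$, prototypes $c_k$, soft pseudo-labels $q_{nk}$ and a temperature $\tau$ (notation assumed here, mirroring the SwAV loss), this cross-entropy objective can be sketched as:

```latex
\min_{\theta}\; -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K} q_{nk}\,\log p_{nk},
\qquad
p_{nk} = \frac{\exp\left(z_n^{\top} c_k / \tau\right)}{\sum_{k'} \exp\left(z_n^{\top} c_{k'} / \tau\right)}
```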
The pseudo-labels are kept fixed during training and updated for the entire dataset once per epoch during the assignment phase.
Training phase in DeepCluster-v2.
In the original DeepCluster work, both the classification head and the convnet weights are trained between two assignments to classify the images into their corresponding pseudo-labels. Intuitively, this classification head is optimized to represent prototypes for the different pseudo-classes. However, since there is no mapping between two consecutive assignments, the classification head learned during one assignment becomes irrelevant for the following one. This head therefore needs to be re-initialized at each new assignment, which considerably disrupts the convnet training. For this reason, we propose to simply use the centroids given by k-means clustering as the classification head (Eq. 10). Overall, during training, DeepCluster-v2 optimizes the following problem with mini-batch SGD:
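Under the same assumed notation, with $c_k$ now fixed to the k-means centroids and $y_n$ the hard pseudo-label of image $n$, this objective can be sketched as:

```latex
\min_{\theta}\; -\frac{1}{N}\sum_{n=1}^{N} \log \frac{\exp\left(z_n^{\top} c_{y_n} / \tau\right)}{\sum_{k} \exp\left(z_n^{\top} c_{k} / \tau\right)}
```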
Training phase in SeLa-v2.
In the SeLa work, the prototypes are learned with stochastic gradient descent during the training phase. Overall, during training, SeLa-v2 optimizes the following problem:
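A plausible form under the same assumed notation, with the prototype matrix $C = [c_1, \dots, c_K]$ now a learned parameter and soft assignments $q_{nk}$:

```latex
\min_{\theta,\,C}\; -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K} q_{nk}\,\log \frac{\exp\left(z_n^{\top} c_{k} / \tau\right)}{\sum_{k'} \exp\left(z_n^{\top} c_{k'} / \tau\right)}
```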
D.2 Assignment phase
The purpose of the assignment phase is to provide assignments for each instance of the dataset. For both methods, this implies having access to feature representations for the entire dataset. Both original works Caron et al. (2018); Asano et al. (2020) regularly perform a forward pass on the whole dataset to obtain these features. With the original implementation, if assignments are updated at each epoch, the assignment phase represents one third of the total training time. Therefore, in order to speed up training, we use the features computed during the previous epoch instead of dedicating forward passes to the assignments. This is similar to the memory bank introduced by Wu et al. Wu et al. (2018), but without momentum.
Assignment phase in DeepCluster-v2.
DeepCluster-v2 uses spherical k-means to get pseudo-labels. In particular, pseudo-labels are obtained by minimizing the following problem:
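Assuming $\ell_2$-normalized features $z_n$ and centroids $c_k$ (columns of $C$), with $y_n$ the assignment of image $n$, the spherical k-means problem can be sketched as:

```latex
\min_{C,\; y_1,\dots,y_N}\; \frac{1}{N}\sum_{n=1}^{N} \left\lVert z_n - c_{y_n} \right\rVert_2^{2}
```

Since all vectors are unit-norm, this is equivalent to maximizing the dot products $\sum_n z_n^{\top} c_{y_n}$.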
where the features and the columns of the centroid matrix are normalized. The original DeepCluster work uses tricks such as cluster re-assignment and balanced batch sampling to avoid trivial solutions, but we found these unnecessary and did not observe collapsing in our trainings. As noted by Asano et al., this is due to the fact that assignment and training are well-separated phases.
Assignment phase in SeLa-v2.
Unlike DeepCluster, SeLa uses the same loss during the training and assignment phases. In particular, we use the Sinkhorn-Knopp algorithm to optimize the following assignment problem (see details and derivations in the original SeLa paper Asano et al. (2020)):
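In our (assumed) notation, with $Z$ the matrix of features, $C$ the prototypes, $H$ the entropy and $\varepsilon$ the regularization parameter, this equipartitioned assignment problem can be sketched as:

```latex
\max_{Q \in \mathcal{Q}}\; \operatorname{Tr}\left(Q^{\top} C^{\top} Z\right) + \varepsilon\, H(Q),
\qquad
\mathcal{Q} = \left\{ Q \in \mathbb{R}_{+}^{K \times N} \;\middle|\; Q\,\mathbf{1}_N = \tfrac{1}{K}\mathbf{1}_K,\; Q^{\top}\mathbf{1}_K = \tfrac{1}{N}\mathbf{1}_N \right\}
```

where $H(Q) = -\sum_{ij} Q_{ij} \log Q_{ij}$, and the constraint set enforces that the clusters partition the dataset equally.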
We use the same hyperparameters as for SwAV to train SeLa-v2 and DeepCluster-v2; these are described in § A. Asano et al. Asano et al. (2020) have shown that multi-clustering boosts the performance of clustering-based approaches, so we use several sets of prototypes when training SeLa-v2 and DeepCluster-v2. Note that, unlike online methods (such as SwAV, SimCLR and MoCo), the clustering approaches SeLa-v2 and DeepCluster-v2 can be implemented with only a single crop per image per batch. The major limitation of SeLa-v2 and DeepCluster-v2 is that these methods are not online, and therefore scaling them to very large datasets is not possible without major adjustments.