Deep Clustering for Unsupervised Learning of Visual Features

07/15/2018 · Mathilde Caron et al.

Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.


1 Introduction

Pre-trained convolutional neural networks, or convnets, have become the building blocks in most computer vision applications [1, 2, 3, 4]. They produce excellent general-purpose features that can be used to improve the generalization of models learned on a limited amount of data [5]. The existence of ImageNet [6], a large fully-supervised dataset, has been fueling advances in pre-training of convnets. However, Stock and Cisse [7] have recently presented empirical evidence that the performance of state-of-the-art classifiers on ImageNet is largely underestimated, and little error is left unresolved. This explains in part why the performance has been saturating despite the numerous novel architectures proposed in recent years [2, 8, 9]. As a matter of fact, ImageNet is relatively small by today's standards; it "only" contains a million images that cover the specific domain of object classification. A natural way to move forward is to build a bigger and more diverse dataset, potentially consisting of billions of images. This, in turn, would require a tremendous amount of manual annotations, despite the expert knowledge in crowdsourcing accumulated by the community over the years [10]. Replacing labels by raw metadata leads to biases in the visual representations with unpredictable consequences [11]. This calls for methods that can be trained on internet-scale datasets with no supervision.

Unsupervised learning has been widely studied in the Machine Learning community [12], and algorithms for clustering, dimensionality reduction or density estimation are regularly used in computer vision applications [13, 14, 15]. For example, the "bag of features" model uses clustering on handcrafted local descriptors to produce good image-level features [16]. A key reason for their success is that they can be applied on any specific domain or dataset, like satellite or medical images, or on images captured with a new modality, like depth, where annotations are not always available in quantity. Several works have shown that it is possible to adapt unsupervised methods based on density estimation or dimensionality reduction to deep models [17, 18], leading to promising all-purpose visual features [19, 20]. Despite the primeval success of clustering approaches in image classification, very few works [21, 22] have been proposed to adapt them to the end-to-end training of convnets, and never at scale. An issue is that clustering methods have been primarily designed for linear models on top of fixed features, and they scarcely work if the features have to be learned simultaneously. For example, learning a convnet with k-means would lead to a trivial solution where the features are zeroed and the clusters are collapsed into a single entity.

Figure 1: Illustration of the proposed method: we iteratively cluster deep features and use the cluster assignments as pseudo-labels to learn the parameters of the convnet.

In this work, we propose a novel clustering approach for the large scale end-to-end training of convnets. We show that it is possible to obtain useful general-purpose visual features with a clustering framework. Our approach, summarized in Figure 1, consists in alternating between clustering of the image descriptors and updating the weights of the convnet by predicting the cluster assignments. For simplicity, we focus our study on k-means, but other clustering approaches can be used, like Power Iteration Clustering (PIC) [23]. The overall pipeline is sufficiently close to the standard supervised training of a convnet to reuse many common tricks [24]. Unlike self-supervised methods [25, 26, 27], clustering has the advantage of requiring little domain knowledge and no specific signal from the inputs [28, 29]. Despite its simplicity, our approach achieves significantly higher performance than previously published unsupervised methods on both ImageNet classification and transfer tasks.

Finally, we probe the robustness of our framework by modifying the experimental protocol, in particular the training set and the convnet architecture. The resulting set of experiments extends the discussion initiated by Doersch et al. [25] on the impact of these choices on the performance of unsupervised methods. We demonstrate that our approach is robust to a change of architecture: replacing an AlexNet by a VGG [30] significantly improves the quality of the features and their subsequent transfer performance. More importantly, we discuss the use of ImageNet as a training set for unsupervised models. While it helps understand the impact of the labels on the performance of a network, ImageNet has a particular image distribution inherited from its use for a fine-grained image classification challenge: it is composed of well-balanced classes and contains a wide variety of dog breeds, for example. We consider, as an alternative, random Flickr images from the YFCC100M dataset of Thomee et al. [31]. We show that our approach maintains state-of-the-art performance when trained on this uncured data distribution. Finally, current benchmarks focus on the capability of unsupervised convnets to capture class-level information. We propose to also evaluate them on image retrieval benchmarks to measure their capability to capture instance-level information.

In this paper, we make the following contributions: (i) a novel unsupervised method for the end-to-end learning of convnets that works with any standard clustering algorithm, like k-means, and requires minimal additional steps; (ii) state-of-the-art performance on many standard transfer tasks used in unsupervised learning; (iii) performance above the previous state of the art when trained on an uncured image distribution; (iv) a discussion about the current evaluation protocol in unsupervised feature learning.

2 Related Work

Unsupervised learning of features. Several approaches related to our work learn deep models with no supervision. Coates and Ng [32] also use k-means to pre-train convnets, but learn each layer sequentially in a bottom-up fashion, while we do it in an end-to-end fashion. Other clustering losses [21, 22, 33, 34] have been considered to jointly learn convnet features and image clusters, but they have never been tested at a scale that allows a thorough study on modern convnet architectures. Of particular interest, Yang et al. [21] iteratively learn convnet features and clusters with a recurrent framework. Their model offers promising performance on small datasets but may be challenging to scale to the number of images required for convnets to be competitive. Closer to our work, Bojanowski and Joulin [19] learn visual features on a large dataset with a loss that attempts to preserve the information flowing through the network [35]. Their approach discriminates between images in a similar way as exemplar SVM [36], while we are simply clustering them.

Self-supervised learning. A popular form of unsupervised learning, called "self-supervised learning" [37], uses pretext tasks to replace the labels annotated by humans with "pseudo-labels" computed directly from the raw input data. For example, Doersch et al. [25] use the prediction of the relative position of patches in an image as a pretext task, while Noroozi and Favaro [26] train a network to spatially rearrange shuffled patches. Another use of spatial cues is the work of Pathak et al. [38] where missing pixels are guessed based on their surrounding. Paulin et al. [39] learn patch-level Convolutional Kernel Networks [40] using an image retrieval setting. Others leverage the temporal signal available in videos by predicting the camera transformation between consecutive frames [41], exploiting the temporal coherence of tracked patches [29] or segmenting video based on motion [27]. Apart from spatial and temporal coherence, many other signals have been explored: image colorization [28, 42], cross-channel prediction [43], sound [44] or instance counting [45]. More recently, several strategies for combining multiple cues have been proposed [46, 47]. Contrary to our work, these approaches are domain dependent, requiring expert knowledge to carefully design a pretext task that may lead to transferable features.

Generative models. Recently, unsupervised learning has been making a lot of progress on image generation. Typically, a parametrized mapping is learned between a predefined random noise and the images, with either an autoencoder [18, 48, 49, 50, 51], a generative adversarial network (GAN) [17] or more directly with a reconstruction loss [52]. Of particular interest, the discriminator of a GAN can produce visual features, but their performance is relatively disappointing [20]. Donahue et al. [20] and Dumoulin et al. [53] have shown that adding an encoder to a GAN produces visual features that are much more competitive.

3 Method

After a short introduction to the supervised learning of convnets, we describe our unsupervised approach as well as the specificities of its optimization.

3.1 Preliminaries

Modern approaches to computer vision, based on statistical learning, require good image featurization. In this context, convnets are a popular choice for mapping raw images to a vector space of fixed dimensionality. When trained on enough data, they consistently achieve the best performance on standard classification benchmarks [8, 54]. We denote by $f_\theta$ the convnet mapping, where $\theta$ is the set of corresponding parameters. We refer to the vector obtained by applying this mapping to an image as a feature or representation. Given a training set $X = \{x_1, x_2, \ldots, x_N\}$ of $N$ images, we want to find a parameter $\theta^*$ such that the mapping $f_{\theta^*}$ produces good general-purpose features.

These parameters are traditionally learned with supervision, i.e. each image $x_n$ is associated with a label $y_n$ in $\{0,1\}^k$. This label represents the image's membership to one of $k$ possible predefined classes. A parametrized classifier $g_W$ predicts the correct labels on top of the features $f_\theta(x_n)$. The parameters $W$ of the classifier and the parameter $\theta$ of the mapping are then jointly learned by optimizing the following problem:

$$\min_{\theta, W} \; \frac{1}{N} \sum_{n=1}^{N} \ell\big(g_W(f_\theta(x_n)), y_n\big) \qquad (1)$$

where $\ell$ is the multinomial logistic loss, also known as the negative log-softmax function. This cost function is minimized using mini-batch stochastic gradient descent [55] and backpropagation to compute the gradient [56].
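To make the optimization in Eq. (1) concrete, the following PyTorch snippet sketches one SGD step; the toy two-layer convnet and all sizes are illustrative stand-ins, not the architectures used in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-ins for f_theta (feature extractor) and g_W (classifier).
f_theta = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
g_W = nn.Linear(16, 10)  # k = 10 predefined classes

criterion = nn.CrossEntropyLoss()  # multinomial logistic loss of Eq. (1)
optimizer = torch.optim.SGD(list(f_theta.parameters()) + list(g_W.parameters()),
                            lr=0.01, momentum=0.9)

x = torch.randn(8, 3, 32, 32)    # a mini-batch of images x_n
y = torch.randint(0, 10, (8,))   # their labels y_n

loss = criterion(g_W(f_theta(x)), y)  # (1/N) sum_n l(g_W(f_theta(x_n)), y_n)
optimizer.zero_grad()
loss.backward()   # gradients via backpropagation
optimizer.step()  # one mini-batch SGD update of theta and W
```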

3.2 Unsupervised learning by clustering

When $\theta$ is sampled from a Gaussian distribution, without any learning, $f_\theta$ does not produce good features. However, the performance of such random features on standard transfer tasks is far above the chance level. For example, a multilayer perceptron classifier on top of the last convolutional layer of a random AlexNet achieves 12% accuracy on ImageNet, while chance is at 0.1% [26]. The good performance of random convnets is intimately tied to their convolutional structure, which gives a strong prior on the input signal. The idea of this work is to exploit this weak signal to bootstrap the discriminative power of a convnet. We cluster the output of the convnet and use the subsequent cluster assignments as "pseudo-labels" to optimize Eq. (1). This deep clustering (DeepCluster) approach iteratively learns the features and groups them.

Clustering has been widely studied and many approaches have been developed for a variety of circumstances. In the absence of points of comparison, we focus on a standard clustering algorithm, k-means. Preliminary results with other clustering algorithms indicate that this choice is not crucial. k-means takes a set of vectors as input, in our case the features $f_\theta(x_n)$ produced by the convnet, and clusters them into $k$ distinct groups based on a geometric criterion. More precisely, it jointly learns a $d \times k$ centroid matrix $C$ and the cluster assignments $y_n$ of each image $n$ by solving the following problem:

$$\min_{C \in \mathbb{R}^{d \times k}} \; \frac{1}{N} \sum_{n=1}^{N} \; \min_{y_n \in \{0,1\}^k} \big\| f_\theta(x_n) - C y_n \big\|_2^2 \quad \text{such that} \quad y_n^\top \mathbf{1}_k = 1 \qquad (2)$$

Solving this problem provides a set of optimal assignments $(y_n^*)_{n \leq N}$ and a centroid matrix $C^*$. These assignments are then used as pseudo-labels; we make no use of the centroid matrix.

Overall, DeepCluster alternates between clustering the features to produce pseudo-labels using Eq. (2) and updating the parameters of the convnet by predicting these pseudo-labels using Eq. (1). This type of alternating procedure is prone to trivial solutions; we describe how to avoid such degenerate solutions in the next section.
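The alternation can be written down in a few lines. The sketch below is a simplified skeleton, not the paper's implementation: it keeps the whole dataset in memory, uses scikit-learn's k-means instead of the faiss implementation of Johnson et al. [60], and reinitializes the classification head at each epoch because cluster ids are arbitrary and change between reassignments.

```python
import torch
from sklearn.cluster import KMeans

def deep_cluster_epoch(f_theta, g_W, images, k, optimizer, criterion):
    """One DeepCluster iteration: cluster the features (Eq. 2), then
    train the network on the resulting pseudo-labels (Eq. 1).
    Assumes g_W already has k outputs."""
    # Step 1: compute features for the whole dataset and cluster them.
    with torch.no_grad():
        feats = f_theta(images).numpy()
    labels = KMeans(n_clusters=k).fit_predict(feats)
    pseudo_labels = torch.from_numpy(labels).long()

    # Step 2: reset the classification head; pseudo-label ids carry no
    # meaning across epochs.
    torch.nn.init.normal_(g_W.weight, std=0.01)
    torch.nn.init.zeros_(g_W.bias)

    # Step 3: update theta and W by predicting the cluster assignments.
    for i in range(0, len(images), 64):
        x, y = images[i:i + 64], pseudo_labels[i:i + 64]
        loss = criterion(g_W(f_theta(x)), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```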

3.3 Avoiding trivial solutions

The existence of trivial solutions is not specific to the unsupervised training of neural networks, but to any method that jointly learns a discriminative classifier and the labels. Discriminative clustering suffers from this issue even when applied to linear models [57]. Solutions are typically based on constraining or penalizing the minimal number of points per cluster [58, 59]. These terms are computed over the whole dataset, which is not applicable to the training of convnets on large scale datasets. In this section, we briefly describe the causes of these trivial solutions and give simple and scalable workarounds.

Empty clusters. A discriminative model learns decision boundaries between classes. An optimal decision boundary is to assign all of the inputs to a single cluster [57]. This issue is caused by the absence of mechanisms preventing empty clusters and arises in linear models as much as in convnets. A common trick used in feature quantization [60] consists in automatically reassigning empty clusters during the k-means optimization. More precisely, when a cluster becomes empty, we randomly select a non-empty cluster and use its centroid with a small random perturbation as the new centroid for the empty cluster. We then reassign the points belonging to the non-empty cluster to the two resulting clusters.
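A minimal NumPy sketch of this reassignment trick, assuming `centroids` is a (k, d) array and `assignments` holds one cluster id per point; after the perturbation, the next k-means assignment step naturally splits the donor cluster's points between the two centroids.

```python
import numpy as np

def reassign_empty_clusters(centroids, assignments, eps=1e-4, rng=None):
    """Replace each empty centroid with a slightly perturbed copy of a
    randomly chosen non-empty centroid."""
    rng = np.random.default_rng() if rng is None else rng
    k, d = centroids.shape
    sizes = np.bincount(assignments, minlength=k)
    for j in np.flatnonzero(sizes == 0):
        donor = rng.choice(np.flatnonzero(sizes > 0))
        centroids[j] = centroids[donor] + eps * rng.standard_normal(d)
    return centroids
```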

Trivial parametrization. If the vast majority of images is assigned to a few clusters, the parameters $\theta$ will exclusively discriminate between them. In the most dramatic scenario where all but one cluster are singletons, minimizing Eq. (1) leads to a trivial parametrization where the convnet predicts the same output regardless of the input. This issue also arises in supervised classification when the number of images per class is highly unbalanced. For example, metadata like hashtags exhibit a Zipf distribution, with a few labels dominating the whole distribution [61]. A strategy to circumvent this issue is to sample images based on a uniform distribution over the classes, or pseudo-labels. This is equivalent to weighting the contribution of an input to the loss function in Eq. (1) by the inverse of the size of its assigned cluster, as in the sketch below.
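A possible implementation of this sampling scheme with PyTorch's WeightedRandomSampler; the helper name is ours. The returned sampler can be passed to a DataLoader through its `sampler` argument.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def uniform_over_clusters_sampler(pseudo_labels, num_samples):
    """Draw each image with probability inversely proportional to the
    size of its assigned cluster, so every pseudo-label is equally
    likely and the loss in Eq. (1) is reweighted by 1/|cluster|."""
    sizes = np.bincount(pseudo_labels)
    weights = 1.0 / sizes[pseudo_labels]  # one weight per image
    return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                 num_samples, replacement=True)
```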

3.4 Implementation details

Convnet architectures. For comparison with previous works, we use a standard AlexNet [54] architecture. It consists of five convolutional layers with 96, 256, 384, 384 and 256 filters, and of three fully connected layers. We remove the Local Response Normalization layers and use batch normalization [24]. We also consider a VGG-16 [30] architecture with batch normalization. Unsupervised methods often do not work directly on color, and different strategies have been considered as alternatives [25, 26]. We apply a fixed linear transformation based on Sobel filters to remove color and increase local contrast [19, 39].

Training data. We train DeepCluster on ImageNet [6] unless mentioned otherwise. It contains 1.3M images uniformly distributed into 1000 classes.

Optimization. We cluster the features of the central crop of the images and perform data augmentation (random horizontal flips and crops of random sizes and aspect ratios) when training the network. This enforces invariance to data augmentation, which is useful for feature learning [33]. The network is trained with dropout [62], a constant step size, an $\ell_2$ penalization of the weights $\theta$ and a momentum of 0.9. Each mini-batch contains 256 images. For the clustering, features are PCA-reduced to 256 dimensions, whitened and $\ell_2$-normalized. We use the k-means implementation of Johnson et al. [60]. Note that running k-means takes a third of the total time because a forward pass on the full dataset is needed. One could reassign the clusters every $n$ epochs, but we found that our setup on ImageNet (updating the clustering every epoch) was nearly optimal. On Flickr, the concept of epoch disappears: choosing the tradeoff between the parameter updates and the cluster reassignments is more subtle. We thus kept almost the same setup as on ImageNet. We train the models for 500 epochs, which takes 12 days on a Pascal P100 GPU for AlexNet.
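A compact sketch of this feature pipeline, with scikit-learn standing in for the faiss k-means of Johnson et al. [60]; the dimensions follow the text above, but everything else is illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

def cluster_features(feats, k=10000, dim=256):
    """PCA-reduce, whiten and l2-normalize the features, then run
    k-means and return one pseudo-label per image."""
    feats = PCA(n_components=dim, whiten=True).fit_transform(feats)
    feats = normalize(feats, norm='l2')
    return KMeans(n_clusters=k, n_init=1).fit_predict(feats)
```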

Hyperparameter selection. We select hyperparameters on a down-stream task, i.e., object classification on the validation set of Pascal VOC with no fine-tuning. We use the publicly available code of Krähenbühl (https://github.com/philkr/voc-classification).

4 Experiments

In a preliminary set of experiments, we study the behavior of DeepCluster during training. We then qualitatively assess the filters learned with DeepCluster before comparing our approach to previous state-of-the-art models on standard benchmarks.

Figure 2: Preliminary studies. (a) Evolution of the clustering quality along training epochs; (b) evolution of cluster reassignments at each clustering step; (c) validation mAP classification performance for various choices of k.
Figure 3: Filters from the first layer of an AlexNet trained on unsupervised ImageNet on raw RGB input (left) or after Sobel filtering (right).

4.1 Preliminary study

We measure the information shared between two different assignments $A$ and $B$ of the same data by the Normalized Mutual Information (NMI), defined as:

$$\mathrm{NMI}(A;B) = \frac{I(A;B)}{\sqrt{H(A)\,H(B)}}$$

where $I$ denotes the mutual information and $H$ the entropy. This measure can be applied to any assignment coming from the clusters or the true labels. If the two assignments $A$ and $B$ are independent, the NMI is equal to 0. If one of them is deterministically predictable from the other, the NMI is equal to 1.
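For reference, the NMI is available in scikit-learn; `average_method='geometric'` matches the square-root normalization above. The tiny example below compares two assignments that are identical up to a permutation of cluster ids, so the score is 1.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

A = np.array([0, 0, 1, 1, 2, 2])  # one assignment of six points
B = np.array([2, 2, 0, 0, 1, 1])  # same partition, relabeled clusters
print(normalized_mutual_info_score(A, B, average_method='geometric'))  # 1.0
```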

Relation between clusters and labels. Figure 2(a) shows the evolution of the NMI between the cluster assignments and the ImageNet labels during training. It measures the capability of the model to predict class-level information. Note that we only use this measure for this analysis and not in any model selection process. The dependence between the clusters and the labels increases over time, showing that our features progressively capture information related to object classes.

Number of reassignments between epochs. At each epoch, we reassign the images to a new set of clusters, with no guarantee of stability. Measuring the NMI between the clusters at epochs $t-1$ and $t$ gives an insight on the actual stability of our model. Figure 2(b) shows the evolution of this measure during training. The NMI is increasing, meaning that there are fewer and fewer reassignments and the clusters are stabilizing over time. However, the NMI saturates below 0.8, meaning that a significant fraction of images are regularly reassigned between epochs. In practice, this has no impact on the training and the models do not diverge.

Choosing the number of clusters. We measure the impact of the number of clusters $k$ used in k-means on the quality of the model. We report the same down-stream task as in the hyperparameter selection process, i.e. mAP on the Pascal VOC 2007 classification validation set. We vary $k$ on a logarithmic scale and report results after 300 epochs in Figure 2(c). The performance after the same number of epochs for every $k$ may not be directly comparable, but it reflects the hyperparameter selection process used in this work. The best performance is obtained with $k = 10{,}000$. Given that we train our model on ImageNet, one would expect $k = 1{,}000$ to yield the best results, but apparently some amount of over-segmentation is beneficial.

4.2 Visualizations

First layer filters. Figure 3 shows the filters from the first layer of an AlexNet trained with DeepCluster on raw RGB images and on images preprocessed with Sobel filtering. The difficulty of learning convnets on raw images has been noted before [19, 25, 26, 39]. As shown in the left panel of Fig. 3, most filters capture only color information, which typically plays a minor role for object classification [63]. Filters obtained with Sobel preprocessing act like edge detectors.

Figure 4: Filter visualization and top 9 activated images from a subset of 1 million images from YFCC100M for target filters in the layers conv1, conv3 and conv5 of an AlexNet trained with DeepCluster on ImageNet. The filter visualization is obtained by learning an input image that maximizes the response to a target filter [64].
Figure 5: Top 9 activated images from a random subset of 10 million images from YFCC100M for target filters in the last convolutional layer. The top row corresponds to filters sensitive to activations by images containing objects. The bottom row exhibits filters more sensitive to stylistic effects. For instance, two of them seem to be excited by background blur and depth-of-field effects respectively.

Probing deeper layers. We assess the quality of a target filter by learning an input image that maximizes its activation [65, 66]. We follow the process described by Yosinski et al. [64] with a cross-entropy function between the target filter and the other filters of the same layer. Figure 4 shows these synthetic images as well as the top 9 activated images from a subset of 1 million images from YFCC100M. As expected, deeper layers in the network seem to capture larger textural structures. However, some filters in the last convolutional layers seem to be simply replicating the texture already captured in previous layers, as shown in the second row of Fig. 5. This result corroborates the observation by Zhang et al. [43] that features from conv3 or conv4 are more discriminative than those from conv5.

Finally, Figure 5 shows the top activated images of some conv5 filters that seem to be semantically coherent. The filters on the top row contain information about structures that highly correlate with object classes. The filters on the bottom row seem to trigger on style, like drawings or abstract shapes.
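A bare-bones PyTorch sketch of this activation-maximization procedure; it uses plain gradient ascent on the mean response of one filter and omits the cross-entropy term of Yosinski et al. [64] and any image regularizer, so it is illustrative only.

```python
import torch

def maximize_filter_response(model, layer, filt, steps=200, lr=0.1):
    """Learn an input image that maximizes the mean activation of
    filter `filt` in `layer` of `model`."""
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    acts = {}
    layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    for _ in range(steps):
        model(x)                             # fills acts['out'] via the hook
        loss = -acts['out'][0, filt].mean()  # ascend on the filter response
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```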

4.3 Linear classification on activations

Following Zhang et al. [43], we train a linear classifier on top of different frozen convolutional layers. This layer-by-layer comparison with supervised features exhibits where a convnet starts to be task specific, i.e. specialized in object classification. We report the results of this experiment on ImageNet and the Places dataset [67] in Table 1. We choose the hyperparameters by cross-validation on the training set. On ImageNet, DeepCluster outperforms the state of the art from conv3 to conv5 layers. The largest improvement is observed in the conv4 layer, while the conv1 layer performs poorly, probably because the Sobel filtering discards color. Consistently with the filter visualizations of Sec. 4.2, conv3 works better than conv5. Finally, the difference in performance between DeepCluster and a supervised AlexNet grows significantly in the higher layers: at layers conv2-conv3 the difference is small, but it rises sharply at conv5, marking where the AlexNet probably stores most of the class-level information. In the supplementary material, we also report the accuracy if an MLP is trained on the last layer; DeepCluster outperforms the state of the art by 8%.

ImageNet Places
Method conv1 conv2 conv3 conv4 conv5 conv1 conv2 conv3 conv4 conv5
Places labels
ImageNet labels
Random
Pathak et al[38]
Doersch et al[25]
Zhang et al[28]
Donahue et al[20]
Noroozi and Favaro [26] 18.2
Noroozi et al[45] 30.6 23.3 33.9
Zhang et al[43]
DeepCluster 38.2 39.8 36.1 37.0 37.5 33.1
Table 1: Linear classification on ImageNet and Places using activations from the convolutional layers of an AlexNet as features. We report classification accuracy on the central crop. Numbers for other methods are from Zhang et al[43].

The same experiment on the Places dataset provides some interesting insights: like DeepCluster, a supervised model trained on ImageNet suffers from a decrease in performance for the higher layers (conv4 versus conv5). Moreover, DeepCluster yields conv3-4 features that are comparable to those trained with ImageNet labels. This suggests that when the target task is sufficiently far from the domain covered by ImageNet, labels are less important.
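A minimal sketch of such a linear probe, assuming a callable `features` that returns the frozen activations of the chosen convolutional layer; only the linear layer is trained.

```python
import torch
import torch.nn as nn

def train_linear_probe(features, feat_dim, n_classes, loader, epochs=10):
    """Train a linear classifier on top of frozen convnet activations."""
    clf = nn.Linear(feat_dim, n_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():              # the convnet stays frozen
                acts = features(x).flatten(1)  # (batch, feat_dim)
            loss = loss_fn(clf(acts), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```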

4.4 Pascal VOC 2007

Finally, we perform a quantitative evaluation of DeepCluster on image classification, object detection and semantic segmentation on Pascal VOC. The relatively small size of the training sets on Pascal VOC (a few thousand images) makes this setup closer to a "real-world" application, where a model trained with heavy computational resources is adapted to a task or a dataset with a small number of instances. Detection results are obtained using fast-rcnn (https://github.com/rbgirshick/py-faster-rcnn); segmentation results are obtained using the code of Shelhamer et al. (https://github.com/shelhamer/fcn.berkeleyvision.org). For classification and detection, we report the performance on the test set of Pascal VOC 2007 and choose our hyperparameters on the validation set. For semantic segmentation, following the related work, we report the performance on the validation set of Pascal VOC 2012.

Classification Detection Segmentation
Method fc6-8 all fc6-8 all fc6-8 all
ImageNet labels
Random-rgb
Random-sobel
Pathak et al[38]
Donahue et al[20]
Pathak et al[27]
Owens et al[44]
Wang and Gupta [29]
Doersch et al[25]
Bojanowski and Joulin [19]
Zhang et al[28]
Zhang et al[43]
Noroozi and Favaro [26]
Noroozi et al[45]
DeepCluster 70.4 73.7 51.4 55.4 43.2 45.1
Table 2: Comparison of the proposed approach to state-of-the-art unsupervised feature learning on classification, detection and segmentation on Pascal VOC. † indicates the use of the data-dependent initialization of Krähenbühl et al. [68]. Numbers for other methods produced by us are marked with a *.

Table 2 summarizes the comparisons of DeepCluster with other feature-learning approaches on the three tasks. As in the previous experiments, we outperform previous unsupervised methods on all three tasks, in every setting. The improvement with fine-tuning over the state of the art is the largest on semantic segmentation. On detection, DeepCluster performs only slightly better than previously published methods. Interestingly, a fine-tuned random network performs comparably to many unsupervised methods, but performs poorly if only fc6-8 are learned. For this reason, we also report detection and segmentation with fc6-8 for DeepCluster and a few baselines. These tasks are closer to a real application where fine-tuning is not possible. It is in this setting that the gap between our approach and the state of the art is the largest.

5 Discussion

The current standard for the evaluation of an unsupervised method involves the use of an AlexNet architecture trained on ImageNet and tested on class-level tasks. To understand and measure the various biases introduced by this pipeline on DeepCluster, we consider a different training set, a different architecture and an instance-level recognition task.

Classification Detection Segmentation
Method Training set fc6-8 all fc6-8 all fc6-8 all
Best competitor ImageNet
DeepCluster ImageNet
DeepCluster YFCC100M
Table 3: Impact of the training set on the performance of DeepCluster, measured on the Pascal VOC transfer tasks as described in Sec. 4.4. We compare ImageNet with a subset of 1M images from YFCC100M [31]. Regardless of the training set, DeepCluster outperforms the best published numbers on most tasks. Numbers for other methods produced by us are marked with a *.

5.1 ImageNet versus YFCC100M

ImageNet is a dataset designed for a fine-grained object classification challenge [69]. It is object oriented, manually annotated and organised into well-balanced object categories. By design, DeepCluster favors balanced clusters and, as discussed above, our number of clusters is somewhat comparable with the number of labels in ImageNet. This may have given an unfair advantage to DeepCluster over other unsupervised approaches when trained on ImageNet. To measure the impact of this effect, we consider a subset of 1M randomly-selected images from the YFCC100M dataset [31] for the pre-training. Statistics on the hashtags used in YFCC100M suggest that the underlying "object classes" are severely unbalanced [61], leading to a data distribution less favorable to DeepCluster.

Table 3 shows the difference in performance on Pascal VOC of DeepCluster pre-trained on YFCC100M compared to ImageNet. As noted by Doersch et al. [25], this dataset is not object oriented, hence the performance is expected to drop by a few percent. However, even when trained on uncured Flickr images, DeepCluster outperforms the current state of the art by a significant margin on most tasks, most notably on classification and semantic segmentation. We report the rest of the results in the supplementary material with similar conclusions. This experiment validates that DeepCluster is robust to a change of image distribution, leading to state-of-the-art general-purpose visual features even if this distribution is not favorable to its design.

5.2 AlexNet versus VGG

In the supervised setting, deeper architectures like VGG or ResNet [8] have a much higher accuracy on ImageNet than AlexNet. We should expect the same improvement if these architectures are used with an unsupervised approach. Table 4 compares a VGG-16 and an AlexNet trained with DeepCluster on ImageNet and tested on the Pascal VOC 2007 object detection task with fine-tuning. We also report the numbers obtained with other unsupervised approaches [25, 46]. Regardless of the approach, a deeper architecture leads to a significant improvement in performance on the target task. Training the VGG-16 with DeepCluster gives a performance above the state of the art, bringing us within a few percent of the supervised topline. Note that the difference between unsupervised and supervised approaches remains in the same ballpark for both architectures. Finally, the gap with a random baseline grows for larger architectures, justifying the relevance of unsupervised pre-training for complex architectures when little supervised data is available.

Method AlexNet VGG-16
ImageNet labels
Random
Doersch et al[25]
Wang and Gupta [29]
Wang et al[46]
DeepCluster 55.4 65.9
Table 4: Pascal VOC 2007 object detection with AlexNet and VGG-16. Numbers are taken from Wang et al. [46].
Method OxfordK ParisK
ImageNet labels
Random
Doersch et al[25]
Wang et al[46]
DeepCluster 61.0 72.0
Table 5: mAP on instance-level image retrieval on the Oxford and Paris datasets with a VGG-16. We apply R-MAC with a resolution of 1024 pixels and 3 grid levels [70].

5.3 Evaluation on instance retrieval

The previous benchmarks measure the capability of an unsupervised network to capture class-level information. They do not evaluate whether it can differentiate images at the instance level. To that end, we propose image retrieval as a down-stream task. We follow the experimental protocol of Tolias et al. [70] on two datasets, i.e., Oxford Buildings [71] and Paris [72]. Table 5 reports the performance of a VGG-16 trained with different approaches obtained with Sobel filtering, except for Doersch et al. [25] and Wang et al. [46]. This preprocessing improves the mAP of a supervised VGG-16 on the Oxford dataset by several points, but not on Paris. This may translate into a similar advantage for DeepCluster, but it does not account for the full difference in performance. Interestingly, random convnets perform particularly poorly on this task compared to pre-trained models. This suggests that image retrieval is a task where pre-training is essential, and studying it as a down-stream task could give further insights about the quality of the features produced by unsupervised approaches.

6 Conclusion

In this paper, we propose a scalable clustering approach for the unsupervised learning of convnets. It iterates between clustering the features produced by the convnet with k-means and updating its weights by predicting the cluster assignments as pseudo-labels in a discriminative loss. When trained on large datasets like ImageNet or YFCC100M, it achieves performance significantly better than the previous state of the art on every standard transfer task. Our approach makes few assumptions about the inputs and does not require much domain-specific knowledge, making it a good candidate to learn deep representations specific to domains where annotations are scarce.

Acknowledgement.

We thank Alexandre Sablayrolles and the rest of the FAIR team for their feedback and fruitful discussion around this paper. We would like to particularly thank Ishan Misra for spotting an error in our evaluation setting of Table 1.

References

7 Additional results

7.1 Classification on ImageNet

Noroozi and Favaro [26] suggest evaluating networks trained in an unsupervised way by freezing the convolutional layers, retraining the fully connected layers on ImageNet with labels, and reporting accuracy on the validation set. This experiment follows the observation of Yosinski et al. [73] that general-purpose features appear in the convolutional layers of an AlexNet. We report a comparison of DeepCluster to other AlexNet networks trained with no supervision, as well as random and supervised baselines, in Table 6.

Method Pre-trained dataset Acc@1
Supervised ImageNet 59.7
Supervised Sobel ImageNet 57.8
Random - 12.0
Wang et al[29] YouTube100K [74] 29.8
Doersch et al[25] ImageNet 30.4
Donahue et al[20] ImageNet 32.2
Noroozi and Favaro [26] ImageNet 34.6
Zhang et al[28] ImageNet 35.2
Bojanowski and Joulin [19] ImageNet 36.0
DeepCluster ImageNet 44.0
DeepCluster YFCC100M 39.6
Table 6: Comparison of DeepCluster to AlexNet features pre-trained supervisedly and unsupervisedly on different datasets. A full multi-layer perceptron is retrained on top of the frozen pre-trained features. We report classification accuracy (acc@1). Except for Noroozi and Favaro [26], all the numbers are taken from Bojanowski and Joulin [19].

DeepCluster outperforms state-of-the-art unsupervised methods by a significant margin, achieving 44.0% accuracy, 8 points above the previous best performing method [19]. This means that DeepCluster halves the gap with networks trained in a supervised setting.

7.2 Stopping criterion

We monitor how the features learned with DeepCluster evolve along the training epochs on a down-stream task: object classification on the validation set of Pascal VOC with no fine-tuning. We use this measure to select the hyperparameters of our model as well as to check when the features stop improving. In Figure 6, we show the evolution of both the classification accuracy on this task and a measure of the clustering quality (NMI between the cluster assignments and the true labels) throughout the training. Unsurprisingly, we notice that the quality of the clustering and that of the features follow the same dynamic. The performance saturates towards the end of training.

Figure 6: In red: validation mAP Pascal VOC classification performance. In blue: evolution of the clustering quality.

8 Further discussion

In this section, we discuss some technical choices and variants of DeepCluster more specifically.

8.1 Alternative clustering algorithm

Graph clustering. We consider Power Iteration Clustering (PIC) [23] as an alternative clustering method. It has been shown to yield good performance for large scale collections [75].

Since PIC is a graph clustering approach, we generate a nearest-neighbor graph by connecting all images to their 5 nearest neighbors in the Euclidean space of image descriptors. We denote by $f_\theta(x_i)$ the output of the network with parameters $\theta$ applied to image $x_i$. We use the sparse graph matrix $W$. We set the diagonal of $W$ to 0, and non-zero entries are defined as

$$w_{ij} = \exp\left(-\frac{\|f_\theta(x_i) - f_\theta(x_j)\|_2^2}{\sigma^2}\right),$$

with $\sigma$ a bandwidth parameter. In this work, we use a variant of PIC [75, 76] that does:

  1. Initialize $v \leftarrow \mathbf{1}_N / N$;

  2. Iterate $v \leftarrow N_1(\alpha \mathbf{1}_N + W v)$, where $\alpha$ is a regularization parameter and $N_1$ the L1-normalization function;

  3. Let $G'$ be the directed unweighted subgraph of $G$ where, from each node $i$, we keep only the edge towards the neighbor $j$ maximizing $w_{ij}(v_j - v_i)$.

    If $v_i$ is a local maximum (i.e. $v_j \leq v_i$ for all neighbors $j$), then no edge starts from it. The clusters are given by the connected components of $G'$. Each cluster has one local maximum.

An advantage of PIC clustering is that it does not require setting the number of clusters beforehand. However, the bandwidth parameter $\sigma$ influences the number of clusters: when $\sigma$ is larger, the edge weights become more uniform and the number of clusters decreases; when it is smaller, the number of clusters increases. In the following, we keep $\sigma$ fixed. A sketch of the whole procedure follows.
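The sketch below is a self-contained NumPy rendition of this PIC variant under the assumptions stated above; the values of sigma and alpha are illustrative, and the dense pairwise-distance matrix is only practical for small collections.

```python
import numpy as np

def pic_clusters(feats, sigma=0.2, alpha=1e-3, n_neighbors=5, iters=100):
    """Graph clustering: 5-NN graph with Gaussian edge weights, an
    L1-normalized power iteration, then one ascending edge per node;
    connected components of that subgraph are the clusters."""
    n = feats.shape[0]
    d2 = ((feats[:, None] - feats[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    W = np.zeros((n, n))
    for i in range(n):                        # connect each image to its
        nn = np.argsort(d2[i])[:n_neighbors]  # nearest neighbors
        W[i, nn] = np.exp(-d2[i, nn] / sigma ** 2)
    W = np.maximum(W, W.T)                    # symmetrize the graph

    v = np.full(n, 1.0 / n)
    for _ in range(iters):
        v = alpha + W @ v                     # regularized diffusion step
        v /= np.abs(v).sum()                  # L1 normalization

    parent = np.arange(n)                     # local maxima point to themselves
    for i in range(n):
        gains = np.where(W[i] > 0, W[i] * (v - v[i]), -np.inf)
        j = int(np.argmax(gains))
        if gains[j] > 0:                      # keep only the best ascending edge
            parent[i] = j
    labels = np.empty(n, dtype=int)
    for i in range(n):                        # follow edges up to the maximum
        r = i
        while parent[r] != r:
            r = parent[r]
        labels[i] = r
    return labels
```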

As our PIC implementation relies on a graph of nearest neighbors, we show in Figure 7 some query images and their nearest neighbors in the feature space, for a network trained with the PIC version of DeepCluster and for a random baseline. A randomly initialized network performs quite well in some cases (the sunset query for example), where the image has a simple low-level structure. The top row in Fig. 7 represents a query for which the performance of the random network is quite good, whereas the bottom query is too complex for the random network to retrieve good matches. Moreover, we notice that the nearest-neighbor matches are largely improved after the network has been trained with DeepCluster.

Figure 7: Images and their nearest neighbors in a subset of Flickr in the feature space. The query images are shown in the left column. The following columns correspond to a randomly initialized network and the last to the same network after training with PIC DeepCluster.
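The retrieval behind Figure 7 amounts to a nearest-neighbor search in feature space; a minimal NumPy sketch (the l2 normalization, which makes the ranking a cosine similarity, is our simplification):

```python
import numpy as np

def nearest_neighbors(query_feat, feats, topk=5):
    """Return the indices of the images closest to the query."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    return np.argsort(-feats @ q)[:topk]  # most similar first
```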

Comparison with k-means. First, we give an insight into the distribution of the images in the clusters. We show in Figure 8 the sizes of the clusters produced by the k-means and PIC versions of DeepCluster at the last epoch of training (this distribution is stable along the epochs). We observe that k-means produces more balanced clusters than PIC. Indeed, for PIC, a large fraction of the clusters are very small while the biggest cluster contains a substantial share of the examples. In this situation of very unbalanced clusters, it is important in our method to train the convnet by sampling images based on a uniform distribution over the clusters, to prevent the biggest cluster from dominating the training.

Figure 8: Sizes of the clusters produced by the k-means and PIC versions of DeepCluster at the last epoch of training.

We report in Table 7 the results for the different Pascal VOC transfer tasks with a model trained with the PIC version of DeepCluster. For this set of transfer tasks, the models trained with the k-means and PIC versions of DeepCluster perform in comparable ranges.

Clustering algorithm Classification Detection Segmentation
fc6-8 all fc6-8 all fc6-8 all
DeepCluster k-means 72.0 73.7 51.4 55.4 43.2 45.1
DeepCluster PIC 71.0 73.0 53.6 54.4 42.4 43.8
Table 7: Evaluation of PIC versus k-means on Pascal VOC transfer tasks.

8.2 Variants of DeepCluster

In Table 8, we report the results of models trained with different variants of DeepCluster: a different training set, an alternative clustering method or without input preprocessing. In particular, we notice that the performance of DeepCluster on raw RGB images degrades significantly.

ImageNet Places
Method conv1 conv2 conv3 conv4 conv5 conv1 conv2 conv3 conv4 conv5
Places labels
ImageNet labels
Random
Best competitor ImageNet 18.2 33.9
DeepCluster 41.0 38.2 39.2 39.8
DeepCluster YFCC100M
DeepCluster PIC 40.5 35.9
DeepCluster RGB 32.5 23.8
Table 8: Impact of different variants of our method on the performance of DeepCluster. We report classification accuracy of linear classifiers on ImageNet and Places using activations from the convolutional layers of an AlexNet as features. We compare the standard version of DeepCluster to different settings: YFCC100M as pre-training set; PIC as clustering method; raw RGB inputs.

In Table 9, we compare the performance of DeepCluster depending on the clustering method and the pre-training set. We evaluate this performance on three different classification tasks: with fine-tuning on Pascal VOC, without fine-tuning, and by retraining the full MLP on ImageNet. We report the classification accuracy on the validation set. Overall, we notice that the regular version of DeepCluster (with k-means) yields better results than the PIC variant on both ImageNet and the uncured dataset YFCC100M.

Dataset Clustering Pascal VOC ImageNet
fc6-8 all fc6-8
ImageNet k-means 72.0 73.7 44.0
ImageNet PIC 71.0 73.0 45.9
YFCC100M k-means 67.3 69.3 39.6
YFCC100M PIC 66.0 69.0 38.6
Table 9: Performance of DeepCluster with different pre-training sets and clustering algorithms, measured as classification accuracy on Pascal VOC and ImageNet.

9 Additional visualisation

9.1 Visualizing VGG-16 features

We assess the quality of the representations learned by the VGG-16 convnet with DeepCluster. To do so, we learn an input image that maximizes the mean activation of a target filter in the last convolutional layer. We also display the top 9 images from a random subset of 1 million images from Flickr that activate the target filter maximally. In Figure 9, we show some filters for which the top images seem to be semantically or stylistically coherent. We observe that the filters, learned without any supervision, capture quite complex structures. In particular, in Figure 10, we display synthetic images that correspond to filters that seem to focus on human characteristics.

Figure 9: Filter visualization and top 9 activated images (immediately right of the corresponding synthetic image) from a subset of 1 million images from YFCC100M for target filters in the last convolutional layer of a VGG-16 trained with DeepCluster.
Figure 10: Filter visualization by learning an input image that maximizes the response to a target filter [64] in the last convolutional layer of a VGG-16 convnet trained with DeepCluster. Here, we manually select filters that seem to trigger on human characteristics (eyes, noses, faces, fingers, fringes, groups of people or arms).

9.2 AlexNet

It is interesting to investigate what clusters the unsupervised learning approach actually learns. Fig. 2(a) in the paper suggests that our clusters are correlated with ImageNet classes. In Figure 11, we look at the purity of the clusters with respect to the ImageNet ontology to see which concepts are learned. More precisely, we show the proportion of images belonging to a pure cluster for different synsets at different depths in the ImageNet ontology (a pure cluster being one where more than 70% of the images are from that synset). The correlation varies significantly between categories, with the highest correlations for animals and plants.


Figure 11: Proportion of images belonging to a pure cluster for different synsets in the ImageNet ontology (pure cluster = more than 70% of images are from that synset). We show synsets at depth 6, 9 and 13 in the ontology, with “zooms” on the animal and mammal synsets.
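A small NumPy sketch of this purity measure, assuming one integer cluster id and one integer synset id per image:

```python
import numpy as np

def pure_cluster_fraction(cluster_ids, synset_ids, threshold=0.7):
    """Fraction of images that belong to a 'pure' cluster, i.e. one in
    which more than `threshold` of the images share the same synset."""
    pure = 0
    for c in np.unique(cluster_ids):
        members = synset_ids[cluster_ids == c]
        if np.bincount(members).max() > threshold * len(members):
            pure += len(members)
    return pure / len(cluster_ids)
```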

In Figure 12, we display the top 9 activated images from a random subset of 10 million images from YFCC100M for target filters in the last convolutional layer (we selected filters whose top activated images do not depict humans). We observe that, even though our method is purely unsupervised, some filters trigger on images containing particular classes of objects. Some other filters respond strongly to particular stylistic effects or textures.

Figure 12: Top activated images from a random subset of 10 million images from YFCC100M for target filters in the last convolutional layer of an AlexNet trained with DeepCluster (we do not display humans).

10 Erratum [18/03/2019]

In the original version of the paper, we reported classification accuracy averaged over 10 crops for the linear classifier experiments on the Places and ImageNet datasets. However, other methods report the accuracy of the central crop, so our comparison was not fair. Nevertheless, this does not change the conclusion of these experiments: our approach still outperforms the state of the art from conv3 to conv5 layers. In Table 10, we show our results both for single and 10 crops.

ImageNet Places
Method conv1 conv2 conv3 conv4 conv5 conv1 conv2 conv3 conv4 conv5
DeepCluster single crop
DeepCluster 10 crops
Table 10: Linear classification on ImageNet and Places using activations from the convolutional layers of an AlexNet as features.