Clustering is the process of separating data into groups according to sample similarity, which is a fundamental unsupervised learning task with numerous applications. Similarity or discrepancy measurement between samples plays a critical role in data clustering. Specifically, the similarity or discrepancy is determined by both data representation and distance function.
Before the extensive application of deep learning, handcrafted features, such as SIFT [SIFT1999] and HoG [HoG2005], together with domain-specific distance functions, were often used to measure similarity. Based on such similarity measurements, various clustering rules were developed, including space-partition-based methods (e.g., k-means [kmeans1967] and spectral clustering [Spectral2002]) and hierarchical methods (e.g., BIRCH [BIRCH1996]). With the development of deep learning techniques, researchers have been dedicated to leveraging deep neural networks for joint representation learning and clustering, which is commonly referred to as deep clustering. Although significant advances have been witnessed, deep clustering still performs considerably worse on natural images (e.g., ImageNet [ImageNet2015]) than on simple handwritten digits such as MNIST.
Various challenges arise when applying deep clustering to natural images. First, many deep clustering methods use stacked auto-encoders (SAE) [Bengio2007Greedy] to extract clustering-friendly intermediate features by imposing constraints on the hidden layer and the output layer respectively. However, pixel-level reconstruction is not an effective constraint for extracting discriminative semantic features of natural images, since such images usually contain many instance-specific details that are unrelated to semantics. Recent progress [DAIC2017, ADC2019, Wu_2019_ICCV, IIC2019] has demonstrated that directly mapping data to label features, just as in the supervised classification task, is effective. However, it is difficult for a model trained in an unsupervised manner to extract clustering-related discriminative features. Second, clusters are expected to be defined by appropriate semantics (i.e., object-oriented concepts), while current methods tend to group images by alternative principles (such as color, texture, or background), as shown in Fig. 1. Third, the dynamic change between different clustering principles during training tends to make the model unstable and easily trapped in trivial solutions that assign all samples to a single cluster or very few clusters. Fourth, existing methods were usually evaluated on small images ( to ). This is mainly because the large batch of samples required for training a deep clustering model prevents processing large images, especially on memory-limited devices.
To tackle these problems, we propose a self-supervised attention network for clustering (AttentionCluster) that directly outputs discriminative semantic label features. To train AttentionCluster in a completely unsupervised manner, we design four learning tasks with the constraints of transformation invariance, separability maximization, entropy analysis, and attention mapping. All the guiding signals for clustering are self-generated during training. 1) The transformation invariance task maximizes the similarity between a sample and its random transformations. 2) The separability maximization task explores both the similarity and the discrepancy of each pair of samples to guide the model learning. 3) The entropy analysis task helps avoid trivial solutions. Different from the nonnegative and norm constraints imposed on label features in [DAIC2017], we impose a constraint with a probability interpretation. Under this probability constraint, samples are encouraged to be evenly separated by maximizing the entropy, and thus trivial solutions are avoided. 4) To capture semantic information and form concepts, an attention mechanism is proposed based on the observation that the discriminative information about objects is usually presented in local image regions. We design a parameterized attention module with a Gaussian kernel and a soft-attention loss that is highly sensitive to discriminative local regions.
For the evaluation of AttentionCluster on large-size images, we develop an efficient two-step learning strategy. First, the pseudo-targets over a large batch of samples are computed statistically in a split-and-merge manner. Second, the model is trained on the same batch in a supervised manner with mini-batches, using the pseudo-targets. It should be noted that AttentionCluster is trained by optimizing all loss functions simultaneously instead of alternately. Our learning algorithm is both training-friendly and memory-efficient, and thus can easily process large images.
To summarize, the contributions of this paper include:
(1) We propose a self-supervised attention network that is trained with four self-learning tasks in a completely unsupervised manner. The data can be partitioned into clusters directly according to semantic label features without further processing during inference.
(2) We design an entropy loss based on the probability constraint of label features, which helps avoid trivial solutions.
(3) We propose a parameterized attention module with a Gaussian kernel and a soft-attention loss to capture object concepts. To the best of our knowledge, this is the first attempt to explore the attention mechanism for deep clustering.
(4) Our efficient learning algorithm makes it possible to perform the clustering on large-size images.
(5) Extensive experimental results demonstrate that the proposed AttentionCluster significantly outperforms or is comparable to the state-of-the-art methods on image clustering datasets. Our code will be made publicly available.
2 Related work
2.1 Deep Clustering
We divide the deep clustering methods into two categories: 1) intermediate-feature-based deep clustering and 2) semantic deep clustering. The first category extracts intermediate features and then conducts conventional clustering. The second one directly constructs a nonlinear mapping between original data and semantic label features. By doing so, the samples are clustered just as in the supervised classification task, without any need for additional processing.
Some intermediate-feature-based deep clustering methods employ SAE [Hinton2006, Bengio2007Greedy] or its variants [SDAE2010, CAE2011, AAE2015, VAE2013] to extract intermediate features, and then conduct k-means [DEN2014, DMC2017] or spectral clustering [DSCN2017]. Instead of performing representation learning and clustering separately, some studies integrate the two stages into a unified framework [Xie2016, LI2018161, DCN2016, DeepCluster2017, Zhang_2019_CVPR, DEPICT2017, VaDE2017, GMVAE, DASC2018]. However, when applied to real-scene image datasets, the reconstruction loss of SAE tends to overestimate the importance of low-level features. In contrast to the SAE-based methods, some methods [Yang2016Joint, pmlr-v70-hu17b, CCNN2017] directly use a convolutional neural network (CNN) or multi-layer perceptron (MLP) for representation learning by designing specific loss functions. Unfortunately, the high-dimensional intermediate features are too redundant to effectively reveal the discriminative semantic information of natural images.
Semantic deep clustering methods have recently shown great promise for clustering. To train such models in an unsupervised manner, various rules have been designed for supervision. DAC [DAIC2017] recasts clustering as a binary pairwise-classification problem, where the supervised labels are adaptively generated by thresholding the similarity matrix computed from the label features. It has been theoretically proved that the learned label features are ideally one-hot vectors, with each bit corresponding to a semantic cluster. As an extension of DAC, DCCM [Wu_2019_ICCV] investigates both pair-wise sample relations and the triplet mutual information between deep and shallow layers. However, these two methods are in practice susceptible to trivial solutions. IIC [IIC2019] directly trains a classification network by maximizing the mutual information between original data and their transformations. However, the computation of mutual information requires a very large batch size during training, which makes it challenging to apply to large images.
2.2 Self-supervised learning
Self-supervised learning can learn general features by optimizing cleverly designed objective functions of pretext tasks, in which all supervised pseudo-labels are automatically generated from the input data without manual annotation. Various pretext tasks have been proposed, including image completion [inpain2016], colorization [colorization2016], jigsaw puzzles [jigsaw2016], counting [count2017], rotation prediction [rotation2018], clustering [vf2018, Zhang_2019_CVPR], etc. For the pretext task of clustering, cluster assignments are often used as pseudo-labels, which can be obtained by k-means or spectral clustering algorithms. In this work, both the self-generated relationships of paired samples and object attention are used as the guiding signals for clustering.
2.3 Attention mechanism
Attention mechanisms have been widely applied in many vision and language tasks, such as image captioning and visual question answering [Anderson2017Bottom], GANs [Zhang], person re-identification [Li2018], visual tracking [Wang2018], crowd counting [Liu2018], weakly- and semi-supervised semantic segmentation [Li2018Tell], and text detection and recognition [He2018]. Given ground-truth labels, the attention weights are learned to scale up the most relevant local features for better predictions. However, attention remains unexplored for deep clustering models that are trained without human-annotated labels. In this work, we design a Gaussian-kernel-based attention module and a soft-attention loss for learning the attention weights in a self-supervised manner.
2.4 Learning algorithm of deep clustering
The learning algorithms for deep clustering can be categorized into two types: alternating learning and one-stage learning. Most existing deep clustering models are trained by alternating between updating cluster assignments and network parameters [Xie2016], or between different clustering heads [IIC2019]. Some of them need pre-training in an unsupervised [Xie2016, DCN2016, Zhang_2019_CVPR] or supervised manner [pmlr-v70-hu17b, CCNN2017]. On the other hand, some studies [DAIC2017, Wu_2019_ICCV] directly train deep clustering models by optimizing all component objective functions simultaneously in one stage. However, they do not consider statistical constraints and are susceptible to trivial solutions. In this work, we propose a two-step learning algorithm that is training-friendly and memory-efficient, and thus capable of processing large batches and large images. Our algorithm belongs to the one-stage training methods.
3 Proposed method
3.1 Overview
Given a set of samples and a predefined cluster number, the proposed AttentionCluster network automatically divides the samples into groups by predicting the label feature of each sample. AttentionCluster consists of three components: 1) the image feature module, 2) the label feature module, and 3) the attention module, as shown in Figure 2. The image feature module extracts image features with a fully convolutional network. The label feature module, which contains a convolutional layer, a global pooling layer, and a fully connected layer, maps the image features to label features for clustering. The attention module makes the model focus on discriminative local regions automatically, facilitating the capture of object-oriented concepts. It consists of three submodules: a fully connected layer for estimating the parameters of the Gaussian kernel, an attention feature generator, and a global pooling layer followed by another fully connected layer for computing the attention label features. The attention feature generator has three inputs: the estimated parameter vector, the convolutional feature from the label feature module, and the two-dimensional coordinates of the attention map, which are self-generated according to the attention map size.
In the training stage, we design four learning tasks driven by transformation invariance, separability maximization, entropy analysis, and attention mapping. Specifically, the transformation invariance and separability maximization losses are computed on the predicted label features, the attention loss is estimated from the attention module outputs, and the entropy loss supervises both the label feature module and the attention module. For inference, the image feature module and the label feature module are combined as a classifier to produce the cluster assignments. The clustering results at successive training stages are visualized in Fig. 3.
3.2 Label feature constraint
We first review the label feature theory introduced in DAC [DAIC2017]. Clustering is recast as a binary classification problem: measuring the similarity and discrepancy between two samples and then determining whether they belong to the same cluster. For each sample, the label feature is computed by a mapping function with learnable parameters, which are obtained by minimizing the following objective function:
where the ground-truth relation between two samples is 1 if they belong to the same cluster and 0 otherwise; the inner product is the cosine distance between two samples, since each label feature is constrained to unit norm; the loss function is instantiated as binary cross entropy; and the cluster number is predefined. The theorem proved in [DAIC2017] indicates that if the optimal value of Eq. 1 is attained, the learned label features will be diverse one-hot vectors. Thus, the cluster identity of an image can be obtained directly by selecting the maximum entry of its label feature. In practice, however, this formulation tends to produce trivial solutions that assign all samples to a single cluster or a few clusters. Theoretically, these trivial solutions are equivalent to redefining the cluster number as 1 or a value much smaller than the predefined one, and they do minimize Eq. 1.
Based on the above analysis, we impose a probability constraint on the label features. Specifically, we reformulate Eq. 1 as:
This is equivalent to imposing extra normalization constraints before the normalization in Eq. 1. Therefore, the aforementioned theorem remains valid for label features under Eq. 2. We can interpret each entry of the label feature as the probability that the sample belongs to the corresponding cluster.
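As a concrete illustration, the probability constraint can be sketched in a few lines of numpy (the function names here are ours, not the paper's): a softmax yields nonnegative label features that sum to one, and an additional L2 normalization recovers the cosine-similarity setting of Eq. 1.

```python
import numpy as np

def probability_label_features(logits):
    """Softmax over cluster logits: nonnegative entries summing to 1,
    interpretable as cluster-membership probabilities (Eq. 2)."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cosine_similarity_matrix(features):
    """Pairwise cosine similarities after L2 normalization, as in Eq. 1."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    return normed @ normed.T
```

In a network, the softmax is simply the last layer of the label feature module, so the constraint adds no extra parameters.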
3.3 Self-learning tasks
3.3.1 Transformation invariance task
An image after any practically reasonable transformation still reflects the same object. Hence, the transformed images should have feature representations similar to the original. To learn such similarity, the label feature of an original sample is constrained to be close to that of its transformed counterpart. In this work, the transformation function is predefined as the composition of random flipping, random affine transformation, and random color jittering; see Fig. 2. Specifically, the loss function is defined as
where is the target label feature of an original image that is recomputed as:
where is the number of samples, i.e., the batch size used in the training process. Eq. 4 balances the sample assignments by dividing by the cluster assignment frequency, preventing empty clusters.
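The target recomputation of Eq. 4 can be sketched as follows; this is our reading of the balancing step (divide each label feature by the batch-level cluster-assignment frequency, then renormalize), paired with a cross-entropy that pulls the transformed sample toward the fixed target. The function names are illustrative, not the paper's.

```python
import numpy as np

def balanced_targets(label_feats, eps=1e-8):
    """Per-sample targets (our reading of Eq. 4): divide each label feature
    by the batch-level cluster-assignment frequency, then renormalize so
    rows sum to 1. Rare clusters are boosted, preventing empty clusters."""
    freq = label_feats.mean(axis=0) + eps   # empirical cluster frequency
    t = label_feats / freq
    return t / t.sum(axis=1, keepdims=True)

def invariance_loss(target_feats, transformed_feats, eps=1e-8):
    """Cross-entropy between the (fixed) balanced target of the original
    image and the prediction for its random transformation."""
    return -np.mean(np.sum(target_feats * np.log(transformed_feats + eps), axis=1))
```

In training, `target_feats` would be treated as a constant (no gradient), so only the transformed branch is updated by this loss.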
3.3.2 Separability maximization task
If the relationships between all pairs of samples are well captured, the label features will be one-hot vectors, as introduced in Section 3.2. However, the ground-truth relationships cannot be obtained in an unsupervised setting. Therefore, we estimate the relationships within a batch of samples as follows:
where a value of 1 indicates that two samples belong to the same cluster, and the similarity of a sample to itself is defined as 1. To obtain the cluster ids, the k-means algorithm is applied to the predicted label features of the batch of samples.
The separability maximization task improves the purity of clusters by encouraging similar samples to move closer to each other and dissimilar samples to move further apart. The loss function is defined as:
where is the cosine distance.
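A minimal sketch of this task, under our reading of the text: the pseudo-relation matrix is built from the k-means cluster ids, and a binary cross-entropy compares it against the pairwise cosine similarities of the label features. The helper names are ours; the k-means step itself is assumed to have already produced `cluster_ids`.

```python
import numpy as np

def pseudo_relations(cluster_ids):
    """r_ij = 1 if samples i and j share a (k-means) cluster id, else 0.
    The diagonal is 1: each sample is fully similar to itself."""
    ids = np.asarray(cluster_ids)
    return (ids[:, None] == ids[None, :]).astype(float)

def separability_loss(label_feats, cluster_ids, eps=1e-7):
    """Binary cross-entropy between pairwise cosine similarities of label
    features and the self-generated pseudo-relations."""
    z = label_feats / np.linalg.norm(label_feats, axis=1, keepdims=True)
    sim = np.clip(z @ z.T, eps, 1.0 - eps)   # nonneg. features => sim in [0, 1]
    r = pseudo_relations(cluster_ids)
    return -np.mean(r * np.log(sim) + (1 - r) * np.log(1 - sim))
```

The loss is minimized when same-cluster pairs have cosine similarity near 1 and cross-cluster pairs near 0, i.e., when the label features approach one-hot vectors.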
3.3.3 Entropy analysis task
The entropy analysis task is designed to avoid trivial solutions. The samples are expected to be assigned to all the predefined clusters. Therefore, we maximize the entropy of the empirical probability distribution over cluster assignments. Thus, the loss function is defined as
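The entropy term is compact enough to write down directly; a numpy sketch (function name ours):

```python
import numpy as np

def entropy_loss(label_feats, eps=1e-8):
    """Negative entropy of the empirical cluster-assignment distribution.
    Minimizing it maximizes the entropy, pushing samples to spread over
    all predefined clusters and avoiding trivial solutions."""
    p = label_feats.mean(axis=0)        # empirical distribution over clusters
    return np.sum(p * np.log(p + eps))  # = -H(p)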
3.3.4 Attention mapping task
The attention mapping task aims to make the model recognize the most discriminative local regions with respect to the whole-image semantics. The basic idea is that the response to the discriminative local regions should be more intense than the response to the entire image. To this end, two problems need to be solved: 1) how to design the attention module so that it localizes discriminative local regions, and 2) how to train the attention module in a self-supervised manner.
With regard to the first problem, we design a two-dimensional Gaussian kernel to generate an attention map as:
where the coordinate vector ranges over the attention map, the mean vector of the Gaussian kernel defines the most discriminative location, the covariance matrix defines the shape and size of the local region, a hyperparameter is predefined, and H and W are the height and width of the attention map. In our implementation, the coordinates are normalized. Taking CNN features as input, a fully connected layer estimates the Gaussian parameters. The model then focuses on the discriminative local region by multiplying each channel of the convolutional features with the attention map. The weighted features are mapped to the attention label features by a global pooling layer and a fully connected layer, as shown in Fig. 2. It should be noted that there are alternative designs for generating attention maps, such as a convolution layer followed by a sigmoid function. However, we obtained better results with the parameterized Gaussian attention module, because it has far fewer parameters to estimate, and the Gaussian attention map is well suited to capturing the object-oriented concepts of natural images.
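The Gaussian attention map can be sketched as follows. We show the simplified, isotropic case (a single shared sigma alongside the two mean coordinates, matching the three-parameter setting mentioned in the implementation details); the coordinate grid is normalized to [-1, 1], which is one plausible choice of normalization.

```python
import numpy as np

def gaussian_attention_map(mu, sigma, height, width):
    """2-D Gaussian attention map on a [-1, 1] x [-1, 1] coordinate grid
    (simplified isotropic kernel: parameters are mu_x, mu_y, sigma)."""
    ys = np.linspace(-1.0, 1.0, height)
    xs = np.linspace(-1.0, 1.0, width)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    d2 = (xx - mu[0]) ** 2 + (yy - mu[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def apply_attention(features, att_map):
    """Multiply each channel of a (C, H, W) feature map by the attention map."""
    return features * att_map[None, :, :]
```

In the full model, `mu` and `sigma` would come from the fully connected layer of the attention module rather than being fixed.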
With regard to the second problem, we define a soft-attention loss as
where the prediction is the output of the attention module, the target label feature is used for regression, and the balancing term is the same as in Eq. 4. As defined in Eq. 10, the target label feature encourages the currently high scores and suppresses the low scores of the whole-image label feature, thus forming a more confident version of it; see Fig. 2 for a demonstration. In this way, the local image region localized by the attention module is discriminative in terms of the whole-image semantics. In practice, the local region usually covers the expected object, as shown in Fig. 4.
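Since Eq. 10 is not reproduced in this excerpt, the sketch below uses a common sharpening operation as a stand-in for the target: squaring the label feature and renormalizing boosts high scores and suppresses low ones, which matches the behavior the text describes. Both function names and the exact sharpening are our assumptions.

```python
import numpy as np

def sharpened_target(label_feats, freq=None, eps=1e-8):
    """A more confident version of the whole-image label feature: squaring
    boosts high scores and suppresses low ones (our stand-in for Eq. 10);
    `freq` optionally rebalances clusters as in Eq. 4."""
    t = label_feats ** 2
    if freq is not None:
        t = t / (freq + eps)
    return t / t.sum(axis=1, keepdims=True)

def soft_attention_loss(att_feats, label_feats, eps=1e-8):
    """Cross-entropy pulling the attention-branch prediction toward the
    sharpened whole-image target."""
    target = sharpened_target(label_feats)
    return -np.mean(np.sum(target * np.log(att_feats + eps), axis=1))
```

The key property is asymmetry: the attention branch is supervised by a sharpened version of the whole-image prediction, so it must concentrate on the evidence that makes the whole-image prediction confident.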
3.4 Learning algorithm
We develop a two-step learning algorithm that combines all the self-learning tasks to train AttentionCluster in an unsupervised learning manner. The total loss function is
where the entropy loss is computed on the label features predicted by the label feature module and the attention module respectively. The loss weights are hyperparameters that balance the tasks.
The proposed two-step learning algorithm is presented in Algorithm 1. Since deep clustering methods usually require a large batch of samples for training, they are difficult to apply to large images on a memory-limited device. To tackle this problem, we divide the large-batch training process into two steps per iteration. The first step statistically calculates the pseudo-targets for a large batch of samples using the model trained in the previous iteration. To achieve this on a memory-limited device, we further split the large batch into sub-batches and calculate the label features for each sub-batch independently. All label features are then concatenated for the computation of the pseudo-targets. The second step trains the model just as in supervised learning, iterating over mini-batches.
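The first step can be sketched as below; `model_fn` is a placeholder for the current network's forward pass, and the frequency-balanced target is our reading of the "statistical" pseudo-target computation (consistent with Eq. 4). The second step then runs standard mini-batch training against these fixed targets.

```python
import numpy as np

def compute_pseudo_targets(model_fn, big_batch, sub_batch_size):
    """Step 1: run the current model over a large batch in memory-friendly
    sub-batches (no gradients needed), concatenate the label features, and
    derive pseudo-targets statistically over the whole batch."""
    feats = [model_fn(big_batch[i:i + sub_batch_size])
             for i in range(0, len(big_batch), sub_batch_size)]
    label_feats = np.concatenate(feats, axis=0)
    freq = label_feats.mean(axis=0) + 1e-8   # batch-level statistics
    targets = label_feats / freq             # balance cluster assignments
    return targets / targets.sum(axis=1, keepdims=True)
```

Because only one sub-batch is resident at a time, peak memory is governed by `sub_batch_size` rather than the large batch size, which is what makes large images feasible.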
4 Experiments and Results
4.1 Datasets
We evaluated our method and other deep clustering methods on five datasets: STL10 [STL2011], which contains 13K images of 10 clusters; ImageNet-10 [DAIC2017], which contains 13K images of 10 clusters; ImageNet-Dog [DAIC2017], which contains 19.5K images of 15 clusters of dogs; and Cifar10 and Cifar100-20 [CIFAR2009]. The image size of ImageNet-10 and ImageNet-Dog is around . Cifar10 and Cifar100-20 both contain 60K images, with 10 and 20 clusters respectively.
4.2 Implementation details
At the training stage, especially at the beginning, samples tend to be clustered by color cues. Therefore, we took grayscale images as inputs, except for ImageNet-Dog, since color plays an important role in differentiating the sub-categories of dogs. Note that the images are converted to grayscale after applying random color jittering during training. The normalization was implemented by a softmax layer. For simplicity, we assume an isotropic diagonal covariance; thus, only three parameters of the Gaussian kernel need to be estimated. We used Adam to optimize the network parameters, with the base learning rate set to 0.001. We set the batch size to 1000 for STL10 and ImageNet-10, 1500 for ImageNet-Dog, 4000 for Cifar10, and 6000 for Cifar100-20. The sub-batch size used in calculating pseudo-targets can be adjusted according to the device memory and does not affect the results; it was 32 for all experiments. The hyperparameters were empirically set to 0.05, 5, 5, and 3 respectively.
4.3 Network architecture
In all experiments, we used a VGG-style convolutional network with batch normalization to implement the image feature module. The main differences between the architectures used in different experiments are the number of layers, the kernel sizes, and the output cluster number. The details of these architectures can be found in the supplementary material.
4.4 Evaluation metrics
We used three popular metrics to evaluate the performance of the involved clustering methods, including Adjusted Rand Index (ARI) [hubert1985comparing], Normalized Mutual Information (NMI) [strehl2002clusterensembles] and clustering Accuracy (ACC) [LiD06].
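Of the three metrics, ACC requires a one-to-one mapping between predicted cluster ids and ground-truth classes, conventionally found with the Hungarian algorithm. A minimal sketch using scipy (function name ours; NMI and ARI are available in standard libraries such as scikit-learn and are not re-implemented here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping between predicted clusters and
    ground-truth classes, found with the Hungarian algorithm on the
    contingency matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                     # contingency counts
    row, col = linear_sum_assignment(-cost)  # maximize matched counts
    return cost[row, col].sum() / len(y_true)
```

The matching step matters because cluster ids are arbitrary: a clustering that perfectly separates the classes but permutes their labels still scores ACC = 1.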
4.5 Comparison with existing methods
Table 1 presents a comparison with existing methods. Under the same conditions, the proposed method improves the clustering performance by approximately 8%, 7%, and 10% over the best of the other methods in terms of ACC, NMI, and ARI on STL10. On ImageNet-10, ACC is improved by 5% over the strong baseline set by the recently proposed DCCM [Wu_2019_ICCV]. On the sub-category dataset ImageNet-Dog, our method achieves results comparable to those of DCCM. Moreover, our method is capable of processing large images, in which case the clustering results are further improved. On the small-image datasets, i.e., Cifar10 and Cifar100-20, the proposed method also achieves performance comparable to the state of the art. These results demonstrate the superiority of the proposed method.
4.6 Ablation study
To validate the effectiveness of each component, we conducted the ablation studies shown in Table 2. Similar to [ADC2019], each variant was evaluated ten times, and the best accuracy, average accuracy, and standard deviation are reported. Table 2 shows that the best accuracy is achieved when all learning tasks are used with grayscale images. In particular, the attention module improves the best accuracy by up to 4.4 percentage points and the average accuracy by 4.3 percentage points. As shown in Figure 4, the attention module is able to localize semantic objects and thus captures the expected concepts. In addition, color information is a strong distraction for object clustering, and better clustering results are obtained when the color images are converted to grayscale.
Because the model very easily gets trapped in trivial solutions when the entropy loss is omitted, we do not report results for the ablation of the entropy analysis task.
4.7 Effectiveness of image size
To the best of our knowledge, the image size used in STL10 is the largest that has been used by existing unsupervised clustering methods. However, images in modern datasets usually have much larger sizes, which have not been explored by unsupervised deep clustering methods. With the proposed two-step learning algorithm, we are able to process large images. An interesting question then arises: do larger images produce better clustering accuracy? To answer this question, we explored the effect of image size on the clustering results using ImageNet-10. We evaluated four input image sizes by simply resizing the original images. For each image size, the model architecture differs slightly, as described in the supplementary material. We conducted five experimental trials for each image size and report the best and average accuracies as well as the standard deviation in Table 3. The results show that the clustering performance improves significantly as the image size increases, demonstrating that taking larger images as inputs can be beneficial for clustering.
In practice, our method can be applied to much larger images. In this work, the clustering results did not improve further beyond a certain image size. Nevertheless, we believe the proposed method will be valuable for more complex image clustering problems in the future.
4.8 Effectiveness of the attention map resolution
A high-resolution attention map provides precise localization but weakens the global semantics. We evaluated the effect of the attention map resolution on the clustering results for ImageNet-10, fixing the input image size and evaluating five attention map resolutions, as shown in Table 4. The results show that an intermediate attention map resolution achieves the best performance.
5 Conclusion
For deep unsupervised clustering, we have proposed the AttentionCluster model, which learns discriminative semantic label features with four self-learning tasks. Specifically, the transformation invariance and separability maximization tasks explore the similarity and discrepancy between samples. The designed attention mechanism facilitates the formation of object concepts during training. The entropy loss effectively avoids trivial solutions. Combining all learning tasks, the developed two-step learning algorithm is both training-friendly and memory-efficient, and is thus capable of processing large images. The AttentionCluster model has potential for complex image clustering applications.
The research was supported by the National Natural Science Foundation of China (61976167, 61571353, U19B2030) and the Science and Technology Projects of Xi’an, China (201809170CX11JC12).