GATCluster: Self-Supervised Gaussian-Attention Network for Image Clustering

02/27/2020 ∙ by Chuang Niu, et al. ∙ Tencent ∙ Rensselaer Polytechnic Institute ∙ Xidian University

Deep clustering has achieved state-of-the-art results via joint representation learning and clustering, but it still performs poorly on real-scene images, e.g., those in ImageNet. With such images, deep clustering methods face several challenges, including extracting discriminative features, avoiding trivial solutions, capturing semantic information, and handling datasets of large images. To address these problems, here we propose a self-supervised Gaussian-attention network for image clustering (GATCluster). Rather than extracting intermediate features first and then performing a traditional clustering algorithm, GATCluster directly outputs semantic cluster labels that are more discriminative than intermediate features, and it needs no further post-processing. To train GATCluster in a completely unsupervised manner, we design four learning tasks with the constraints of transformation invariance, separability maximization, entropy analysis, and attention mapping. Specifically, the transformation invariance and separability maximization tasks learn the relationships between sample pairs. The entropy analysis task aims to avoid trivial solutions. To capture object-oriented semantics, we design a self-supervised attention mechanism that includes a parameterized attention module and a soft-attention loss. All the guiding signals for clustering are self-generated during the training process. Moreover, we develop a two-step learning algorithm that is training-friendly and memory-efficient for processing large images. Extensive experiments demonstrate the superiority of our proposed method over state-of-the-art image clustering methods on standard benchmarks.


1 Introduction


Clustering, the process of separating data into groups according to sample similarity, is a fundamental unsupervised learning task with numerous applications. The measurement of similarity or discrepancy between samples plays a critical role in data clustering and is determined by both the data representation and the distance function.

Before the extensive application of deep learning, handcrafted features, such as SIFT [SIFT1999] and HOG [HoG2005], and domain-specific distance functions were often used to measure similarity. Based on the similarity measurement, various rules were developed for clustering, including space-partition-based methods (e.g., k-means [kmeans1967] and spectral clustering [Spectral2002]) and hierarchical methods (e.g., BIRCH [BIRCH1996]). With the development of deep learning techniques, researchers have been dedicated to leveraging deep neural networks for joint representation learning and clustering, which is commonly referred to as deep clustering. Although significant advances have been made, deep clustering still suffers from inferior performance on natural images (e.g., ImageNet [ImageNet2015]) in comparison with that on simple handwritten digits in MNIST.

Figure 1: Clustering results on STL10. Each column represents a cluster. (a) Sample images clustered by the proposed model without attention, where the clustering principles focus on trivial cues, such as texture (first column), color (second column), or background (fifth column); and (b) Sample images clustered by the proposed model with attention, where the object concepts are well captured.

Various challenges arise when applying deep clustering to natural images. First, many deep clustering methods use stacked auto-encoders (SAE) [Bengio2007Greedy] to extract clustering-friendly intermediate features by imposing constraints on the hidden layer and the output layer, respectively. However, pixel-level reconstruction is not an effective constraint for extracting discriminative semantic features of natural images, since such images usually contain many instance-specific details that are unrelated to semantics. Recent progress [DAIC2017, ADC2019, Wu_2019_ICCV, IIC2019] has demonstrated that directly mapping data to label features, just as in the supervised classification task, is effective. However, it is difficult for a model trained in an unsupervised manner to extract discriminative, clustering-related features. Second, clusters are expected to be defined by appropriate semantics (i.e., object-oriented concepts), whereas current methods tend to group images by alternative principles (such as color, texture, or background), as shown in Fig. 1. Third, the dynamic change between different clustering principles during the training process tends to make the model unstable and easily trapped in trivial solutions that assign all samples to a single cluster or very few clusters. Fourth, the existing methods are usually evaluated on small images (up to 96×96 pixels). This is mainly because training a deep clustering model requires a large batch of samples, which prevents processing large images, especially on memory-limited devices.

To tackle these problems, we propose a self-supervised Gaussian-attention network for clustering (GATCluster) that directly outputs discriminative semantic label features. To train GATCluster in a completely unsupervised manner, we design four learning tasks with the constraints of transformation invariance, separability maximization, entropy analysis, and attention mapping. All the guiding signals for clustering are self-generated in the training process. 1) The transformation invariance task maximizes the similarity between a sample and its random transformations. 2) The separability maximization task explores both the similarity and the discrepancy of each sample pair to guide the model learning. 3) The entropy analysis task helps avoid trivial solutions. Different from the nonnegativity and $L_2$-norm constraints imposed on label features in [DAIC2017], we impose an $L_1$ constraint with a probability interpretation. Under this probability constraint, samples are encouraged to be evenly distributed across clusters by maximizing the entropy, and thus trivial solutions are avoided. 4) To capture semantic information and form concepts, an attention mechanism is proposed based on the observation that the discriminative information about objects usually appears in local image regions. We design a parameterized attention module with a Gaussian kernel and a soft-attention loss that is highly sensitive to discriminative local regions.

To evaluate GATCluster on large images, we develop an efficient two-step learning strategy. First, the pseudo-targets over a large batch of samples are computed statistically in a split-and-merge manner. Second, the model is trained on the same batch in a supervised manner using the pseudo-targets, iterating over mini-batches. It should be noted that GATCluster is trained by optimizing all loss functions simultaneously rather than alternately. Our learning algorithm is both training-friendly and memory-efficient, and thus can easily process large images.

To summarize, the contributions of this paper include

(1) We propose a self-supervised attention network that is trained with four self-learning tasks in a completely unsupervised manner. During inference, the data can be partitioned into clusters directly according to the semantic label features, without further processing.

(2) We design an entropy loss based on the probability constraint of label features, which helps avoid trivial solutions.

(3) We propose a parameterized attention module with a Gaussian kernel and a soft-attention loss to capture object concepts. To the best of our knowledge, this is the first attempt to explore the attention mechanism for deep clustering.

(4) Our efficient learning algorithm makes it possible to perform clustering on large images.

(5) Extensive experimental results demonstrate that the proposed GATCluster significantly outperforms or is comparable to the state-of-the-art methods on image clustering datasets. Our code will be made publicly available.

2 Related work

2.1 Deep Clustering

We divide the deep clustering methods into two categories: 1) intermediate-feature-based deep clustering and 2) semantic deep clustering. The first category extracts intermediate features and then conducts conventional clustering. The second one directly constructs a nonlinear mapping between original data and semantic label features. By doing so, the samples are clustered just as in the supervised classification task, without any need for additional processing.

Intermediate-feature-based deep clustering methods usually employ SAE [Hinton2006, Bengio2007Greedy] or its variants [SDAE2010, CAE2011, AAE2015, VAE2013] to extract intermediate features, and then conduct k-means [DEN2014, DMC2017] or spectral clustering [DSCN2017]. Instead of performing representation learning and clustering separately, some studies integrate the two stages into a unified framework [Xie2016, LI2018161, DCN2016, DeepCluster2017, Zhang_2019_CVPR, DEPICT2017, VaDE2017, GMVAE, DASC2018]. However, when applied to real-scene image datasets, the reconstruction loss of SAE tends to overestimate the importance of low-level features. In contrast to the SAE-based methods, some methods [Yang2016Joint, pmlr-v70-hu17b, CCNN2017] directly use a convolutional neural network (CNN) or multi-layer perceptron (MLP) for representation learning by designing specific loss functions. Unfortunately, the high-dimensional intermediate features are too redundant to effectively reveal the discriminative semantic information of natural images.

Semantic deep clustering methods have recently shown great promise for clustering. To train such models in an unsupervised manner, various rules have been designed for supervision. DAC [DAIC2017] recasts clustering as a binary pairwise-classification problem, where the supervised labels are adaptively generated by thresholding the similarity matrix computed from the label features. It has been theoretically proven that, ideally, the learned label features are one-hot vectors, each bit corresponding to a semantic cluster. As an extension of DAC, DCCM [Wu_2019_ICCV] investigates both pairwise sample relations and the triplet mutual information between deep and shallow layers. However, these two methods are susceptible to trivial solutions in practice. IIC [IIC2019] directly trains a classification network by maximizing the mutual information between original data and their transformations. However, the computation of mutual information requires a very large batch size during training, which makes it challenging to apply to large images.

2.2 Self-supervised learning

Self-supervised learning learns general features by optimizing cleverly designed objective functions of pretext tasks, in which all supervised pseudo-labels are automatically generated from the input data without manual annotations. Various pretext tasks have been proposed, including image completion [inpain2016], image colorization [colorization2016], jigsaw puzzles [jigsaw2016], counting [count2017], rotation prediction [rotation2018], and clustering [vf2018, Zhang_2019_CVPR]. For the pretext task of clustering, cluster assignments are often used as pseudo-labels, which can be obtained by k-means or spectral clustering algorithms. In this work, both the self-generated relationships of paired samples and the object attention are used as the guiding signals for clustering.

Figure 2: GATCluster framework. CNN is a convolutional neural network, GP means global pooling, Mul represents channel-independent multiplication, Conv is a convolution layer, FC is a fully connected layer, and AFG represents the attention feature generator.

2.3 Attention

In recent years, the attention mechanism has been successfully applied to various tasks in machine learning and computer vision, such as machine translation [NIPS2017], image captioning and visual question answering [Anderson2017Bottom], GANs [Zhang], person re-identification [Li2018], visual tracking [Wang2018], crowd counting [Liu2018], weakly- and semi-supervised semantic segmentation [Li2018Tell], and text detection and recognition [He2018]. Given ground-truth labels, the attention weights are learned to scale up the most relevant local features for better predictions. However, attention has not yet been explored for deep clustering models, which are trained without human-annotated labels. In this work, we design a Gaussian-kernel-based attention module and a soft-attention loss for learning the attention weights in a self-supervised manner.

2.4 Learning algorithm of deep clustering

The learning algorithms for deep clustering can be categorized into two types: alternating learning and one-stage learning. Most existing deep clustering models are trained by alternating between updating cluster assignments and network parameters [Xie2016], or between different clustering heads [IIC2019]. Some of them need pre-training in an unsupervised [Xie2016, DCN2016, Zhang_2019_CVPR] or supervised manner [pmlr-v70-hu17b, CCNN2017]. On the other hand, some studies [DAIC2017, Wu_2019_ICCV] directly train the deep clustering models by optimizing all component objective functions simultaneously in one stage. However, they do not consider the statistical constraint and are susceptible to trivial solutions. In this work, we propose a two-step learning algorithm that is training-friendly and memory-efficient and thus capable of processing large batches and large images. Our algorithm belongs to the one-stage training methods.

Figure 3: Visualization of clustering results in successive training stages (from left to right) for 13K images in ImageNet-10. The results are visualized based on the predicted label features, and each point represents an image and the colors are rendered with the ground-truth label. The corresponding clustering accuracy is presented under each picture. The visualization mapping is similar to that in [DAIC2017], except that label features do not need normalization due to the probability constraint. Details can be found in the supplementary.

3 Method

3.1 Framework

Given a set of samples $X = \{x_i\}_{i=1}^{N}$ and a predefined cluster number $K$, the proposed GATCluster network automatically divides $X$ into $K$ groups by predicting the label feature $l_i \in \mathbb{R}^K$ of each sample $x_i$, where $N$ is the total number of samples. GATCluster consists of the following three components: 1) the image feature module, 2) the label feature module, and 3) the attention module, as shown in Figure 2. The image feature module extracts image features with a fully convolutional network. The label feature module, which contains a convolutional layer, a global pooling layer, and a fully connected layer, maps the image features to label features for clustering. The attention module makes the model focus on discriminative local regions automatically, facilitating the capture of object-oriented concepts. The attention module consists of three submodules: a fully connected layer for estimating the parameters of the Gaussian kernel, an attention feature generator, and a global pooling layer followed by another fully connected layer for computing the attention label features. The attention feature generator has three inputs, i.e., the estimated parameter vector $\theta$ of the Gaussian kernel, the convolutional feature from the label feature module, and the two-dimensional coordinates of the attention map, which are self-generated according to the attention map size.

In the training stage, we design four learning tasks driven by transformation invariance, separability maximization, entropy analysis, and attention mapping. Specifically, the transformation invariance and separability maximization losses are computed with respect to the predicted label features, the attention loss is estimated with the attention module outputs, and the entropy loss supervises both the label feature module and the attention module. For inference, the image feature module and label feature module are combined as a classifier to produce the cluster assignments. The clustering results in successive training stages are visualized in Fig. 3.

3.2 Label feature constraint

We first review the label feature theory introduced in [DAIC2017]. Clustering is recast as a binary classification problem: measure the similarity and discrepancy between two samples and then determine whether they belong to the same cluster. For each sample $x_i$, the label feature $l_i = f(x_i; w) \in \mathbb{R}^K$ is computed, where $f$ is a mapping function with parameters $w$. The parameters are obtained by minimizing the following objective function:

$$\min_{w} \sum_{i,j} L\big(r_{ij},\ l_i \cdot l_j\big), \quad \text{s.t.}\ \forall i,\ \|l_i\|_2 = 1,\ l_{ik} \ge 0,\ k = 1, \dots, K, \tag{1}$$

where $r_{ij}$ is the ground-truth relation between samples $x_i$ and $x_j$, i.e., $r_{ij} = 1$ indicates that $x_i$ and $x_j$ belong to the same cluster and $r_{ij} = 0$ otherwise; the inner product $l_i \cdot l_j$ is the cosine similarity between two samples because the label feature is constrained to $\|l_i\|_2 = 1$; $L$ is a loss function instantiated by the binary cross-entropy; and $K$ is the predefined cluster number. The theorem proved in [DAIC2017] indicates that if the optimal value of Eq. 1 is attained, the learned label features will be diverse one-hot vectors. Thus, the cluster identification of image $x_i$ can be directly obtained by selecting the maximum of the label feature, i.e., $c_i = \arg\max_k l_{ik}$. However, in practice this formulation tends to yield trivial solutions that assign all samples to a single cluster or a few clusters. Theoretically, the trivial solutions are equivalent to redefining the cluster number as 1 or a number much smaller than $K$, and they do minimize Eq. 1.

Based on the above analysis, we impose a probability constraint on the label features. Specifically, we reformulate Eq. 1 as:

$$\min_{w} \sum_{i,j} L\left(r_{ij},\ \frac{l_i \cdot l_j}{\|l_i\|_2 \|l_j\|_2}\right), \quad \text{s.t.}\ \forall i,\ \|l_i\|_1 = 1,\ l_{ik} \ge 0. \tag{2}$$

This is equivalent to imposing an extra $L_1$-normalization and nonnegativity constraint before the $L_2$-normalization in Eq. 1. Therefore, the aforementioned theorem is still valid for the label features under Eq. 2. We can interpret $l_{ik}$ as the probability of sample $x_i$ belonging to cluster $k$.
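As a concrete illustration of this constraint, the following is a minimal PyTorch sketch (our own, with hypothetical names such as LabelFeatureHead; the paper specifies only that the $L_1$-normalization is implemented by a softmax layer, see Section 4.2):

```python
import torch
import torch.nn as nn

class LabelFeatureHead(nn.Module):
    """Maps pooled image features to probability-constrained label features.
    The softmax realizes the L1/nonnegativity constraint of Eq. 2, so l_ik
    can be read as the probability that sample i belongs to cluster k."""

    def __init__(self, feature_dim: int, num_clusters: int):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_clusters)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.fc(features), dim=1)

# Usage: label features for a batch of pooled CNN features.
head = LabelFeatureHead(feature_dim=512, num_clusters=10)
l = head(torch.randn(8, 512))   # shape (8, 10); each row sums to 1
cluster_ids = l.argmax(dim=1)   # c_i = argmax_k l_ik
```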

3.3 Self-learning tasks

3.3.1 Transformation invariance task

An image after any practically reasonable transformation still depicts the same object. Hence, transformed images should have feature representations similar to those of the originals. To learn such a similarity, the label feature of an original sample $x_i$ is constrained to be close to that of its transformed counterpart $T(x_i)$, where $T$ is a practically reasonable transformation function. In this work, the transformation function is predefined as the composition of random flipping, random affine transformation, and random color jittering; see Fig. 2. Specifically, the loss function is defined as

$$L_t = \frac{1}{n} \sum_{i=1}^{n} \big\| f(T(x_i); w) - t_i \big\|_2^2, \tag{3}$$

where $t_i$ is the target label feature of the original image $x_i$, recomputed as:

$$t_{ik} = \frac{l_{ik} / f_k}{\sum_{k'=1}^{K} l_{ik'} / f_{k'}}, \qquad f_k = \sum_{j=1}^{n} l_{jk}, \tag{4}$$

where $n$ is the number of samples, i.e., the batch size used in the training process. Eq. 4 balances the sample assignments by dividing by the cluster assignment frequency $f_k$, preventing empty clusters.
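A short PyTorch sketch of one plausible reading of Eqs. 3 and 4 follows (our own illustration; the function names and the exact renormalization are assumptions, not the authors' released code):

```python
import torch

def balanced_targets(l: torch.Tensor) -> torch.Tensor:
    """Eq. 4 (as we read it): divide each label feature by the cluster
    assignment frequency f_k = sum_j l_jk, then renormalize each row so the
    probability constraint still holds. Rare clusters are boosted, which
    helps prevent empty clusters."""
    f = l.sum(dim=0, keepdim=True)              # (1, K) cluster frequencies
    t = l / (f + 1e-8)
    return t / t.sum(dim=1, keepdim=True)

def transformation_invariance_loss(l_transformed: torch.Tensor,
                                   l_original: torch.Tensor) -> torch.Tensor:
    """Eq. 3: pull label features of transformed images toward the balanced
    (and detached, i.e., fixed) targets of their originals."""
    targets = balanced_targets(l_original).detach()
    return ((l_transformed - targets) ** 2).sum(dim=1).mean()
```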

3.3.2 Separability maximization task

If the relationships between all pairs of samples are well captured, the label features will be one-hot vectors, as introduced in Section 3.2. However, the ground-truth relationships cannot be obtained in the unsupervised setting. Therefore, we estimate the relationships within a batch of samples as follows:

$$r_{ij} = \begin{cases} 1, & \text{if } c_i = c_j \text{ or } i = j, \\ 0, & \text{otherwise,} \end{cases} \tag{5}$$

where $c_i = c_j$ indicates that the samples $x_i$ and $x_j$ belong to the same cluster, and $r_{ii} = 1$ indicates that the similarity of a sample to itself is 1. To obtain the cluster id $c_i$, the k-means algorithm is conducted on the batch of samples based on the predicted label features.

The separability maximization task improves the purity of clusters by encouraging similar samples to move closer to each other and dissimilar samples to move further apart. The loss function is defined as:

$$L_s = -\frac{1}{n^2} \sum_{i,j} \Big[ r_{ij} \log d(l_i, l_j) + (1 - r_{ij}) \log\big(1 - d(l_i, l_j)\big) \Big], \tag{6}$$

where $d(l_i, l_j) = \frac{l_i \cdot l_j}{\|l_i\|_2 \|l_j\|_2}$ is the cosine similarity.
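The pairwise pseudo-relations and the separability loss could be sketched as follows (a non-authoritative PyTorch/scikit-learn illustration; `pairwise_relations` and `separability_loss` are our hypothetical names):

```python
import torch
from sklearn.cluster import KMeans

def pairwise_relations(l: torch.Tensor, num_clusters: int) -> torch.Tensor:
    """Eq. 5: run k-means on the predicted label features of a batch and set
    r_ij = 1 iff samples i and j share a cluster id (so r_ii = 1 as well)."""
    ids = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(
        l.detach().cpu().numpy())
    ids = torch.as_tensor(ids)
    return (ids[:, None] == ids[None, :]).float()

def separability_loss(l: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Eq. 6: binary cross-entropy between the pseudo-relations and the
    cosine similarity of paired label features."""
    ln = l / l.norm(dim=1, keepdim=True).clamp_min(1e-8)
    sim = (ln @ ln.t()).clamp(1e-7, 1 - 1e-7)  # in (0, 1) for nonneg features
    return -(r * sim.log() + (1 - r) * (1 - sim).log()).mean()
```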

3.3.3 Entropy analysis task

The entropy analysis task is designed to avoid trivial solutions. The samples are expected to be assigned to all of the predefined clusters. Therefore, we maximize the entropy of the empirical probability distribution $p = (p_1, \dots, p_K)$ over cluster assignments. Thus, the loss function is defined as

$$L_e = \sum_{k=1}^{K} p_k \log p_k, \qquad p_k = \frac{1}{n} \sum_{i=1}^{n} l_{ik}, \tag{7}$$

where $p$ is estimated with the predicted label features of $n$ samples, which can be a subset of the whole batch. Maximizing the entropy (i.e., minimizing $L_e$) steers $p$ towards a uniform distribution (illustrated in Fig. 2).
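In code, Eq. 7 amounts to a few lines (our sketch):

```python
import torch

def entropy_loss(l: torch.Tensor) -> torch.Tensor:
    """Eq. 7: negative entropy of the empirical cluster distribution
    p_k = (1/n) * sum_i l_ik. Minimizing this pushes p toward the uniform
    distribution and so discourages assigning all samples to few clusters."""
    p = l.mean(dim=0)                       # (K,) empirical distribution
    return (p * (p + 1e-8).log()).sum()     # = -H(p), minimized in training
```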

3.3.4 Attention mapping task

The attention mapping task aims to make the model recognize the most discriminative local regions with respect to the whole-image semantics. The basic idea is that the response to a discriminative local region should be more intense than that to the entire image. To this end, two problems must be solved: 1) how to design the attention module so that it localizes discriminative local regions, and 2) how to train the attention module in a self-supervised manner.

With regard to the first problem, we design a two-dimensional Gaussian kernel to generate an attention map as:

$$m(z) = \exp\!\Big( -\delta\, (z - \mu)^{\mathsf T} \Sigma^{-1} (z - \mu) \Big), \tag{8}$$

where $z$ denotes the coordinate vector on the $H \times W$ grid, $\theta = (\mu, \Sigma)$ denotes the parameters of the Gaussian kernel, $\mu$ is the mean vector that defines the most discriminative location, $\Sigma$ is the covariance matrix that defines the shape and size of a local region, $\delta$ is a predefined hyperparameter, and $H$ and $W$ are the height and width of the attention map. In our implementation, the coordinates are normalized. Taking CNN features as input, a fully connected layer is used to estimate the parameters $\theta$. Then, the model can focus on the discriminative local region by multiplying each channel of the convolutional features with the attention map. The weighted features are mapped to the attention label features using a global pooling layer and a fully connected layer, as shown in Fig. 2. It should be noted that there are alternative designs for generating attention maps, such as a convolution layer followed by a sigmoid function. However, we obtained better results with the parameterized Gaussian attention module because it has far fewer parameters to estimate, and the Gaussian attention map is well suited to capturing the object-oriented concepts of natural images.

With regard to the second problem, we define a soft-attention loss as

$$L_a = \frac{1}{n} \sum_{i=1}^{n} \big\| l_i^a - t_i^a \big\|_2^2, \tag{9}$$

$$t_{ik}^a = \frac{l_{ik}^2 / f_k}{\sum_{k'=1}^{K} l_{ik'}^2 / f_{k'}}, \tag{10}$$

where $l_i^a$ is the output of the attention module, $t_i^a$ is the target label feature for regression, and $f_k$ is the same as in Eq. 4 to balance the cluster assignments. As defined in Eq. 10, the target label feature encourages the current high scores and suppresses the low scores of the whole-image label feature $l_i$, thus making a more confident version of it; see Fig. 2 for an illustration. By doing so, the local image region localized by the attention module is discriminative in terms of the whole-image semantics. In practice, the local region usually contains the expected object semantics, as shown in Fig. 4.
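To illustrate the attention map generation, here is a minimal PyTorch sketch under the isotropic simplification used in our experiments (Section 4.2); the normalized coordinate range and the exact kernel scaling are our assumptions:

```python
import torch

def gaussian_attention_map(mu: torch.Tensor, sigma: torch.Tensor,
                           height: int, width: int) -> torch.Tensor:
    """Sketch of the Gaussian-kernel attention map of Eq. 8 with an isotropic
    covariance, i.e., three estimated parameters (mu_x, mu_y, sigma).
    mu: (B, 2) estimated centers; sigma: (B, 1) estimated spreads.
    Returns (B, 1, H, W) attention maps with values in (0, 1]."""
    ys = torch.linspace(-1.0, 1.0, height)   # normalized coordinate range
    xs = torch.linspace(-1.0, 1.0, width)    # (an assumption on our part)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                 # (H, W, 2)
    diff = grid[None] - mu[:, None, None, :]             # (B, H, W, 2)
    sq = (diff ** 2).sum(dim=-1)                         # (B, H, W)
    s = sigma.view(-1, 1, 1)
    return torch.exp(-sq / (2 * s ** 2 + 1e-8)).unsqueeze(1)

# The attention label features are then obtained by multiplying each feature
# channel with this map, followed by global pooling and a fully connected layer.
m = gaussian_attention_map(torch.zeros(4, 2), torch.full((4, 1), 0.5), 16, 16)
```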

3.4 Learning algorithm

We develop a two-step learning algorithm that combines all the self-learning tasks to train GATCluster in an unsupervised manner. The total loss function is

$$L = L_t + \lambda_s L_s + \lambda_e L_e + \lambda_a L_a, \tag{11}$$

where the entropy loss $L_e$ is computed with the label features predicted by the label feature module and the attention module respectively, i.e., $L_e = L_e(l) + L_e(l^a)$, and $\lambda_s$, $\lambda_e$, and $\lambda_a$ are hyperparameters that weight the tasks.

The proposed two-step learning algorithm is presented in Algorithm 1. Since deep clustering methods usually require a large batch of samples for training, they are difficult to apply to large images on a memory-limited device. To tackle this problem, we divide the large-batch-based training process into two steps for each iteration. The first step statistically calculates the pseudo-targets for a large batch of samples using the model trained in the last iteration. To achieve this on a memory-limited device, we further split the large batch into sub-batches and calculate the label features for each sub-batch independently. Then, all label features are concatenated for the computation of the pseudo-targets. The second step trains the model just as in supervised learning, iterating over mini-batches with the pseudo-targets; see the sketch after Algorithm 1.

Input: Dataset $X$, large batch size $n_b$, mini-batch size $n_m$
Output: Cluster label $c_i$ of each sample $x_i$

Randomly initialize the network parameters $w$;
$e \leftarrow 0$;
while $e <$ total epoch number do
    foreach large batch do
        Select $n_b$ samples as $B$ from $X$;
        Step 1:
        foreach sub-batch do
            Select samples as $B_s$ from $B$;
            Calculate the label features of $B_s$;
        end
        Concatenate all label features of the $n_b$ samples;
        Calculate the pseudo-targets of $B$ with Eqs. 4, 5, and 10;
        Step 2:
        Randomly transform the samples in $B$ as $T(B)$;
        foreach mini-batch do
            Randomly select $n_m$ samples as $B_m$ from $[B; T(B)]$;
            Optimize $w$ on $B_m$ by minimizing Eq. 11 using Adam;
        end
    end
    $e \leftarrow e + 1$;
end
foreach $x_i \in X$ do
    $l_i \leftarrow f(x_i; w)$;
    $c_i \leftarrow \arg\max_k l_{ik}$;
end
Algorithm 1: GATCluster learning algorithm.
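To make Step 1 concrete, a minimal PyTorch sketch of the split-and-merge label feature computation might look as follows (our illustration; `model` and the function name are hypothetical):

```python
import torch

@torch.no_grad()
def batch_label_features(model, big_batch: torch.Tensor,
                         sub_batch_size: int = 32) -> torch.Tensor:
    """Step 1 of Algorithm 1: compute label features of a large batch
    sub-batch by sub-batch so that memory stays bounded, then concatenate
    them for the pseudo-target computation (Eqs. 4, 5, and 10)."""
    outputs = []
    for start in range(0, big_batch.size(0), sub_batch_size):
        outputs.append(model(big_batch[start:start + sub_batch_size]))
    return torch.cat(outputs, dim=0)
```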

4 Experiments and Results

4.1 Data

We evaluated our method and other deep clustering methods on five datasets: STL10 [STL2011], which contains 13K images of 10 clusters; ImageNet-10 [DAIC2017], which contains 13K images of 10 clusters; ImageNet-Dog [DAIC2017], which contains 19.5K images of 15 clusters of dogs; and Cifar10 and Cifar100-20 [CIFAR2009]. The images of ImageNet-10 and ImageNet-Dog are larger than those in the other datasets. Cifar10 and Cifar100-20 both contain 60K images of size 32×32, with 10 and 20 clusters, respectively.

Method STL10 ImageNet-10 ImageNet-Dog Cifar10 Cifar100-20
ACC NMI ARI ACC NMI ARI ACC NMI ARI ACC NMI ARI ACC NMI ARI
k-means [kmeans1967] 0.192 0.125 0.061 0.241 0.119 0.057 0.105 0.055 0.020 0.229 0.087 0.049 0.130 0.084 0.028
SC [Spectral2002] 0.159 0.098 0.048 0.274 0.151 0.076 0.111 0.038 0.013 0.247 0.103 0.085 0.136 0.090 0.022
AC [Pasi2006Fast] 0.332 0.239 0.140 0.242 0.138 0.067 0.139 0.037 0.021 0.228 0.105 0.065 0.138 0.098 0.034
NMF [NMF] 0.180 0.096 0.046 0.230 0.132 0.065 0.118 0.044 0.016 0.190 0.081 0.034 0.118 0.079 0.026
AE [Bengio2007Greedy] 0.303 0.250 0.161 0.317 0.210 0.152 0.185 0.104 0.073 0.314 0.239 0.169 0.165 0.100 0.048
SAE [Bengio2007Greedy] 0.320 0.252 0.161 0.335 0.212 0.174 0.183 0.113 0.073 0.297 0.247 0.156 0.157 0.109 0.044
SDAE [SDAE2010] 0.302 0.224 0.152 0.304 0.206 0.138 0.190 0.104 0.078 0.297 0.251 0.163 0.151 0.111 0.046
DeCNN [Zeiler2010Deconvolutional] 0.299 0.227 0.162 0.313 0.186 0.142 0.175 0.098 0.073 0.282 0.240 0.174 0.133 0.092 0.038
SWWAE [SWWAE2015] 0.270 0.196 0.136 0.324 0.176 0.160 0.159 0.094 0.076 0.284 0.233 0.164 0.147 0.103 0.039
CatGAN [catgan2016] 0.298 0.210 0.139 0.346 0.225 0.157 N/A N/A N/A 0.315 0.265 0.176 N/A N/A N/A
GMVAE [GMVAE] 0.282 0.200 0.146 0.334 0.193 0.168 N/A N/A N/A 0.291 0.245 0.167 N/A N/A N/A
JULE-SF [Yang2016Joint] 0.274 0.175 0.162 0.293 0.160 0.121 N/A N/A N/A 0.264 0.192 0.136 N/A N/A N/A
JULE-RC [Yang2016Joint] 0.277 0.182 0.164 0.300 0.175 0.138 0.138 0.054 0.028 0.272 0.192 0.138 0.137 0.103 0.033
DEC [Xie2016] 0.359 0.276 0.186 0.381 0.282 0.203 0.195 0.122 0.079 0.301 0.257 0.161 0.185 0.136 0.050
DAC [DAIC2017] 0.434 0.347 0.235 0.503 0.369 0.284 0.246 0.182 0.095 0.498 0.379 0.280 0.219 0.162 0.078
DAC* [DAIC2017] 0.470 0.366 0.257 0.527 0.394 0.302 0.275 0.219 0.111 0.522 0.396 0.306 0.238 0.185 0.088
IIC [IIC2019] 0.499 N/A N/A N/A N/A N/A N/A N/A N/A 0.617 N/A N/A 0.257 N/A N/A
DCCM [Wu_2019_ICCV] 0.482 0.376 0.262 0.710 0.608 0.555 0.383 0.321 0.182 0.623 0.496 0.408 0.327 0.285 0.173
GATCluster 0.583 0.446 0.363 0.739 0.594 0.552 0.322 0.281 0.163 0.610 0.475 0.402 0.281 0.215 0.116
GATCluster-128 N/A N/A N/A 0.762 0.609 0.572 0.333 0.322 0.200 N/A N/A N/A N/A N/A N/A
Table 1: Comparison with the existing methods. GATCluster-128 resizes input images to 128×128 for ImageNet-10 and ImageNet-Dog, while the other models take 96×96 images as inputs. On Cifar10 and Cifar100-20, the input size is 32×32. The best three results are highlighted in bold.
Figure 4: Visualization of GATCluster on STL10. For each class, one example image, the predicted label features, and the attention map overlaid on the image are shown from left to right.

4.2 Implementation details

At the training stage, especially at the beginning, samples tend to be clustered by color cues. Therefore, we took grayscale images as inputs, except for ImageNet-Dog, as color plays an important role in differentiating the sub-categories of dogs. Note that the images are converted to grayscale after applying the random color jittering during training. The $L_1$-normalization was implemented by a softmax layer. For simplicity, we assume an isotropic covariance $\Sigma = \sigma^2 I$, so only three parameters of the Gaussian kernel need to be estimated, i.e., $\mu_x$, $\mu_y$, and $\sigma$. We used Adam to optimize the network parameters, and the base learning rate was set to 0.001. We set the batch size to 1000 for STL10 and ImageNet-10, 1500 for ImageNet-Dog, 4000 for Cifar10, and 6000 for Cifar100-20. The sub-batch size used in calculating the pseudo-targets can be adjusted according to the device memory and does not affect the results; it was 32 in all experiments. The hyperparameters $\delta$, $\lambda_s$, $\lambda_e$, and $\lambda_a$ were empirically set to 0.05, 5, 5, and 3, respectively.

4.3 Network architecture

In all experiments, we used a VGG-style convolutional network with batch normalization to implement the image feature extraction module. The main differences between the architectures for different experiments are the number of layers, the kernel sizes, and the output cluster number. The details of these architectures can be found in the supplementary material.

4.4 Evaluation metrics

We used three popular metrics to evaluate the performance of the involved clustering methods, including Adjusted Rand Index (ARI) [hubert1985comparing], Normalized Mutual Information (NMI) [strehl2002clusterensembles] and clustering Accuracy (ACC) [LiD06].
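For reference, ACC can be computed with the Hungarian algorithm over the cluster-to-class confusion matrix, while NMI and ARI are available directly in scikit-learn; a short sketch (ours, not the paper's evaluation code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """ACC: find the best one-to-one mapping between predicted cluster ids
    and ground-truth classes (Hungarian algorithm), then score as accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                        # confusion counts
    rows, cols = linear_sum_assignment(-cost)  # maximize matched counts
    return cost[rows, cols].sum() / y_true.size

# NMI and ARI come directly from scikit-learn:
#   normalized_mutual_info_score(y_true, y_pred)
#   adjusted_rand_score(y_true, y_pred)
```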

4.5 Comparison with existing methods

Table 1 presents a comparison with the existing methods. Under the same conditions, the proposed method improves the clustering performance by approximately 8%, 7%, and 10% over the best of the other methods in terms of ACC, NMI, and ARI on STL10. On ImageNet-10, ACC is improved by 5% over the strong baseline set by the recently proposed DCCM [Wu_2019_ICCV]. On the sub-category dataset ImageNet-Dog, our method achieves results comparable to those of DCCM. Moreover, our method is capable of processing larger images, in which case the clustering results are further improved. On the small-image datasets, i.e., Cifar10 and Cifar100-20, the proposed method also achieves performance comparable to the state of the art. These results demonstrate the superiority of the proposed method.

4.6 Ablation study

To validate the effectiveness of each component, we conducted ablation studies, as shown in Table 2. Similar to [ADC2019], each variant was evaluated ten times, and the best accuracy, the average accuracy, and the standard deviation are reported. Table 2 shows that the best accuracy is achieved when all learning tasks are used with grayscale images. In particular, the attention module improves the best accuracy by up to 4.4 percentage points and the average accuracy by 4.3 points. As shown in Figure 4, the attention module is able to localize the semantic objects and thus captures the expected concepts. In addition, color information is a strong distraction for object clustering, and better clustering results are obtained after converting color images to grayscale.

Because the model very easily gets trapped in trivial solutions when the entropy loss is omitted, we do not report results for the ablation of the entropy analysis task.

Method ACC NMI ARI
Best Mean Std Best Mean Std Best Mean Std
Color 0.556 0.517 0.034 0.427 0.402 0.022 0.341 0.298 0.031
No TI 0.576 0.546 0.016 0.435 0.417 0.012 0.347 0.325 0.014
No SM 0.579 0.529 0.029 0.438 0.412 0.019 0.356 0.310 0.024
No AM 0.539 0.494 0.020 0.416 0.383 0.015 0.316 0.282 0.013
Full setting 0.583 0.537 0.033 0.446 0.415 0.022 0.363 0.315 0.032
Table 2: Ablation study of GATCluster on STL10, where TI, SM, and AM denote transformation invariance, separability maximization, and attention mapping, respectively. Each algorithm variant was evaluated ten times.

4.7 Effect of the image size

To the best of our knowledge, 96×96 (e.g., in STL10) is the largest image size that has been used by existing unsupervised clustering methods. However, images in modern datasets usually have much larger sizes, which have not been explored by unsupervised deep clustering methods. With the proposed two-step learning algorithm, we are able to process large images. An interesting question then arises: do larger images help produce a better clustering accuracy? To answer this question, we explored the effect of the image size on the clustering results using ImageNet-10. We evaluated four input image sizes, including 96×96 and 128×128 as well as two larger sizes, by simply resizing the original images. For each image size, the model architecture is slightly different, as described in the supplementary material. We conducted five experimental trials for each image size and report the best and average accuracies as well as the standard deviation in Table 3. The results show that the clustering performance is significantly improved when the image size is increased from 96×96 to 128×128, demonstrating that taking larger images as inputs can be beneficial for clustering.

Practically, our proposed method can be applied to images of much larger sizes. In this work, the clustering results were not further improved when the image size exceeded 128×128. However, we believe the proposed method will be valuable for more complex image clustering problems in the future.

Image size ACC NMI ARI
Best Mean Std Best Mean Std Best Mean Std
96×96 0.739 0.708 0.031 0.594 0.581 0.012 0.552 0.529 0.019
128×128 0.762 0.735 0.020 0.609 0.592 0.013 0.572 0.544 0.023
— 0.712 0.669 0.033 0.567 0.511 0.043 0.500 0.453 0.039
— 0.738 0.608 0.067 0.612 0.474 0.071 0.559 0.405 0.079
Table 3: Clustering results of different image sizes on ImageNet-10. Each setting was evaluated five times.

4.8 Effect of the attention map resolution

A high-resolution attention map provides precise localization but weakens the global semantics. We evaluated the effect of the attention map resolution on the clustering results for ImageNet-10. The input image size in this experiment was set to 128×128, and five attention map resolutions were evaluated, as shown in Table 4. The results demonstrate that a moderate attention map resolution achieves the best results.

Resolution ACC NMI ARI
Best Mean Std Best Mean Std Best Mean Std
— 0.746 0.666 0.050 0.625 0.538 0.050 0.569 0.477 0.045
— 0.706 0.678 0.017 0.539 0.528 0.012 0.486 0.473 0.014
— 0.762 0.735 0.020 0.609 0.592 0.013 0.571 0.544 0.023
— 0.742 0.719 0.018 0.618 0.594 0.019 0.561 0.536 0.018
— 0.671 0.645 0.020 0.549 0.520 0.021 0.478 0.450 0.020
Table 4: Clustering results with different attention map resolutions on ImageNet-10. Each setting was evaluated five times.

5 Conclusion

For deep unsupervised clustering, we have proposed the GATCluster model, which learns discriminative semantic label features with four self-learning tasks. Specifically, the transformation invariance and separability maximization tasks explore the similarity and discrepancy between samples. The designed attention mechanism facilitates the formation of object concepts during the training process, and the entropy loss effectively avoids trivial solutions. With all learning tasks combined, the developed two-step learning algorithm is both training-friendly and memory-efficient and thus capable of processing large images. The GATCluster model has potential for complex image clustering applications.

6 Acknowledgments

The research was supported by the National Natural Science Foundation of China (61976167, 61571353, U19B2030) and the Science and Technology Projects of Xi’an, China (201809170CX11JC12).

References