Neural Collaborative Subspace Clustering

04/24/2019, by Tong Zhang et al.

We introduce the Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering. This makes our algorithm one of the few of its kind that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier which determines whether a pair of points lies on the same subspace or not. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones.


1 Introduction

In this paper, we tackle the problem of subspace clustering, where we aim to cluster data points drawn from a union of low-dimensional subspaces in an unsupervised manner. Subspace Clustering (SC) has achieved great success in various computer vision tasks, such as motion segmentation (Kanatani, 2001; Elhamifar & Vidal, 2009; Ji et al., 2014a, 2016), face clustering (Ho et al., 2003; Elhamifar & Vidal, 2013) and image segmentation (Yang et al., 2008; Ma et al., 2007).

The majority of SC algorithms (Yan & Pollefeys, 2006; Chen & Lerman, 2009; Elhamifar & Vidal, 2013; Liu et al., 2013; Wang et al., 2013; Lu et al., 2012; Ji et al., 2015; You et al., 2016a) rely on the linear subspace assumption to construct the affinity matrix for spectral clustering. However, in many cases data do not naturally conform to linear models, which in turn has resulted in the development of non-linear SC techniques. Kernel methods (Chen et al., 2009; Patel et al., 2013; Patel & Vidal, 2014; Yin et al., 2016; Xiao et al., 2016; Ji et al., 2017a) can be employed to implicitly map data to higher-dimensional spaces, hoping that the data conform better to linear models in the resulting spaces. However, aside from the difficulty of choosing the right kernel function (and its parameters), there is no theoretical guarantee that such a kernel exists. The use of deep neural networks as non-linear mapping functions to determine subspace-friendly latent spaces has formed the latest development in the field, with promising results (Ji et al., 2017b; Peng et al., 2016).

Despite significant improvements, SC algorithms still resort to spectral clustering, which in hindsight requires constructing an affinity matrix. This step, albeit effective, hampers scalability, as it takes $\mathcal{O}(n^2)$ memory and at least $\mathcal{O}(kn^2)$ computation to store and decompose the affinity matrix for $n$ data points and $k$ clusters. There have been several attempts to resolve the scalability issue. For example, (You et al., 2016a) accelerate the construction of the affinity matrix using orthogonal matching pursuit, while (Zhang et al., 2018) resort to $k$-subspace clustering to avoid generating the affinity matrix altogether. However, the scalability issue either remains due to the use of spectral clustering (You et al., 2016a), or is mitigated at the cost of performance (Zhang et al., 2018).

In this paper, we propose a neural structure to improve the performance of subspace clustering while being mindful of the scalability issue. To this end, we first formulate subspace clustering as a classification problem, which in turn removes the spectral clustering step from the computations. Our neural model is comprised of two modules, one for classification and one for affinity learning. Both modules collaborate during learning, hence the name “Neural Collaborative Subspace Clustering”. During training, in each iteration we use the affinity matrix generated by subspace self-expressiveness to supervise the affinity matrix computed from the classification part. Concurrently, we make use of the classification part to improve self-expressiveness and build a better affinity matrix through collaborative optimization.

We evaluate our algorithm on three datasets, namely MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), and the hardest of the three, the Stanford Online Products dataset (Oh Song et al., 2016), which exhibit different levels of difficulty. Our empirical study shows the superiority of the proposed algorithm over several state-of-the-art baselines, including deep subspace clustering techniques.

2 Related Work

Figure 1: The Neural Collaborative Subspace Clustering framework. The affinity matrix generated by the self-expressive layer, $\mathbf{A}^{s}$, and the one generated by the classifier, $\mathbf{A}^{c}$, supervise each other by selecting the high-confidence parts for training. Red squares in $\mathbf{A}^{s}$ highlight positive pairs (belonging to the same subspace). Conversely, red squares in $\mathbf{A}^{c}$ highlight negative pairs (belonging to different subspaces). Affinities are coded with shades: light gray denotes large affinity, while dark shades represent small affinities.

2.1 Subspace Clustering

Linear subspace clustering encompasses a vast set of techniques; among them, spectral clustering based algorithms are the most favored for clustering high-dimensional data (Vidal, 2011). One of the crucial challenges in employing spectral clustering on subspaces is the construction of an appropriate affinity matrix. We can categorize the algorithms into three main groups based on how the affinity matrix is constructed: factorization based methods (Gruber & Weiss, 2004; Mo & Draper, 2012), model based methods (Chen & Lerman, 2009; Ochs et al., 2014; Purkait et al., 2014), and self-expressiveness based methods (Elhamifar & Vidal, 2009; Ji et al., 2014b; Liu et al., 2013; Vidal & Favaro, 2014).

The latter, i.e., self-expressiveness based methods, have become dominant due to their elegant convex formulations and the existence of theoretical analyses. The basic idea of subspace self-expressiveness is that one point can be represented as a linear combination of other points from the same subspace. This leads to several advantages over other methods: (i) it is more robust to noise and outliers; (ii) the computational complexity of the self-expressiveness affinity does not grow exponentially with the number of subspaces and their dimensions; (iii) it exploits non-local information without the need to specify the size of a neighborhood (i.e., the number of nearest neighbors, as is usually required for identifying locally linear subspaces (Yan & Pollefeys, 2006; Zhang et al., 2012)).

The assumption of having linear subspaces does not necessarily hold in practical problems. Several works have been proposed to tackle the situation where data points do not form linear subspaces but non-linear ones. Kernel Sparse Subspace Clustering (KSSC) (Patel & Vidal, 2014) and Kernel Low-Rank Representation (Xiao et al., 2016) benefit from pre-defined kernel functions, such as polynomial or Radial Basis Functions (RBF), to cast the problem in high-dimensional (possibly infinite-dimensional) reproducing kernel Hilbert spaces. However, it is still not clear how to choose proper kernel functions for different datasets, and there is no guarantee that the feature spaces generated by the kernel trick are well-suited to linear subspace clustering.

Recently, Deep Subspace Clustering Networks (DSC-Net) (Ji et al., 2017b) were introduced to tackle the non-linearity arising in subspace clustering: data is non-linearly mapped to a latent space with convolutional auto-encoders, and a new self-expressive layer is introduced between the encoder and decoder to facilitate end-to-end learning of the affinity matrix. Although DSC-Net outperforms traditional subspace clustering methods by a large margin, its computational cost and memory footprint can become overwhelming even for mid-size problems.

There are a few attempts to tackle the scalability of subspace clustering. SSC-Orthogonal Matching Pursuit (SSC-OMP) (You et al., 2016b) replaces the large-scale convex optimization procedure with the OMP algorithm to build the affinity matrix. However, SSC-OMP sacrifices clustering performance in favor of speeding up the computations, and it may still fail when the number of data points is very large. $k$-Subspace Clustering Networks ($k$-SCN) (Zhang et al., 2018) were proposed to make subspace clustering applicable to large datasets. This is achieved by bypassing the construction of the affinity matrix, thereby avoiding spectral clustering, and by introducing the iterative method of $k$-subspace clustering (Tseng, 2000; Bradley & Mangasarian, 2000) into a deep structure. Although $k$-SCN develops two approaches to update the subspaces and the networks, it still shares the same drawbacks as other iterative methods; for instance, it requires a good initialization and seems fragile to outliers.

2.2 Model Fitting

In learning theory, distinguishing outliers and noisy samples from clean ones to facilitate training is an active research topic. For example, Random Sample Consensus (RANSAC) (Fischler & Bolles, 1981) is a classical and well-received algorithm for fitting a model to a cloud of points corrupted by noise. Employing RANSAC on subspaces (Yang et al., 2006) in large-scale problems does not seem to be the right practice, as RANSAC requires a large number of iterations to achieve an acceptable fit.

Curriculum Learning (Bengio et al., 2009) begins by learning a model from easy samples and gradually adapts the model to more complex ones, mimicking the cognitive process of humans. Ensemble Learning (Dietterich, 2000) tries to improve the performance of machine learning algorithms by training different models and then aggregating their predictions. Furthermore, the knowledge distilled from large deep learning models can be used to supervise a smaller model (Hinton et al., 2015). Although Curriculum Learning, Ensemble Learning and knowledge distillation are notable methods, adapting them to problems with limited annotations, let alone the fully unlabeled scenario, is far from clear.

2.3 Deep Clustering

Many research papers have explored clustering with deep neural networks. Deep Embedded Clustering (DEC) (Xie et al., 2016) is one of the pioneers in this area: the authors propose to pre-train a stacked auto-encoder (SAE) (Bengio et al., 2007) and fine-tune the encoder with a regularizer based on the Student's t-distribution to achieve cluster-friendly embeddings. On the downside, DEC is sensitive to the network structure and initialization. Various forms of Generative Adversarial Networks (GANs) have been employed for clustering, such as Info-GAN (Chen et al., 2016) and ClusterGAN (Mukherjee et al., 2018), both of which intend to enforce discriminative features in the latent space to simultaneously generate and cluster images. Deep Adaptive image Clustering (DAC) (Chang et al., 2017) uses fully convolutional neural nets (Springenberg et al., 2014) as initialization to perform self-supervised learning, and achieves remarkable results on various clustering benchmarks. However, sensitivity to the network structure again seems to be a concern for DAC.

In this paper, we formulate subspace clustering as a binary classification problem through collaborative learning of two modules, one for image classification and the other for subspace affinity learning. Instead of performing spectral clustering on the whole dataset, we train our model in a stochastic manner, leading to a scalable paradigm for subspace clustering.

3 Proposed Method

To design a scalable SC algorithm, our idea is to identify whether a pair of points lies on the same subspace or not. Upon attaining such knowledge (for a large-enough set of pairs), a deep model can optimize its weights to maximize such relationships (lying on subspaces or not). This can be nicely cast as a binary classification problem. However, since ground-truth labels are not available to us, it is not obvious how such a classifier should be built and trained.

In this work, we propose to make use of two confidence maps (see Fig. 1 for a conceptual visualization) as the supervision signal for SC. To be more specific, we make use of the concept of self-expressiveness to identify positive pairs, i.e., pairs that lie on the same subspace. To identify negative pairs, i.e., pairs that do not belong to the same subspace, we benefit from the classifier-induced negative confidence map. This, as we will show later, is due to the fact that the former (the subspace affinity) can confidently mine positive pairs (with affinity close to 1), while the latter (the classification affinity) is good at localizing negative pairs (with affinity close to 0). The two confidence maps not only provide the supervision signal to optimize a deep model, but also act collaboratively as partial supervision for each other.

3.1 Binary Classification

Given a dataset $\mathcal{X} = \{\mathbf{x}_i\}_{i=1}^{n}$ with $n$ points drawn from $k$ clusters, we aim to train a classifier to predict class labels for the data points without using the ground-truth labels. To this end, we propose to use a multi-class classifier which consists of a few convolutional layers (with non-linear rectifiers) and a softmax output layer. We then convert it to an affinity-based binary classifier by

$A^{c}_{ij} = \mathbf{v}_i^{\top} \mathbf{v}_j,$   (1)

where $\mathbf{v}_i$ is the $k$-dimensional prediction vector of $\mathbf{x}_i$ after $\ell_2$ normalization. Ideally, when $\mathbf{v}_i$ is one-hot, $\mathbf{A}^{c}$ is a binary matrix encoding the confidence of data points belonging to the same cluster. So if we supervise the classifier using $\mathbf{A}^{c}$, we will end up with a binary classification problem. Also note that $A^{c}_{ij}$ can be interpreted as the cosine similarity between the softmax prediction vectors of $\mathbf{x}_i$ and $\mathbf{x}_j$, which has been widely used in different contexts (Nguyen & Bai, 2010). However, unlike the cosine similarity, which lies in $[-1, 1]$, $A^{c}_{ij}$ lies within $[0, 1]$, since the vectors are normalized by the softmax and the $\ell_2$ norm. We illustrate this in Fig. 2.
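To make the construction of $\mathbf{A}^{c}$ concrete, the following minimal NumPy sketch computes the classification affinity of Eqn. (1) from raw classifier logits; the function name and the toy inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

def classification_affinity(logits):
    """Classification affinity A^c (Eqn. 1): softmax, then l2-normalise each
    prediction vector, then take pairwise inner products.
    `logits` is an (n, k) array of raw classifier outputs (hypothetical input)."""
    # softmax over the k cluster logits of each sample
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    v = e / e.sum(axis=1, keepdims=True)
    # l2 normalisation so that affinities fall in [0, 1]
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return v @ v.T  # (n, n) affinity matrix

# toy usage: two near one-hot predictions of the same class give affinity close to 1
A_c = classification_affinity(np.array([[5., 0., 0.], [4., 1., 0.], [0., 5., 0.]]))
print(np.round(A_c, 2))
```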


Figure 2: By normalizing the feature vectors after softmax function and computing their inner product, an affinity matrix can be generated to encode the clustering information.

3.2 Self-Expressiveness Affinity

Subspace self-expressiveness can be worded as follows: a data point drawn from a union of linear subspaces can be represented by a linear combination of other points from the same subspace. Stacking all the points into the columns of a data matrix $\mathbf{X}$, self-expressiveness can be simply described as $\mathbf{X} = \mathbf{X}\mathbf{C}$, where $\mathbf{C}$ is the coefficient matrix.

It has been shown (e.g., (Ji et al., 2014b)) that by minimizing certain norms of the coefficient matrix $\mathbf{C}$, a block-diagonal structure (up to certain permutations) on $\mathbf{C}$ can be achieved. This translates into $c_{ij} \neq 0$ only if data points $\mathbf{x}_i$ and $\mathbf{x}_j$ come from the same subspace. Therefore, the loss function for learning the affinity matrix can be written as:

$\min_{\mathbf{C}} \|\mathbf{C}\|_p \quad \text{s.t.} \quad \mathbf{X} = \mathbf{X}\mathbf{C}, \;\; \operatorname{diag}(\mathbf{C}) = \mathbf{0},$   (2)

where $\|\cdot\|_p$ denotes a matrix norm. For example, Sparse Subspace Clustering (SSC) (Elhamifar & Vidal, 2009) sticks to the $\ell_1$ norm, Low Rank Representation (LRR) models (Liu & Yan, 2011; Vidal & Favaro, 2014) pick the nuclear norm, and Efficient Dense Subspace Clustering (Ji et al., 2014b) uses the $\ell_2$ (Frobenius) norm. To handle data corruption, a relaxed version can be derived as:

$\min_{\mathbf{C}} \|\mathbf{C}\|_p + \frac{\lambda}{2}\|\mathbf{X} - \mathbf{X}\mathbf{C}\|_F^2 \quad \text{s.t.} \quad \operatorname{diag}(\mathbf{C}) = \mathbf{0}.$   (3)

Here, $\lambda$ is a weighting parameter balancing the regularization term and the data fidelity term.

To handle subspace non-linearity, one can employ convolutional auto-encoders to non-linearly map the input data $\mathbf{X}$ to a latent space $\mathbf{Z}$, and transfer the self-expressiveness to a linear layer (without non-linear activation and bias parameters) named the self-expressive layer (Ji et al., 2017b) (see the bottom part of Fig. 1). This enables us to learn the subspace affinity in an end-to-end manner using the weight parameters $\mathbf{C}$ of the self-expressive layer:

$A^{s}_{ij} = \frac{|c_{ij}| + |c_{ji}|}{2\,\bar{c}_i} \;\; (i \neq j), \qquad A^{s}_{ii} = 1,$   (4)

where $\bar{c}_i$ is the maximum absolute value of the off-diagonal entries of the current (i.e., $i$-th) row of $\mathbf{C}$. Note that $A^{s}_{ij}$ then lies within $[0, 1]$.
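The sketch below builds the subspace affinity of Eqn. (4) from a coefficient matrix $\mathbf{C}$, following the description above (symmetrize the magnitudes, divide each row by its largest off-diagonal entry, set the diagonal to 1); the exact symmetrization is an assumption based on the text rather than the authors' released code.

```python
import numpy as np

def subspace_affinity(C, eps=1e-12):
    """Subspace affinity A^s from self-expressive coefficients C (Eqn. 4 sketch):
    symmetrise |C|, divide each row by its largest off-diagonal entry, and put
    1 on the diagonal so that values lie in [0, 1]."""
    A = 0.5 * (np.abs(C) + np.abs(C).T)      # symmetrised magnitudes
    np.fill_diagonal(A, 0.0)                 # ignore self-coefficients when normalising
    row_max = A.max(axis=1, keepdims=True)   # largest off-diagonal entry per row
    A = A / (row_max + eps)
    np.fill_diagonal(A, 1.0)                 # each point is fully affine to itself
    return A

C = np.array([[0.0, 0.8, 0.1],
              [0.7, 0.0, 0.0],
              [0.1, 0.0, 0.0]])
print(np.round(subspace_affinity(C), 2))
```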

3.3 Collaborative Learning

The purpose of collaborative learning is to find a principled way to exploit the advantages of the different modules. The classification module and the self-expressive module distill different information, in the sense that the former tends to extract more abstract and discriminative features while the latter focuses more on capturing the pairwise correlations between data samples. From our previous discussion, ideally the subspace affinity $A^{s}_{ij}$ is non-zero only if $\mathbf{x}_i$ and $\mathbf{x}_j$ are from the same subspace, which means that $\mathbf{A}^{s}$ can be used to mine similar pairs (i.e., positive samples). On the other hand, if the classification affinity $A^{c}_{ij}$ is close to zero, it strongly indicates that $\mathbf{x}_i$ and $\mathbf{x}_j$ are dissimilar (i.e., a negative sample). Therefore, we carefully design a mechanism to let both modules collaboratively supervise each other.

Given $\mathbf{A}^{s}$ and $\mathbf{A}^{c}$, we pick the high-confidence affinities as supervision for training. We illustrate this process in Fig. 1. The “positive confidence” in Fig. 1 denotes pairs from the same class, and the “negative confidence” represents pairs from different classes. As such, we select high affinities from $\mathbf{A}^{s}$ and small affinities from $\mathbf{A}^{c}$, and formulate the collaborative learning problem as:

$\mathcal{L}_{\mathrm{collab}} = \ell_{s \Rightarrow c}(\mathbf{A}^{s}, \mathbf{A}^{c}) + \lambda_{cr}\, \ell_{c \Rightarrow s}(\mathbf{A}^{c}, \mathbf{A}^{s}),$   (5)

where $\ell_{s \Rightarrow c}$ and $\ell_{c \Rightarrow s}$ denote cross-entropy functions with a sample selection process, defined as follows:

$\ell_{s \Rightarrow c} = \sum_{i,j} \mathbb{1}(A^{s}_{ij} > u_1)\, H(A^{s}_{ij}, A^{c}_{ij}),$   (6)

and

$\ell_{c \Rightarrow s} = \sum_{i,j} \mathbb{1}(A^{c}_{ij} < u_2)\, H(A^{c}_{ij}, A^{s}_{ij}),$   (7)

where $\mathbb{1}(\cdot)$ is the indicator function returning $1$ or $0$, $u_1$ and $u_2$ are thresholding parameters, and $H(\cdot,\cdot)$ is the cross-entropy function, defined as $H(p, q) = -p \log q - (1 - p) \log (1 - q)$.

Note that the cross-entropy loss is a non-symmetric metric function, where the former probability serves as a supervisor to the latter. Therefore, in Eqn. (6), the subspace affinity matrix $\mathbf{A}^{s}$ is used as the “teacher” to supervise the classification part (the “student”). Conversely, in Eqn. (7), the classification affinity matrix $\mathbf{A}^{c}$ works as the “teacher” to help the subspace affinity learning module correct negative samples. However, to better facilitate gradient back-propagation between the two modules, we can approximate the hard indicator functions in Eqns. (6) and (7) with soft, affinity-dependent weights. The weight parameter $\lambda_{cr}$ in Eqn. (5), called the collaboration rate, controls the contributions of $\ell_{s \Rightarrow c}$ and $\ell_{c \Rightarrow s}$. It can be set as the ratio of the number of positive confident pairs to the number of negative confident pairs, or slightly tuned for better performance.
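A minimal NumPy sketch of the collaborative losses in Eqns. (5)-(7), under the reconstruction above: high-confidence entries of $\mathbf{A}^{s}$ supervise $\mathbf{A}^{c}$, and low-confidence entries of $\mathbf{A}^{c}$ supervise $\mathbf{A}^{s}$. The threshold values, the mean reduction over selected pairs, and the function names are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def binary_cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -p log q - (1 - p) log(1 - q), applied element-wise."""
    q = np.clip(q, eps, 1.0 - eps)
    return -(p * np.log(q) + (1.0 - p) * np.log(1.0 - q))

def collaborative_loss(A_s, A_c, u1=0.9, u2=0.1, collab_rate=1.0):
    """Sketch of Eqns. (5)-(7): A^s teaches A^c on confident positive pairs,
    A^c teaches A^s on confident negative pairs."""
    pos_mask = A_s > u1   # confident same-subspace pairs
    neg_mask = A_c < u2   # confident different-cluster pairs
    l_s2c = binary_cross_entropy(A_s, A_c)[pos_mask].mean() if pos_mask.any() else 0.0
    l_c2s = binary_cross_entropy(A_c, A_s)[neg_mask].mean() if neg_mask.any() else 0.0
    return l_s2c + collab_rate * l_c2s

# toy usage with tiny 2x2 affinity matrices
A_s = np.array([[1.0, 0.95], [0.95, 1.0]])
A_c = np.array([[1.0, 0.60], [0.60, 1.0]])
print(collaborative_loss(A_s, A_c))
```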

3.4 Loss Function

After introducing all the building blocks of this work, we now explain how to jointly organize them in a network and train it with a carefully defined loss function. As shown in Fig. 1, our network is composed of four main parts: (i) a convolutional encoder that maps the input data $\mathbf{X}$ to a latent representation $\mathbf{Z}$; (ii) a linear self-expressive layer which learns the subspace affinity through its weights $\mathbf{C}$; (iii) a convolutional decoder that maps the data after the self-expressive layer, i.e., $\mathbf{Z}\mathbf{C}$, back to the input space as $\hat{\mathbf{X}}$; (iv) a multi-class classifier that outputs $k$-dimensional prediction vectors, from which a classification affinity matrix can be constructed. Our loss function consists of two parts, i.e., the collaborative learning loss and the subspace learning loss, which can be written as:

$\mathcal{L} = \mathcal{L}_{\mathrm{sub}} + \alpha\, \mathcal{L}_{\mathrm{collab}},$   (8)

where $\Theta$ denotes the neural network parameters and $\alpha$ is a weight parameter for the collaborative learning loss. $\mathcal{L}_{\mathrm{sub}}$ is the loss used to train the affinity matrix through the self-expressive layer. Combining Eqn. (3) with the reconstruction loss of the convolutional auto-encoder, we arrive at:

$\mathcal{L}_{\mathrm{sub}} = \|\mathbf{C}\|_p + \frac{\lambda}{2}\|\mathbf{Z} - \mathbf{Z}\mathbf{C}\|_F^2 + \frac{1}{2}\|\mathbf{X} - \hat{\mathbf{X}}\|_F^2 \quad \text{s.t.} \quad \operatorname{diag}(\mathbf{C}) = \mathbf{0},$   (9)

where $\mathbf{A}^{s}$ is a function of $\mathbf{C}$ as defined in Eqn. (4).
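The following sketch assembles the losses of Eqns. (8) and (9) as plain forward computations, using the Frobenius norm for $\|\mathbf{C}\|_p$ as an assumption (other norms fit the same template); in the actual model these quantities are minimized with respect to the network parameters and $\mathbf{C}$.

```python
import numpy as np

def subspace_loss(X, X_hat, Z, C, lam=1.0):
    """Sketch of Eqn. (9): regulariser on C (Frobenius norm assumed here),
    self-expressiveness residual in the latent space, and reconstruction error."""
    reg = np.sum(C ** 2)                              # ||C||_F^2
    selfexp = 0.5 * lam * np.sum((Z - Z @ C) ** 2)    # latent self-expressiveness
    recon = 0.5 * np.sum((X - X_hat) ** 2)            # auto-encoder reconstruction
    return reg + selfexp + recon

def total_loss(L_sub, L_collab, alpha=1.0):
    """Sketch of Eqn. (8): subspace loss plus weighted collaborative loss."""
    return L_sub + alpha * L_collab
```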

After the training stage, we no longer need to run the decoder or the self-expressive layer to infer the labels. We can directly infer the cluster labels through the classifier output $\mathbf{v}_i$:

$l_i = \operatorname{arg\,max}_{j \in \{1, \dots, k\}} v_{ij},$   (10)

where $l_i$ is the cluster label of image $\mathbf{x}_i$.
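Inference per Eqn. (10) then reduces to an arg-max over the classifier's softmax outputs, as the toy snippet below illustrates.

```python
import numpy as np

def predict_labels(softmax_outputs):
    """Eqn. (10): the cluster label is the arg-max of the classifier output;
    the decoder and self-expressive layer are not needed at test time."""
    return np.argmax(softmax_outputs, axis=1)

print(predict_labels(np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])))  # -> [0 2]
```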

4 Optimization and Training

  Input: dataset $\mathcal{X}$, number of clusters $k$, sample selection thresholds $u_1$ and $u_2$, learning rate of the auto-encoder, and learning rate of the other parts
  Initialization: Pre-train the Convolutional Auto-encoder by minimizing the reconstruction error.
  repeat
     For every mini-batch of data:
     Train the auto-encoder with the self-expressive layer by minimizing the loss function in Eqn. (9) to update $\mathbf{C}$ (and hence the subspace affinity $\mathbf{A}^{s}$).
     Forward the batch data through the classifier to get $\mathbf{A}^{c}$.
     Do sample selection and collaborative learning by minimizing Eqn. (5) to update the classifier.
     Jointly update all the parameters by minimizing Eqn. (8).
  until reaching the maximum number of epochs

  Output: cluster labels for all samples, obtained via Eqn. (10)
Algorithm 1 Neural Collaborative Subspace Clustering

In this section, we provide more details about how training is carried out. Similar to other auto-encoder based clustering methods, we pre-train the auto-encoder by minimizing the reconstruction error to obtain a good initialization of the latent space for subspace clustering.

According to (Elhamifar & Vidal, 2009), the solution of formulation (2) is guaranteed to have a block-diagonal structure (up to certain permutations) under the assumption that the subspaces are independent. To account for this, we make sure that the dimensionality of the latent space is greater than (the subspace intrinsic dimension) × (the number of clusters). Note that our algorithm does not require specifying the subspace intrinsic dimensions explicitly; empirically, we found that a rough guess of the subspace intrinsic dimension suffices, e.g., in most cases we can set it to 9.

In doing so, we make use of stride convolutions to down-sample the images while increasing the number of channels over the layers to keep the latent space dimension large. Since we have pre-trained the auto-encoder, we use a smaller learning rate in the auto-encoder when the collaborative learning is performed. Furthermore, compared to DSC-Net and other spectral clustering based methods, which require sophisticated techniques to post-process the affinity matrix, we only need to compute $\mathbf{A}^{s}$ from $\mathbf{C}$ and normalize it (dividing each row by its largest value and assigning 1 to the diagonal entries) to ensure that the subspace affinity matrix lies in the same range as the classification affinity matrix.

We adopt a three-stage training strategy: first, we train the auto-encoder together with the self-expressive layer using the loss in (9) to update the subspace affinity $\mathbf{A}^{s}$; second, we train the classifier to minimize Eqn. (5); third, we jointly train the whole network to minimize the loss in (8). All these details are summarized in Algorithm 1.
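The per-batch schedule of Algorithm 1 can be summarized by the skeleton below. The three step functions are dummy stand-ins for the real updates (they only return random affinities here), and the toy dataset size and batch size are illustrative; the paper uses a batch size of 5000 on images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-ins for the real modules (auto-encoder + self-expressive layer,
# and the classifier). The point of this sketch is the three-stage schedule.
def subspace_step(batch):            # stage 1: minimise Eqn. (9), yields A^s
    return rng.random((len(batch), len(batch)))
def classifier_step(batch, A_s):     # stage 2: minimise Eqn. (5), yields A^c
    return rng.random((len(batch), len(batch)))
def joint_step(batch, A_s, A_c):     # stage 3: minimise Eqn. (8) end-to-end
    pass

data = np.arange(400)   # toy dataset indices (the real input is image data)
batch_size = 100        # the paper uses 5000; reduced here for the toy run
max_epochs = 2

for epoch in range(max_epochs):
    rng.shuffle(data)
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        A_s = subspace_step(batch)          # 1) auto-encoder + self-expressive layer
        A_c = classifier_step(batch, A_s)   # 2) classifier via collaborative loss
        joint_step(batch, A_s, A_c)         # 3) joint update of all parameters
print("finished", max_epochs, "epochs")
```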

5 Experiments

We implemented our framework with TensorFlow 1.6 (Abadi et al., 2016) on an Nvidia TITAN X GPU. We mainly evaluate our method on three standard datasets, i.e., MNIST, Fashion-MNIST and a subset of the Stanford Online Products dataset. All of these datasets are considered challenging for subspace clustering, as it is hard to perform spectral clustering on datasets of this scale and the linearity assumption is not valid. The number of clusters is set to 10 as input to all competing algorithms. For all the experiments, we pre-train the convolutional auto-encoder for 60 epochs and use it as initialization, then decrease the learning rate for the subsequent training stage.

The hyper-parameters in our loss function are easy to tune. The parameter $\lambda$ in Eqn. (9) controls self-expressiveness, and it also affects the choice of the selection thresholds $u_1$ and $u_2$: if $\lambda$ is set larger, the coefficients in the affinity matrix become larger, and in that case the thresholds should be set higher. The other parameter, $\alpha$ in Eqn. (8), balances the cost of subspace clustering and collaborative learning; we usually set it to keep these two terms at the same scale so that they are treated equally. We keep $\lambda$ fixed in all experiments and slightly adjust the thresholds for each dataset.

Our method is robust to different network design choices. We tested different structures in our framework and obtained similar results on the same datasets. For MNIST, we use a three-layer convolutional encoder; for Fashion-MNIST and Stanford Online Products, we use a deeper network consisting of three residual blocks (He et al., 2016). We do not use batch normalization in our network because it would corrupt the subspace structure that we want to learn in the latent space. We use the Rectified Linear Unit (ReLU) as the non-linear activation in all our experiments.

Since there are no ground-truth labels, we choose a larger batch size than is common in supervised learning to make the training stable and robust. Specifically, we set the batch size to 5000 and use Adam (Kingma & Ba, 2014), an adaptive momentum based gradient descent method, to minimize the loss in all our experiments. In all training stages we use a smaller learning rate for the auto-encoder than for the other parts of the network.

Baseline Methods. We use various clustering methods as our baseline methods including the classic clustering methods, subspace clustering methods, deep clustering methods, and GAN based methods. Specifically, we have the following baselines:


  • classic methods: $k$-Means (Lloyd, 1982) (KM), and $k$-Means on our CAE features (CAE-KM) and SAE features (SAE-KM);

  • subspace clustering algorithms: Sparse Subspace Clustering (SSC) (Elhamifar & Vidal, 2013), Low Rank Representation (LRR) (Liu et al., 2013), Kernel Sparse Subspace Clustering (KSSC) (Patel & Vidal, 2014), Deep Subspace Clustering Network (DSC-Net) (Ji et al., 2017b), and $k$-Subspace Clustering Network ($k$-SCN) (Zhang et al., 2018);

  • deep clustering methods: Deep Embedded Clustering (DEC) (Xie et al., 2016), Deep Clustering Network (DCN) (Yang et al., 2017), and Deep Adaptive image Clustering (DAC) (Chang et al., 2017);

  • GAN based clustering methods: Info-GAN (Chen et al., 2016) and ClusterGAN (Mukherjee et al., 2018).

Evaluation Metric. For all quantitative evaluations, we make use of the unsupervised clustering accuracy, defined as

$\mathrm{ACC} = \max_{\pi} \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\{ y_i = \pi(\hat{y}_i) \},$   (11)

where $y_i$ is the ground-truth label, $\hat{y}_i$ is the subspace assignment produced by the algorithm, and $\pi$ ranges over all possible one-to-one mappings between subspaces and labels. The best mapping can be efficiently computed by the Hungarian algorithm. We also use normalized mutual information (NMI) as an additional quantitative measure. NMI scales from 0 to 1, where a smaller value means less correlation between the predicted labels and the ground-truth labels. Another quantitative metric is the adjusted Rand index (ARI), which lies between -1 and 1. It computes a similarity between two clusterings by considering all pairs of samples and counting pairs that are assigned to the same or different clusters in the ground-truth and predicted clusterings. The larger the ARI, the better the clustering performance.
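For reference, a standard implementation of the accuracy in Eqn. (11) uses the Hungarian algorithm via SciPy; the helper below is a generic sketch rather than the authors' evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Unsupervised clustering accuracy (Eqn. 11): best one-to-one mapping
    between predicted clusters and ground-truth labels via the Hungarian algorithm."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                      # co-occurrence counts
    row, col = linear_sum_assignment(-cost)  # maximise the matched pairs
    return cost[row, col].sum() / len(y_true)

print(clustering_accuracy([0, 0, 1, 1, 2], [2, 2, 0, 0, 1]))  # -> 1.0
```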

5.1 MNIST

MNIST consists of 70,000 hand-written digit images of size 28×28. Subspace non-linearity arises naturally in MNIST due to the variance in scale, thickness and orientation among the images of each digit. We thus apply our method on this dataset to see how well it handles this type of subspace non-linearity.

In this experiment, we use a three-layer convolutional auto-encoder with a self-expressive layer between the encoder and the decoder for the subspace affinity learning module. For the classification module, we connect three more convolutional layers with kernel size 2 after the encoder layers, and one convolutional layer with kernel size 1 to output the feature vector. For the threshold parameters $u_1$ and $u_2$, we use smaller values in the first epoch of training and increase them afterwards. Our algorithm took around 15 minutes to finish training on a normal PC with one TITAN X GPU.

We report the clustering results of all competing methods in Table 1. Since the spectral clustering based methods (i.e., SSC-CAE, LRR-CAE, KSSC-CAE, DSC-Net) cannot be applied to the whole dataset (due to memory and computation issues), we only use 10,000 samples to show how they perform. As shown in Table 1, the subspace algorithms do not perform very well even on 10,000 samples. Although DSC-Net is hampered by the difficulty of training its self-expressive layer at this scale, it outperforms the other subspace clustering algorithms, which shows the potential of learning subspace structure with neural networks. On the other hand, DEC, DCN, $k$-SCN and our algorithm are all based on auto-encoders, which learn embeddings with different metrics to help clustering. However, our classification module boosts performance by making the latent space of the auto-encoder more discriminative. Our algorithm therefore incorporates the advantages of these different classes of methods, e.g., self-expressiveness, non-linear mapping and discriminativeness, and achieves the best results among all the algorithms thanks to the collaborative learning paradigm.

Method ACC(%) NMI(%) ARI(%)
CAE-KM 51.00 44.87 33.52
SAE-KM 81.29 73.78 67.00
KM 53.00 50.00 37.00
DEC 84.30 80.00 75.00
DCN 83.31 80.86 74.87
SSC-CAE 43.03 56.81 28.58
LRR-CAE 55.18 66.54 40.57
KSSC-CAE 58.48 67.74 49.38
DSC-Net 65.92 73.00 57.09
$k$-SCN 87.14 78.15 75.81
Ours 94.09 86.12 87.52
Table 1: Clustering results of different methods on MNIST. For all quantitative metrics, the larger the better. The best results are shown in bold.

5.2 Fashion-MNIST

Like MNIST, Fashion-MNIST contains images of size 28×28, but of various types of fashion products rather than digits. Unlike MNIST, every class in Fashion-MNIST features different styles for different gender groups (e.g., men, women, kids and neutral). As shown in Fig. 3, the high similarity between several classes (such as {Pullover, Coat, Shirt} and {T-shirt, Dress}) makes the clustering more difficult. Compared to MNIST, Fashion-MNIST clearly poses more challenges for unsupervised clustering.

On Fashion-MNIST, we employ a network with one convolutional layer followed by three residual blocks without batch normalization in the encoder, and a symmetric structure in the decoder. As the complexity of the dataset increases, we also raise the dimensionality of the ambient space to better suit self-expressiveness, and increase the capacity of the classification module. For all convolutional layers, we keep the kernel size at 3 and set the number of channels to 10-20-30-40.
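As an illustration of the kind of encoder described here, the tf.keras sketch below stacks one convolutional layer and three pre-activation residual blocks without batch normalization, with 10-20-30-40 channels and kernel size 3; the stride-2 downsampling and the exact block layout are our assumptions, not the authors' released architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def pre_act_res_block(x, channels):
    """Pre-activation residual block without batch normalisation (an
    interpretation of the description; the authors' exact block may differ)."""
    shortcut = layers.Conv2D(channels, 1, strides=2, padding="same")(x)
    h = layers.ReLU()(x)
    h = layers.Conv2D(channels, 3, strides=2, padding="same")(h)
    h = layers.ReLU()(h)
    h = layers.Conv2D(channels, 3, strides=1, padding="same")(h)
    return layers.Add()([shortcut, h])

inputs = tf.keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(10, 3, padding="same")(inputs)   # first conv layer, 10 channels
for c in (20, 30, 40):                             # three residual blocks, 20-30-40 channels
    x = pre_act_res_block(x, c)
encoder = tf.keras.Model(inputs, x, name="fashion_mnist_encoder")
encoder.summary()
```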

We report the clustering results of all methods in Table 2, where we can clearly see that our framework outperforms all the baselines by a large margin, including the best-performing baseline $k$-SCN. Specifically, our method improves over the second-best results by a clear margin in terms of accuracy, NMI and ARI. We can clearly observe from Fig. 4 that the latent space of our framework, which is collaboratively learned by the subspace and classification modules, has a strong subspace structure and also keeps each subspace discriminative. For the subspace clustering methods, we follow the same protocol as on MNIST and use only 10,000 samples. DSC-Net does not drop much, while the performance of the other subspace clustering algorithms declines sharply compared with their performance on MNIST. Since the code of ClusterGAN is not available at the time of writing, we can only report the results from their paper (which do not include ARI).

Figure 3: Sample images from the Fashion-MNIST dataset.

Figure 4: Visualization of our latent space after dimensionality reduction with PCA.
Method ACC(%) NMI(%) ARI(%)
SAE-KM 54.35 58.53 41.86
CAE-KM 39.84 39.80 25.93
KM 47.58 51.24 34.86
DEC 59.00 60.10 44.60
DCN 58.67 59.4 43.04
DAC 61.50 63.20 50.20
ClusterGAN 63.00 64.00 -
InfoGAN 61.00 59.00 44.20
SSC-CAE 35.87 18.10 13.46
LRR-CAE 34.48 25.41 10.33
KSSC-CAE 38.17 19.73 14.74
DSC-Net 60.62 61.71 48.20
$k$-SCN 63.78 62.04 48.04
Ours 72.14 68.60 59.17
Table 2: Clustering results of different methods on Fashion-MNIST. For all quantitative metrics, the larger the better. The best results are shown in bold.

5.3 Stanford Online Products

The Stanford Online Products dataset was designed for supervised metric learning and is thus considered difficult for unsupervised clustering. Compared to the previous two datasets, its challenging aspects include: (i) the product images contain various backgrounds, from pure white to real-world environments; (ii) each product appears with different shapes, colors, scales and view angles; (iii) products from different classes may look similar to each other. To create a manageable dataset for clustering, we manually pick 10 classes out of the 12, with around 1000 images per class (10,056 images in total), and then re-scale them to gray-scale images, as shown in Fig. 5.

Our network for this dataset starts with a single convolutional layer with 10 channels, followed by three pre-activation residual blocks without batch normalization, which have 20, 30 and 10 channels respectively.

Table 3 shows the performance of all algorithms on this dataset. Due to the high difficulty of this dataset, most deep learning based methods fail to generate reasonable results. For example, DEC and DCN perform even worse than their initialization, and DAC cannot self-supervise its model to achieve a better result. Similarly, InfoGAN also fails to find enough clustering patterns. In contrast, our algorithm achieves better results than the other algorithms, especially the deep learning based ones. Our algorithm, along with KSSC and DSC-Net, achieves the top results, owing to the handling of non-linearity. Constrained by the size of the dataset, our algorithm does not greatly surpass KSSC and DSC-Net. We can easily observe that the subspace based clustering algorithms perform better than the general clustering methods. This illustrates how effective the underlying subspace assumption is in high-dimensional data spaces, and it should be considered a general tool to help clustering on large-scale datasets.

In summary, compared to other deep learning methods, our framework is not sensitive to the architecture of neural networks, as long as the dimensionality meets the requirement of subspace self-expressiveness. Furthermore, the two modules in our network progressively improve the performance in a collaborative way, which is both effective and efficient.

Method ACC (%) NMI (%) ARI (%)
DEC 22.89 12.10 3.62
DCN 21.30 8.40 3.14
DAC 23.10 9.80 6.15
InfoGAN 19.76 8.15 3.79
SSC-CAE 12.66 0.73 0.19
LRR-CAE 22.35 17.36 4.04
KSSC-CAE 26.84 15.17 7.48
DSC-Net 26.87 14.56 8.75
$k$-SCN 22.91 16.57 7.27
Ours 27.5 13.78 7.69
Table 3: Clustering results of different algorithms on the subset of the Stanford Online Products dataset. The best results are shown in bold.

Figure 5: Sample images from the Stanford Online Products dataset.

6 Conclusion

In this work, we have introduced a novel learning paradigm, dubbed collaborative learning, for unsupervised subspace clustering. To this end, we have analyzed the complementary properties of the classifier-induced affinities and the subspace-based affinities, and have further proposed a collaborative learning framework to train the network. Our network can be trained in a batch-by-batch manner and, once trained, can directly predict the clustering labels without performing spectral clustering. The experiments in our paper show that the proposed method outperforms state-of-the-art algorithms by a large margin on image clustering tasks, which validates the effectiveness of our framework.

References

  • Abadi et al. (2016) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
  • Bengio et al. (2007) Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. Greedy layer-wise training of deep networks. In NeurIPS, pp. 153–160, 2007.
  • Bengio et al. (2009) Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In ICML, pp. 41–48. ACM, 2009.
  • Bradley & Mangasarian (2000) Bradley, P. S. and Mangasarian, O. L. K-plane clustering. Journal of Global Optimization, 16(1):23–32, 2000.
  • Chang et al. (2017) Chang, J., Wang, L., Meng, G., Xiang, S., and Pan, C. Deep adaptive image clustering. In ICCV, pp. 5880–5888. IEEE, 2017.
  • Chen & Lerman (2009) Chen, G. and Lerman, G. Spectral curvature clustering (SCC). IJCV, 81(3):317–330, 2009.
  • Chen et al. (2009) Chen, G., Atev, S., and Lerman, G. Kernel spectral curvature clustering (KSCC). In ICCV Workshops, pp. 765–772. IEEE, 2009.
  • Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NeurIPS, pp. 2172–2180, 2016.
  • Dietterich (2000) Dietterich, T. G. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pp. 1–15. Springer, 2000.
  • Elhamifar & Vidal (2009) Elhamifar, E. and Vidal, R. Sparse subspace clustering. In CVPR, pp. 2790–2797, 2009.
  • Elhamifar & Vidal (2013) Elhamifar, E. and Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. TPAMI, 35(11):2765–2781, 2013.
  • Fischler & Bolles (1981) Fischler, M. A. and Bolles, R. C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
  • Gruber & Weiss (2004) Gruber, A. and Weiss, Y. Multibody factorization with uncertainty and missing data using the em algorithm. In CVPR, volume 1, pp. I–I. IEEE, 2004.
  • He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
  • Hinton et al. (2015) Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  • Ho et al. (2003) Ho, J., Yang, M.-H., Lim, J., Lee, K.-C., and Kriegman, D. Clustering appearances of objects under varying illumination conditions. In CVPR, volume 1, pp. 11–18. IEEE, 2003.
  • Ji et al. (2014a) Ji, P., Li, H., Salzmann, M., and Dai, Y. Robust motion segmentation with unknown correspondences. In ECCV, pp. 204–219. Springer, 2014a.
  • Ji et al. (2014b) Ji, P., Salzmann, M., and Li, H. Efficient dense subspace clustering. In WACV, pp. 461–468. IEEE, 2014b.
  • Ji et al. (2015) Ji, P., Salzmann, M., and Li, H. Shape interaction matrix revisited and robustified: Efficient subspace clustering with corrupted and incomplete data. In ICCV, pp. 4687–4695, 2015.
  • Ji et al. (2016) Ji, P., Li, H., Salzmann, M., and Zhong, Y. Robust multi-body feature tracker: a segmentation-free approach. In CVPR, pp. 3843–3851, 2016.
  • Ji et al. (2017a) Ji, P., Reid, I. D., Garg, R., Li, H., and Salzmann, M. Adaptive low-rank kernel subspace clustering. 2017a.
  • Ji et al. (2017b) Ji, P., Zhang, T., Li, H., Salzmann, M., and Reid, I. Deep subspace clustering networks. In NeurIPS, pp. 23–32, 2017b.
  • Kanatani (2001) Kanatani, K.-i. Motion segmentation by subspace separation and model selection. In ICCV, volume 2, pp. 586–591. IEEE, 2001.
  • Kingma & Ba (2014) Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Liu & Yan (2011) Liu, G. and Yan, S. Latent low-rank representation for subspace segmentation and feature extraction. In ICCV, pp. 1615–1622. IEEE, 2011.
  • Liu et al. (2013) Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., and Ma, Y. Robust recovery of subspace structures by low-rank representation. TPAMI, 35(1):171–184, 2013.
  • Lloyd (1982) Lloyd, S. Least squares quantization in pcm. IEEE transactions on information theory, 28(2):129–137, 1982.
  • Lu et al. (2012) Lu, C.-Y., Min, H., Zhao, Z.-Q., Zhu, L., Huang, D.-S., and Yan, S. Robust and efficient subspace segmentation via least squares regression. In ECCV, pp. 347–360. Springer, 2012.
  • Ma et al. (2007) Ma, Y., Derksen, H., Hong, W., and Wright, J. Segmentation of multivariate mixed data via lossy data coding and compression. TPAMI, 29(9), 2007.
  • Mo & Draper (2012) Mo, Q. and Draper, B. A. Semi-nonnegative matrix factorization for motion segmentation with missing data. In ECCV, pp. 402–415. Springer, 2012.
  • Mukherjee et al. (2018) Mukherjee, S., Asnani, H., Lin, E., and Kannan, S. Clustergan : Latent space clustering in generative adversarial networks. CoRR, abs/1809.03627, 2018.
  • Nguyen & Bai (2010) Nguyen, H. V. and Bai, L. Cosine similarity metric learning for face verification. In ACCV, pp. 709–720. Springer, 2010.
  • Ochs et al. (2014) Ochs, P., Malik, J., and Brox, T. Segmentation of moving objects by long term video analysis. TPAMI, 36(6):1187–1200, 2014.
  • Oh Song et al. (2016) Oh Song, H., Xiang, Y., Jegelka, S., and Savarese, S. Deep metric learning via lifted structured feature embedding. In CVPR, pp. 4004–4012, 2016.
  • Patel & Vidal (2014) Patel, V. M. and Vidal, R. Kernel sparse subspace clustering. In ICIP, pp. 2849–2853. IEEE, 2014.
  • Patel et al. (2013) Patel, V. M., Van Nguyen, H., and Vidal, R. Latent space sparse subspace clustering. In ICCV, pp. 225–232, 2013.
  • Peng et al. (2016) Peng, X., Xiao, S., Feng, J., Yau, W.-Y., and Yi, Z. Deep subspace clustering with sparsity prior. In IJCAI, 2016.
  • Purkait et al. (2014) Purkait, P., Chin, T.-J., Ackermann, H., and Suter, D. Clustering with hypergraphs: the case for large hyperedges. In ECCV, pp. 672–687. Springer, 2014.
  • Springenberg et al. (2014) Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
  • Tseng (2000) Tseng, P. Nearest q-flat to m points. Journal of Optimization Theory and Applications, 105(1):249–252, 2000.
  • Vidal (2011) Vidal, R. Subspace clustering. IEEE Signal Processing Magazine, 28(2):52–68, 2011.
  • Vidal & Favaro (2014) Vidal, R. and Favaro, P. Low rank subspace clustering (LRSC). Pattern Recognition Letters, 43:47–61, 2014.
  • Wang et al. (2013) Wang, Y.-X., Xu, H., and Leng, C. Provable subspace clustering: When LRR meets SSC. In NeurIPS, pp. 64–72, 2013.
  • Xiao et al. (2017) Xiao, H., Rasul, K., and Vollgraf, R. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.
  • Xiao et al. (2016) Xiao, S., Tan, M., Xu, D., and Dong, Z. Y. Robust kernel low-rank representation. IEEE transactions on neural networks and learning systems, 27(11):2268–2281, 2016.
  • Xie et al. (2016) Xie, J., Girshick, R., and Farhadi, A. Unsupervised deep embedding for clustering analysis. In ICML, 2016.
  • Yan & Pollefeys (2006) Yan, J. and Pollefeys, M. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In ECCV, pp. 94–106. Springer, 2006.
  • Yang et al. (2006) Yang, A. Y., Rao, S. R., and Ma, Y. Robust statistical estimation and segmentation of multiple subspaces. In CVPR Workshops, pp. 99. IEEE, 2006.
  • Yang et al. (2008) Yang, A. Y., Wright, J., Ma, Y., and Sastry, S. S. Unsupervised segmentation of natural images via lossy data compression. CVIU, 110(2):212–225, 2008.
  • Yang et al. (2017) Yang, B., Fu, X., Sidiropoulos, N. D., and Hong, M. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In ICML, volume 70, pp. 3861–3870, 2017.
  • Yin et al. (2016) Yin, M., Guo, Y., Gao, J., He, Z., and Xie, S. Kernel sparse subspace clustering on symmetric positive definite manifolds. In CVPR, pp. 5157–5164, 2016.
  • You et al. (2016a) You, C., Li, C.-G., Robinson, D. P., and Vidal, R. Oracle based active set algorithm for scalable elastic net subspace clustering. In CVPR, pp. 3928–3937, 2016a.
  • You et al. (2016b) You, C., Robinson, D., and Vidal, R. Scalable sparse subspace clustering by orthogonal matching pursuit. In CVPR, pp. 3918–3927, 2016b.
  • Zhang et al. (2012) Zhang, T., Szlam, A., Wang, Y., and Lerman, G. Hybrid linear modeling via local best-fit flats. IJCV, 100(3):217–240, 2012.
  • Zhang et al. (2018) Zhang, T., Ji, P., Harandi, M., Hartley, R. I., and Reid, I. D. Scalable deep k-subspace clustering. CoRR, abs/1811.01045, 2018.