1 Introduction
In this paper, we tackle the problem of subspace clustering, where we aim to cluster data points drawn from a union of low-dimensional subspaces in an unsupervised manner. Subspace Clustering (SC) has achieved great success in various computer vision tasks, such as motion segmentation
(Kanatani, 2001; Elhamifar & Vidal, 2009; Ji et al., 2014a, 2016), face clustering (Ho et al., 2003; Elhamifar & Vidal, 2013) and image segmentation (Yang et al., 2008; Ma et al., 2007). The majority of SC algorithms (Yan & Pollefeys, 2006; Chen & Lerman, 2009; Elhamifar & Vidal, 2013; Liu et al., 2013; Wang et al., 2013; Lu et al., 2012; Ji et al., 2015; You et al., 2016a)
rely on the linear subspace assumption to construct the affinity matrix for spectral clustering. However, in many cases data do not naturally conform to linear models, which in turn has driven the development of nonlinear SC techniques. Kernel methods
(Chen et al., 2009; Patel et al., 2013; Patel & Vidal, 2014; Yin et al., 2016; Xiao et al., 2016; Ji et al., 2017a) can be employed to implicitly map data to higher-dimensional spaces, hoping that data conform better to linear models in the resulting spaces. However, aside from the difficulty of choosing the right kernel function (and its parameters), there is no theoretical guarantee that such a kernel exists. The use of deep neural networks as nonlinear mapping functions to determine subspace-friendly latent spaces has formed the latest developments in the field, with promising results
(Ji et al., 2017b; Peng et al., 2016). Despite significant improvements, SC algorithms still resort to spectral clustering, which in hindsight requires constructing an affinity matrix. This step, albeit effective, hampers scalability, as it takes O(n^2) memory and O(kn^2) computation to store and decompose the affinity matrix for n data points and k clusters. There are several attempts to resolve the scalability issue. For example, (You et al., 2016a) accelerate the construction of the affinity matrix using orthogonal matching pursuit; (Zhang et al., 2018) resort to k-subspace clustering to avoid generating the affinity matrix. However, the scalability issue either remains due to the use of spectral clustering (You et al., 2016a), or is mitigated at the cost of performance (Zhang et al., 2018).
In this paper, we propose a neural structure to improve the performance of subspace clustering while being mindful of the scalability issue. To this end, we first formulate subspace clustering as a classification problem, which in turn removes the spectral clustering step from the computations. Our neural model comprises two modules, one for classification and one for affinity learning. Both modules collaborate during learning, hence the name "Neural Collaborative Subspace Clustering". During training and in each iteration, we use the affinity matrix generated by subspace self-expressiveness to supervise the affinity matrix computed from the classification part. Concurrently, we make use of the classification part to improve self-expressiveness and build a better affinity matrix through collaborative optimization.
We evaluate our algorithm on three datasets, namely MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), and the hardest one, a subset of the Stanford Online Products dataset (Oh Song et al., 2016), which exhibit different levels of difficulty. Our empirical study shows the superiority of the proposed algorithm over several state-of-the-art baselines, including deep subspace clustering techniques.
2 Related Work
2.1 Subspace Clustering
Linear subspace clustering encompasses a vast set of techniques; among them, spectral clustering algorithms are the most favored for clustering high-dimensional data
(Vidal, 2011). One of the crucial challenges in employing spectral clustering on subspaces is the construction of an appropriate affinity matrix. We can categorize the algorithms into three main groups based on the way the affinity matrix is constructed: factorization based methods (Gruber & Weiss, 2004; Mo & Draper, 2012), model based methods (Chen & Lerman, 2009; Ochs et al., 2014; Purkait et al., 2014), and self-expressiveness based methods (Elhamifar & Vidal, 2009; Ji et al., 2014b; Liu et al., 2013; Vidal & Favaro, 2014).
The latter, i.e., self-expressiveness based methods, have become dominant due to their elegant convex formulations and the existence of theoretical analysis. The basic idea of subspace self-expressiveness is that one point can be represented as a linear combination of other points from the same subspace. This leads to several advantages over other methods: (i) it is more robust to noise and outliers; (ii) the computational complexity of the self-expressiveness affinity does not grow exponentially with the number of subspaces and their dimensions; (iii) it exploits non-local information without the need to specify the size of the neighborhood (
i.e., the number of nearest neighbors, as usually used for identifying locally linear subspaces (Yan & Pollefeys, 2006; Zhang et al., 2012)). The assumption of having linear subspaces does not necessarily hold in practical problems. Several works have been proposed to tackle situations where data points form nonlinear rather than linear subspaces. Kernel Sparse Subspace Clustering (KSSC) (Patel & Vidal, 2014) and Kernel Low-rank Representation (Xiao et al., 2016)
benefit from predefined kernel functions, such as polynomial or Radial Basis Functions (RBF), to cast the problem in high-dimensional (possibly infinite) reproducing kernel Hilbert spaces. However, it is still not clear how to choose proper kernel functions for different datasets, and there is no guarantee that the feature spaces generated by kernel tricks are well-suited to linear subspace clustering.
Recently, the Deep Subspace Clustering Network (DSC-Net) (Ji et al., 2017b) was introduced to tackle the nonlinearity arising in subspace clustering: data is nonlinearly mapped to a latent space with convolutional auto-encoders, and a new self-expressive layer is introduced between the encoder and decoder to facilitate end-to-end learning of the affinity matrix. Although DSC-Net outperforms traditional subspace clustering methods by a large margin, its computational cost and memory footprint can become overwhelming even for mid-size problems.
There are a few attempts to tackle the scalability of subspace clustering. SSC-Orthogonal Matching Pursuit (SSC-OMP) (You et al., 2016b) replaces the large-scale convex optimization procedure with the OMP algorithm to compute the affinity matrix. However, SSC-OMP sacrifices clustering performance in favor of speeding up the computation, and it may still fail when the number of data points is very large. Subspace Clustering Networks (SCN) (Zhang et al., 2018) were proposed to make subspace clustering applicable to large datasets. This is achieved by bypassing the construction of the affinity matrix, and consequently avoiding spectral clustering, and by introducing the iterative k-subspace clustering methods (Tseng, 2000; Bradley & Mangasarian, 2000) into a deep structure. Although SCN develops two approaches to update the subspaces and the networks, it still shares the same drawbacks as iterative methods: for instance, it requires a good initialization and seems fragile to outliers.
2.2 Model fitting
In learning theory, distinguishing outliers and noisy samples from clean ones to facilitate training is an active research topic. For example, Random Sample Consensus (RANSAC) (Fischler & Bolles, 1981) is a classical and well-received algorithm for fitting a model to a cloud of points corrupted by noise. Employing RANSAC on subspaces (Yang et al., 2006) in large-scale problems does not seem to be the right practice, as RANSAC requires a large number of iterations to achieve an acceptable fit.
Curriculum Learning (Bengio et al., 2009) begins learning a model from easy samples and gradually adapts the model to more complex ones, mimicking the cognitive process of humans. Ensemble Learning (Dietterich, 2000)
tries to improve the performance of machine learning algorithms by training different models and then aggregating their predictions. Furthermore, the knowledge distilled from large deep learning models can be used to supervise a smaller model
(Hinton et al., 2015). Although Curriculum Learning, Ensemble Learning and knowledge distillation are notable methods, adapting them to work on problems with limited annotations, let alone the unlabeled scenario, is far from clear.
2.3 Deep Clustering
Many research papers have explored clustering with deep neural networks. Deep Embedded Clustering (DEC) (Xie et al., 2016) is one of the pioneers in this area: the authors propose to pre-train a stacked auto-encoder (SAE) (Bengio et al., 2007) and fine-tune the encoder with a regularizer based on the Student's t-distribution to achieve cluster-friendly embeddings. On the downside, DEC is sensitive to the network structure and initialization. Various forms of the Generative Adversarial Network (GAN) have been employed for clustering, such as InfoGAN (Chen et al., 2016) and ClusterGAN (Mukherjee et al., 2018), both of which aim to enforce discriminative features in the latent space to simultaneously generate and cluster images. Deep Adaptive image Clustering (DAC) (Chang et al., 2017) uses fully convolutional neural nets (Springenberg et al., 2014)
as initialization to perform self-supervised learning, and achieves remarkable results on various clustering benchmarks. However, sensitivity to the network structure again seems to be a concern for DAC.
In this paper, we formulate subspace clustering as a binary classification problem through collaborative learning of two modules, one for image classification and the other for subspace affinity learning. Instead of performing spectral clustering on the whole dataset, we train our model in a stochastic manner, leading to a scalable paradigm for subspace clustering.
3 Proposed Method
To design a scalable SC algorithm, our idea is to identify whether a pair of points lies on the same subspace or not. Upon attaining such knowledge (for a large-enough set of pairs), a deep model can optimize its weights to maximize such relationships (lying on the same subspace or not). This can be nicely cast as a binary classification problem. However, since ground-truth labels are not available to us, it is not obvious how such a classifier should be built and trained.
In this work, we propose to make use of two confidence maps (see Fig. 1 for a conceptual visualization) as a supervision signal for SC. To be more specific, we make use of the concept of self-expressiveness to identify positive pairs, i.e., pairs that lie on the same subspaces. To identify negative pairs, i.e., pairs that do not belong to the same subspaces, we benefit from a negative confidence map. This, as we will show later, is due to the fact that the former can confidently mine positive pairs (with affinity close to 1) while the latter is good at localizing negative pairs (with affinity close to 0). The two confidence maps not only provide the supervision signal to optimize a deep model, but also act collaboratively as partial supervision for each other.
3.1 Binary Classification
Given a dataset of n points drawn from k clusters, we aim to train a classifier to predict class labels for data points without using the ground-truth labels. To this end, we propose to use a multi-class classifier which consists of a few convolutional layers (with nonlinear rectifiers) and a softmax output layer. We then convert it to an affinity-based binary classifier by
A^c_ij = q_i^T q_j,    (1)
where q_i is a k-dimensional prediction vector after l2 normalization. Ideally, when q_i is one-hot, A^c is a binary matrix encoding the confidence of data points belonging to the same cluster. So if we supervise the classifier using A^c, we will end up with a binary classification problem. Also note that A^c_ij can be interpreted as the cosine similarity between the softmax prediction vectors of x_i and x_j, which has been widely used in different contexts (Nguyen & Bai, 2010). However, unlike the cosine similarity which lies in [-1, 1], A^c_ij lies within [0, 1], since the vectors are normalized by softmax and the l2 norm. We illustrate this in Fig. 2.
3.2 Self-Expressiveness Affinity
Subspace self-expressiveness can be worded as follows: a data point drawn from a union of linear subspaces can be represented by a linear combination of other points from the same subspace. Stacking all the points into the columns of a data matrix X, self-expressiveness can be simply described as X = XC, where C is the coefficient matrix.
It has been shown (e.g., (Ji et al., 2014b)) that by minimizing certain norms of the coefficient matrix C, a block-diagonal structure (up to certain permutations) on C can be achieved. This translates into c_ij ≠ 0
only if data points x_i and x_j come from the same subspace. Therefore, the loss function for learning the affinity matrix can be written as:
min_C ||C||_p   s.t.  X = XC,  diag(C) = 0,    (2)
where ||·||_p denotes a matrix norm. For example, Sparse Subspace Clustering (SSC) (Elhamifar & Vidal, 2009) sticks to the l1 norm, Low Rank Representation (LRR) models (Liu & Yan, 2011; Vidal & Favaro, 2014) pick the nuclear norm, and Efficient Dense Subspace Clustering (Ji et al., 2014b) uses the l2 norm. To handle data corruption, a relaxed version can be derived as:
min_C ||C||_p + (λ/2) ||X − XC||_F^2   s.t.  diag(C) = 0,    (3)
Here, λ is a weighting parameter balancing the regularization term and the data fidelity term.
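Under the l2 (Frobenius) norm on C, as in Efficient Dense Subspace Clustering, a variant of the relaxed problem in Eqn. (3) admits a closed-form solution. The sketch below is illustrative, not the paper's implementation: it minimizes ||C||_F^2 + λ||X − XC||_F^2 and drops the diag(C) = 0 constraint, zeroing the diagonal post hoc as a heuristic; the toy dimensions and λ value are assumptions.

```python
import numpy as np

def dense_self_expression(X, lam=100.0):
    """Closed-form minimiser of ||C||_F^2 + lam * ||X - XC||_F^2.

    X is d x n (columns are data points). The diag(C) = 0 constraint of
    Eqn. (3) is dropped for simplicity and the diagonal is zeroed post hoc.
    """
    n = X.shape[1]
    G = X.T @ X                                   # n x n Gram matrix
    C = np.linalg.solve(np.eye(n) + lam * G, lam * G)
    np.fill_diagonal(C, 0.0)                      # heuristic post-hoc zeroing
    return C

# toy data: two independent 2-dimensional subspaces in R^10
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))
X = np.hstack([B1 @ rng.standard_normal((2, 20)),
               B2 @ rng.standard_normal((2, 20))])
C = dense_self_expression(X)
within = np.abs(C[:20, :20]).mean()
across = np.abs(C[:20, 20:]).mean()
print(within > across)  # block-diagonal structure emerges
```

For independent subspaces, the minimum-Frobenius-norm solution of X = XC is block-diagonal, so with a large λ the recovered C concentrates its mass within subspace blocks.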
To handle subspace nonlinearity, one can employ convolutional auto-encoders to nonlinearly map the input data X to a latent space Z, and transfer the self-expressiveness to a linear layer (without nonlinear activation and bias parameters) named the self-expressive layer (Ji et al., 2017b) (see the bottom part of Fig. 1). This enables us to learn the subspace affinity in an end-to-end manner using the weight parameters C of the self-expressive layer:
A^s_ij = { 1, if i = j;  |c_ij| / c_max, otherwise },    (4)
where c_max is the maximum absolute value of the off-diagonal entries of the current row of C. Note that A^s_ij then lies within [0, 1].
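The two affinities of Eqns. (1) and (4) can be sketched in a few lines of numpy; the logit values below are hypothetical, and a small epsilon is added to guard against all-zero rows of C:

```python
import numpy as np

def classification_affinity(logits):
    """Eqn. (1): inner product of l2-normalised softmax vectors.

    logits: n x k raw classifier outputs. Entries of the returned
    matrix lie in [0, 1].
    """
    q = np.exp(logits - logits.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)                  # softmax
    q /= np.linalg.norm(q, axis=1, keepdims=True)      # l2 normalisation
    return q @ q.T

def subspace_affinity(C):
    """Eqn. (4): rescale |c_ij| by the largest off-diagonal entry of its
    row and set the diagonal to 1, so entries lie in [0, 1]."""
    A = np.abs(C).astype(float)
    np.fill_diagonal(A, 0.0)
    A /= A.max(axis=1, keepdims=True) + 1e-12          # guard empty rows
    np.fill_diagonal(A, 1.0)
    return A

logits = np.array([[4.0, 0.1, 0.2], [3.5, 0.3, 0.1], [0.2, 5.0, 0.4]])
Ac = classification_affinity(logits)
print(Ac[0, 1] > Ac[0, 2])   # same predicted class -> higher affinity
```

Points 0 and 1 share a dominant class, so their classification affinity is close to 1, while the affinity to point 2 (a different class) is close to 0.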
3.3 Collaborative Learning
The purpose of collaborative learning is to find a principled way to exploit the advantages of the different modules. The classification module and the self-expressive module distill different information, in the sense that the former tends to extract more abstract and discriminative features while the latter focuses more on capturing the pairwise correlations between data samples. From our previous discussion, ideally, the subspace affinity A^s_ij is non-zero only if x_i and x_j are from the same subspace, which means that A^s can be used to mine similar pairs (i.e., positive samples). On the other hand, if the classification affinity A^c_ij is close to zero, it strongly indicates that x_i and x_j are dissimilar (i.e., a negative sample). Therefore, we carefully design a mechanism to let both modules collaboratively supervise each other.
Given A^c and A^s, we pick the high-confidence affinities as supervision for training. We illustrate this process in Fig. 1. The “positive confidence” in Fig. 1 denotes pairs from the same class, and the “negative confidence” denotes pairs from different classes. As such, we select high affinities from A^s and small affinities from A^c, and formulate the collaborative learning problem as:
min L_clb = λ_cr ℓ(A^s ⇒ A^c) + (1 − λ_cr) ℓ(A^c ⇒ A^s),    (5)
where ℓ(A^s ⇒ A^c) and ℓ(A^c ⇒ A^s) denote the cross-entropy function with a sample selection process, defined as follows:
ℓ(A^s ⇒ A^c) = Σ_ij 1(A^s_ij > u) H(A^s_ij, A^c_ij),    (6)
and
ℓ(A^c ⇒ A^s) = Σ_ij 1(A^c_ij < l) H(A^c_ij, A^s_ij),    (7)
where 1(·) is the indicator function returning 1 or 0, u and l are thresholding parameters, and H is the cross-entropy function, defined as H(p, q) = −p log q − (1 − p) log(1 − q).
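The threshold-selected cross-entropies of Eqns. (5)-(7) can be sketched as follows; the threshold values, the collaboration-rate value, and the use of a mean (rather than a sum) over selected entries are assumptions for illustration:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -p log q - (1 - p) log(1 - q), applied elementwise
    q = np.clip(q, eps, 1.0 - eps)
    return -p * np.log(q) - (1.0 - p) * np.log(1.0 - q)

def collaborative_loss(As, Ac, u=0.9, l=0.1, lam_cr=0.5):
    """Sketch of Eqns. (5)-(7): only confident positives from As
    (entries > u) supervise Ac, and only confident negatives from Ac
    (entries < l) supervise As. u, l, lam_cr are hypothetical values."""
    pos = As > u                       # indicator of Eqn. (6)
    neg = Ac < l                       # indicator of Eqn. (7)
    l_sc = cross_entropy(As[pos], Ac[pos]).mean() if pos.any() else 0.0
    l_cs = cross_entropy(Ac[neg], As[neg]).mean() if neg.any() else 0.0
    return lam_cr * l_sc + (1.0 - lam_cr) * l_cs

As = np.array([[1.0, 0.95], [0.95, 1.0]])          # confident positives
Ac_good = np.array([[0.99, 0.9], [0.9, 0.99]])     # classifier agrees
Ac_bad = np.array([[0.6, 0.05], [0.05, 0.6]])      # classifier disagrees
print(collaborative_loss(As, Ac_good) < collaborative_loss(As, Ac_bad))
```

Agreement between the two affinity matrices on the confident entries yields a lower loss, which is what drives the two modules toward consensus.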
Note that the cross-entropy loss is a non-symmetric function, where the former probability serves as a supervisor to the latter. Therefore, in Eqn. (6), the subspace affinity matrix A^s is used as the “teacher” to supervise the classification part (the “student”). Conversely, in Eqn. (7), the classification affinity matrix A^c works as the “teacher” to help the subspace affinity learning module correct negative samples. However, to better facilitate gradient back-propagation between the two modules, we can soften the indicator functions, replacing 1(A^s_ij > u) with A^s_ij in Eqn. (6) and 1(A^c_ij < l) with 1 − A^c_ij in Eqn. (7). The weight parameter λ_cr in Eqn. (5), called the collaboration rate, controls the contributions of ℓ(A^s ⇒ A^c) and ℓ(A^c ⇒ A^s). It can be set as the ratio of the number of positive confident pairs to the number of negative confident pairs, or slightly tuned for better performance.
3.4 Loss Function
After introducing all the building blocks of this work, we now explain how to jointly organize them in a network and train it with a carefully defined loss function. As shown in Fig. 1, our network is composed of four main parts: (i) a convolutional encoder that maps the input data X to a latent representation Z; (ii) a linear self-expressive layer which learns the subspace affinity through its weights C; (iii) a convolutional decoder that maps the data after the self-expressive layer, i.e., ZC, back to the input space; and (iv) a multi-class classifier that outputs k-dimensional prediction vectors, from which the classification affinity matrix can be constructed. Our loss function consists of two parts, i.e., the collaborative learning loss and the subspace learning loss, and can be written as:
min_Θ L = L_se + α L_clb,    (8)
where Θ denotes the neural network parameters and α is a weight parameter for the collaborative learning loss L_clb of Eqn. (5). The term L_se is the loss for training the affinity matrix through the self-expressive layer. Combining Eqn. (3) and the reconstruction loss of the convolutional auto-encoder, we arrive at:
L_se = (1/2) ||X − X̂||_F^2 + ||C||_p + (λ_2 / 2) ||Z − ZC||_F^2   s.t.  diag(C) = 0,    (9)
where X̂ denotes the reconstructed data and A^s is a function of C as defined in Eqn. (4).
After the training stage, we no longer need to run the decoder and self-expressive layer to infer the labels. We can directly infer the cluster labels through the classifier output q_i:
y_i = argmax_j (q_i)_j,    (10)
where y_i is the cluster label of image x_i.
4 Optimization and Training
In this section, we provide more details on how training is performed. Similar to other auto-encoder based clustering methods, we pre-train the auto-encoder by minimizing the reconstruction error to get a good initialization of the latent space for subspace clustering.
According to (Elhamifar & Vidal, 2009), the solution to formulation (2) is guaranteed to have a block-diagonal structure (up to certain permutations) under the assumption that the subspaces are independent. To account for this, we make sure that the dimensionality of the latent space is greater than d_s (the subspace intrinsic dimension) × k (the number of clusters).¹
¹ Note that our algorithm does not require specifying the subspace intrinsic dimensions explicitly. Empirically, we found that a rough guess of the subspace intrinsic dimension suffices; e.g., in most cases we can set it to 9.
In doing so, we make use of strided convolutions to down-sample the images while increasing the number of channels over the layers to keep the latent space dimension large. Since we have pre-trained the auto-encoder, we use a smaller learning rate in the auto-encoder when the collaborative learning is performed. Furthermore, compared to DSC-Net or other spectral clustering based methods, which require sophisticated techniques to post-process the affinity matrix, we only need to compute |C|
and normalize it (divide by the largest value in each row and assign 1 to the diagonal entries) to ensure the subspace affinity matrix lies in the same range as the classification affinity matrix. We adopt a three-stage training strategy: first, we train the auto-encoder together with the self-expressive layer using the loss in Eqn. (9) to update the subspace affinity A^s; second, we train the classifier to minimize Eqn. (5); third, we jointly train the whole network to minimize the loss in Eqn. (8). All these details are summarized in Algorithm 1.
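The three-stage strategy above can be sketched as a schedule that only ever consumes mini-batches; the per-stage update functions and epoch counts below are placeholders, not the paper's implementation:

```python
def fit(batches, ae_step, cls_step, joint_step, epochs=(1, 1, 1)):
    """Three-stage training schedule (epoch counts are hypothetical).

    Each *_step is a caller-supplied update function for one mini-batch:
      ae_step    -- auto-encoder + self-expressive layer, loss of Eqn. (9)
      cls_step   -- classifier, collaborative loss of Eqn. (5)
      joint_step -- whole network, full loss of Eqn. (8)
    Training proceeds batch-by-batch, never touching the full dataset at
    once, which is what makes the method scalable.
    """
    for step, n_epochs in zip((ae_step, cls_step, joint_step), epochs):
        for _ in range(n_epochs):
            for batch in batches:
                step(batch)

# record which module each mini-batch update touches
log = []
fit(batches=[0, 1],
    ae_step=lambda b: log.append("se"),
    cls_step=lambda b: log.append("cls"),
    joint_step=lambda b: log.append("joint"))
print(log)  # ['se', 'se', 'cls', 'cls', 'joint', 'joint']
```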
5 Experiments
We implemented our framework in TensorFlow 1.6
(Abadi et al., 2016) on an Nvidia TITAN X GPU. We mainly evaluate our method on three standard datasets, i.e., MNIST, Fashion-MNIST and a subset of the Stanford Online Products dataset. All of these datasets are considered challenging for subspace clustering, as it is hard to perform spectral clustering on datasets of this scale and the linearity assumption does not hold. The number of clusters is set to 10 as input to all competing algorithms. For all the experiments, we pre-train the convolutional auto-encoder for 60 epochs and use it as initialization, then decrease the learning rate in the training stage. The hyper-parameters in our loss function are easy to tune: λ_2 in Eqn. (9) controls self-expressiveness, and it also affects the choice of u and l. If λ_2 is set larger, the coefficients in the affinity matrix will be larger, and in that case u should be higher. The other parameter, α, balances the cost of subspace clustering and collaborative learning; we usually set it to keep these two terms at the same scale so as to treat them equally. We keep λ_2 fixed in all experiments, and slightly change u and l for each dataset.
Our method is robust to different network design choices. We tested different structures in our framework and obtained similar results on the same datasets. For MNIST, we use a three-layer convolutional encoder; for Fashion-MNIST and Stanford Online Products, we use a deeper network consisting of three residual blocks (He et al., 2016).
We do not use batch normalization in our network because it would corrupt the subspace structure that we want to learn in the latent space. We use the Rectified Linear Unit (ReLU) as the nonlinear activation in all our experiments.
Since there are no ground-truth labels, we choose a larger batch size than in supervised learning to make training stable and robust. Specifically, we set the batch size to 5000, and use Adam (Kingma & Ba, 2014), an adaptive-momentum based gradient descent method, to minimize the loss in all our experiments. We use a smaller learning rate for the auto-encoder than for the other parts in all training stages.
Baseline Methods. We use various clustering methods as our baseline methods including the classic clustering methods, subspace clustering methods, deep clustering methods, and GAN based methods. Specifically, we have the following baselines:


classic methods: k-Means (Lloyd, 1982) (KM), k-Means with our CAE features (CAE-KM) and SAE features (SAE-KM);

subspace clustering algorithms: Sparse Subspace Clustering (SSC) (Elhamifar & Vidal, 2013), Low Rank Representation (LRR) (Liu et al., 2013), Kernel Sparse Subspace Clustering (KSSC) (Patel & Vidal, 2014), Deep Subspace Clustering Network (DSC-Net) (Ji et al., 2017b), and Subspace Clustering Network (SCN) (Zhang et al., 2018);
Evaluation Metric. For all quantitative evaluations, we make use of the unsupervised clustering accuracy rate, defined as
ACC = max_M (Σ_{i=1}^{n} 1{y_i = M(c_i)}) / n,    (11)
where y_i is the ground-truth label, c_i is the subspace assignment produced by the algorithm, and M ranges over all possible one-to-one mappings between subspaces and labels. The best mapping can be efficiently computed by the Hungarian algorithm. We also use normalized mutual information (NMI) as an additional quantitative measure. NMI scales from 0 to 1, where a smaller value indicates less correlation between the predicted labels and the ground-truth labels. A third metric is the adjusted Rand index (ARI), which is scaled between −1 and 1. It measures the similarity between two clusterings by considering all pairs of samples and counting pairs that are assigned to the same or different clusters in the ground truth and in the prediction. The larger the ARI, the better the clustering performance.
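The accuracy of Eqn. (11) can be computed with the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`; a minimal sketch with toy labels:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Eqn. (11): best accuracy over one-to-one cluster/label mappings,
    found with the Hungarian algorithm on a co-occurrence matrix."""
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                        # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)  # negate to maximise matches
    return cost[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])   # permuted but perfect clustering
print(clustering_accuracy(y_true, y_pred))  # 1.0
```

Because cluster indices are arbitrary, a permuted but otherwise perfect assignment still scores 1.0.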
5.1 MNIST
MNIST consists of 70,000 handwritten digit images of size 28 × 28. Subspace nonlinearity arises naturally for MNIST due to the variance of scale, thickness and orientation among the images of each digit. We thus apply our method to this dataset to see how well it handles this type of subspace nonlinearity.
In this experiment, we use a three-layer convolutional auto-encoder with a self-expressive layer in between for the subspace affinity learning module. The convolution kernel sizes are and channels are . For the classification module, we connect three more convolutional layers after the encoder with kernel size 2, and one convolutional layer with kernel size 1 to output the feature vector. For the threshold parameters u and l, we start with lower values in the first epoch of training and increase them afterwards. Our algorithm took around 15 minutes to finish training on a normal PC with one TITAN X GPU.
We report the clustering results of all competing methods in Table 1. Since spectral clustering based methods (i.e., SSC-CAE, LRR-CAE, KSSC-CAE, DSC-Net) cannot be applied to the whole dataset (due to memory and computational issues), we only use 10,000 samples to show how they perform. As shown in Table 1, subspace algorithms do not perform very well even on 10,000 samples. Although DSC-Net is hampered by the training of its self-expressive layer, it outperforms the other subspace clustering algorithms, which shows the potential of learning subspace structure with neural networks. On the other hand, DEC, DCN, SCN and our algorithm are all based on auto-encoders, which learn embeddings under different metrics to help clustering. However, our classification module boosts performance by making the latent space of the auto-encoder more discriminative. Our algorithm thus incorporates the advantages of different classes of methods, e.g., self-expressiveness, nonlinear mapping and discriminativeness, and achieves the best results among all algorithms thanks to the collaborative learning paradigm.
Method  ACC(%)  NMI(%)  ARI(%)
CAEKM  51.00  44.87  33.52 
SAEKM  81.29  73.78  67.00 
KM  53.00  50.00  37.00 
DEC  84.30  80.00  75.00 
DCN  83.31  80.86  74.87 
SSCCAE  43.03  56.81  28.58 
LRRCAE  55.18  66.54  40.57 
KSSCCAE  58.48  67.74  49.38 
DSCNet  65.92  73.00  57.09 
SCN  87.14  78.15  75.81 
Ours  94.09  86.12  87.52 
5.2 Fashion-MNIST
Same as MNIST, Fashion-MNIST also has images of size 28 × 28, covering various types of fashion products. Unlike MNIST, every class in Fashion-MNIST has different styles across different gender groups (e.g., men, women, kids and neutral). As shown in Fig. 3, the high similarity between several classes (such as {Pullover, Coat, Shirt} and {T-shirt, Dress}) makes clustering more difficult. Compared to MNIST, Fashion-MNIST clearly poses more challenges for unsupervised clustering.
On Fashion-MNIST, we employ a network structure with one convolutional layer followed by three residual blocks without batch normalization in the encoder, and a symmetric structure in the decoder. As the complexity of the dataset increases, we also raise the dimensionality of the ambient space to better suit self-expressiveness, and increase the capacity of the classification module. For all convolutional layers, we keep the kernel size at 3 and set the numbers of channels to 10, 20, 30 and 40.
We report the clustering results of all methods in Table 2, where we can clearly see that our framework outperforms all the baselines by a large margin, including the best-performing baseline SCN. Specifically, our method improves over the second best by 8.36%, 4.60% and 10.97% in terms of accuracy, NMI and ARI, respectively. We can clearly observe from Fig. 4 that the latent space of our framework, collaboratively learned by the subspace and classification modules, has strong subspace structure while keeping each subspace discriminative. For the subspace clustering methods, we follow the same protocol as on MNIST and use only 10,000 samples. DSC-Net does not drop much, while the performance of the other subspace clustering algorithms declines sharply compared with their performance on MNIST. Since the code of ClusterGAN is not currently available, we can only report the results from their paper (without ARI).
Method  ACC(%)  NMI(%)  ARI(%)
SAEKM  54.35  58.53  41.86 
CAEKM  39.84  39.80  25.93 
KM  47.58  51.24  34.86 
DEC  59.00  60.10  44.60 
DCN  58.67  59.4  43.04 
DAC  61.50  63.20  50.20 
ClusterGAN  63.00  64.00   
InfoGAN  61.00  59.00  44.20 
SSCCAE  35.87  18.10  13.46 
LRRCAE  34.48  25.41  10.33 
KSSCCAE  38.17  19.73  14.74 
DSCNet  60.62  61.71  48.20 
SCN  63.78  62.04  48.04 
Ours  72.14  68.60  59.17 
5.3 Stanford Online Products
The Stanford Online Products dataset is designed for supervised metric learning, and it is thus considered difficult for unsupervised clustering. Compared to the previous two, the challenging aspects of this dataset include: (i) the product images contain various backgrounds, from pure white to real-world environments; (ii) each product has different shapes, colors, scales and view angles; (iii) products across different classes may look similar to each other. To create a manageable dataset for clustering, we manually pick 10 out of the 12 classes, with around 1000 images per class (10,056 images in total), and then rescale them to grayscale images, as shown in Fig. 5.
Our network for this dataset starts with one convolutional layer with 10 channels, followed by three pre-activation residual blocks without batch normalization, which have 20, 30 and 10 channels respectively.
Table 3 shows the performance of all algorithms on this dataset. Due to the high difficulty of this dataset, most deep learning based methods fail to generate reasonable results. For example, DEC and DCN perform even worse than their initialization, and DAC cannot self-supervise its model to achieve a better result. Similarly, InfoGAN also fails to find enough clustering patterns. In contrast, our algorithm achieves better results than the other algorithms, especially the deep learning based ones. Our algorithm, along with KSSC and DSC-Net, achieves top results owing to the handling of nonlinearity. Constrained by the size of the dataset, our algorithm does not greatly surpass KSSC and DSC-Net. We can easily observe that subspace based clustering algorithms perform better than generic clustering methods. This illustrates how effective the underlying subspace assumption is in high-dimensional data spaces, and it should be considered a general tool to help clustering on large-scale datasets.
In summary, compared to other deep learning methods, our framework is not sensitive to the architecture of neural networks, as long as the dimensionality meets the requirement of subspace selfexpressiveness. Furthermore, the two modules in our network progressively improve the performance in a collaborative way, which is both effective and efficient.
Method  ACC (%)  NMI (%)  ARI (%)
DEC  22.89  12.10  3.62 
DCN  21.30  8.40  3.14 
DAC  23.10  9.80  6.15 
InfoGAN  19.76  8.15  3.79 
SSCCAE  12.66  0.73  0.19 
LRRCAE  22.35  17.36  4.04 
KSSCCAE  26.84  15.17  7.48 
DSCNet  26.87  14.56  8.75 
SCN  22.91  16.57  7.27 
Ours  27.5  13.78  7.69 
6 Conclusion
In this work, we have introduced a novel learning paradigm, dubbed collaborative learning, for unsupervised subspace clustering. To this end, we have analyzed the complementary property of the classifier-induced affinities and the subspace-based affinities, and have further proposed a collaborative learning framework to train the network. Our network can be trained in a batch-by-batch manner and, once trained, can directly predict the clustering labels without performing spectral clustering. The experiments in our paper have shown that the proposed method outperforms the state-of-the-art algorithms by a large margin on image clustering tasks, which validates the effectiveness of our framework.
References
 Abadi et al. (2016) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
 Bengio et al. (2007) Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. Greedy layer-wise training of deep networks. In NeurIPS, pp. 153–160, 2007.
 Bengio et al. (2009) Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In ICML, pp. 41–48. ACM, 2009.
 Bradley & Mangasarian (2000) Bradley, P. S. and Mangasarian, O. L. K-plane clustering. Journal of Global Optimization, 16(1):23–32, 2000.
 Chang et al. (2017) Chang, J., Wang, L., Meng, G., Xiang, S., and Pan, C. Deep adaptive image clustering. In ICCV, pp. 5880–5888. IEEE, 2017.
 Chen & Lerman (2009) Chen, G. and Lerman, G. Spectral curvature clustering (SCC). IJCV, 81(3):317–330, 2009.
 Chen et al. (2009) Chen, G., Atev, S., and Lerman, G. Kernel spectral curvature clustering (KSCC). In ICCV Workshops, pp. 765–772. IEEE, 2009.
 Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NeurIPS, pp. 2172–2180, 2016.
 Dietterich (2000) Dietterich, T. G. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pp. 1–15. Springer, 2000.
 Elhamifar & Vidal (2009) Elhamifar, E. and Vidal, R. Sparse subspace clustering. In CVPR, pp. 2790–2797, 2009.
 Elhamifar & Vidal (2013) Elhamifar, E. and Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. TPAMI, 35(11):2765–2781, 2013.
 Fischler & Bolles (1981) Fischler, M. A. and Bolles, R. C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
 Gruber & Weiss (2004) Gruber, A. and Weiss, Y. Multibody factorization with uncertainty and missing data using the EM algorithm. In CVPR, volume 1, pp. I–I. IEEE, 2004.
 He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
 Hinton et al. (2015) Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
 Ho et al. (2003) Ho, J., Yang, M.-H., Lim, J., Lee, K.-C., and Kriegman, D. Clustering appearances of objects under varying illumination conditions. In CVPR, volume 1, pp. 11–18. IEEE, 2003.
 Ji et al. (2014a) Ji, P., Li, H., Salzmann, M., and Dai, Y. Robust motion segmentation with unknown correspondences. In ECCV, pp. 204–219. Springer, 2014a.
 Ji et al. (2014b) Ji, P., Salzmann, M., and Li, H. Efficient dense subspace clustering. In WACV, pp. 461–468. IEEE, 2014b.
 Ji et al. (2015) Ji, P., Salzmann, M., and Li, H. Shape interaction matrix revisited and robustified: Efficient subspace clustering with corrupted and incomplete data. In ICCV, pp. 4687–4695, 2015.
 Ji et al. (2016) Ji, P., Li, H., Salzmann, M., and Zhong, Y. Robust multibody feature tracker: a segmentation-free approach. In CVPR, pp. 3843–3851, 2016.
 Ji et al. (2017a) Ji, P., Reid, I. D., Garg, R., Li, H., and Salzmann, M. Adaptive low-rank kernel subspace clustering. 2017a.
 Ji et al. (2017b) Ji, P., Zhang, T., Li, H., Salzmann, M., and Reid, I. Deep subspace clustering networks. In NeurIPS, pp. 23–32, 2017b.
 Kanatani (2001) Kanatani, K.-i. Motion segmentation by subspace separation and model selection. In ICCV, volume 2, pp. 586–591. IEEE, 2001.
 Kingma & Ba (2014) Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
 Langley (2000) Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), ICML, pp. 1207–1216, Stanford, CA, 2000. Morgan Kaufmann.
 LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Liu & Yan (2011) Liu, G. and Yan, S. Latent low-rank representation for subspace segmentation and feature extraction. In ICCV, pp. 1615–1622. IEEE, 2011.
 Liu et al. (2013) Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., and Ma, Y. Robust recovery of subspace structures by low-rank representation. TPAMI, 35(1):171–184, 2013.
 Lloyd (1982) Lloyd, S. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
 Lu et al. (2012) Lu, C.-Y., Min, H., Zhao, Z.-Q., Zhu, L., Huang, D.-S., and Yan, S. Robust and efficient subspace segmentation via least squares regression. In ECCV, pp. 347–360. Springer, 2012.
 Ma et al. (2007) Ma, Y., Derksen, H., Hong, W., and Wright, J. Segmentation of multivariate mixed data via lossy data coding and compression. TPAMI, 29(9), 2007.
 Mo & Draper (2012) Mo, Q. and Draper, B. A. Semi-nonnegative matrix factorization for motion segmentation with missing data. In ECCV, pp. 402–415. Springer, 2012.
 Mukherjee et al. (2018) Mukherjee, S., Asnani, H., Lin, E., and Kannan, S. ClusterGAN: Latent space clustering in generative adversarial networks. CoRR, abs/1809.03627, 2018.
 Nguyen & Bai (2010) Nguyen, H. V. and Bai, L. Cosine similarity metric learning for face verification. In ACCV, pp. 709–720. Springer, 2010.
 Ochs et al. (2014) Ochs, P., Malik, J., and Brox, T. Segmentation of moving objects by long term video analysis. TPAMI, 36(6):1187–1200, 2014.
 Oh Song et al. (2016) Oh Song, H., Xiang, Y., Jegelka, S., and Savarese, S. Deep metric learning via lifted structured feature embedding. In CVPR, pp. 4004–4012, 2016.
 Patel & Vidal (2014) Patel, V. M. and Vidal, R. Kernel sparse subspace clustering. In ICIP, pp. 2849–2853. IEEE, 2014.
 Patel et al. (2013) Patel, V. M., Van Nguyen, H., and Vidal, R. Latent space sparse subspace clustering. In ICCV, pp. 225–232, 2013.
 Peng et al. (2016) Peng, X., Xiao, S., Feng, J., Yau, W.Y., and Yi, Z. Deep subspace clustering with sparsity prior. In IJCAI, 2016.
 Purkait et al. (2014) Purkait, P., Chin, T.-J., Ackermann, H., and Suter, D. Clustering with hypergraphs: the case for large hyperedges. In ECCV, pp. 672–687. Springer, 2014.
 Springenberg et al. (2014) Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
 Tseng (2000) Tseng, P. Nearest q-flat to m points. Journal of Optimization Theory and Applications, 105(1):249–252, 2000.
 Vidal (2011) Vidal, R. Subspace clustering. IEEE Signal Processing Magazine, 28(2):52–68, 2011.
 Vidal & Favaro (2014) Vidal, R. and Favaro, P. Low rank subspace clustering (LRSC). Pattern Recognition Letters, 43:47–61, 2014.
 Wang et al. (2013) Wang, Y.X., Xu, H., and Leng, C. Provable subspace clustering: When LRR meets SSC. In NeurIPS, pp. 64–72, 2013.
 Xiao et al. (2017) Xiao, H., Rasul, K., and Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017.
 Xiao et al. (2016) Xiao, S., Tan, M., Xu, D., and Dong, Z. Y. Robust kernel low-rank representation. IEEE Transactions on Neural Networks and Learning Systems, 27(11):2268–2281, 2016.
 Xie et al. (2016) Xie, J., Girshick, R., and Farhadi, A. Unsupervised deep embedding for clustering analysis. In ICML, 2016.
 Yan & Pollefeys (2006) Yan, J. and Pollefeys, M. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In ECCV, pp. 94–106. Springer, 2006.
 Yang et al. (2006) Yang, A. Y., Rao, S. R., and Ma, Y. Robust statistical estimation and segmentation of multiple subspaces. In CVPR Workshops, pp. 99. IEEE, 2006.
 Yang et al. (2008) Yang, A. Y., Wright, J., Ma, Y., and Sastry, S. S. Unsupervised segmentation of natural images via lossy data compression. CVIU, 110(2):212–225, 2008.
 Yang et al. (2017) Yang, B., Fu, X., Sidiropoulos, N. D., and Hong, M. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In ICML, volume 70, pp. 3861–3870, 2017.
 Yin et al. (2016) Yin, M., Guo, Y., Gao, J., He, Z., and Xie, S. Kernel sparse subspace clustering on symmetric positive definite manifolds. In CVPR, pp. 5157–5164, 2016.
 You et al. (2016a) You, C., Li, C.G., Robinson, D. P., and Vidal, R. Oracle based active set algorithm for scalable elastic net subspace clustering. In CVPR, pp. 3928–3937, 2016a.
 You et al. (2016b) You, C., Robinson, D., and Vidal, R. Scalable sparse subspace clustering by orthogonal matching pursuit. In CVPR, pp. 3918–3927, 2016b.
 Zhang et al. (2012) Zhang, T., Szlam, A., Wang, Y., and Lerman, G. Hybrid linear modeling via local best-fit flats. IJCV, 100(3):217–240, 2012.
 Zhang et al. (2018) Zhang, T., Ji, P., Harandi, M., Hartley, R. I., and Reid, I. D. Scalable deep k-subspace clustering. CoRR, abs/1811.01045, 2018.