Kernelized LRR on Grassmann Manifolds for Subspace Clustering

Low rank representation (LRR) has recently attracted great interest due to its efficacy in exploring low-dimensional subspace structures embedded in data. One of its successful applications is subspace clustering, in which data are clustered according to the subspaces they belong to. In this paper, at a higher level, we intend to cluster subspaces into classes of subspaces. This is naturally described as a clustering problem on the Grassmann manifold. The novelty of this paper is to generalize LRR on Euclidean space to an LRR model on the Grassmann manifold in a uniform kernelized LRR framework. The new method has many applications in data analysis in computer vision tasks. The proposed models have been evaluated on a number of practical data analysis applications, and the experimental results show that they outperform a number of state-of-the-art subspace clustering methods.








1 Introduction

In recent years, subspace clustering or segmentation has attracted great interest in image analysis, computer vision, pattern recognition and signal processing [1, 2]. The basic idea of subspace clustering is based on the fact that most data have intrinsic subspace structures and can be regarded as samples of a union of multiple subspaces. Thus the main goal of subspace clustering is to group data into different clusters such that the data points in each cluster come from a single subspace. To investigate and represent the underlying subspace structure, many subspace methods have been proposed, such as the conventional iterative methods [3], the statistical methods [4, 5, 7], the factorization-based algebraic approaches [8, 9], and the spectral clustering-based methods [10, 11, 12, 2, 13]. These methods have been successfully applied in many applications, such as image representation [14], motion segmentation [8], face classification [15] and saliency detection [13].

Among the subspace clustering methods mentioned above, the spectral clustering methods based on an affinity matrix are considered the most promising [2]. In these methods, an affinity matrix is first learned from the given data, and the final clustering results are then obtained by a spectral clustering algorithm such as Normalized Cuts (NCut) [16] or simply K-means. The key ingredient in a spectral clustering method is to construct a proper affinity matrix for the data. In the typical method, Sparse Subspace Clustering (SSC) [2], one assumes that the data of subspaces are independent and are sparsely represented under the so-called Subspace Detection Property [17], in which the within-class affinities are sparse and the between-class affinities are all zeros. It has been proved that under certain conditions the multiple subspace structures can be exactly recovered via ℓ1 minimization [18].

Most current sparse subspace methods focus mainly on an independent sparse representation for the data objects. However, the relations among data objects, and the underlying global structure of the subspaces that generate the subsets of data to be grouped, are usually not well considered, although these intrinsic properties are very important for clustering applications. Some researchers therefore exploit these intrinsic properties and relations among data objects and revise the sparse representation model to capture them by introducing extra constraints, such as the low rank constraint [11], the data Laplace consistency regularization [19] and the data sequential property [20], etc. Among these constraints, holistic ones such as the low rank or nuclear norm are proposed in favour of structural sparsity. The Low Rank Representation (LRR) model [11] is one representative. The LRR model tries to reveal the latent sparse property embedded in a data set in high dimensional space. It has been proved that, when the high-dimensional data set is actually from a union of several low dimensional subspaces, the LRR model can reveal this structure through subspace clustering.


Figure 1: An overview of our proposed LRR on Grassmann manifolds. Three steps are involved in the proposed model: (a) The points on Grassmann manifold are mapped onto symmetric matrices. (b) LRR model is formulated in symmetric matrix space. (c) The coefficients in LRR model are used by NCut for clustering.

Although most current subspace clustering methods show good performance in various applications, the similarity among data objects is measured in the original data domain. For example, the current LRR method is based on the principle of data self-representation, and the representation error is measured in terms of a Euclidean-like distance. However, this hypothesis may not always hold for high-dimensional data in practice, where corrupted data may not reside nicely in a linear space. In fact, it has been proved that many high-dimensional data are embedded in low dimensional manifolds. For example, human face images are considered samples from a non-linear submanifold [21]. Generally, manifolds can be considered low dimensional smooth "surfaces" embedded in a higher dimensional Euclidean space; at each point, a manifold is locally similar to a Euclidean space. To effectively cluster such high dimensional data, it is desirable to reveal the nonlinear manifold structure underlying them and obtain a proper representation for the data objects.

There are two types of manifold-related learning tasks. In so-called manifold learning, one has to respect the local geometry present in the data, but the manifold itself is unknown to the learner. Classic representative algorithms for manifold learning include LLE (Locally Linear Embedding) [22], ISOMAP [23], LLP (Locally Linear Projection) [24], LE (Laplacian Embedding) [25], LTSA (Local Tangent Space Alignment) [26] and TKE (Twin Kernel Embedding) [27].

In the other type of learning task, we know exactly which manifold the data come from. For example, in image analysis, people usually use covariance matrices of features as a region descriptor [28]; such a descriptor is a point on the manifold of symmetric positive definite matrices. More generally, in computer vision it is common to collect data on a known manifold. For example, it is common practice to use a subspace to represent a set of images [29], and such a subspace is actually a point on the Grassmann manifold [30]. Thus an image set is regarded as a point from the known Grassmann manifold. This type of task, incorporating manifold properties in learning, is called learning on manifolds. There are three major strategies for dealing with learning tasks on manifolds.

  1. Intrinsic Strategy: The ideal but hardest strategy is to perform learning tasks intrinsically on manifolds, based on their intrinsic geometry. Very few existing approaches adopt this strategy.

  2. Extrinsic Strategy: The second strategy is to implement a learning algorithm within the tangent spaces of manifolds, where all the linear relations can be exploited. This is in fact a first order approximation to the intrinsic strategy, and most approaches fall into this category.

  3. Embedding Strategy: The third strategy is to embed a manifold into a "larger" Euclidean space by an appropriate mapping, as in kernel methods, and to implement any learning algorithm in this "flattened" embedding space. For a practical learning task, however, how to incorporate the properties of known manifolds into the kernel mapping design remains challenging.

In this paper, we are concerned with points on a particular known manifold, the Grassmann manifold. We explore the LRR model for clustering a set of points on the Grassmann manifold by adopting the third strategy above. In fact, the Grassmann manifold has the nice property that it can be embedded into the linear space of symmetric matrices [31, 32]. In this way, all the abstract points (subspaces) on the Grassmann manifold can be embedded into a Euclidean space where the classic LRR model applies. An LRR model can then be constructed in the embedding space, where the error measure is simply the Euclidean metric of that space. The main idea of our method is illustrated in Fig. 1.

The contributions of this work are listed as follows:

  • Constructing an extended LRR model on Grassmann Manifold based on our prior work in [33];

  • Giving the solutions and practical algorithms to the problems of the extended Grassmann LRR model under different noise models, particularly those defined by the Frobenius norm and the ℓ2,1 norm;

  • Presenting a new kernelized LRR model on Grassmann manifold.

The rest of the paper is organized as follows. In Section 2, we review related work. In Section 3, the proposed LRR on Grassmann Manifold (GLRR) is described and solutions to the GLRR models under different noise assumptions are given in detail. In Section 4, we introduce a general framework for the LRR model on the Grassmann manifold from the kernelization point of view. In Section 5, the performance of the proposed methods is evaluated on several public databases. Finally, conclusions and suggestions for future work are provided in Section 6.

2 Related Works

In this section, we briefly review the existing sparse subspace clustering methods including the classic Sparse Subspace Clustering (SSC) and the Low Rank Representation (LRR) and then summarize the properties of Grassmann manifold that are related to the work presented in this paper.

2.1 Sparse Subspace Clustering (SSC)

Given a set of data drawn from a union of unknown subspaces, the task of subspace clustering is to find the number of subspaces, their dimensions and bases, and then segment the data set according to the subspaces. In recent years, sparse representation has been applied to subspace clustering, and the proposed Sparse Subspace Clustering (SSC) aims to find a sparse representation of the data set using ℓ1 regularization [2]. The general SSC can be formulated as follows:

    min_A  ||A||_1 + λ ||E||_ℓ,   s.t.   X = DA + E,                   (1)

where X = [x_1, x_2, ..., x_N] ∈ R^{d×N} is a set of signals of dimension d, A is the corresponding sparse representation of X under the dictionary D, and E represents the error between the signals and their reconstructed values, measured by a norm ||·||_ℓ: particularly, the Frobenius norm ||·||_F^2 to deal with Gaussian noise, the ℓ1 norm (the Laplacian norm) to deal with random gross corruptions, or the ℓ2,1 norm to deal with sample-specific corruptions. Finally, λ is a penalty parameter that balances the sparse term and the reconstruction error.

In the above sparse model, it is critical to use an appropriate dictionary to represent signals. Generally, a dictionary can be learned from training data by one of many dictionary learning methods, such as the K-SVD method [34]. However, dictionary learning is usually time-consuming and so has to be done offline. Many researchers therefore adopt a simple and direct alternative: using the original signals themselves as the dictionary, which relies on the self-expressiveness property [2], i.e., each data point in a union of subspaces can be efficiently reconstructed by a linear combination of the other points in the data set. More specifically, every point in the data set can be represented as a sparse linear combination of the other points from the same subspace. Mathematically, this sparse formulation is written as

    min_A  ||A||_1 + λ ||E||_ℓ,   s.t.   X = XA + E,   diag(A) = 0.    (2)

From the sparse representation matrix A, an affinity matrix can be constructed; one commonly used form is W = |A| + |A^T|. This affinity matrix is interpreted as a graph upon which a clustering algorithm such as NCut is applied for the final segmentation. This is the typical approach used in modern subspace clustering techniques.
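As an illustration of this pipeline, the following NumPy sketch builds the affinity W = |A| + |A^T| from a representation matrix and computes a normalized-Laplacian embedding whose rows would then be clustered (e.g. by k-means). The function names are ours, and this is a simplified stand-in for a full NCut implementation:

```python
import numpy as np

def affinity_from_representation(A):
    """Build a symmetric affinity matrix from a representation matrix: W = |A| + |A^T|."""
    W = np.abs(A) + np.abs(A.T)
    np.fill_diagonal(W, 0.0)  # drop self-affinities before graph construction
    return W

def spectral_embedding(W, k):
    """Embed graph nodes via the k smallest eigenvectors of the normalized Laplacian
    L = I - D^{-1/2} W D^{-1/2}; the rows are then clustered (e.g. by k-means)."""
    d = W.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(W.shape[0]) - d_is[:, None] * W * d_is[None, :]
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return vecs[:, :k]
```

For a block-diagonal representation matrix (two independent subspaces), the embedding collapses each block to a single point, so any downstream clustering recovers the two groups.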

2.2 Low-Rank Representation (LRR)

LRR can be regarded as a special type of sparse representation in which, rather than computing the sparse representation of each data point individually, the global structure of the data is captured collectively by the low rank representation of the whole set of data points.

The low rank measurement has long been utilized in matrix completion from corrupted or missing data [35, 36]. Specifically for clustering applications, it has been proved that, when a high-dimensional data set is actually composed of data from a union of several low dimensional subspaces, the LRR model can reveal the subspace structure underlying the data samples [11]. It has also been shown that LRR performs well against the main challenges in subspace clustering, such as data corrupted by noise or outliers, the absence of prior knowledge of the subspace parameters, and the lack of theoretical guarantees for the optimality of clustering methods [11, 13, 37].

The general LRR model can be formulated as the following optimization problem:

    min_{A,E}  ||A||_* + λ ||E||_ℓ,   s.t.   X = XA + E,               (3)

where A is the low rank representation of the data set X by itself. Here the low rank constraint is achieved by approximating the rank with the nuclear norm ||A||_*, which is defined as the sum of the singular values of a matrix and is the convex envelope of the rank function of matrices.
Although the current LRR method performs well in subspace clustering, it relies on the Euclidean distance for measuring the similarity of the raw data. However, this measurement is not suitable for high-dimensional data with an embedded low dimensional manifold structure. To characterize the local geometry of data on an unknown manifold, the LapLRR methods [19, 38] use the graph Laplacian matrix derived from the data objects as a regularization term for the LRR model to represent the nonlinear structure of high dimensional data, while the reconstruction error of the revised model is still computed in Euclidean space.
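For reference, the nuclear norm used by LRR is simply the sum of singular values; a minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def nuclear_norm(M):
    """Nuclear norm ||M||_*: the sum of the singular values of M, used as the
    convex surrogate for rank(M) in the LRR objective."""
    return np.linalg.svd(M, compute_uv=False).sum()
```

For a rank-1 matrix there is a single nonzero singular value, so the nuclear norm coincides with the Frobenius norm.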

2.3 Grassmann Manifold

In recent years, the Grassmann manifold has attracted great interest in the computer vision research community. Although the Grassmann manifold itself is quite abstract, it can be represented as a matrix quotient manifold, and its Riemannian geometry has been investigated for algorithmic computation [30].

The Grassmann manifold G(p, d) [30] is the space of all p-dimensional linear subspaces of R^d for 0 ≤ p ≤ d. A point on the Grassmann manifold is a p-dimensional subspace of R^d, which can be represented by any orthonormal basis X = [x_1, x_2, ..., x_p] ∈ R^{d×p}. The chosen orthonormal basis is called a representative of the subspace span(X). The Grassmann manifold has a one-to-one correspondence with a quotient manifold of the Stiefel manifold on R^{d×p}, see [30].

The Grassmann manifold has a nice property that it can be embedded into the space of symmetric matrices via the projection embedding, i.e., we can embed the Grassmann manifold into the space of symmetric positive semi-definite matrices by the following mapping, see [32],

    Π : G(p, d) → Sym(d),   Π(X) = X X^T.                              (4)

The embedding Π is a diffeomorphism [30] (a one-to-one continuous and differentiable mapping with a continuous and differentiable inverse). It is then reasonable to replace the distance on the Grassmann manifold with the following distance defined on the symmetric matrix space under this mapping,

    d(X_1, X_2) = (1/√2) ||Π(X_1) − Π(X_2)||_F.                        (5)
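The projection embedding and the induced distance are straightforward to compute from an orthonormal basis; a small NumPy sketch (the 1/√2 scaling follows the standard projection distance, and the helper names are ours):

```python
import numpy as np

def grassmann_embed(X):
    """Map an orthonormal basis X (d x p) to the symmetric matrix Pi(X) = X X^T."""
    return X @ X.T

def grassmann_dist(X1, X2):
    """Projection distance between two subspaces, computed in the embedding space."""
    return np.linalg.norm(grassmann_embed(X1) - grassmann_embed(X2), 'fro') / np.sqrt(2)
```

Note that the distance depends only on the subspace, not on the chosen basis: rotating the basis by any orthogonal p×p matrix leaves X X^T unchanged.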
This property has been used in subspace analysis, learning and representation [39, 40, 41]. Sparse coding and dictionary learning within the space of symmetric positive definite matrices have been investigated using kernel methods [32]. For clustering applications, the mean shift method was discussed on Stiefel and Grassmann manifolds in [42]. Recently, a new version of the K-means method, constructed by a statistical modeling approach, was proposed to cluster Grassmann points [43]. These works extend clustering methods from Euclidean space to more practical nonlinear spaces. In the same direction, we further explore subspace clustering problems on the Grassmann manifold and establish a novel and feasible LRR model on it.

3 LRR on Grassmann Manifolds

3.1 LRR on Grassmann Manifolds

In the current LRR model (3), the data reconstruction error is generally computed in the original data domain. For example, the common form of the reconstruction error is the squared Frobenius norm, i.e., the error term can be chosen as

    ||E||_F^2 = ||X − XA||_F^2,                                        (6)

where the data matrix X = [x_1, x_2, ..., x_N] ∈ R^{d×N}.

As mentioned above, many high dimensional data have intrinsic manifold structures. To extend an LRR model to manifold-valued data, two issues have to be resolved: (i) the model error should be measured in terms of the manifold geometry, and (ii) the linear relationship has to be re-interpreted, because the linear relation X = XA + E in (3) is no longer valid on a manifold.

In the extrinsic strategy mentioned in Section 1, one gets around this difficulty by using the Log map on a manifold to lift points (data) on a manifold onto the tangent space at a data point. This idea has been applied for clustering and dimensionality reduction on manifolds in [44, 45] and recently for LRR on Stiefel and SPD manifolds [46, 47].

In this paper, instead of using the Log map, we extend the LRR model onto the Grassmann manifold by using the Embedding Strategy. Given a set of points {X_1, X_2, ..., X_N} on the Grassmann manifold G(p, d), we mimic the classical LRR defined in (3) and (6) as follows

    min_{A,E}  ||E||_M + λ ||A||_*,   s.t.   X_i = (⊞_j a_ij ⊡ X_j) ⊕ E_i,   i = 1, ..., N,   (7)

where ⊞, ⊡ and ⊕ are for now only dummy operators to be specified soon, and ||·||_M measures the error between the point X_i and its "reconstruction" ⊞_j a_ij ⊡ X_j. Thus, to obtain an LRR model on the Grassmann manifold, we should define a proper distance and proper operators for the manifold.

Based on the property of the Grassmann manifold in (4), there is an easy way to replace the manifold distance in the LRR model with the distance of the embedding space, as follows,

    ||E_i||_M = ||Π(X_i) − Σ_j a_ij Π(X_j)||_F^2.

This error measure not only avoids the Log map operator but also admits simple computation with the F-norm.

Figure 2: The GLRR model. After mapping the points on the Grassmann manifold, the tensor X, with each slice being a symmetric matrix, can be represented by a linear combination of itself. The element a_ij of A represents the similarity between slices i and j.

Additionally, the mapping (4) sends a Grassmann point to a point in the space of symmetric positive semi-definite matrices, in which a linear combination operation is available if the coefficients are restricted to be positive. So it is intuitive to replace the Grassmann points with their mapped points to implement the combination in (7), i.e.,

    Π(X_i) = Σ_{j=1}^N a_ij Π(X_j) + E_i,   i = 1, 2, ..., N.

Furthermore, if we stack all the symmetric matrices Π(X_i) as the frontal slices of a 3rd order tensor X, i.e., X(:, :, i) = X_i X_i^T, then all the linear relations above can be written simply as X = X ×_3 A + E, where ×_3 denotes the mode-3 multiplication of a tensor by a matrix, see [48]. Thus the self-representation in (3) is replaced by

    X = X ×_3 A + E,

where E is the error tensor. The representation is illustrated in Fig. 2.

In the following subsections, we give two LRR models on Grassmann manifold for two types of noise cases.

3.1.1 LRR on Grassmann Manifold with Gaussian Noise (GLRR-F) [33]

For the completeness of this paper, we include our prior work reported in the conference paper [33]. This LRR model on the Grassmann manifold, based on the error measurement defined in (5), is defined as follows,

    min_{A,E}  ||E||_F^2 + λ ||A||_*,   s.t.   X = X ×_3 A + E.        (8)

The Frobenius norm is adopted here under the assumption that the model fits Gaussian noise. We call this model the Frobenius norm constrained GLRR (GLRR-F). In this case, the error term in (8) is

    ||E||_F^2 = Σ_{i=1}^N ||E(:, :, i)||_F^2,                          (9)

where E(:, :, i) is the i-th slice of E, i.e., the error between the symmetric matrix X_i X_i^T and its reconstruction by the linear combination Σ_{j=1}^N a_ij X_j X_j^T.

3.1.2 LRR on Grassmann Manifold with ℓ2,1 Noise (GLRR-21)

When there exist outliers in the data set, the Gaussian noise model is no longer a favored choice. Therefore, we propose using the so-called ℓ2,1 noise model, which is used to cope with signal-oriented gross errors in LRR clustering applications [11]. Similar to the GLRR-F model above, we formulate the ℓ2,1 norm constrained GLRR model (GLRR-21) as follows,

    min_{A,E}  ||E||_{2,1} + λ ||A||_*,   s.t.   X = X ×_3 A + E,      (10)

where the ℓ2,1 norm of a tensor is defined as the sum of the Frobenius norms of its 3-mode slices:

    ||E||_{2,1} = Σ_{i=1}^N ||E(:, :, i)||_F.                          (11)

Note that (11), without squares, is different from (9).
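The tensor ℓ2,1 norm in (11) is a one-liner; a NumPy sketch (the helper name is ours):

```python
import numpy as np

def tensor_l21_norm(E):
    """||E||_{2,1} of a 3rd-order tensor: the sum over frontal slices E(:,:,i)
    of each slice's Frobenius norm (no squaring, unlike the GLRR-F error)."""
    return sum(np.linalg.norm(E[:, :, i], 'fro') for i in range(E.shape[2]))
```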

3.2 Algorithms for LRR on Grassmann Manifold

The GLRR models in (8) and (10) present two typical optimization problems. In this subsection, we propose appropriate algorithms to solve them.

The GLRR-F model was proposed in our earlier ACCV paper [33], where an algorithm based on ADMM was given. In this paper, we provide an even faster closed form solution for (8) and further investigate the tensor structure in these models to obtain a practical solution for (10).

Intuitively, the tensor calculation can be converted to matrix operations by tensor matricization, see [48]. For example, we can matricize the tensor X in mode 3 and obtain a matrix of data points (in rows), so it seems the problem could be solved by the method of the standard LRR model. However, as the dimension is often too large in practical problems, the existing LRR algorithm could break down. To avoid this, we carefully analyze the representation of the reconstruction error terms and convert the optimization problems into equivalent and readily solvable models. The following two subsections give the details of these solutions.

3.2.1 Algorithm for the Frobenius Norm Constrained GLRR Model

We follow the notation used in [33]. By variable elimination, problem (8) can be converted into the following problem

    min_A  ||X − X ×_3 A||_F^2 + λ ||A||_*.                            (12)

We note that A ∈ R^{N×N} has a small dimension, which is easy to handle. Denote

    Δ_ij = tr((X_j X_j^T)(X_i X_i^T)) = ||X_j^T X_i||_F^2,             (13)

and the symmetric matrix

    Δ = [Δ_ij]_{i,j=1}^N.                                              (14)
Then we have the following Lemma.

Lemma 1.

Given a set of matrices {X_1, ..., X_N}, if Δ is the matrix with elements Δ_ij defined in (13), then Δ is positive semi-definite.

Proof. Please refer to [33]. ∎
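As a quick sanity check on Lemma 1, the matrix Δ can be assembled directly from the orthonormal bases via the identity tr(X_i X_i^T X_j X_j^T) = ||X_i^T X_j||_F^2; a small NumPy sketch (the helper name is ours):

```python
import numpy as np

def delta_matrix(bases):
    """Delta with entries Delta_ij = tr(X_i X_i^T X_j X_j^T) = ||X_i^T X_j||_F^2,
    i.e. the Gram matrix of the embedded points Pi(X_i) under the trace inner product."""
    N = len(bases)
    D = np.empty((N, N))
    for i in range(N):
        for j in range(i, N):
            v = np.linalg.norm(bases[i].T @ bases[j], 'fro') ** 2
            D[i, j] = D[j, i] = v
    return D
```

Being a Gram matrix, Δ is symmetric positive semi-definite, and its diagonal entries equal the subspace dimension p.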

From Lemma 1, we have the eigenvalue decomposition of Δ,

    Δ = U Σ U^T,

where U is orthogonal and Σ = diag(σ_1, σ_2, ..., σ_N) with nonnegative eigenvalues σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_N ≥ 0. Denote the square root of Δ by Δ^{1/2} = U Σ^{1/2} U^T; then it is not hard to prove that problem (12) is equivalent to the following problem

    min_A  ||Δ^{1/2} − A Δ^{1/2}||_F^2 + λ ||A||_*.                    (15)
Finally we have

Theorem 2.

Given Δ = U Σ U^T as defined above, the solution to (15) is given by

    A* = U D U^T,

where D is a diagonal matrix with its i-th diagonal element defined by

    D_ii = 1 − λ/(2σ_i)  if σ_i > λ/2,  and  D_ii = 0  otherwise.

Proof. Please refer to the proof of Lemma 1 in [49]. ∎

According to Theorem 2, the main cost of solving the LRR on Grassmann manifold problem (8) is (i) the computation of the symmetric matrix Δ and (ii) an SVD of Δ. This is a significant improvement over the algorithm presented in [33].

3.2.2 Algorithm for the ℓ2,1 Norm Constrained GLRR Model

Now we turn to the GLRR-21 problem (10). Because of the ℓ2,1 norm in the error term, the objective function is convex but not differentiable. We propose using the alternating direction method (ADM) to solve this problem.

Firstly, we construct the following augmented Lagrangian function:

    L(E, A, Y) = ||E||_{2,1} + λ ||A||_* + ⟨Y, X − X ×_3 A − E⟩ + (μ/2) ||X − X ×_3 A − E||_F^2,   (16)

where ⟨·, ·⟩ is the standard inner product of two tensors of the same order, Y is the Lagrange multiplier, and μ > 0 is the penalty parameter.

Specifically, the iteration of ADM for minimizing (16) goes as follows:

    E_{k+1} = argmin_E  L(E, A_k, Y_k),                                (17)
    A_{k+1} = argmin_A  L(E_{k+1}, A, Y_k),                            (18)
    Y_{k+1} = Y_k + μ_k (X − X ×_3 A_{k+1} − E_{k+1}),                 (19)

where we have used an adaptive parameter μ_k. The adaptive rule will be specified later in Algorithm 1.

The above ADM is appealing only if we can find closed form solutions to the subproblems (17) and (18).

Consider problem (17) first. Denote G = X − X ×_3 A_k + (1/μ_k) Y_k, and for any 3rd order tensor T use T_i to denote its i-th frontal slice along the 3-mode as shorthand. Then we observe that (17) is separable in terms of the matrix variables E_i as follows:

    E_i^{k+1} = argmin_{E_i}  ||E_i||_F + (μ_k/2) ||E_i − G_i||_F^2,   i = 1, 2, ..., N.   (20)

From [11], we know that the problem in (20) has a closed form solution, given by

    E_i^{k+1} = max(0, 1 − 1/(μ_k ||G_i||_F)) · G_i,                   (21)

where G_i is the i-th frontal slice of G.

Now consider problem (18). Denote by f(A) the part of (16) that depends on A, i.e.,

    f(A) = ⟨Y_k, X − X ×_3 A − E_{k+1}⟩ + (μ_k/2) ||X − X ×_3 A − E_{k+1}||_F^2;

then problem (18) becomes

    A_{k+1} = argmin_A  λ ||A||_* + f(A).                              (22)
We adopt the linearization method to solve the above problem.

For this purpose, let f(A) denote the smooth part of (22), and express the tensor operations in f(A) slice by slice. Denote Z = X − E_{k+1} + (1/μ_k) Y_k with frontal slices Z_i; then, up to a constant independent of A,

    f(A) = (μ_k/2) Σ_{i=1}^N ||Σ_{j=1}^N a_ij Π(X_j) − Z_i||_F^2.

Define a new matrix H by

    H_ij = ⟨Z_i, Π(X_j)⟩ = tr(Z_i X_j X_j^T);                          (23)

then, expanding the squares and noting (14), we have, up to a constant,

    f(A) = (μ_k/2) (tr(A Δ A^T) − 2 tr(A H^T)),                        (24)

and hence the gradient of the smooth term is

    ∇f(A) = μ_k (A Δ − H).
Finally, we can use the following linearized proximal approximation to replace (22),

    A_{k+1} = argmin_A  λ ||A||_* + ⟨∇f(A_k), A − A_k⟩ + (η μ_k / 2) ||A − A_k||_F^2,   (25)

where ∇f denotes the gradient of the smooth part of (22) and η is a constant with η ≥ ||X_(3)||_2^2, ||X_(3)||_2 being the spectral norm of the mode-3 matricization of the tensor X. The new problem (25) has a closed form solution given by, see [50],

    A_{k+1} = U S_{λ/(η μ_k)}(Σ_A) V^T,                                (26)

where U Σ_A V^T is the SVD of A_k − (1/(η μ_k)) ∇f(A_k) and S_τ(·) is the Singular Value Thresholding (SVT) operator defined by S_τ(Σ_A) = diag((σ_i − τ)_+).
Finally, the procedure for solving the ℓ2,1 norm constrained GLRR problem (10) is summarized in Algorithm 1. For the self-completeness of the paper, we quote the convergence analysis for Algorithm 1 from [51] without proof.

Theorem 3.

If {μ_k} is non-decreasing and upper bounded and η ≥ ||X_(3)||_2^2, then the sequence {(E_k, A_k, Y_k)} generated by Algorithm 1 converges to a KKT point of problem (10).

0:  The Grassmann sample set {X_1, X_2, ..., X_N}, the cluster number k and the balancing parameter λ.
0:  The Low-Rank Representation A
1:  Initialize: E_0, A_0, Y_0, μ_0, μ_max, ρ, η and the tolerance ε.
2:  Prepare Δ according to (13);
3:  while not converged do
4:     Update E_{k+1} according to (21);
5:     Update A_{k+1} according to (26);
6:     Update Y_{k+1} according to (19);
7:     Update μ_{k+1} according to the following rule: μ_{k+1} = min(ρ μ_k, μ_max);
8:     Check the convergence condition: ||X − X ×_3 A_{k+1} − E_{k+1}||_F < ε;
9:  end while
Algorithm 1 Low-Rank Representation on Grassmann Manifold with ℓ2,1 Norm Constraint.

4 Kernelized LRR on Grassmann Manifold

4.1 Kernels on Grassmann Manifold

In this section, we consider the kernelization of the GLRR-F model. In fact, the LRR model on the Grassmann manifold (8) can be regarded as a kernelized LRR with the kernel feature mapping defined by (4). It is not surprising that Δ is positive semi-definite, as it serves as a kernel matrix. It is natural to further generalize GLRR-F to other kernel functions on the Grassmann manifold.

There are a number of Grassmann kernel functions proposed in recent years in the computer vision and machine learning communities, see [52, 41, 31, 53]. For simplicity, we focus on the following kernels:

1. Projection Kernel: This kernel is defined in [41]. For any two Grassmann points X_1 and X_2, the kernel value is

    k_p(X_1, X_2) = ||X_1^T X_2||_F^2.

The feature mapping of this kernel is exactly the mapping Π defined in (4).

2. Canonical Correlation Kernel: Referring to [41], this kernel is based on the cosines of the so-called principal angles between two subspaces, defined by

    cos θ_m = max_{u ∈ span(X_1)} max_{v ∈ span(X_2)} u^T v,   s.t.  ||u|| = ||v|| = 1,  u^T u_l = v^T v_l = 0,  l = 1, ..., m − 1.

We can use the largest canonical correlation value (the cosine of the first principal angle) as the kernel value, as done in [54], i.e.,

    k_cc(X_1, X_2) = cos θ_1.

The cosines of the principal angles of two subspaces can be calculated by SVD, as discussed in [55], see Theorem 2.1 there.

Consider two subspaces span(X_1) and span(X_2) as two Grassmann points, where X_1 and X_2 are given orthonormal bases. If we take the following SVD

    X_1^T X_2 = U Σ V^T,

then the values on the diagonal of Σ are the cosines of all the principal angles. The kernel k_cc uses only partial information about the two subspaces. To increase its performance in our LRR, in this paper we use the sum of all the diagonal values of Σ as the kernel value between X_1 and X_2. We still call this revised version the canonical correlation kernel.
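Both kernels reduce to simple operations on X_1^T X_2; a small NumPy sketch (helper names are ours; `cc_kernel` implements the revised sum-of-cosines variant described above):

```python
import numpy as np

def projection_kernel(X1, X2):
    """Projection kernel: k_p(X1, X2) = ||X1^T X2||_F^2."""
    return np.linalg.norm(X1.T @ X2, 'fro') ** 2

def cc_kernel(X1, X2):
    """Revised canonical correlation kernel: the sum of the cosines of all
    principal angles, i.e. the sum of the singular values of X1^T X2."""
    return np.linalg.svd(X1.T @ X2, compute_uv=False).sum()
```

For a p-dimensional subspace compared with itself, all principal angles are zero, so both kernels evaluate to p; for orthogonal subspaces both evaluate to zero.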

4.2 Kernelized LRR on Grassmann Manifold

Let k(·, ·) be any kernel function on the Grassmann manifold. According to kernel theory [56], there exists a feature mapping Φ such that

    k(X_1, X_2) = ⟨Φ(X_1), Φ(X_2)⟩,

where the inner product is taken in the feature space induced by the given kernel k.

Given a set of points {X_1, X_2, ..., X_N} on the Grassmann manifold, we define the following LRR model

    min_A  Σ_{i=1}^N ||Φ(X_i) − Σ_{j=1}^N a_ij Φ(X_j)||^2 + λ ||A||_*.   (27)

We call the above model the Kernelized LRR on Grassmann manifold, denoted by KGLRR, and write KGLRR-cc and KGLRR-p for k = k_cc and k = k_p, respectively.

In fact, for KGLRR-p, the above model (27) reduces to the LRR model (12). Denote by K = [k(X_i, X_j)]_{i,j=1}^N the kernel matrix over all the data points X_i. Using a derivation similar to that in [33], we can prove that model (27) is equivalent to

    min_A  tr(K) − 2 tr(A K) + tr(A K A^T) + λ ||A||_*,

which is equivalent to

    min_A  ||K^{1/2} − A K^{1/2}||_F^2 + λ ||A||_*,                    (28)

where K^{1/2} is the square root matrix of the kernel matrix K. So the kernelized model KGLRR-p is similar to the GLRR-F model in Section 3.

It has been shown that using multiple kernel functions improves performance in many application scenarios [56], owing to the complementary virtues of different kernels for complex data. In practice we can therefore employ different kernel functions in model (27), and we can even adopt a combined kernel function. For example, in our experiments we use the following combination of the two kernel functions k_cc and k_p,

    k_ccp(X_1, X_2) = k_cc(X_1, X_2) + β k_p(X_1, X_2),

where β is a hand-assigned combination coefficient. We denote the kernelized LRR model with kernel k_ccp by KGLRR-ccp.

4.3 Algorithm for KGLRR

It is straightforward to use Theorem 2 to solve (28). For the sake of convenience, we present the algorithm below.

Take the eigenvalue decomposition of the kernel matrix

    K = U Σ U^T,

where Σ = diag(σ_1, σ_2, ..., σ_N) is the diagonal matrix of the eigenvalues of K. Then the solution to (28) is given by

    A* = U D U^T,

where D is the diagonal matrix with elements

    D_ii = 1 − λ/(2σ_i)  if σ_i > λ/2,  and  D_ii = 0  otherwise.

This algorithm is valid for any kernel function on the Grassmann manifold.

5 Experiments

To evaluate the performance of the proposed methods, GLRR-21, GLRR-F/KGLRR-p and KGLRR-ccp, we select various public datasets of different types to conduct clustering experiments. These datasets are challenging for clustering applications. We divide these datasets into four types:

The proposed methods will be compared with the benchmark spectral clustering methods, Sparse Subspace Clustering (SSC) [2] and Low-Rank Representation (LRR) [11], and several state-of-the-art clustering methods concerned with manifolds, including Statistical computations on Grassmann and Stiefel manifolds (SCGSM) [43], Sparse Manifold Clustering and Embedding (SMCE) [57] and Latent Space Sparse Subspace Clustering (LS3C) [58]. In the sequel, we first describe the experiment setting, then report and analyze the clustering results on these datasets.

5.1 Experiment Setting

Our GLRR model is designed to cluster Grassmann points, which are subspaces rather than raw object/signal vectors. Thus, before running the main components of GLRR and the spectral clustering algorithm (here we use the NCut algorithm), we must form subspaces from the raw signals. Generally, a subspace can be represented by an orthonormal basis, so we utilize the samples drawn from the same subspace to construct such a basis. Similar to the work in [41, 59], we simply adopt SVD to construct the subspace bases. Concretely, given a set of M images, each of m × n pixels, we construct a matrix Y of size (mn) × M by vectorizing all the images. Then Y is decomposed by SVD as Y = U Σ V^T, and we pick the first p (p ≤ M) columns of U to represent the entire image set as a point on the Grassmann manifold G(p, mn).
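This construction takes only a few lines of NumPy; a sketch (the helper name is ours):

```python
import numpy as np

def image_set_to_grassmann(images, p):
    """Represent an image set as a point on G(p, m*n): vectorize the images into
    the columns of a matrix and keep the first p left singular vectors as the basis."""
    Y = np.stack([np.asarray(im, dtype=float).ravel() for im in images], axis=1)
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U[:, :p]
```

The returned matrix has orthonormal columns, so it is a valid Grassmann representative ready for the embedding Π or the kernels above.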

The setting of the model parameters affects the performance of our proposed methods. λ is the most important penalty parameter, balancing the error term and the low-rank term. Empirically, the best value of λ varies widely across applications and has to be chosen from a large range to obtain good performance in a particular application. From our experiments, we have observed that as the cluster number increases, the best λ decreases. Additionally, λ should be smaller when the noise level in the data is lower and larger when the noise level is higher. These observations are useful for selecting a proper λ for different datasets. The error tolerance ε is also an important parameter, controlling the termination condition by bounding the allowed reconstruction error. We experimentally seek a proper value of ε so that the iteration stops at an appropriate level of reconstruction error.

Both the SSC and LRR methods demand inputs in vector form, so points on the Grassmann manifold, being subspaces, cannot be fed to them directly. To compare our method with SSC and LRR, we therefore vectorize each image set: we "vectorize" a set of images into one long vector by stacking the vectors of all raw images in the set in a fixed order, e.g., the frame order. However, in most of the experiments we cannot use these long vectors as-is because of their high dimensionality for larger image sets. In this case, we apply PCA to reduce the vectors to a low dimension, equal either to the subspace dimension used on the Grassmann manifold or to the number of PCA components retaining 95% of the variance energy. The PCA-projected vectors are then taken as inputs for SSC and LRR.
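This baseline preprocessing, stacking an image set into one long vector and reducing it by PCA, either to a fixed target dimension or to the number of components retaining 95% of the variance, can be sketched as follows (all sizes are illustrative):

```python
import numpy as np

def imageset_to_vector(images):
    """Stack all images of a set, in frame order, into one long vector."""
    return np.asarray(images).reshape(-1)

def pca_reduce(X, n_components):
    """Project the rows of X (n_samples x d) onto the top principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def n_components_for_variance(X, frac=0.95):
    """Smallest number of components whose cumulative variance reaches `frac`."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(ratio, frac) + 1)

rng = np.random.default_rng(1)
# 30 image sets, each of 8 images of 20x20 pixels -> 3200-dim long vectors.
vecs = np.stack([imageset_to_vector(rng.standard_normal((8, 20, 20)))
                 for _ in range(30)])
X_pca = pca_reduce(vecs, n_components=10)
print(X_pca.shape)   # (30, 10)
```

Either `pca_reduce` with a fixed target dimension or `n_components_for_variance` followed by `pca_reduce` reproduces the two reduction rules described above.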

In our experiments, the performance of the different algorithms is measured by clustering accuracy, i.e., the fraction of correctly clustered samples under the best matching between the predicted cluster labels and the ground-truth classes.
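A common way to compute this accuracy is to score the best one-to-one relabeling of predicted clusters; a brute-force sketch (adequate for the small cluster numbers here, whereas a larger number of clusters would call for the Hungarian algorithm, e.g. SciPy's `linear_sum_assignment`) might look like:

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(labels_true, labels_pred):
    """Fraction of correctly clustered samples under the best one-to-one
    relabeling of predicted clusters (assumes equal numbers of clusters
    and classes; brute force over permutations, fine for small k)."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    classes = np.unique(labels_true)
    clusters = np.unique(labels_pred)
    best = 0.0
    for perm in permutations(classes):
        mapping = dict(zip(clusters, perm))
        relabeled = np.array([mapping[c] for c in labels_pred])
        best = max(best, float(np.mean(relabeled == labels_true)))
    return best

truth = [0, 0, 1, 1, 2, 2]
pred  = [2, 2, 0, 0, 1, 1]   # a pure permutation of the true labels
print(clustering_accuracy(truth, pred))   # 1.0
```

Because cluster labels are arbitrary, the permutation search is what makes a relabeled-but-correct partition score 1.0.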

To present our experiments clearly, we fix notation for the number of clusters, the total number of image sets, the number of images in each image set, and the subspace dimension of a Grassmann point.

All the algorithms are coded in Matlab 2014a and run on a machine with an Intel Core i7-4770K 3.5GHz CPU and 32 GB of RAM.

5.2 Clustering on Face and Expression Image Sets

Human face and expression image sets are widely used in the computer vision and pattern recognition communities. They are considered challenging data for both clustering and recognition. The main difficulty is that face images are strongly affected by various factors, such as the complex structure of the face, the non-rigid elastic deformation of expressions, different poses and varying lighting conditions.

1) Extended Yale B dataset

The Extended Yale B dataset contains face images of 38 individuals, each with about 64 frontal images captured under different illuminations. All the face images have been normalized to a uniform size in 256 gray levels. Some samples of the Extended Yale B dataset are shown in Fig. 3.

Figure 3: Some samples from Extended Yale B dataset (the first two rows) and BU-3DEF dataset (the last two rows).

To prepare the experimental data, we randomly choose images from each subject to construct an image set, with the set size fixed at 4 or 8 in order to test the effect of different image-set scales on the clustering results. We produce 10 image sets for each subject, giving 380 points in total for clustering. To obtain a Grassmann point, we use the aforementioned SVD procedure to compute the subspace basis of each image set. For the SSC and LRR methods, the original stacked vector of an image set is high dimensional for both set sizes, so we reduce its dimension by PCA.

The experimental results are shown in Table I. Most results with set size 8 are clearly better than those with set size 4; in fact, the larger the image set, the better the performance. With more images in a set, the impact of outlier images, such as dark faces or unusual expressions, is reduced. However, a larger set may also introduce more variance for the subspace to fit. Compared with the other manifold-based methods, SCGSM, SMCE and LS3C, the excellent performance of our methods is due to the low-rank constraint imposed on the similarity matrix. Finally, we also note that LRR and SSC perform much worse than all the manifold-based methods, which demonstrates that incorporating manifold properties can improve clustering performance.

Set size   GLRR-F   GLRR-21   KGLRR-ccp   LRR      SSC      SCGSM    SMCE     LS3C
4          0.6154   0.5705    0.8283      0.3209   0.3526   0.4183   0.5978   0.4760
8          0.8878   0.8526    0.8983      0.2788   0.3109   0.5657   0.8429   0.6250
Table I: The clustering results on the Extended Yale B database with different sizes of image sets.

2) BU-3DEF dataset

The BU-3DEF dataset collects 3D face models and face images from 100 subjects of different genders, races and ages. Each subject has 25 face images: one neutral image and six expressions (happiness, disgust, fear, anger, surprise and sadness), each at four levels of intensity. In our experiment, we use the expression images for clustering. They are normalized and centered at a fixed size. Fig. 3 shows some samples of the BU-3DEF dataset.

For each expression, we randomly select images to construct an image set, obtaining 100 image sets in total. A Grassmann point is then created for each image set using SVD. There are six classes of expressions to cluster. For SSC and LRR, the original high-dimensional stacked vectors are reduced in dimension by PCA.

Table II presents the experimental results, which show that all the methods perform poorly on this challenging dataset. An analysis of the dataset reveals that some images of different expressions from the same person show very little distinction, while some images of the same expression from different persons differ strongly; this makes it difficult to find a common feature representing the same expression across persons. It is therefore not surprising that all methods perform badly on this dataset. Still, the manifold-based methods are superior to the others, and our methods in particular achieve a clustering accuracy at least 4 percent higher than the competing methods.

Clusters   GLRR-F   GLRR-21   KGLRR-ccp   LRR      SSC      SCGSM    SMCE     LS3C
6          0.3900   0.3517    0.4033      0.3000   0.2117   0.2583   0.3450   0.3233
Table II: The clustering results on the BU-3DEF human facial expression database.

5.3 Clustering on Large Scale Object Image Sets

Large-scale datasets are challenging for clustering methods: as the number of clusters increases, the performance of many state-of-the-art clustering methods drops dramatically. In this set of experiments, our intention is to test our methods on two large-scale object image sets, the Caltech 101 dataset and the ImageNet 2012 dataset.

1) Caltech 101 dataset

The Caltech 101 dataset contains pictures of objects from 101 categories, with about 40 to 800 images per category. The objects are generally centered in the images. All images are converted to gray scale and rescaled to a uniform size. Fig. 4 shows some samples of the Caltech 101 dataset.

In each category, we randomly select images to construct an image set. The image sets are then converted to Grassmann points. For both SSC and LRR, the high-dimensional stacked vectors are reduced in dimension by PCA.

To evaluate the robustness of our proposed methods with large cluster numbers, we test cluster numbers from 10 to 50. Table III shows the experimental results for the different methods. As can be seen from the table, in all cases our methods outperform the other state-of-the-art methods by at least 10 percent and remain stable as the number of clusters increases. The table also reveals that when the number of clusters is large, GLRR-F is slightly better than KGLRR-ccp. It is worth noting that our methods are also robust to the complex backgrounds contained in the Caltech 101 images.

Clusters   GLRR-F   GLRR-21   KGLRR-ccp   LRR      SSC      SCGSM    SMCE     LS3C
10         0.8726   0.6371    0.9114      0.5360   0.6066   0.5928   0.6108   0.7230
20         0.7627   0.5144    0.8126      0.4290   0.3869   0.4035   0.4246   0.5909
30         0.6664   0.4294    0.6995      0.3014   0.3318   0.4550   0.3810   0.5043
40         0.6519   0.3654    0.6387      0.1687   0.2881   0.3679   0.3737   0.4617
50         0.5913   0.3395    0.5884      0.1451   0.2614   0.3137   0.3608   0.4138
Table III: The clustering results on the Caltech 101 database.

2) ImageNet 2012 dataset

The ImageNet 2012 dataset is a widely used object database for image retrieval, containing more than a million object images from over 1000 categories. This database is more difficult for clustering due to the large number of categories. In addition, most objects are small and off-center in the images, and many even have severe occlusions. We extract the region of interest from each image using the provided bounding boxes and resize it to a uniform size. Some samples of ImageNet are shown in Fig. 4.

Figure 4: Some samples of the Caltech 101 dataset (the first two rows) and the ImageNet 2012 dataset (the last two rows).

Many categories in this dataset are quite similar to each other, and the orientation and scale of objects in the images vary widely, so it is hard for an algorithm to achieve high accuracy in classification or clustering. To test the performance of our methods in the presence of these variations, we set only a mild cluster number in this experiment, i.e., we randomly select 10 classes of objects. In each class, Grassmann points are constructed by applying SVD to image sets of randomly selected images, and a total of 1284 image sets are obtained. For both SSC and LRR, the high-dimensional stacked vectors are reduced in dimension by PCA.

The clustering experiments are repeated 10 times, and the mean clustering accuracy and variance are reported as the final results. As shown in Table IV, none of the compared methods produce fully meaningful results, although our methods give the best results among them. Several factors make this dataset a great challenge for clustering. First, it collects a large volume of images with a huge number of classes. Second, different categories are more similar to each other than in the Caltech 101 dataset. Third, many objects are very small and not well aligned. Finally, the objects have complex backgrounds or occlusions.

           GLRR-F    GLRR-21   KGLRR-ccp   LRR       SSC       SCGSM     SMCE      LS3C
mean       0.3100    0.2901    0.3118      0.2243    0.1721    0.2103    0.2812    0.2595
variance   2.43e-04  8.00e-05  2.13e-04    2.90e-04  4.15e-05  2.48e-04  3.63e-04  3.80e-04
Table IV: The clustering results on the ImageNet database.

5.4 Clustering on Human Action Datasets

Human action classification and recognition is an active topic in the computer vision literature. Most human action data come in the form of video clips, i.e., sequences of frames, which suit our subspace representation methods well. We therefore select two human action datasets, the Ballet dataset and the SKIG dataset, to test our clustering methods.

1) Ballet dataset

The Ballet dataset contains 44 video clips collected from an instructional ballet DVD, each with 107 to 506 frames. The dataset consists of 8 complex action patterns performed by 3 subjects: ‘left-to-right hand opening’, ‘right-to-left hand opening’, ‘standing hand opening’, ‘leg swinging’, ‘jumping’, ‘turning’, ‘hopping’ and ‘standing still’. The dataset is challenging due to significant intra-class variations in speed, spatial and temporal scale, clothing and movement. The frame images are normalized and centered at a uniform size. Some frame samples of the Ballet dataset are shown in Fig. 5.

Similar to the image-set construction in [60], we split each clip into sections of a fixed number of frames to form the image sets. The cluster number equals the 8 action patterns, and the subspace dimension is fixed, yielding the Grassmann points used for clustering. For the SSC and LRR methods, the stacked vectors are reduced in dimension by PCA.
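Splitting a clip into fixed-length sections, each of which then becomes one image set (and, via the SVD procedure described earlier, one Grassmann point), can be sketched as follows; the section length of 12 frames and the frame size are placeholders, since the exact values are not given here:

```python
import numpy as np

def split_clip(frames, section_len):
    """Split a clip of shape (n_frames, h, w) into non-overlapping sections
    of section_len frames each, dropping a final incomplete section."""
    n_sections = frames.shape[0] // section_len
    return [frames[i * section_len:(i + 1) * section_len]
            for i in range(n_sections)]

rng = np.random.default_rng(2)
clip = rng.standard_normal((107, 16, 16))   # shortest Ballet clip has 107 frames
sections = split_clip(clip, section_len=12)
print(len(sections), sections[0].shape)      # 8 (12, 16, 16)
```

Each element of `sections` would then be passed through the SVD-based subspace construction to obtain one Grassmann point per section.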

The results of the different clustering methods are shown in Table V. The ballet images have little background clutter, so they can be regarded as clean data without noise. In addition, the images in each image set carry temporal ordering, and each action consists of several simple sub-actions. Both properties help the clustering methods, as shown in Table V. Our methods, SCGSM and SMCE outperform the remaining methods by at least 24 percent, which reflects the advantage of the manifold-based methods.

2) SKIG dataset

The SKIG dataset contains 1080 RGB-D sequences captured by a Kinect sensor, each with 63 to 605 frames. The sequences cover ten kinds of gestures, ‘circle’, ‘triangle’, ‘up-down’, ‘right-left’, ‘wave’, ‘Z’, ‘cross’, ‘come-here’, ‘turn-around’ and ‘pat’, performed by six persons. All the gestures are performed with fist, finger and elbow, respectively, under three backgrounds and two illuminations. The images are normalized to a uniform size with zero mean and unit variance. Fig. 5 presents some samples of the SKIG dataset.

Figure 5: Some samples of Ballet dataset (the first two rows) and SKIG dataset (the last two rows).

We regard each RGB video as an image set, obtaining image sets to cluster into ten classes. The subspace dimension is fixed, and the corresponding Grassmann points are constructed. Since the number of frames per SKIG sequence varies widely, from 63 to 605, while PCA requires all samples to have the same dimension, it is difficult to select the same number of frames from each sequence as inputs for SSC and LRR. We therefore do not compare our methods with SSC and LRR on this dataset.

This dataset is more challenging than the Ballet dataset due to the smaller scale of the objects and the varying backgrounds and illuminations; nevertheless, the experimental results in Table V show that our GLRR-F outperforms the other methods by at least 5 percent.

Methods     Ballet   SKIG
GLRR-F      0.5727   0.5185
GLRR-21     0.6122   0.4907
KGLRR-ccp   0.6253   0.5333
LRR         0.2895   -
SSC         0.3089   -
SCGSM       0.5429   0.4667
SMCE        0.5616   0.4130
LS3C        0.2278   0.3722
Table V: The clustering results on the Ballet dataset and the SKIG dataset.

5.5 Clustering on Traffic Scene Video Clips

The two human action datasets above have relatively simple scenes with limited backgrounds. To demonstrate the robustness of our algorithms to complex backgrounds, we further test them on real traffic scene video datasets: the Highway Traffic dataset and a traffic video dataset we collected ourselves.

1) Highway Traffic Dataset

The Highway Traffic dataset contains 253 video sequences of highway traffic captured under various weather conditions, such as sunny, cloudy and rainy. The sequences are labeled with three traffic levels: light, medium and heavy; there are 44 heavy clips, 45 medium clips and 164 light clips. Each video sequence has 42 to 52 frames. The sequences are converted to gray images, and each image is normalized to a uniform size with zero mean and unit variance. Some samples of the Highway Traffic dataset are shown in Fig. 6.

We regard each video sequence as an image set and construct a point on the Grassmann manifold in the same way as in the experiments above. The subspace dimension is fixed, and the number of clusters equals the number of traffic levels, i.e., three. For SSC and LRR, we vectorize the first 42 frames of each clip and then use PCA to reduce the dimension to 147. Note that there is no clear cut between different levels of traffic congestion; for some clips it is hard to say whether they belong to the heavy, medium or light level. This makes the task genuinely challenging for clustering methods.

Table VI presents the clustering performance of all the methods on the Highway Traffic dataset. Because there are only three traffic levels, all results appear meaningful; even in the worst case the accuracy is 0.5138. GLRR-F attains an accuracy of 0.8063, at least 15 percent higher than the competing methods, and KGLRR-ccp further improves it to 0.8221 thanks to the kernel method. An interesting phenomenon is that the Euclidean-based methods (SSC and LRR) outperform some manifold-based methods (SCGSM, SMCE). The reason may be that all frames in a video are similar; in other words, a traffic level can be judged from individual images, so a Euclidean representation can be a reasonable choice.

2) Our Road Traffic Dataset

Our road traffic dataset is derived from 300 video clips collected by road detectors in Beijing. The clips are likewise labeled with three traffic levels: light, medium and heavy. This database has more scene changes, such as ‘sunny’, ‘cloudy’, ‘heavy rain’ and ‘darkness’. Each clip consists of over 50 frames. We convert the clips to gray images and resize them to a uniform size. Fig. 6 shows some samples from this dataset.

Figure 6: Some samples of the Highway traffic dataset (the first three rows) and our Road Traffic Dataset (the last three rows).

We segment the first 50 frames from each clip and represent them as an image set, giving 100 clips per traffic level and 300 clips in total for the three classes. The image sets are again represented as Grassmann points. For the SSC and LRR methods, the high-dimensional stacked vectors are reduced in dimension by PCA.

The experimental results are listed in Table VI. Although the environment in this database is more complex than in the Highway Traffic database, the accuracies of our methods remain higher than those of the other methods. Once again, the experiment on this dataset shows that the Grassmann-based methods are more appropriate than the others for this type of data.

Methods     Highway Traffic   Road Traffic
GLRR-F      0.8063            0.6778
GLRR-21     0.5415            0.6667
KGLRR-ccp   0.8221            0.6778
LRR         0.6838            0.4911
SSC         0.6285            0.6678
SCGSM       0.6087            0.4767
SMCE        0.5138            0.6656
LS3C        0.6561            0.4433
Table VI: The clustering results on the Highway Traffic dataset and our Road Traffic dataset.

6 Conclusion and Future Work

In this paper, we proposed novel LRR models on the Grassmann manifold, using an embedding strategy to construct a metric in terms of the Euclidean measure. Two models, GLRR-F and GLRR-21, were proposed to deal with Gaussian noise and non-Gaussian outliers, respectively. A closed-form solution to GLRR-F was presented, and an ADMM algorithm was proposed for GLRR-21. In addition, the LRR model on the Grassmann manifold was generalized to a kernelized version under the kernel framework. The proposed models and algorithms were evaluated on several public databases against state-of-the-art clustering algorithms. The experimental results show that the proposed methods outperform the state-of-the-art methods and behave robustly under various sources of change. This work demonstrates that incorporating the geometric properties of manifolds via an embedding mapping facilitates learning on manifolds. In future work, we will focus on exploring the intrinsic properties of the Grassmann manifold to construct LRR.


This research is supported by the Australian Research Council (ARC) through grant DP130100364 and partially supported by the National Natural Science Foundation of China under Grants No. 61390510, 61133003, 61370119, 61171169 and 61300065, and Beijing Natural Science Foundation No. 4132013.


  • [1] R. Xu and D. Wunsch-II, “Survey of clustering algorithms,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 645–678, 2005.
  • [2] E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2765–2781, 2013.
  • [3] P. Tseng, “Nearest -flat to points,” Journal of Optimization Theory and Applications, vol. 105, no. 1, pp. 249–252, 2000.
  • [4] A. Gruber and Y. Weiss, “Multibody factorization with uncertainty and missing data using the EM algorithm,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. I, 2004, pp. 707–714.
  • [5] M. Tipping and C. Bishop, “Mixtures of probabilistic principal component analyzers,” Neural Computation, vol. 11, no. 2, pp. 443–482, 1999.
  • [6] C. F. Chen, C. P. Wei, and Y. C. F. Wang, “Low-rank matrix recovery with structural incoherence for robust face recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012.
  • [7] J. Ho, M. H. Yang, J. Lim, K. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 2003, pp. 11–18.
  • [8] K. Kanatani, “Motion segmentation by subspace separation and model selection,” in IEEE International Conference on Computer Vision, vol. 2, 2001, pp. 586–591.
  • [9] Y. Ma, A. Yang, H. Derksen, and R. Fossum, “Estimation of subspace arrangements with applications in modeling and segmenting mixed data,” SIAM Review, vol. 50, no. 3, pp. 413–458, 2008.
  • [10] U. von Luxburg, “A tutorial on spectral clustering,” Statistics and Computing, vol. 17, no. 4, pp. 395–416, 2007.
  • [11] G. Liu, Z. Lin, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
  • [12] G. Chen and G. Lerman, “Spectral curvature clustering,” International Journal of Computer Vision, vol. 81, no. 3, pp. 317–330, 2009.
  • [13] C. Lang, G. Liu, J. Yu, and S. Yan, “Saliency detection by multitask sparsity pursuit,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 1327–1338, 2012.
  • [14] W. Hong, J. Wright, K. Huang, and Y. Ma, “Multi-scale hybrid linear models for lossy image representation,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3655–3671, 2006.
  • [15] G. Liu and S. Yan, “Latent low-rank representation for subspace segmentation and feature extraction,” in IEEE International Conference on Computer Vision, 2011, pp. 1615–1622.
  • [16] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
  • [17] D. Donoho, “For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution,” Comm. Pure and Applied Math., vol. 59, pp. 797–829, 2004.
  • [18] G. Lerman and T. Zhang, “Robust recovery of multiple subspaces by geometric minimization,” The Annuals of Statistics, vol. 39, no. 5, pp. 2686–2715, 2011.
  • [19] J. Liu, Y. Chen, J. Zhang, and Z. Xu, “Enhancing low-rank subspace clustering by manifold regularization,” IEEE Transactions on Image Processing, vol. 23, no. 9, pp. 4022–4030, 2014.
  • [20] S. Tierney, J. Gao, and Y. Guo, “Subspace clustering for sequential data,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1019–1026.
  • [21] R. Wang, S. Shan, X. Chen, and W. Gao, “Manifold-manifold distance with application to face recognition based on image set,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.
  • [22] S. Roweis and L. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, no. 1, pp. 2323–2326, 2000.
  • [23] J. Tenenbaum, V. Silva, and J. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, vol. 290, pp. 2319–2323, 2000.
  • [24] X. He and P. Niyogi, “Locality preserving projections,” in Advances in Neural Information Processing Systems, vol. 16, 2003.
  • [25] M. Belkin and P. Niyogi, “Laplacian eigenmaps and spectral techniques for embedding and clustering,” in Advances in Neural Information Processing Systems, vol. 14, 2001.
  • [26] Z. Zhang and H. Zha, “Principal manifolds and nonlinear dimension reduction via local tangent space alignment,” SIAM Journal of Scientific Computing, vol. 26, no. 1, pp. 313–338, 2005.
  • [27] Y. Guo, J. Gao, and P. Kwan, “Twin kernel embedding,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 1490–1495, 2008.
  • [28] O. Tuzel, F. Porikli, and P. Meer, “Region covariance: A fast descriptor for detection and classification,” European Conference on Computer Vision, vol. 3952, pp. 589–600, 2006.
  • [29] P. K. Turaga, A. Veeraraghavan, and R. Chellappa, “Statistical analysis on Stiefel and Grassmann manifolds with applications in computer vision,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.
  • [30] P. A. Absil, R. Mahony, and R. Sepulchre, “Riemannian geometry of Grassmann manifolds with a view on algorithmic computation,” Acta Applicadae Mathematicae, vol. 80, no. 2, pp. 199–220, 2004.
  • [31] M. T. Harandi, M. Salzmann, S. Jayasumana, R. Hartley, and H. Li, “Expanding the family of Grassmannian kernels: An embedding perspective,” in European Conference on Computer Vision, vol. 8695, 2014, pp. 408–423.
  • [32] M. T. Harandi, C. Sanderson, C. Shen, and B. Lovell, “Dictionary learning and sparse coding on Grassmann manifolds: An extrinsic solution,” in International Conference on Computer Vision, 2013, pp. 3120–3127.
  • [33] B. Wang, Y. Hu, J. Gao, Y. Sun, and B. Yin, “Low rank representation on Grassmann manifolds,” in Asian Conference on Computer Vision, 2014.
  • [34] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 1, pp. 4311–4322, 2006.
  • [35] E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, pp. 1–37, 2011.
  • [36] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma, “Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization,” in Advances in Neural Information Processing Systems, vol. 22, 2009.
  • [37] B. Cheng, G. Liu, J. Wang, Z. Huang, and S. Yan, “Multi-task low-rank affinity pursuit for image segmentation,” in International Conference on Computer Vision, 2011, pp. 2439–2446.
  • [38] M. Yin, J. Gao, and Z. Lin, “Laplacian regularized low-rank representation and its applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. XX, p. accepted, 2015.
  • [39] J. Hamm and D. Lee, “Grassmann discriminant analysis: a unifying view on sub-space-based learning,” in International Conference on Machine Learning, 2008, pp. 376–383.
  • [40] A. Srivastava and E. Klassen, “Bayesian and geometric subspace tracking,” Advances in Applied Probability, vol. 36, no. 1, pp. 43–56, 2004.
  • [41] M. T. Harandi, C. Sanderson, S. A. Shirazi, and B. C. Lovell, “Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching,” in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 2705–2712.
  • [42] H. Cetingul and R. Vidal, “Intrinsic mean shift for clustering on Stiefel and Grassmann manifolds,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1896–1902.
  • [43] P. Turaga, A. Veeraraghavan, A. Srivastava, and R. Chellappa, “Statistical computations on Grassmann and Stiefel manifolds for image and video-based recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2273–2286, 2011.
  • [44] A. Goh and R. Vidal, “Clustering and dimensionality reduction on Riemannian manifolds,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–7.
  • [45] B. Wang, Y. Hu, J. Gao, Y. Sun, and B. Yin, “Low rank representation on Grassmann manifolds: An extrinsic perspective,” CoRR, vol. abs/1504.01807, 2015.
  • [46] M. Yin, J. Gao, and Z. Lin, “A nonlinear low-rank representation on Stiefel manifold,” Electronics Letters, vol. 51, no. 10, pp. 749–751, 2015.
  • [47] Y. Fu, J. Gao, X. Hong, and D. Tien, “Low rank representation on Riemannian manifold of symmetrical positive definite matrices,” in SIAM Conferences on Data Mining (SDM), 2015, pp. 316–324.
  • [48] G. Kolda and B. Bader, “Tensor decomposition and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
  • [49] P. Favaro, R. Vidal, and A. Ravichandran, “A closed form solution to robust subspace estimation and clustering,” in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1801–1807.
  • [50] J. F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. on Optimization, vol. 20, no. 4, pp. 1956–1982, 2008.
  • [51] Z. Lin, R. Liu, and Z. Su, “Linearized alternating direction method with adaptive penalty for low rank representation,” in Advances in Neural Information Processing Systems, vol. 23, 2011.
  • [52] L. Wolf and A. Shashua, “Learning over sets using kernel principal angles,” Journal of Machine Learning Research, vol. 4, pp. 913–931, 2003.
  • [53] S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi, “Optimizing over radial kernels on compact manifolds,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3802 – 3809.
  • [54] O. Yamaguchi, K. Fukui, and K. Maeda, “Face recognition using temporal image sequence,” in Automatic Face and Gesture Recognition, 1998, pp. 318–323.
  • [55] A. Björck and G. H. Golub, “Numerical methods for computing angles between linear subspaces,” Mathematics of Computation, vol. 27, pp. 579–594, 1973.
  • [56] B. Schölkopf and A. Smola, Learning with Kernels.   Cambridge, Massachusetts: The MIT Press, 2002.
  • [57] E. Elhamifar and R. Vidal, “Sparse manifold clustering and embedding,” in Advances in Neural Information Processing Systems, 2011.
  • [58] V. M. Patel, H. V. Nguyen, and R. Vidal, “Latent space sparse subspace clustering,” in IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 691 – 701.
  • [59] M. Harandi, C. Sanderson, R. Hartley, and B. Lovell, “Sparse coding and dictionary learning for symmetric positive definite matrices: A kernel approach,” in European Conference on Computer Vision, 2012, pp. 216–229.
  • [60] S. Shirazi, M. Harandi, C. Sanderson, A. Alavi, and B. Lovell, “Clustering on grassmann manifolds via kernel embedding with application to action analysis,” in IEEE International Conference on Image Processing, 2012, pp. 781–784.