From Manifold to Manifold: Geometry-Aware Dimensionality Reduction for SPD Matrices

07/04/2014 · by Mehrtash T. Harandi, et al.

Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices --especially of high-dimensional ones-- comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods.


1 Introduction

This paper introduces an approach to embedding the Riemannian structure of Symmetric Positive Definite (SPD) matrices into a lower-dimensional, more discriminative Riemannian manifold. SPD matrices are becoming increasingly pervasive in various domains. For instance, diffusion tensors naturally arise in medical imaging [17]. In computer vision, SPD matrices have been shown to provide powerful representations for images and videos via region covariances [21]. Such representations have been successfully employed to categorize textures [21, 6], pedestrians [22], faces [16, 6], and actions and gestures [19].

Figure 1: Conceptual comparison of typical dimensionality reduction methods on the manifold [4, 23] and our approach. Top row (existing techniques): The original manifold (a) is first flattened either via tangent space computation or by Hilbert space embedding. The flattened manifold (b) is then mapped to a lower-dimensional, optionally more discriminative space (c). The distortion incurred by the initial flattening may typically make this mapping more complicated. Bottom row (our approach): The original manifold (d) is directly transformed to a lower-dimensional, more discriminative manifold (e).

SPD matrices can be thought of as an extension of positive numbers and form the interior of the positive semidefinite cone. It is possible to directly employ the Frobenius norm as a similarity measure between SPD matrices, and hence analyze problems involving such matrices via Euclidean geometry. However, as several studies have shown, undesirable phenomena may occur when Euclidean geometry is utilized to manipulate SPD matrices [17, 22, 9]. One example of this is the swelling effect that occurs in diffusion tensor imaging (DTI), where a matrix represents the covariance of the local Brownian motion of water molecules [17]: when Euclidean geometry is used to interpolate between two diffusion tensors, the determinant of the intermediate matrices may become strictly larger than the determinants of both original matrices, which is physically unacceptable. In [17], a Riemannian structure for SPD matrices was introduced to overcome the drawbacks of the Euclidean representation. This Riemannian structure is induced by the Affine Invariant Riemannian Metric (AIRM), and the resulting space is referred to as the SPD or tensor manifold.

As shown in several studies [17, 22, 6, 9], accounting for the geometry of SPD manifolds can have a highly beneficial impact. However, it also leads to challenges in developing effective and efficient inference methods. The main trends in analyzing SPD manifolds are to either locally flatten them via tangent space approximations [22, 19], or embed them in higher-dimensional Euclidean spaces [6, 2, 9]. In both cases, the computational cost of the resulting methods increases dramatically with the dimension of the SPD matrices. As a consequence, very low-dimensional SPD matrices are typically employed (e.g., region covariance descriptors obtained from a few low-dimensional features), with the exception of a few studies where medium-size matrices were used [16, 6]. While the matrices obtained from low-dimensional features have proven sufficient for specific problems, they are bound to be less powerful and discriminative than the high-dimensional features typically used in computer vision.

To overcome this limitation, here, we introduce an approach that lets us handle high-dimensional SPD matrices. In particular, from a high-dimensional SPD manifold, we construct a lower-dimensional, more discriminative SPD manifold. While some manifold-based dimensionality reduction techniques have been proposed [4, 23], as illustrated in Fig. 1, they typically yield a Euclidean representation of the data and rely on flattening the manifold, which incurs distortions. In contrast, our approach directly works on the original manifold and exploits its geometry to learn a representation that (i) still benefits from useful properties of SPD manifolds, and (ii) can be used in conjunction with existing manifold-based recognition techniques to make them more practical and effective.

More specifically, given training SPD matrices, we search for a projection from their high-dimensional SPD manifold to a low-dimensional one such that the resulting representation maximizes an affinity-weighted similarity between pairs of matrices. In particular, we exploit the class labels to define an affinity measure, and employ either the AIRM, or the Stein divergence [20] to encode the similarity between two SPD matrices. Due to the affine invariance property of the AIRM and of the Stein divergence, any full rank projection would yield an equivalent representation. This allows us, without loss of generality, to model the projection with an orthonormal matrix, and thus express learning as an unconstrained optimization problem on a Grassmann manifold, which can be effectively optimized using a conjugate gradient method on the manifold.

We demonstrate the benefits of our approach on several tasks where the data can be represented with high-dimensional SPD matrices. In particular, our method outperforms state-of-the-art techniques on three classification tasks: image-based material categorization and face recognition, and action recognition from 3D motion capture sequences. A Matlab implementation of our algorithm is available from the first author’s webpage.

2 Related Work

We now discuss in more detail three techniques that also tackle dimensionality reduction of manifold-valued data.

Principal Geodesic Analysis (PGA) was introduced in [4] as a generalization of Principal Component Analysis (PCA) to Riemannian manifolds. PGA identifies the tangent space whose corresponding subspace maximizes the variability of the data on the manifold. PGA, however, is equivalent to flattening the Riemannian manifold by taking its tangent space at the Karcher, or Fréchet, mean of the data. As such, it does not fully exploit the structure of the manifold. Furthermore, PGA, like PCA, cannot exploit the availability of class labels, and may therefore be sub-optimal for classification.

In [23], the Covariance Discriminative Learning (CDL) algorithm was proposed to embed the SPD manifold into a Euclidean space. In contrast to PGA, CDL utilizes class labels to learn a discriminative subspace using Partial Least Squares (PLS) or Linear Discriminant Analysis (LDA). However, CDL relies on mapping the SPD manifold to the space of symmetric matrices via the principal matrix logarithm. While this embedding has some nice properties (e.g., it is a diffeomorphism), it can also be thought of as embedding the SPD manifold into its tangent space at the identity matrix. Therefore, although supervised, CDL, like PGA, exploits data potentially distorted by the use of a single tangent space.

Finally, in [5], several Nonlinear Dimensionality Reduction techniques were extended to their Riemannian counterparts. This was achieved by introducing various Riemannian geometry concepts, such as Karcher mean, tangent spaces and geodesics, in Locally Linear Embedding (LLE), Hessian LLE and Laplacian Eigenmaps. The resulting algorithms were applied to several unsupervised clustering tasks. Although these methods can, in principle, be employed for supervised classification, they are limited to the transductive setting since they do not define any parametric mapping to the low-dimensional space.

In this paper, we learn a mapping from a high-dimensional SPD manifold to a lower-dimensional one without relying on tangent space approximations of the manifold. Our approach therefore accounts for the structure of the manifold and can simultaneously exploit class label information. The resulting mapping lets us effectively handle high-dimensional SPD matrices for classification purposes. Furthermore, by mapping to another SPD manifold, our approach can serve as a pre-processing step to other Riemannian-based approaches, such as the manifold sparse coding of [6], thus making them practical to work with more realistic, high-dimensional features. Note that, while our formulation is inspired by graph embedding methods in Euclidean spaces, e.g., [25], here we work with data lying on more challenging non-linear manifolds.

To the best of our knowledge, this is the first work that shows how a high-dimensional SPD manifold can be transformed into another SPD manifold with lower intrinsic dimension. Note that a related idea, but with a very different approach, was introduced in [10] to decompose high-dimensional spheres into submanifolds of decreasing dimensionality.

3 Riemannian Geometry of SPD Manifolds

In this section, we discuss some notions of geometry of SPD manifolds. Throughout this paper we will use the following notation: $\mathcal{S}_{++}^{n}$ is the space of $n \times n$ real SPD matrices; $\mathbf{I}_n$ is the $n \times n$ identity matrix; $\mathrm{GL}(n)$ is the general linear group, i.e., the group of $n \times n$ real invertible matrices.

Definition 1

A real symmetric matrix $\mathbf{X} \in \mathbb{R}^{n \times n}$ is said to be SPD if $\mathbf{v}^T\mathbf{X}\mathbf{v}$ is positive for any non-zero $\mathbf{v} \in \mathbb{R}^{n}$.

The space of $n \times n$ SPD matrices, $\mathcal{S}_{++}^{n}$, is obviously not a vector space, since multiplying an SPD matrix by a negative scalar results in a matrix which does not belong to $\mathcal{S}_{++}^{n}$. Instead, $\mathcal{S}_{++}^{n}$ forms the interior of a convex cone in the $n(n+1)/2$-dimensional Euclidean space. The space $\mathcal{S}_{++}^{n}$ is mostly studied when endowed with a Riemannian metric and thus forms a Riemannian manifold [17]. A natural way to measure closeness on a manifold is by considering the geodesic distance between two points on the manifold. Such a distance is defined as the length of the shortest curve connecting the two points. The shortest curves are known as geodesics and are analogous to straight lines in $\mathbb{R}^{n}$. The Affine Invariant Riemannian Metric (AIRM) is probably the most popular Riemannian structure for analyzing SPD matrices [17]. Let $\mathbf{P}$ be a point on $\mathcal{S}_{++}^{n}$. The AIRM for two tangent vectors $\mathbf{v}, \mathbf{w} \in T_{\mathbf{P}}\mathcal{S}_{++}^{n}$ is defined as

$$\langle \mathbf{v}, \mathbf{w} \rangle_{\mathbf{P}} \triangleq \operatorname{tr}\!\left(\mathbf{P}^{-1}\mathbf{v}\,\mathbf{P}^{-1}\mathbf{w}\right). \qquad (1)$$
Definition 2

The geodesic distance $\delta_R : \mathcal{S}_{++}^{n} \times \mathcal{S}_{++}^{n} \to [0, \infty)$ induced by the AIRM is defined as

$$\delta_R(\mathbf{X}, \mathbf{Y}) = \left\| \log\!\left(\mathbf{X}^{-1/2}\mathbf{Y}\mathbf{X}^{-1/2}\right) \right\|_F, \qquad (2)$$

where $\log(\cdot)$ is the matrix principal logarithm.

More recently, Sra introduced the Stein metric on SPD manifolds [20]:

Definition 3

The Stein metric $\delta_S : \mathcal{S}_{++}^{n} \times \mathcal{S}_{++}^{n} \to [0, \infty)$ is a symmetric type of Bregman divergence and is defined as

$$\delta_S^2(\mathbf{X}, \mathbf{Y}) \triangleq S(\mathbf{X}, \mathbf{Y}) = \ln\det\!\left(\frac{\mathbf{X} + \mathbf{Y}}{2}\right) - \frac{1}{2}\ln\det\!\left(\mathbf{X}\mathbf{Y}\right). \qquad (3)$$

The Stein metric shows several similarities to the geodesic distance induced by the AIRM while being less expensive to compute [3]. In addition to the properties studied by Sra [20], we provide the following important theorem, which relates the length of curves under the two metrics.

Theorem 3.1

The length of any given curve is the same under $\delta_R$ and $\delta_S$ up to a scale of $2\sqrt{2}$.

Proof

Given in appendix 0.A. ∎
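To make Definitions 2 and 3 and Theorem 3.1 concrete, the following sketch (ours, not the authors' released Matlab code) computes $\delta_R$ and $\delta_S$ with NumPy/SciPy and numerically checks that, for two nearby SPD matrices, the ratio $\delta_R/\delta_S$ approaches the $2\sqrt{2}$ scale of Theorem 3.1; all helper names are hypothetical.

```python
import numpy as np
from scipy.linalg import expm, inv, logm, sqrtm

def airm_dist(X, Y):
    """Geodesic distance delta_R induced by the AIRM (Eq. 2)."""
    Xs = np.real(sqrtm(inv(X)))                      # X^{-1/2}
    return np.linalg.norm(np.real(logm(Xs @ Y @ Xs)), 'fro')

def stein_div(X, Y):
    """Symmetric Stein divergence S(X, Y) (Eq. 3); delta_S = sqrt(S)."""
    _, ld_mid = np.linalg.slogdet(0.5 * (X + Y))
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
X = A @ A.T + 5 * np.eye(5)                          # a random SPD matrix
V = rng.standard_normal((5, 5))
V = 1e-3 * (V + V.T)                                 # a small symmetric perturbation
Xs = np.real(sqrtm(X))
Y = Xs @ expm(V) @ Xs                                # a nearby point on the SPD manifold

print(airm_dist(X, Y) / np.sqrt(stein_div(X, Y)))    # close to 2*sqrt(2) ~ 2.8284
```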

One of the motivations for projecting a higher-dimensional SPD manifold to a lower-dimensional one is to preserve the properties of $\delta_R$ and $\delta_S$ [17, 20]. One important such property, especially in computer vision, is affine invariance [17].

Property 1

(Affine invariance). For any $\mathbf{M} \in \mathrm{GL}(n)$ and any $\mathbf{X}, \mathbf{Y} \in \mathcal{S}_{++}^{n}$,

$$\delta_R\!\left(\mathbf{M}\mathbf{X}\mathbf{M}^T, \mathbf{M}\mathbf{Y}\mathbf{M}^T\right) = \delta_R(\mathbf{X}, \mathbf{Y}), \qquad \delta_S\!\left(\mathbf{M}\mathbf{X}\mathbf{M}^T, \mathbf{M}\mathbf{Y}\mathbf{M}^T\right) = \delta_S(\mathbf{X}, \mathbf{Y}).$$

This property postulates that the metric between two SPD matrices is unaffected by the action of the affine group. In the specific case where the SPD matrices are region covariance descriptors [21], this implies that the distance between two descriptors will remain unchanged after an affine transformation of the image features, such as a change of illumination when using RGB values. Note that, in addition to this specific implication, we will also exploit the affine invariance property for a different purpose when deriving our learning algorithm in the next section.
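As a quick numerical illustration of Property 1 (our sketch, not from the paper), the snippet below verifies that the Stein divergence is unchanged when both matrices undergo the same congruence transformation; the same check can be run with the AIRM distance.

```python
import numpy as np

def stein_div(X, Y):
    """Symmetric Stein divergence (Eq. 3)."""
    _, ld_mid = np.linalg.slogdet(0.5 * (X + Y))
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

rng = np.random.default_rng(1)
n = 6
make_spd = lambda: (lambda B: B @ B.T + n * np.eye(n))(rng.standard_normal((n, n)))
X, Y = make_spd(), make_spd()
M = rng.standard_normal((n, n))              # a random matrix, almost surely invertible

# Affine invariance: the divergence is preserved under X -> M X M^T, Y -> M Y M^T.
print(np.isclose(stein_div(M @ X @ M.T, M @ Y @ M.T), stein_div(X, Y)))
```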

4 Geometry-Aware Dimensionality Reduction

We now describe our approach to learning an embedding of high-dimensional SPD matrices to a more discriminative, low-dimensional SPD manifold. More specifically, given a matrix $\mathbf{X} \in \mathcal{S}_{++}^{n}$, we seek to learn the parameters $\mathbf{W} \in \mathbb{R}^{n \times m}$, $m < n$, of a generic mapping $f : \mathcal{S}_{++}^{n} \to \mathcal{S}_{++}^{m}$, which we define as

$$f(\mathbf{X}; \mathbf{W}) = \mathbf{W}^T \mathbf{X} \mathbf{W}. \qquad (4)$$

Clearly, if $\mathbf{X} \in \mathcal{S}_{++}^{n}$ and $\mathbf{W}$ has full rank, then $\mathbf{W}^T\mathbf{X}\mathbf{W} \in \mathcal{S}_{++}^{m}$.

Given a set of SPD matrices $\mathcal{X} = \{\mathbf{X}_1, \ldots, \mathbf{X}_p\}$, where each matrix $\mathbf{X}_i \in \mathcal{S}_{++}^{n}$, our goal is to find a transformation $\mathbf{W}$ such that the resulting low-dimensional SPD manifold preserves some interesting structure of the original data. Here, we encode this structure via an undirected graph defined by a real symmetric affinity matrix $\mathbf{A} \in \mathbb{R}^{p \times p}$. The element $a_{ij}$ of this matrix measures some notion of affinity between matrices $\mathbf{X}_i$ and $\mathbf{X}_j$, and may be negative. We will discuss the affinity matrix in more detail in Section 4.2.

Given $\mathbf{A}$, we search for an embedding such that the affinity between pairs of SPD matrices is reflected by a measure of similarity on the low-dimensional SPD manifold. In this paper, we propose to make use of either the AIRM or the Stein metric to encode (dis)similarities between SPD matrices. For each pair of training samples, this lets us write a cost function of the form

$$J_{ij}(\mathbf{W}) = a_{ij}\,\delta^2\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right), \qquad (5)$$

where $\delta$ is either $\delta_R$ or $\delta_S$. These pairwise costs can then be grouped together in a global empirical cost function

$$J(\mathbf{W}) = \sum_{i,j} J_{ij}(\mathbf{W}) = \sum_{i,j} a_{ij}\,\delta^2\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right), \qquad (6)$$

which we seek to minimize w.r.t. $\mathbf{W}$.

To avoid degeneracies and ensure that the resulting embedding forms a valid SPD manifold, i.e., $\mathbf{W}^T\mathbf{X}\mathbf{W} \in \mathcal{S}_{++}^{m}$, we need $\mathbf{W}$ to have full rank. Here, we enforce this requirement by imposing orthonormality constraints on $\mathbf{W}$, i.e., $\mathbf{W}^T\mathbf{W} = \mathbf{I}_m$. Note that, with either the AIRM or the Stein divergence, this entails no loss of generality. Indeed, any full rank matrix $\hat{\mathbf{W}}$ can be expressed as $\hat{\mathbf{W}} = \mathbf{W}\mathbf{M}$, with $\mathbf{W}$ an orthonormal matrix and $\mathbf{M} \in \mathrm{GL}(m)$. The affine invariance property of the AIRM and of the Stein metric therefore guarantees that

$$\delta\!\left(\hat{\mathbf{W}}^T\mathbf{X}_i\hat{\mathbf{W}}, \hat{\mathbf{W}}^T\mathbf{X}_j\hat{\mathbf{W}}\right) = \delta\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right).$$

Finally, learning can be expressed as the minimization problem

$$\mathbf{W}^{*} = \operatorname*{argmin}_{\mathbf{W}\,:\,\mathbf{W}^T\mathbf{W} = \mathbf{I}_m} \; \sum_{i,j} a_{ij}\,\delta^2\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right). \qquad (7)$$
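A minimal sketch of the objective in Eqs. (6)-(7) is given below (our illustration, with hypothetical variable names and random toy data). It also checks the no-loss-of-generality argument above: a full-rank projection and its orthonormal QR factor give exactly the same objective value.

```python
import numpy as np

def stein_div(X, Y):
    _, ld_mid = np.linalg.slogdet(0.5 * (X + Y))
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def objective(W, mats, A):
    """J(W) = sum_ij a_ij * delta_S^2(W^T X_i W, W^T X_j W), as in Eq. (6)."""
    low = [W.T @ X @ W for X in mats]
    p = len(low)
    return sum(A[i, j] * stein_div(low[i], low[j])
               for i in range(p) for j in range(p) if i != j)

rng = np.random.default_rng(2)
n, m, p = 10, 3, 6
mats = [(lambda B: B @ B.T + n * np.eye(n))(rng.standard_normal((n, n))) for _ in range(p)]
A = rng.standard_normal((p, p))
A = 0.5 * (A + A.T)                                   # toy symmetric affinities (may be negative)

W_full = rng.standard_normal((n, m))                  # full rank, but not orthonormal
Q, _ = np.linalg.qr(W_full)                           # orthonormal factor: Q^T Q = I_m
print(np.isclose(objective(W_full, mats, A), objective(Q, mats, A)))   # True, by affine invariance
```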

In the next section, we describe an effective way of solving (7) via optimization on a (different) Riemannian manifold.

4.1 Optimization on Grassmann Manifolds

Recent advances in optimization methods formulate problems with orthogonality constraints as optimization problems on Stiefel or Grassmann manifolds [1]. More specifically, the geometrically correct setting for minimizing $J(\mathbf{W})$ under the orthogonality constraint $\mathbf{W}^T\mathbf{W} = \mathbf{I}_m$ is, in general, a Stiefel manifold. However, if the cost function possesses the property that $J(\mathbf{W}\mathbf{R}) = J(\mathbf{W})$ for any rotation matrix $\mathbf{R}$ (i.e., $\mathbf{R}^T\mathbf{R} = \mathbf{I}_m$ and $\det(\mathbf{R}) = 1$), then the problem is on a Grassmann manifold.

Since both the AIRM and the Stein metric are affine invariant, we have

$$\delta\!\left((\mathbf{W}\mathbf{R})^T\mathbf{X}_i(\mathbf{W}\mathbf{R}), (\mathbf{W}\mathbf{R})^T\mathbf{X}_j(\mathbf{W}\mathbf{R})\right) = \delta\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right),$$

and thus $J(\mathbf{W}\mathbf{R}) = J(\mathbf{W})$, which therefore identifies (7) as an (unconstrained) optimization problem on the Grassmann manifold $\mathcal{G}(m, n)$.

In particular, here, we utilize a nonlinear Conjugate Gradient (CG) method on Grassmann manifolds to minimize (7). A brief description of the steps of this algorithm is provided in appendix 0.B. For a more detailed treatment, we refer the reader to [1]. As for now, we just confine ourselves to saying that nonlinear CG on Grassmann manifolds requires the Jacobian matrix of $J_{ij}$ w.r.t. $\mathbf{W}$. For the Stein metric, this Jacobian matrix can be obtained by noting that

$$\frac{\partial \ln\det\!\left(\mathbf{W}^T\mathbf{X}\mathbf{W}\right)}{\partial \mathbf{W}} = 2\,\mathbf{X}\mathbf{W}\!\left(\mathbf{W}^T\mathbf{X}\mathbf{W}\right)^{-1}, \qquad (8)$$

which lets us identify the Jacobian of the Stein divergence as

$$D_{\mathbf{W}}\, S\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right) = (\mathbf{X}_i + \mathbf{X}_j)\,\mathbf{W}\!\left(\frac{\mathbf{W}^T(\mathbf{X}_i + \mathbf{X}_j)\mathbf{W}}{2}\right)^{-1} - \mathbf{X}_i\mathbf{W}\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}\right)^{-1} - \mathbf{X}_j\mathbf{W}\!\left(\mathbf{W}^T\mathbf{X}_j\mathbf{W}\right)^{-1}.$$

For the AIRM, we can exploit the fact that $\frac{\partial}{\partial \mathbf{X}}\,\delta_R^2(\mathbf{X}, \mathbf{Y}) = 2\,\mathbf{X}^{-1}\log\!\left(\mathbf{X}\mathbf{Y}^{-1}\right)$. We can then derive the Jacobian in the same manner, which yields

$$D_{\mathbf{W}}\, \delta_R^2\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}, \mathbf{W}^T\mathbf{X}_j\mathbf{W}\right) = 4\left[\mathbf{X}_i\mathbf{W}\!\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}\right)^{-1} - \mathbf{X}_j\mathbf{W}\!\left(\mathbf{W}^T\mathbf{X}_j\mathbf{W}\right)^{-1}\right]\log\!\left(\left(\mathbf{W}^T\mathbf{X}_i\mathbf{W}\right)\left(\mathbf{W}^T\mathbf{X}_j\mathbf{W}\right)^{-1}\right).$$
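Because such matrix Jacobians are easy to get wrong, a finite-difference comparison is a useful sanity check. The sketch below (ours, with hypothetical helper names) compares the closed-form Stein Jacobian above against a numerical gradient on random data.

```python
import numpy as np

def stein_div(X, Y):
    _, ld_mid = np.linalg.slogdet(0.5 * (X + Y))
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def stein_jacobian(W, Xi, Xj):
    """Closed-form Euclidean Jacobian of S(W^T Xi W, W^T Xj W) w.r.t. W."""
    mid = 0.5 * (Xi + Xj)
    return (2 * mid @ W @ np.linalg.inv(W.T @ mid @ W)
            - Xi @ W @ np.linalg.inv(W.T @ Xi @ W)
            - Xj @ W @ np.linalg.inv(W.T @ Xj @ W))

rng = np.random.default_rng(3)
n, m = 8, 3
spd = lambda: (lambda B: B @ B.T + n * np.eye(n))(rng.standard_normal((n, n)))
Xi, Xj = spd(), spd()
W = np.linalg.qr(rng.standard_normal((n, m)))[0]

# Central finite differences, entry by entry.
eps, num = 1e-6, np.zeros_like(W)
for a in range(n):
    for b in range(m):
        E = np.zeros_like(W)
        E[a, b] = eps
        num[a, b] = (stein_div((W + E).T @ Xi @ (W + E), (W + E).T @ Xj @ (W + E))
                     - stein_div((W - E).T @ Xi @ (W - E), (W - E).T @ Xj @ (W - E))) / (2 * eps)

print(np.max(np.abs(num - stein_jacobian(W, Xi, Xj))))   # close to zero
```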

The pseudo-code for our SPD manifold learning (SPD-ML) method is given in Algorithm 1, where $\nabla_{\mathbf{W}} J$ denotes the gradient on the manifold obtained from the Jacobian $D_{\mathbf{W}} J$, and $\tau(\mathbf{H}, \mathbf{W}_1, \mathbf{W}_2)$ denotes the parallel transport of tangent vector $\mathbf{H}$ from $\mathbf{W}_1$ to $\mathbf{W}_2$ (see appendix 0.B for details).

  Input:
A set of SPD matrices $\mathcal{X} = \{\mathbf{X}_1, \ldots, \mathbf{X}_p\}$, with $\mathbf{X}_i \in \mathcal{S}_{++}^{n}$
The corresponding labels $\{y_1, \ldots, y_p\}$
The dimensionality $m$ of the induced manifold
  Output:
The mapping $\mathbf{W} \in \mathbb{R}^{n \times m}$, with $\mathbf{W}^T\mathbf{W} = \mathbf{I}_m$
  Generate $\mathbf{A}$ using (9), (10) and (11)
  $\mathbf{W} \leftarrow \mathbf{I}_{n \times m}$ (i.e., the truncated identity matrix)
  $\mathbf{H} \leftarrow \mathbf{0}$
  $\mathbf{W}_{old} \leftarrow \mathbf{W}$
  repeat
     $\nabla_{\mathbf{W}} J \leftarrow \left(\mathbf{I}_n - \mathbf{W}\mathbf{W}^T\right) D_{\mathbf{W}} J$ (gradient on the manifold at $\mathbf{W}$)
     $\mathbf{H} \leftarrow -\nabla_{\mathbf{W}} J + \eta\, \tau(\mathbf{H}, \mathbf{W}_{old}, \mathbf{W})$, with $\eta$ the CG update parameter
     Line search along the geodesic from $\mathbf{W}$ in the direction $\mathbf{H}$ to find $\mathbf{W}^{*}$
     $\mathbf{W}_{old} \leftarrow \mathbf{W}$
     $\mathbf{W} \leftarrow \mathbf{W}^{*}$
  until convergence
Algorithm 1 SPD Manifold Learning (SPD-ML).
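For readers who want a runnable starting point, the sketch below implements a deliberately simplified variant of Algorithm 1 (ours, not the released Matlab code): plain Riemannian gradient descent with the Stein divergence, a fixed step size, and a QR retraction in place of the conjugate directions, geodesic line search and parallel transport described above.

```python
import numpy as np

def stein_div(X, Y):
    _, l1 = np.linalg.slogdet(0.5 * (X + Y))
    _, l2 = np.linalg.slogdet(X)
    _, l3 = np.linalg.slogdet(Y)
    return l1 - 0.5 * (l2 + l3)

def stein_jacobian(W, Xi, Xj):
    """Euclidean Jacobian of S(W^T Xi W, W^T Xj W) w.r.t. W (cf. Eq. 8)."""
    mid = 0.5 * (Xi + Xj)
    return (2 * mid @ W @ np.linalg.inv(W.T @ mid @ W)
            - Xi @ W @ np.linalg.inv(W.T @ Xi @ W)
            - Xj @ W @ np.linalg.inv(W.T @ Xj @ W))

def spd_ml_gradient_descent(mats, A, m, n_iters=200, lr=1e-3):
    """Minimize sum_ij a_ij * S(W^T Xi W, W^T Xj W) over orthonormal W (simplified SPD-ML)."""
    n = mats[0].shape[0]
    W = np.eye(n)[:, :m]                          # truncated identity initialization
    p = len(mats)
    for _ in range(n_iters):
        G = np.zeros_like(W)                      # Euclidean Jacobian of the objective
        for i in range(p):
            for j in range(p):
                if i != j and A[i, j] != 0.0:
                    G += A[i, j] * stein_jacobian(W, mats[i], mats[j])
        G -= W @ (W.T @ G)                        # project onto the tangent space at W
        W, _ = np.linalg.qr(W - lr * G)           # descent step followed by a QR retraction
    return W
```

In practice, one would replace the fixed step size by the geodesic line search and conjugate directions of Algorithm 1, or hand the projected gradient to an off-the-shelf Riemannian optimization toolbox.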

4.2 Designing the Affinity Matrix

Different criteria can be employed to build the affinity matrix $\mathbf{A}$. In this work, we focus on classification problems on $\mathcal{S}_{++}^{n}$ and therefore exploit class labels to construct $\mathbf{A}$. Note, however, that our framework is general and also applies to unsupervised or semi-supervised settings. For example, in an unsupervised scenario, $\mathbf{A}$ could be built from pairwise similarities (distances) on $\mathcal{S}_{++}^{n}$. Solving (7) could then be understood as finding a mapping where nearby data pairs on the original manifold $\mathcal{S}_{++}^{n}$ remain close in the induced manifold $\mathcal{S}_{++}^{m}$.

Let us assume that each point $\mathbf{X}_i$ belongs to one of $C$ possible classes and denote its class label by $y_i$. Our aim is to define an affinity matrix that encodes the notions of intra-class and inter-class distances, and thus, when solving (7), yields a mapping that minimizes the intra-class distances while simultaneously maximizing the inter-class distances (i.e., a discriminative mapping).

More specifically, let $\{(\mathbf{X}_i, y_i)\}_{i=1}^{p}$ be the set of labeled training points, where $\mathbf{X}_i \in \mathcal{S}_{++}^{n}$ and $y_i \in \{1, \ldots, C\}$. The affinity of the training data on $\mathcal{S}_{++}^{n}$ can be modeled by building a within-class similarity graph $\mathbf{G}_w$ and a between-class similarity graph $\mathbf{G}_b$. In particular, we define $\mathbf{G}_w$ and $\mathbf{G}_b$ as binary matrices constructed from nearest neighbor graphs. This yields

$$G_w(i, j) = \begin{cases} 1, & \text{if } \mathbf{X}_i \in N_w(\mathbf{X}_j) \text{ or } \mathbf{X}_j \in N_w(\mathbf{X}_i) \\ 0, & \text{otherwise,} \end{cases} \qquad (9)$$

$$G_b(i, j) = \begin{cases} 1, & \text{if } \mathbf{X}_i \in N_b(\mathbf{X}_j) \text{ or } \mathbf{X}_j \in N_b(\mathbf{X}_i) \\ 0, & \text{otherwise,} \end{cases} \qquad (10)$$

where $N_w(\mathbf{X}_i)$ is the set of $\nu_w$ nearest neighbors of $\mathbf{X}_i$ that share the same label as $y_i$, and $N_b(\mathbf{X}_i)$ contains the $\nu_b$ nearest neighbors of $\mathbf{X}_i$ having different labels. The affinity matrix $\mathbf{A}$ is then defined as

$$\mathbf{A} = \mathbf{G}_w - \mathbf{G}_b, \qquad (11)$$

which resembles the Maximum Margin Criterion (MMC) of [12]. In practice, we set $\nu_w$ to the minimum number of points in each class and, to balance the influence of $\mathbf{G}_w$ and $\mathbf{G}_b$, choose $\nu_b$ by cross-validation. We analyze the influence of $\nu_b$ in appendix 0.C.
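A possible implementation of Eqs. (9)-(11) is sketched below (our code; the function and parameter names are assumptions). It builds the binary within-class and between-class nearest-neighbor graphs from a precomputed matrix of pairwise manifold distances (e.g., $\delta_R$ or $\delta_S$) and returns the affinity matrix of Eq. (11).

```python
import numpy as np

def build_affinity(D, labels, nu_w, nu_b):
    """D: (p, p) pairwise distances on the original manifold; labels: (p,) integer array.
    Returns A = G_w - G_b as in Eq. (11)."""
    p = D.shape[0]
    Gw = np.zeros((p, p))
    Gb = np.zeros((p, p))
    for i in range(p):
        same = np.where(labels == labels[i])[0]
        same = same[same != i]
        diff = np.where(labels != labels[i])[0]
        # nu_w nearest neighbours with the same label -> within-class graph (Eq. 9)
        for j in same[np.argsort(D[i, same])][:nu_w]:
            Gw[i, j] = Gw[j, i] = 1.0
        # nu_b nearest neighbours with a different label -> between-class graph (Eq. 10)
        for j in diff[np.argsort(D[i, diff])][:nu_b]:
            Gb[i, j] = Gb[j, i] = 1.0
    return Gw - Gb
```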

4.3 Discussion in Relation to Region Covariance Descriptors

In our experiments, we exploited Region Covariance Matrices (RCMs) [21] as image descriptors. Here, we discuss some interesting properties of our algorithm when applied to these specific SPD matrices.

There are several reasons why RCMs are attractive to represent images and videos. First, RCMs provide a natural way to fuse various feature types. Second, they help reduce the impact of noisy samples in a region via their inherent averaging operation. Third, RCMs are independent of the size of the region, and can therefore easily be utilized to compare regions of different sizes. Finally, RCMs can be efficiently computed using integral images [22, 19].

Let $I$ be an image, and $\mathcal{O} = \{\mathbf{o}_1, \ldots, \mathbf{o}_N\}$, $\mathbf{o}_i \in \mathbb{R}^{n}$, be a set of $N$ observations extracted from $I$, e.g., $\mathbf{o}_i$ concatenates intensity values, gradients along the horizontal and vertical directions, filter responses, etc. for image pixel $i$. Let $\boldsymbol{\mu} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{o}_i$ be the mean value of the observations. Then image $I$ can be represented by the $n \times n$ RCM

$$\mathbf{C}_I = \frac{1}{N-1}\sum_{i=1}^{N}\left(\mathbf{o}_i - \boldsymbol{\mu}\right)\left(\mathbf{o}_i - \boldsymbol{\mu}\right)^T. \qquad (12)$$

To have a valid RCM, we need $N \geq n + 1$, otherwise $\mathbf{C}_I$ would have zero eigenvalues, which would make both $\delta_R$ and $\delta_S$ indefinite.

After learning the projection $\mathbf{W}$, the low-dimensional representation of image $I$ is given by $\mathbf{W}^T\mathbf{C}_I\mathbf{W}$. This reveals two interesting properties of our learning scheme. 1) The resulting representation can also be thought of as an RCM with $\{\mathbf{W}^T\mathbf{o}_i\}$ as a set of low-dimensional observations. Hence, in our framework, we can create a valid manifold with only $m + 1$ observations instead of at least $n + 1$ in the original input space. This is not the case for other algorithms, which require having matrices on $\mathcal{S}_{++}^{n}$ as input. In appendix 0.C, we study the influence of the number of observations on recognition accuracy. 2) Applying $\mathbf{W}$ directly to the set of observations reduces the computation time of creating the final RCM on $\mathcal{S}_{++}^{m}$. This is due to the fact that the computational complexity of computing an RCM is quadratic in the dimensionality of the features.
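Property 1) above follows from the bilinearity of the covariance and is easy to verify numerically. A small sketch (ours, with random stand-ins for image features):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, N = 155, 20, 500                     # feature dim, target dim, number of observations
O = rng.standard_normal((N, n))            # rows are observations o_i (e.g., SIFT + colour)
W = np.linalg.qr(rng.standard_normal((n, m)))[0]

C = np.cov(O, rowvar=False)                # RCM of Eq. (12), an n x n SPD matrix (since N >= n+1)
C_low_a = W.T @ C @ W                      # project the high-dimensional RCM
C_low_b = np.cov(O @ W, rowvar=False)      # RCM built directly from the projected observations

print(np.allclose(C_low_a, C_low_b))       # True: both yield the same m x m SPD matrix
```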

5 Empirical Evaluation

In this section, we study the effectiveness of our SPD manifold learning approach. In particular, as mentioned earlier, we focus on classification and present results on two image datasets and one motion capture dataset. In all our experiments, the dimensionality of the low-dimensional SPD manifold was determined by cross-validation. Below, we first briefly describe the different classifiers used in these experiments, and then discuss our results.

Classification algorithms:

The SPD-ML algorithm introduced in Section 4 allows us to obtain a low-dimensional, more discriminative SPD manifold from a high-dimensional one. Many different classifiers can then be used to categorize the data on this new manifold. In our experiments, we make use of two such classifiers. First, we employ a simple nearest neighbor classifier based on the manifold metric (either AIRM or Stein). This simple classifier clearly evidences the benefits of mapping the original Riemannian structure to a lower-dimensional one. Second, we make use of the Riemannian sparse coding algorithm of [6] (RSR). This algorithm exploits the notion of sparse coding to represent a query SPD matrix using a codebook of SPD matrices. In all our experiments, we formed the codebook purely from the training data, i.e., no dictionary learning was employed. Note that RSR relies on a kernel derived from the Stein metric. We therefore only applied it to the Stein metric-based version of our algorithm. We refer to the different algorithms evaluated in our experiments as:

  • NN-Stein: Stein metric-based Nearest Neighbor classifier.

  • NN-AIRM: AIRM-based Nearest Neighbor classifier.

  • NN-Stein-ML: Stein metric-based Nearest Neighbor classifier on the low-dimensional SPD manifold obtained with our approach.

  • NN-AIRM-ML: AIRM-based Nearest Neighbor classifier on the low-dimensional SPD manifold obtained with our approach.

  • RSR: Riemannian Sparse Representation [6].

  • RSR-ML: Riemannian Sparse Representation on the low-dimensional SPD manifold obtained with our approach.
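As a concrete illustration of the NN-Stein and NN-Stein-ML baselines above, a minimal nearest-neighbor classifier on the (optionally projected) SPD manifold might look as follows (our sketch, not the released Matlab implementation):

```python
import numpy as np

def stein_div(X, Y):
    _, l1 = np.linalg.slogdet(0.5 * (X + Y))
    _, l2 = np.linalg.slogdet(X)
    _, l3 = np.linalg.slogdet(Y)
    return l1 - 0.5 * (l2 + l3)

def nn_stein_predict(train_mats, train_labels, test_mats, W=None):
    """1-NN on the SPD manifold with the Stein divergence.
    If a learned projection W is given, classify the low-dimensional RCMs W^T X W instead."""
    if W is not None:
        train_mats = [W.T @ X @ W for X in train_mats]
        test_mats = [W.T @ X @ W for X in test_mats]
    preds = []
    for Xq in test_mats:
        dists = [stein_div(Xq, Xt) for Xt in train_mats]
        preds.append(train_labels[int(np.argmin(dists))])
    return preds
```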

In addition to these methods, we also provide the results of the PLS-based Covariance Discriminant Learning (CDL) technique of [23], as well as of the state-of-the-art baselines of each specific dataset.

5.1 Material Categorization

For the task of material categorization, we used the UIUC dataset [13]. The UIUC material dataset contains 18 subcategories of materials taken in the wild from four general categories (see Fig. 2): bark, fabric, construction materials, and outer coat of animals. Each subcategory has 12 images taken at various scales. Following standard practice, half of the images from each subcategory were randomly chosen as training data, and the rest were used for testing. We report the average accuracy over 10 different random partitions.

Small RCMs, such as those used for texture recognition in [6], are hopeless here due to the complexity of the task. Recently, SIFT features [14] have been shown to be robust and discriminative for material classification [13]. Therefore, we constructed RCMs of size $155 \times 155$ using 128-dimensional SIFT features (from grayscale images) and 27-dimensional color descriptors. To this end, we resized all the images to a common size and computed dense SIFT descriptors on a regular grid with 4 pixels spacing. The color descriptors were obtained by simply stacking colors from patches centered at the grid points. Each grid point therefore yields one 155-dimensional observation in Eq. 12. The parameters for this experiment were set to $\nu_w$ equal to the minimum number of samples in a class, and $\nu_b$ obtained by 5-fold cross-validation.

Table 1 compares the performance of our different algorithms with that of the state-of-the-art method on this dataset (SD) [13]. The results show that appropriate manifold-based methods (i.e., RSR and CDL) with the original RCMs already outperform SD, while NN on the same manifold yields worse performance. However, after applying our learning algorithm, NN not only outperforms SD significantly, but also outperforms both CDL and RSR. RSR on the learned SPD manifold (RSR-ML) further boosts the accuracy.

To further evidence the importance of geometry-aware dimensionality reduction, we replaced our low-dimensional RCMs with RCMs obtained by applying PCA directly on the 155-dimensional features. The accuracy of the AIRM-based NN classifier on these RCMs (best performance over different PCA dimensions) is better than that of NN-AIRM in the original feature space, but significantly lower than that of our NN-AIRM-ML approach. Finally, note that performing NN-AIRM on the original data required 490s on a 3GHz machine with Matlab. After our dimensionality reduction scheme, this only took 9.7s.

Figure 2: Samples from the UIUC material dataset.
Table 1: Mean recognition accuracies with standard deviations on the UIUC material dataset [13], comparing SD [13], CDL [23], NN-Stein, NN-Stein-ML, NN-AIRM, NN-AIRM-ML, RSR [6], and RSR-ML.

Table 2: Recognition accuracies on the HDM05-MOCAP dataset [15], comparing CDL [23], NN-Stein, NN-Stein-ML, NN-AIRM, NN-AIRM-ML, RSR [6], and RSR-ML.
Figure 3: Kicking action from the HDM05 motion capture sequences database [15].

5.2 Action Recognition from Motion Capture Data

As a second experiment, we tackled the problem of human action recognition from motion capture sequences using the HDM05 database [15]. This database contains the following 14 actions: ‘clap above head’, ‘deposit floor’, ‘elbow to knee’, ‘grab high’, ‘hop both legs’, ‘jog’, ‘kick forward’, ‘lie down floor’, ‘rotate both arms backward’, ‘sit down chair’, ‘sneak’, ‘squat’, ‘stand up lie’ and ‘throw basketball’ (see Fig. 3 for an example). The dataset provides the 3D locations of 31 joints over time acquired at the speed of 120 frames per second. We describe an action of a 31-joint skeleton observed over $T$ frames by its joint covariance descriptor [8], which is an SPD matrix of size $93 \times 93$. This matrix is computed as in Eq. 12 by taking as observation the 93-dimensional vector concatenating the 3D coordinates of the 31 joints in frame $t$.
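For concreteness, the joint covariance descriptor can be computed as sketched below (our code; the array shape is an assumption): the 3D coordinates of the 31 joints of each frame are stacked into a 93-dimensional observation, and the covariance of Eq. (12) is taken over frames.

```python
import numpy as np

def joint_covariance_descriptor(joints):
    """joints: array of shape (T, 31, 3) with the 3D joint locations over T frames.
    Returns the 93 x 93 joint covariance descriptor of [8]."""
    T = joints.shape[0]
    obs = joints.reshape(T, -1)           # each frame becomes a 93-dimensional observation
    return np.cov(obs, rowvar=False)      # Eq. (12); SPD only if T >= 94, otherwise regularize
```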

In our experiments, we used 2 subjects for training (i.e., ’bd’ and ’mm’) and the remaining 3 subjects for testing (i.e., ’bk’, ’dg’ and ’tr’). Note that this differs from the setup in [8], where 3 subjects were used for training and 2 for testing; with the setup of [8], all our algorithms resulted in roughly the same accuracy. Our split resulted in 118 training and 188 test sequences for this experiment. The parameters of our method were set to $\nu_w$ equal to the minimum number of samples in one class, and $\nu_b$ by cross-validation.

We report the performance of the different methods on this dataset in Table 2. Again we can see that the accuracies of NN and RSR are significantly improved by our learning algorithm, and that our RSR-ML approach achieves the best accuracy. As on the UIUC dataset, we also evaluated the performance of RCMs built by reducing the dimensionality of the features using PCA. With an AIRM-based NN classifier (best performance over different PCA dimensions), this again slightly outperforms NN-AIRM on the original RCMs, but remains clearly inferior to the performance of our NN-AIRM-ML algorithm.

5.3 Face Recognition

For face recognition, we used the ‘b’ subset of the FERET dataset [18], which contains 1800 images from 200 subjects. Following common practice [6], we used cropped and downsampled images. Fig. 4 depicts samples from the dataset.

We performed six experiments on this dataset. In all these experiments, the training data was composed of frontal faces with expression and illumination variations (i.e., images marked as ‘ba’, ‘bj’ and ‘bk’). The six experiments correspond to using six different non-frontal viewing angles as test data (i.e., images marked as ‘bc’,‘bd’, ‘be’, ‘bf’, ‘bg’ and ‘bh’, respectively).

To represent a face image, we block-diagonally concatenated three different RCMs: one obtained from the entire image, one from the left half and one from the right half. This resulted in a larger block-diagonal RCM for each image. Each RCM was computed from features of the form

$$\mathbf{o}_{x,y} = \left[\, I(x, y),\; |G_{0,0}(x, y)|,\; |G_{0,1}(x, y)|,\; \ldots,\; |G_{4,7}(x, y)| \,\right],$$

where $I(x, y)$ is the intensity value at position $(x, y)$, $G_{u,v}(x, y)$ is the response of a 2D Gabor wavelet [11] centered at $(x, y)$ with scale $u$ and orientation $v$, and $|\cdot|$ denotes the magnitude of a complex value. Here, following [6], we generated 40 Gabor filters at 8 orientations and 5 scales.

In addition to our algorithms, we evaluated the state-of-the-art Sparse Representation based Classification (SRC) [24] and its Gabor-based extension (GSRC) [26]. For SRC, we reduced the dimensionality of the data using PCA and chose the dimensionality that gave the best performance. For GSRC, we followed the recommendations of the authors to set the downsampling factor in the Gabor filters, but found that better results could be obtained with a larger factor than the recommended one, and thus report these better results. The parameters for our approach were set to $\nu_w$ equal to the minimum number of samples in one class, and $\nu_b$ by cross-validation.

Table 3 reports the performance of the different methods. Note that both CDL and RSR outperform the Euclidean face recognition systems SRC and GSRC. Note also that even a simple Stein-based NN on RCMs performs roughly on par with GSRC and better than SRC. More importantly, the representation learned with our SPD-ML algorithm yields significant accuracy gains when used with either NN or RSR for all viewing angles, with substantial improvements for some poses.

Figure 4: Samples from the FERET dataset [18]: (a) ba, (b) bj, (c) bk, (d) bc, (e) bd, (f) be, (g) bf, (h) bg, (i) bh.

Table 3: Recognition accuracies for the FERET face dataset [18] on the ‘bc’, ‘bd’, ‘be’, ‘bf’, ‘bg’ and ‘bh’ probe sets, together with the average accuracy, comparing SRC [24], GSRC [26], CDL [23], NN-Stein, NN-Stein-ML, NN-AIRM, NN-AIRM-ML, RSR [6], and RSR-ML.

6 Conclusions and Future Work

We have introduced a learning algorithm to map a high-dimensional SPD manifold into a lower-dimensional, more discriminative one. To this end, we have exploited a graph embedding formalism with an affinity matrix that encodes intra-class and inter-class distances, and where the similarity between two SPD matrices is defined via either the Stein divergence or the AIRM. Thanks to their invariance to affine transformations, these metrics have allowed us to model the mapping from the high-dimensional manifold to the low-dimensional one with an orthonormal projection. Learning could then be expressed as the solution to an optimization problem on a Grassmann manifold. Our experimental evaluation has demonstrated that the resulting low-dimensional SPD matrices lead to state-of-the-art recognition accuracies on several challenging datasets.

In the future, we plan to extend our learning scheme to the unsupervised and semi-supervised scenarios. Finally, we believe that this work is a first step towards showing the importance of preserving the Riemannian structure of the data when performing dimensionality reduction, and thus going from one manifold to another manifold of the same type. We therefore intend to study how this framework can be applied to other types of Riemannian manifolds.

Appendix 0.A Proof of Length Equivalence

Here, we prove Theorem 3.1 from Section 3, i.e., the equivalence between the length of any given curve under the geodesic distance $\delta_R$ and the Stein metric $\delta_S$ up to a scale of $2\sqrt{2}$. The proof of this theorem follows several steps. We start with the definition of curve length and intrinsic metric. Without any assumption on differentiability, let $(M, d)$ be a metric space. A curve in $M$ is a continuous function $\gamma : [0, 1] \to M$ and joins the starting point $\gamma(0) = x$ to the end point $\gamma(1) = y$.

Definition 4

The length of a curve $\gamma$ is the supremum of $\sum_{i} d\!\left(\gamma(t_i), \gamma(t_{i+1})\right)$ over all possible partitions $\{t_i\}$, where $0 = t_0 < t_1 < \cdots < t_k = 1$.

Definition 5

The intrinsic metric $\hat{d}(x, y)$ on $M$ is defined as the infimum of the lengths of all paths from $x$ to $y$.

Theorem 0.A.1 ([7])

If the intrinsic metrics induced by two metrics $d_1$ and $d_2$ are identical up to a scale $\kappa$, then the length of any given curve is the same under both metrics up to $\kappa$.

Theorem 0.A.2 ([7])

If $d_1$ and $d_2$ are two metrics defined on a space $M$ such that

$$\lim_{d_1(x, y) \to 0} \frac{d_2(x, y)}{d_1(x, y)} = 1 \qquad (13)$$

uniformly (with respect to $x$ and $y$), then their intrinsic metrics are identical.

Therefore, here, we need to study the behavior of the ratio $\delta_R(\mathbf{X}, \mathbf{Y}) / \delta_S(\mathbf{X}, \mathbf{Y})$ as the two points approach each other to prove our theorem on curve length equivalence.

Proof

Let us first note that, for an affine invariant metric $\delta$ on $\mathcal{S}_{++}^{n}$,

$$\delta(\mathbf{X}, \mathbf{Y}) = \delta\!\left(\mathbf{X}^{-1/2}\mathbf{X}\mathbf{X}^{-1/2}, \mathbf{X}^{-1/2}\mathbf{Y}\mathbf{X}^{-1/2}\right) = \delta\!\left(\mathbf{I}_n, \mathbf{X}^{-1/2}\mathbf{Y}\mathbf{X}^{-1/2}\right),$$

where $\mathbf{X}, \mathbf{Y} \in \mathcal{S}_{++}^{n}$ and $\mathbf{X}^{-1/2}$ is the inverse of the (unique) SPD square root of $\mathbf{X}$. Similarly, we can decompose $\mathbf{X}^{-1/2}\mathbf{Y}\mathbf{X}^{-1/2}$ as $\mathbf{U}\mathbf{D}\mathbf{U}^T$, with $\mathbf{U}$ orthogonal and $\mathbf{D}$ diagonal, which yields

$$\delta(\mathbf{X}, \mathbf{Y}) = \delta\!\left(\mathbf{I}_n, \mathbf{U}\mathbf{D}\mathbf{U}^T\right) = \delta\!\left(\mathbf{U}^T\mathbf{U}, \mathbf{D}\right) = \delta\!\left(\mathbf{I}_n, \mathbf{D}\right).$$

Since all our matrices are positive definite, $\mathbf{D}$ is a diagonal matrix with strictly positive values on its diagonal, and can be written as

$$\mathbf{D} = \operatorname{Diag}\!\left(e^{t\lambda_1}, \ldots, e^{t\lambda_n}\right),$$

with $t \in \mathbb{R}$ and $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$. This definition can also be motivated by noting that the tangent vectors at $\mathbf{I}_n$ are symmetric matrices of the form $t\,\mathbf{U}\operatorname{Diag}(\lambda_1, \ldots, \lambda_n)\mathbf{U}^T$. Applying the exponential map yields points on the manifold of the form $\mathbf{U}\operatorname{Diag}\!\left(e^{t\lambda_1}, \ldots, e^{t\lambda_n}\right)\mathbf{U}^T$. As mentioned before, with an affine invariant metric, the dependency on $\mathbf{X}$ and $\mathbf{U}$ can be dropped.

The previous discussion implies that we just need to study the behavior of the Stein metric around $\mathbf{I}_n$ using a diagonal matrix $\mathbf{D}$ to draw any conclusion. We note that $\delta_R(\mathbf{I}_n, \mathbf{D}) \to 0$ iff $t \to 0$. Therefore, given the definitions of $\delta_R$ and $\delta_S$ from Section 3 of the paper, we have

$$\lim_{t \to 0} \frac{\delta_S^2(\mathbf{I}_n, \mathbf{D})}{\delta_R^2(\mathbf{I}_n, \mathbf{D})} = \lim_{t \to 0} \frac{\sum_{i=1}^{n}\left[\ln\!\left(\frac{1 + e^{t\lambda_i}}{2}\right) - \frac{t\lambda_i}{2}\right]}{t^2 \sum_{i=1}^{n} \lambda_i^2} \qquad (14)$$

$$= \lim_{t \to 0} \frac{\sum_{i=1}^{n} \frac{\lambda_i^2\, e^{t\lambda_i}}{\left(1 + e^{t\lambda_i}\right)^2}}{2 \sum_{i=1}^{n} \lambda_i^2} = \frac{1}{8}, \qquad (15)$$

where L'Hôpital's rule was used twice from (14) to (15) since the limit in (14) is indeterminate. Therefore,

$$\lim_{\delta_R(\mathbf{X}, \mathbf{Y}) \to 0} \frac{\delta_R(\mathbf{X}, \mathbf{Y})}{\delta_S(\mathbf{X}, \mathbf{Y})} = 2\sqrt{2},$$

which concludes the proof.

Appendix 0.B Conjugate Gradient on Grassmann Manifolds

In our formulation, we model the projection $\mathbf{W}$ as a point on a Grassmann manifold $\mathcal{G}(m, n)$. The Grassmann manifold $\mathcal{G}(m, n)$ consists of the set of all linear $m$-dimensional subspaces of $\mathbb{R}^{n}$. In particular, this lets us handle constraints of the form $\mathbf{W}^T\mathbf{W} = \mathbf{I}_m$. Learning the projection then boils down to solving a non-linear optimization problem on the Grassmann manifold. Here, we employ a conjugate gradient (CG) method on the manifold, which requires some notions of differential geometry reviewed below.

In differential geometry, the shortest path between two points on a manifold is a curve called a geodesic. The tangent space at a point on a manifold is a vector space that consists of the tangent vectors of all possible curves passing through this point. Unlike flat spaces, on a manifold one cannot transport a tangent vector from one point to another point by simple translation. To get a better intuition, take the case where the manifold is a sphere, and consider two tangent spaces, one located at the pole and one at a point on the equator. Obviously the tangent vectors at the pole do not belong to the tangent space at the equator. Therefore, simple vector translation is not sufficient. As illustrated in Fig. 5, transporting a tangent vector $\boldsymbol{\Delta}$ from a point $\mathbf{W}_1$ to another point $\mathbf{W}_2$ on the manifold requires subtracting the normal component at $\mathbf{W}_2$ for the resulting vector to be a tangent vector. Such a transfer of a tangent vector is called parallel transport. Parallel transport is required by the CG method to compute the new descent direction by combining the gradient direction at the current and previous solutions.

Figure 5: Parallel transport of a tangent vector from a point to another point on the manifold.

On a Grassmann manifold, the above-mentioned operations have efficient numerical forms and can thus be used to perform optimization on the manifold. CG on a Grassmann manifold can be summarized by the following steps:

  • Compute the gradient $\nabla_{\mathbf{W}} J$ of the objective function $J$ on the manifold at the current solution $\mathbf{W}$ using

    $$\nabla_{\mathbf{W}} J = \left(\mathbf{I}_n - \mathbf{W}\mathbf{W}^T\right) D_{\mathbf{W}} J. \qquad (16)$$

  • Determine the search direction $\mathbf{H}$ by parallel transporting the previous search direction and combining it with $-\nabla_{\mathbf{W}} J$.

  • Perform a line search along the geodesic at $\mathbf{W}$ in the direction $\mathbf{H}$. On the Grassmann manifold, the geodesic going from point $\mathbf{W}$ in direction $\mathbf{H}$ can be represented by the Geodesic Equation [1]

    $$\mathbf{W}(t) = \mathbf{W}\mathbf{V}\cos(\boldsymbol{\Sigma} t)\mathbf{V}^T + \mathbf{U}\sin(\boldsymbol{\Sigma} t)\mathbf{V}^T, \qquad (17)$$

    where $t$ is the parameter indicating the location along the geodesic, and $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$ is the compact singular value decomposition of $\mathbf{H}$.

These steps are repeated until convergence to a local minimum, or until a maximum number of iterations is reached.
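Eqs. (16) and (17) translate directly into code. The sketch below (ours) implements the tangent-space projection, the geodesic step, and the corresponding parallel transport formula from [1], assuming W is orthonormal and H is a tangent vector at W (i.e., $\mathbf{W}^T\mathbf{H} = \mathbf{0}$).

```python
import numpy as np

def grassmann_grad(W, D):
    """Project the Euclidean Jacobian D onto the tangent space at W (Eq. 16)."""
    return D - W @ (W.T @ D)

def grassmann_geodesic(W, H, t):
    """Point reached from W along the geodesic with direction H after time t (Eq. 17)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)     # compact SVD: H = U diag(s) Vt
    return (W @ Vt.T * np.cos(s * t) + U * np.sin(s * t)) @ Vt

def parallel_transport(W, H, Delta, t):
    """Parallel transport of the tangent vector Delta along the geodesic defined by (W, H) [1]."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    M = (-W @ Vt.T * np.sin(s * t) + U * np.cos(s * t)) @ U.T
    return M @ Delta + (Delta - U @ (U.T @ Delta))
```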

Appendix 0.C Additional Experiments

0.c.1 Parameter Sensitivity

In all our experiments, the parameters of our approach were set in a principled manner (i.e., $\nu_w$ as the minimum number of samples in one class, and $\nu_b$ by cross-validation). In this section, we nonetheless study the influence of the number of nearest neighbors from different classes ($\nu_b$) on the overall performance. To this end, we employed the UIUC material dataset and report the accuracy of our NN-Stein-ML method when varying this parameter and fixing $\nu_w$ to the value reported in Section 5. Fig. 6 depicts the recognition accuracy for a range of values of $\nu_b$. Note that for small values of $\nu_b$, which is equivalent to mainly considering the intra-class discrimination, the performance drops. For large values of $\nu_b$, which make the inter-class discrimination dominant, the performance drops even further. The maximum performance is reached for an intermediate value of $\nu_b$, which again shows that the balance between the intra-class and inter-class terms is important. Note that our cross-validation procedure led to a value of $\nu_b$ that is not the optimal one on the test data, but still yields good accuracy.

Figure 6: Accuracy on the UIUC material dataset for varying values of $\nu_b$.

0.c.2 Influence of the Number of Observations

Finally, as discussed in Section 4.3, we studied the sensitivity of our learning method to the number of observations used to build the RCMs. To this end, we employed the UIUC material dataset. For the training images, where computational cost is unimportant, we generated RCMs using all possible observations (our setup provided us with 9600 observations per image). For the test RCMs, we reduced the number of observations on an octave basis, i.e., we repeatedly downsampled the number of observations by a factor of two.

Figure 7: Sensitivity of different algorithms to the number of observations used to create RCMs.

Fig. 7 depicts the performance of CDL, as well as of NN classifiers with both the Stein metric and the AIRM, with and without our learning scheme. The point where the number of observations matches the size of the RCM (i.e., the minimum number of observations needed to have a valid SPD matrix) is marked by a vertical dashed line. On the left side of this line, the number of observations is less than $n + 1$. Therefore, for CDL, NN-Stein and NN-AIRM, a small regularizer of the form $\epsilon\,\mathbf{I}_n$ has to be added to the RCMs to make them positive definite. Note that no such regularizer was necessary when using our approach. From Fig. 7, we can see that all algorithms have a stable performance when the number of observations is large enough. When reducing the number of observations below this point, the performance of CDL, NN-Stein and NN-AIRM drops by 17%, 19% and 20%, respectively. In contrast, with our learning algorithm, the drop in performance is less than 7%.

References

  • [1] Absil, P.A., Mahony, R., Sepulchre, R.: Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, USA (2008)
  • [2] Caseiro, R., Henriques, J., Martins, P., Batista, J.: Semi-intrinsic mean shift on Riemannian manifolds. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) Proc. European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol. 7572, pp. 342–355. Springer (2012)
  • [3] Cherian, A., Sra, S., Banerjee, A., Papanikolopoulos, N.: Jensen-Bregman LogDet divergence with application to efficient similarity search for covariance matrices. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(9), 2161–2174 (2013)
  • [4] Fletcher, P.T., Lu, C., Pizer, S.M., Joshi, S.: Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Transactions on Medical Imaging 23(8), 995–1005 (2004)
  • [5] Goh, A., Vidal, R.: Clustering and dimensionality reduction on Riemannian manifolds. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1–7. IEEE (2008)
  • [6] Harandi, M.T., Sanderson, C., Hartley, R., Lovell, B.C.: Sparse coding and dictionary learning for symmetric positive definite matrices: A kernel approach. In: Proc. European Conference on Computer Vision (ECCV), pp. 216–229. Springer (2012)
  • [7] Hartley, R., Trumpf, J., Dai, Y., Li, H.: Rotation averaging. Int. Journal of Computer Vision (IJCV) (2013)
  • [8] Hussein, M.E., Torki, M., Gowayyed, M.A., El-Saban, M.: Human action recognition using a temporal hierarchy of covariance descriptors on 3D joint locations. In: Proc. Int. Joint Conference on Artificial Intelligence (IJCAI) (2013)
  • [9] Jayasumana, S., Hartley, R., Salzmann, M., Li, H., Harandi, M.: Kernel methods on the Riemannian manifold of symmetric positive definite matrices. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2013)
  • [10] Jung, S., Dryden, I.L., Marron, J.: Analysis of principal nested spheres. Biometrika 99(3), 551–568 (2012)
  • [11] Lee, T.S.: Image representation using 2d Gabor wavelets. IEEE Transactions on Pattern Analysis and Machine Intelligence 18(10), 959–971 (1996)
  • [12] Li, H., Jiang, T., Zhang, K.: Efficient and robust feature extraction by maximum margin criterion. IEEE Transactions on Neural Networks 17(1), 157–165 (2006)
  • [13] Liao, Z., Rock, J., Wang, Y., Forsyth, D.: Non-parametric filtering for geometric detail extraction and material representation. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE (2013)
  • [14] Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. Journal of Computer Vision (IJCV) 60(2), 91–110 (2004)
  • [15] Müller, M., Röder, T., Clausen, M., Eberhardt, B., Krüger, B., Weber, A.: Documentation: Mocap database HDM05. Tech. Rep. CG-2007-2, Universität Bonn (2007)
  • [16] Pang, Y., Yuan, Y., Li, X.: Gabor-based region covariance matrices for face recognition. IEEE Transactions on Circuits and Systems for Video Technology 18(7), 989–993 (2008)
  • [17] Pennec, X., Fillard, P., Ayache, N.: A Riemannian framework for tensor computing. Int. Journal of Computer Vision (IJCV) 66(1), 41–66 (2006)
  • [18] Phillips, P.J., Moon, H., Rizvi, S.A., Rauss, P.J.: The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(10), 1090–1104 (2000)
  • [19] Sanin, A., Sanderson, C., Harandi, M., Lovell, B.: Spatio-temporal covariance descriptors for action and gesture recognition. In: IEEE Workshop on Applications of Computer Vision (WACV). pp. 103–110 (2013)
  • [20] Sra, S.: A new metric on the manifold of kernel matrices with application to matrix geometric means. In: Proc. Advances in Neural Information Processing Systems (NIPS). pp. 144–152 (2012)
  • [21] Tuzel, O., Porikli, F., Meer, P.: Region covariance: A fast descriptor for detection and classification. In: Proc. European Conference on Computer Vision (ECCV), pp. 589–600. Springer (2006)
  • [22] Tuzel, O., Porikli, F., Meer, P.: Pedestrian detection via classification on Riemannian manifolds. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(10), 1713–1727 (2008)
  • [23] Wang, R., Guo, H., Davis, L.S., Dai, Q.: Covariance discriminative learning: A natural and efficient approach to image set classification. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2496–2503. IEEE (2012)
  • [24] Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2), 210–227 (2009)
  • [25] Yan, S., Xu, D., Zhang, B., Zhang, H.J., Yang, Q., Lin, S.: Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(1), 40–51 (2007)
  • [26] Yang, M., Zhang, L.: Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary. In: Proc. European Conference on Computer Vision (ECCV), pp. 448–461. Springer (2010)