Riemannian Dictionary Learning and Sparse Coding for Positive Definite Matrices

07/10/2015 ∙ by Anoop Cherian, et al. ∙ MIT ∙ Australian National University

Data encoded as symmetric positive definite (SPD) matrices frequently arise in many areas of computer vision and machine learning. While these matrices form an open subset of the Euclidean space of symmetric matrices, viewing them through the lens of non-Euclidean Riemannian geometry often turns out to be better suited in capturing several desirable data properties. However, formulating classical machine learning algorithms within such a geometry is often non-trivial and computationally expensive. Inspired by the great success of dictionary learning and sparse coding for vector-valued data, our goal in this paper is to represent data in the form of SPD matrices as sparse conic combinations of SPD atoms from a learned dictionary via a Riemannian geometric approach. To that end, we formulate a novel Riemannian optimization objective for dictionary learning and sparse coding in which the representation loss is characterized via the affine invariant Riemannian metric. We also present a computationally simple algorithm for optimizing our model. Experiments on several computer vision datasets demonstrate superior classification and retrieval performance using our approach when compared to sparse coding via alternative non-Riemannian formulations.






I Introduction

Symmetric positive definite (SPD) matrices play an important role as data descriptors in several computer vision applications, for example in the form of region covariances [1]. Notable examples where such descriptors are used include object recognition [2], human detection and tracking [3], visual surveillance [4], 3D object recognition [5], among others. Compared with popular vector space descriptors, such as bag-of-words, Fisher vectors, etc., the second-order structure offered by covariance matrices is particularly appealing. For instance, covariances conveniently fuse multiple features into a compact form independent of the number of data points. By choosing appropriate features, this fusion can be made invariant to affine distortions [6], or robust to static image noise and illumination variations. Moreover, generating these descriptors is easy, for instance using integral image transforms [3].
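For concreteness, a region covariance descriptor can be sketched as below. This is a minimal illustration, not the implementation of [1] or [3]: the function name, the 5-dimensional feature choice (pixel coordinates, intensity, gradient magnitudes), and the small ridge term added for strict positive definiteness are all our own choices.

```python
import numpy as np

def region_covariance(features, eps=1e-6):
    """Covariance descriptor of a region given its per-pixel features.

    features: (N, d) array, one d-dimensional feature vector per pixel.
    A small multiple of the identity is added so the result is strictly
    positive definite even for (near-)degenerate regions.
    """
    F = np.asarray(features, dtype=float)
    C = np.cov(F, rowvar=False)          # (d, d) sample covariance
    return C + eps * np.eye(F.shape[1])

# Example: 5-dim features (x, y, intensity, |Ix|, |Iy|) on a random patch
rng = np.random.default_rng(0)
img = rng.random((16, 16))
gy, gx = np.gradient(img)
xs, ys = np.meshgrid(np.arange(16), np.arange(16))
feats = np.stack([xs.ravel(), ys.ravel(), img.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
C = region_covariance(feats)
```

The resulting 5 × 5 matrix is independent of the number of pixels in the region, which is the compactness property noted above.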

Fig. 1: An illustration of our Riemannian dictionary learning and sparse coding. Given the SPD manifold and learned SPD basis matrices (atoms) on the manifold, our sparse coding objective seeks a non-negative sparse linear combination of the atoms that is closest (in a geodesic sense) to the given input SPD matrix.

We focus on SPD matrices for dictionary learning with sparse coding (DLSC) – a powerful data representation tool in computer vision and machine learning [7] that enables state-of-the-art results for a variety of applications [8, 9].

Given a training set, the traditional (Euclidean) dictionary learning problem seeks an overcomplete set of basis vectors (the dictionary) that can be linearly combined to sparsely represent each input data point; finding a sparse representation given the dictionary is termed sparse coding. While sparse coding was originally conceived for Euclidean vectors, there have been recent extensions of the setup to other data geometries, such as histograms [10], Grassmannians [11], and SPD matrices [12, 13, 14, 15]. The focus of this paper is on dictionary learning and sparse coding of SPD matrix data using a novel mathematical model inspired by the geometry of SPD matrices.

SPD matrices form an open subset of the Euclidean space of symmetric matrices. This may lead one to believe that simply treating them as Euclidean vectors may suffice. However, applications using SPD matrices demand several specific properties. For example, in DT-MRI the positive semidefinite matrices are required to be at infinite distance from SPD matrices [16]. Using this and other geometrically inspired motivations, a variety of non-Euclidean distance functions have been proposed for SPD matrices; see e.g., [16, 17, 18]. The most widely used among these is the affine invariant Riemannian metric [18, 19], the only intrinsic Riemannian distance among them, corresponding to a geodesic distance on the manifold of SPD matrices.

In this paper, we study dictionary learning and sparse coding of SPD matrices in their natural Riemannian geometry. Compared to the Euclidean setup, their Riemannian geometry poses some unique challenges: (i) the manifold defined by this metric is (negatively) curved [20], and thus the geodesics are no longer straight lines; and (ii) in contrast to the Euclidean DLSC formulations, the objective function motivated by the Riemannian distance is not convex in either its sparse coding part or its dictionary learning part. We present some theoretical properties of our new DLSC formulation and mention a situation of purely theoretical interest where the formulation is convex. Figure 1 conceptually characterizes our goal.

The key contributions of this paper are as follows.

  • Formulation: We propose a new model to learn a dictionary of SPD atoms; each data point is represented as a nonnegative sparse linear combination of SPD atoms from this dictionary. The quality of the resulting representation is measured by the squared intrinsic Riemannian distance.

  • Optimization: The main challenge in using our formulation is its higher computational cost relative to Euclidean sparse coding. However, we describe a simple and effective approach for optimizing our objective function. Specifically, we propose a dictionary learning algorithm on SPD atoms via conjugate gradient descent on the product manifold generated by the SPD atoms in the dictionary.

A forerunner to this paper appeared in [21]. The current paper differs from our conference paper in the following major aspects: (i) we propose a novel dictionary learning formulation and an efficient solver for it; and (ii) we include extensive new experiments using our dictionary learning scheme and re-evaluate the entire experimental section under this new setup, including additional datasets and evaluation metrics.

To set the stage for presenting our contributions, we first review key tools and metrics from Riemannian geometry that we use to develop our ideas. Then we survey some recent methods suggested for sparse coding using alternative similarity metrics on SPD matrices. Throughout we work with real matrices; the extension to Hermitian positive definite matrices is straightforward. The space of d × d SPD matrices is denoted by S_{++}^d, the symmetric matrices by S^d, and the space of (real) invertible matrices by GL(d). By Log(X), for X ∈ S_{++}^d, we mean the principal matrix logarithm, and log det(X) denotes the scalar logarithm of the matrix determinant.

II Preliminaries

We provide below a brief overview of the Riemannian geometry of SPD matrices. A manifold M is a set of points endowed with a locally Euclidean structure. A tangent vector at a point P ∈ M is defined as the tangent to some curve in the manifold passing through P. The tangent space T_P M is the union of all such tangent vectors over all possible curves passing through P; the point P is termed the foot of this tangent space. The dimensionality of T_P M is the same as that of the manifold. It can be shown that the tangent space is isomorphic to a Euclidean space of the same dimension [22]; thus, tangent spaces provide a locally linear approximation to the manifold at their foot.

A manifold becomes a Riemannian manifold if its tangent spaces are endowed with a smoothly varying inner product. The Euclidean space of symmetric matrices, endowed with the classical inner product defined by the trace function (i.e., ⟨U, V⟩ = tr(UV) for two symmetric matrices U and V), is a Riemannian manifold. Recall that an SPD matrix has the property that all its eigenvalues are real and positive, and that the set of such matrices forms the interior of a convex self-dual cone. Since for d × d SPD matrices this cone is a subset of the d(d+1)/2-dimensional Euclidean space of symmetric matrices, the set of SPD matrices naturally forms a Riemannian manifold under the trace metric. However, under this metric the SPD manifold is not complete (a space is a complete metric space if all Cauchy sequences are convergent within that space). This is because the trace metric does not enclose all Cauchy sequences originating from the interior of the SPD cone [23].

A possible remedy is to change the geometry of the manifold such that positive semi-definite matrices (which form the closure of SPD matrices for Cauchy sequences) are at an infinite distance from points in the interior of the SPD cone. This can be achieved by resorting to the classical log-barrier function φ(X) = −log det(X), popular in the optimization community in the context of interior point methods [24]. The trace metric can be adapted to the geometry induced by this barrier function by incorporating its curvature through the Hessian operator of φ at a point X, evaluated along a pair of tangent directions. The Riemannian metric at X ∈ S_{++}^d for two tangent vectors U, V ∈ S^d is thus defined as:

⟨U, V⟩_X := tr(X^{-1} U X^{-1} V).   (1)
There are two fundamental operations that one needs for computations on Riemannian manifolds: (i) the exponential map Exp_P : S^d → S_{++}^d; and (ii) the logarithmic map Log_P = Exp_P^{-1} : S_{++}^d → S^d. The former projects a symmetric point on the tangent space onto the manifold (termed a retraction); the latter does the reverse. Note that these maps depend on the point P at which the tangent spaces are computed. In our analysis, we will be measuring distances assuming P to be the identity matrix, P = I_d, in which case we omit the subscript.

Note that the Riemannian metric provides a measure for computing distances on the manifold. Given two points on the manifold, there are infinitely many paths connecting them, of which the shortest path is termed the geodesic. It can be shown that the SPD manifold under the Riemannian metric in (1) is non-positively curved (a Hadamard manifold) and has a unique geodesic between every distinct pair of points [25, Chap. 12], [26, Chap. 6]. For X, Y ∈ S_{++}^d, there exists a closed form for this geodesic distance, given by

d_R(X, Y) := ||Log(X^{-1/2} Y X^{-1/2})||_F,   (2)

where Log is the principal matrix logarithm. It can be shown that this distance is invariant to affine transformations of the input matrices [19], i.e., d_R(AXA^T, AYA^T) = d_R(X, Y) for any invertible A, and thus it is commonly referred to as the affine invariant Riemannian metric (AIRM). In the sequel, we will use (2) to measure the distance between input SPD matrices and their sparse-coded representations obtained by combining dictionary atoms.
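The distance (2) is straightforward to compute through an eigendecomposition; the sketch below (function names ours) also checks the affine invariance property numerically:

```python
import numpy as np

def airm_dist(X, Y):
    """Geodesic (AIRM) distance d_R(X, Y) = ||Log(X^{-1/2} Y X^{-1/2})||_F.

    The eigenvalues of X^{-1/2} Y X^{-1/2} are the generalized eigenvalues
    of the pencil (Y, X); they are real and positive for SPD inputs.
    """
    w, V = np.linalg.eigh(X)
    Xis = V @ np.diag(w ** -0.5) @ V.T        # X^{-1/2}
    lam = np.linalg.eigvalsh(Xis @ Y @ Xis)
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(1)
A = rng.random((4, 4)); X = A @ A.T + np.eye(4)
B = rng.random((4, 4)); Y = B @ B.T + np.eye(4)
M = rng.random((4, 4)) + 2 * np.eye(4)        # invertible (almost surely)
d0 = airm_dist(X, Y)
d1 = airm_dist(M @ X @ M.T, M @ Y @ M.T)      # affine invariance: d1 == d0
```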

III Related Work

Dictionary learning and sparse coding of SPD matrices has received significant attention in the vision community due to the performance gains it brings to the respective applications [7, 12, 15]. Given a training dataset {X_1, …, X_N} of SPD matrices, the DLSC problem seeks a dictionary B of n basis atoms, such that each data point X_i can be approximated by a sparse linear combination of these atoms while minimizing a suitable loss function. Formally, the DLSC problem can be abstractly written as

min_{B, {α_i}} Σ_{i=1}^N L(X_i, B, α_i) + λ Sp(α_i),   (3)

where the loss function L measures the approximation quality obtained by using the "code" α_i, while λ regulates the impact of the sparsity penalty Sp.

As alluded to earlier, the manifold geometry hinders a straightforward extension of classical DLSC techniques (such as [27, 28]) to data points drawn from a manifold. Prior methods typically use surrogate similarity distances that bypass the need to operate within the intrinsic Riemannian geometry, e.g., (i) by adapting information geometric divergence measures such as the log-determinant divergence or the Stein divergence, (ii) by using extrinsic metrics such as the log-Euclidean metric, and (iii) by relying on the kernel trick to embed the SPD matrices into a suitable RKHS. We briefly review each of these schemes below.

Statistical measures: In [14] and [29], a DLSC framework is proposed that models the loss via the log-determinant (Burg) divergence; for two SPD matrices X, Y ∈ S_{++}^d, this divergence has the form D_B(X, Y) := tr(XY^{-1}) − log det(XY^{-1}) − d. Since this divergence acts as a base measure for the Wishart distribution [4] (a natural probability density on SPD matrices), a loss defined using it is statistically well-motivated. The sparse coding formulation using this loss reduces to a MaxDet optimization problem [14] and is solved using interior-point methods. Unsurprisingly, the method is often computationally demanding even for moderately sized covariances. Ignoring the specific manifold geometry of SPD matrices, one may directly extend the Euclidean DLSC schemes to the SPD setting. However, a naïve use of the Euclidean distance on SPD matrices is usually found inferior in performance. It is argued in [15] that approximating an SPD matrix as sparse conic combinations of positive semi-definite rank-one outer products of the Euclidean dictionary matrix atoms leads to improved performance under the Euclidean loss. However, the resulting dictionary learning subproblem is nonconvex, and the reconstruction quality is still measured using a Euclidean loss. Further, discarding the manifold geometry is often seen to produce inferior results compared to competitive methods [21].
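The log-determinant (Burg) divergence mentioned above has the standard form D(X, Y) = tr(XY^{-1}) − log det(XY^{-1}) − d; a minimal sketch (function name ours):

```python
import numpy as np

def burg_div(X, Y):
    """Log-determinant (Burg) divergence:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - d."""
    d = X.shape[0]
    XYinv = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(XYinv)
    return np.trace(XYinv) - logdet - d

rng = np.random.default_rng(2)
A = rng.random((3, 3)); X = A @ A.T + np.eye(3)
Y = X + np.eye(3)
d_same, d_diff = burg_div(X, X), burg_div(X, Y)
```

Note that, unlike (2), this divergence is asymmetric in its arguments.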

Differential geometric schemes: Among the computationally efficient variants of Riemannian metrics, one of the most popular is the log-Euclidean metric [16], defined for X, Y ∈ S_{++}^d as d_{LE}(X, Y) := ||Log(X) − Log(Y)||_F. The Log operator maps an SPD matrix isomorphically and diffeomorphically into the flat space of symmetric matrices, where distances are Euclidean. DLSC with the squared log-Euclidean metric has been proposed in the past [30] with promising results. A similar framework was suggested recently [31], in which a local coordinate system is defined on the tangent space at the given data matrix. While their formulation uses additional constraints that make the framework coordinate independent, their scheme restricts sparse coding to specific problem settings, such as an affine coordinate system.
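A minimal sketch of the log-Euclidean distance (function names ours), computing the matrix logarithm via eigendecomposition:

```python
import numpy as np

def spd_log(X):
    """Principal matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.log(w)) @ V.T

def le_dist(X, Y):
    """Log-Euclidean distance: ||Log(X) - Log(Y)||_F."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), 'fro')

# Log maps SPD matrices into the flat space of symmetric matrices:
D = np.diag([1.0, np.e])
dist = le_dist(D, np.eye(2))   # Log(D) = diag(0, 1), Log(I) = 0
```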

Kernelized schemes: In [12], a kernelized DLSC framework is presented for SPD matrices using the Stein divergence [17] for generating the underlying kernel function. For two SPD matrices X, Y ∈ S_{++}^d, the Stein divergence is defined as S(X, Y) := log det((X + Y)/2) − (1/2) log det(XY). As this divergence is a statistically well-motivated similarity measure with strong connections to the natural Riemannian metric [32, 17], while being computationally superior, performances using this measure are expected to be similar to those using the Riemannian metric [33]. However, this measure does not produce positive definite Gaussian kernels for all bandwidths [17], making it less appealing theoretically. In [2, 13], kernels based on the log-Euclidean metric are proposed. A general DLSC setup is introduced for the broader class of Riemannian manifolds in [34]. The main goal of all these approaches is to linearize the curved manifold geometry by embedding the SPD matrices into an infinite-dimensional Hilbert space defined by the respective kernel. However, as recently shown theoretically in [2, 35], most curved Riemannian geometries (including the space of SPD matrices) do not admit such kernel maps unless the geometry is already isometric to a Euclidean space (as in the case of the log-Euclidean metric). This result severely restricts the applicability of traditional kernel methods to popular Riemannian geometries (which are usually curved), thereby providing strong motivation to study standard machine learning algorithms within their intrinsic geometry, as is done in the current paper.
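The Stein divergence described above is cheap to evaluate since it needs only determinants; a minimal sketch (function name ours):

```python
import numpy as np

def stein_div(X, Y):
    """Symmetric Stein divergence:
    S(X, Y) = log det((X + Y)/2) - (1/2) log det(X Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2)
    _, ld_X = np.linalg.slogdet(X)
    _, ld_Y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_X + ld_Y)

rng = np.random.default_rng(3)
A = rng.random((3, 3)); X = A @ A.T + np.eye(3)
s_same = stein_div(X, X)
s_ab, s_ba = stein_div(X, 2 * X), stein_div(2 * X, X)  # symmetric in X, Y
```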

In light of the above summary, our scheme directly uses the affine invariant Riemannian metric to design the sparse reconstruction loss. To circumvent the computational difficulty, we propose an efficient algorithm based on spectral projected gradients for sparse coding, and an adaptation of the non-linear conjugate gradient method on manifolds for dictionary learning. Our experiments demonstrate that our scheme is computationally efficient and provides state-of-the-art results on several computer vision problems that use covariance matrices.

IV Problem Formulation

Let X_1, …, X_N denote a set of SPD data matrices, where each X_i ∈ S_{++}^d. Let M^n denote the product manifold obtained from the Cartesian product of n SPD manifolds, i.e., M^n := S_{++}^d × ⋯ × S_{++}^d (n times). Our goals are (i) to learn a third-order tensor (dictionary) B ∈ M^n in which each slice represents an SPD dictionary atom B_j ∈ S_{++}^d, j = 1, …, n; and (ii) to approximate each X_i as a sparse conic combination of atoms in B, i.e.,

X_i ≈ Bα_i, where Bα := Σ_{j=1}^n α_j B_j,   (5)

for an n-dimensional nonnegative coefficient vector α_i ≥ 0. With this notation, our joint DLSC objective is

min_{B ∈ M^n, {α_i ≥ 0}} (1/2) Σ_{i=1}^N d_R²(Bα_i, X_i) + Sp(α_i) + Ω(B),   (4)

where Sp and Ω are regularizers on the coefficient vectors and the dictionary tensor, respectively.

Although formulation (4) may look complicated, it is a direct analogue of the vectorial DLSC setup for matrix data. For example, instead of learning a dictionary matrix as in vectorial DLSC, we learn a third-order tensor dictionary, since our data are now matrices. Constraining the sparse coefficients to the nonnegative orthant ensures that the linear combination of SPD atoms stays within the SPD cone. However, in contrast to the vectorial DLSC formulations, for which the dictionary learning and sparse coding subproblems are individually convex, the problem in (4) is neither convex in itself, nor are its subproblems convex.
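To make the representation concrete, here is a minimal NumPy sketch (variable names ours) of the conic combination Bα = Σ_j α_j B_j; any nonnegative, nonzero combination of SPD atoms is again SPD, which is exactly why the nonnegativity constraint keeps reconstructions on the manifold:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 5, 8
# A hypothetical dictionary of n SPD atoms B_j
atoms = []
for _ in range(n):
    A = rng.random((d, d))
    atoms.append(A @ A.T + 0.1 * np.eye(d))
B = np.stack(atoms)                       # (n, d, d) "third-order tensor"

alpha = np.zeros(n)
alpha[[1, 4, 6]] = rng.random(3) + 0.01   # sparse, strictly nonnegative code

recon = np.einsum('j,jkl->kl', alpha, B)  # B@alpha = sum_j alpha_j B_j
eigs = np.linalg.eigvalsh(recon)          # all positive: recon is SPD
```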

From a practical point of view, this lack of convexity is not a significant concern, as all we need is a set of dictionary atoms that can sparse-code the input. To this end, we propose below an alternating minimization (descent) scheme that alternates between locally solving the dictionary learning and sparse coding subproblems, keeping the variables associated with the other fixed. A full theoretical analysis of the convergence of this nonconvex problem is currently beyond the scope of this paper and of most versions of nonconvex analysis known to us. However, what makes the method interesting and worthy of future analysis is that empirically it converges quite rapidly, as shown in Figure 5.

IV-A Dictionary Learning Subproblem

Assuming the coefficient vectors α_i are available for all the data matrices, the subproblem for updating the dictionary atoms can be separated from (4) and written as:

min_{B ∈ M^n} Θ(B) := (1/2) Σ_{i=1}^N d_R²(Bα_i, X_i) + Ω(B).   (6)
IV-A1 Regularizers

Before delving into algorithms for optimizing (6), let us recall a few potential regularizers on the dictionary atoms, which are essential to avoid overfitting the dictionary to the data. For SPD matrices, several regularizers are available, such as: (i) the largest-eigenvalue regularizer λ_max(B_j); (ii) the deviation of the dictionary from the identity matrix, ||B_j − I||_F²; (iii) the Riemannian elasticity regularizer [36], which measures the Riemannian deformation of the dictionary from the identity matrix, d_R²(B_j, I); and (iv) the trace regularizer, Ω(B) = β Σ_j tr(B_j), for a regularization parameter β. In the sequel, we use the trace regularizer, as it is simple and performs well empirically.

IV-A2 Optimizing Dictionary Atoms

Among several first-order alternatives for optimizing over the SPD atoms (such as steepest descent, trust-region methods [37], etc.), the Riemannian conjugate gradient (CG) method [22, Chap. 8] was found to be empirically more stable and faster. Below, we provide a short exposition of the CG method in the context of minimizing Θ over the SPD product manifold.

For an arbitrary non-linear function f(x), the CG method uses the following recurrence at step k:

x^{k+1} = x^k + γ^k ξ^k,   (7)

where the direction of descent ξ^k is

ξ^k = −∇f(x^k) + μ^k ξ^{k−1},   (8)

with ∇f(x^k) denoting the gradient of f at x^k (and ξ^0 = −∇f(x^0)), and μ^k given by the Polak–Ribière rule

μ^k = ⟨∇f(x^k), ∇f(x^k) − ∇f(x^{k−1})⟩ / ||∇f(x^{k−1})||².   (9)

The step-size γ^k in (7) is usually found via an efficient line-search method [38]. It can be shown [38, Sec. 1.6] that when f is quadratic with Hessian Q, the directions generated by (8) are Q-conjugate to the previous directions of descent; thereby (7) provides the exact minimizer of f in at most m iterations, where m is the dimension of the space (for us, the manifold dimension).
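For intuition, the recurrence (7)-(9) can be checked on a Euclidean quadratic, where the exact line search is available in closed form and CG terminates in at most as many steps as the problem dimension. A toy sketch (ours):

```python
import numpy as np

def cg_quadratic(A, b, x0, iters):
    """Nonlinear CG, eqs. (7)-(9), applied to f(x) = 0.5 x^T A x - b^T x.
    With exact line search the directions are A-conjugate, so CG reaches
    the exact minimizer in at most dim(x) iterations."""
    x = x0.copy()
    g = A @ x - b                            # gradient of f
    xi = -g
    for _ in range(iters):
        gamma = (g @ g) / (xi @ A @ xi)      # exact line search (quadratic f)
        x = x + gamma * xi
        g_new = A @ x - b
        mu = (g_new @ (g_new - g)) / (g @ g) # Polak-Ribiere mu, eq. (9)
        xi = -g_new + mu * xi                # direction update, eq. (8)
        g = g_new
    return x

rng = np.random.default_rng(7)
Q = rng.standard_normal((6, 6)); A = Q @ Q.T + np.eye(6)  # SPD Hessian
b = rng.standard_normal(6)
x = cg_quadratic(A, b, np.zeros(6), iters=6)   # n = 6 steps suffice
```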

For f = Θ and referring back to (6), the recurrence in (7) uses the Riemannian retraction [22, Chap. 4], and the gradient becomes the Riemannian gradient grad Θ(B^k) (here B^k denotes the dictionary tensor at the k-th iteration). This leads to an important issue: the gradients grad Θ(B^k) and grad Θ(B^{k−1}) belong to two different tangent spaces T_{B^k} and T_{B^{k−1}}, respectively, and thus cannot be combined as in (9). Thus, following [22, Chap. 8], we resort to vector transport, a scheme to transport a tangent vector at a point P to the tangent space at Exp_P(ξ'), where ξ' is another tangent vector at P and Exp is the exponential map. The resulting formula for the direction update becomes

ξ^k = −grad Θ(B^k) + μ^k T_{B^{k−1}→B^k}(ξ^{k−1}),   (10)

with μ^k computed as in (9) after transporting the previous gradient into T_{B^k}:

μ^k = ⟨grad Θ(B^k), grad Θ(B^k) − T_{B^{k−1}→B^k}(grad Θ(B^{k−1}))⟩_{B^k} / ||grad Θ(B^{k−1})||²_{B^{k−1}}.   (11)

Here, applied atom-wise for X, Y ∈ S_{++}^d and a tangent vector ξ ∈ T_X, the map defines the vector transport given by:

T_{X→Y}(ξ) := E ξ E^T, where E := (Y X^{−1})^{1/2}.   (12)
The remaining technical detail is the expression for the Riemannian gradient , which we derive next.

IV-A3 Riemannian Gradient

The following lemma connects the Riemannian gradient to the Euclidean gradient of in (6).

Lemma 1.

For a dictionary tensor B ∈ M^n, let Θ(B) be a differentiable function. Then the Riemannian gradient grad Θ(B) satisfies the following condition for all tangent vectors ζ:

⟨grad Θ(B), ζ⟩_B = ⟨∇Θ(B), ζ⟩,   (13)

where ∇Θ(B) is the Euclidean gradient of Θ. The Riemannian gradient for the j-th dictionary atom is given by grad_j Θ(B) = B_j ∇_{B_j}Θ(B) B_j.


Proof: See [22, Chap. 5]. The latter expression is obtained by substituting the inner product on the LHS of (13) with its definition in (1). ∎
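The identity in Lemma 1 is easy to verify numerically: under the metric (1), the candidate B_j ∇f B_j satisfies ⟨grad f, ξ⟩_X = tr(∇f ξ) for any symmetric direction ξ. A small check (all matrices random, names ours):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.random((4, 4)); X = A @ A.T + np.eye(4)   # base point on the manifold
G = rng.random((4, 4)); G = G + G.T                # Euclidean gradient (symmetric)
xi = rng.random((4, 4)); xi = xi + xi.T            # arbitrary tangent vector

G_riem = X @ G @ X                                 # candidate Riemannian gradient
Xinv = np.linalg.inv(X)
lhs = np.trace(Xinv @ G_riem @ Xinv @ xi)          # <G_riem, xi>_X via metric (1)
rhs = np.trace(G @ xi)                             # Euclidean <G, xi>
```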

We can derive the Euclidean gradient as follows: let S_i := Bα_i = Σ_j α_{ij} B_j denote the reconstruction of X_i. Then the loss in (6) becomes

Θ(B) = (1/2) Σ_{i=1}^N ||Log(X_i^{−1/2} S_i X_i^{−1/2})||_F² + Ω(B).   (14)

The derivative of (14) w.r.t. atom B_j is:

∇_{B_j} Θ(B) = Σ_{i=1}^N α_{ij} S_i^{−1} Log(S_i X_i^{−1}) + ∇_{B_j} Ω(B).   (15)
IV-B Sparse Coding Subproblem

Referring back to (4), let us now consider the sparse coding subproblem. Suppose we have a dictionary tensor B available. For a data matrix X ∈ S_{++}^d, our sparse coding objective is to solve

min_{α ≥ 0} φ(α) := (1/2) d_R²(Bα, X) + Sp(α),   (16)

where α_j is the j-th coordinate of α and Sp is a sparsity-inducing function. For simplicity, we use the sparsity penalty Sp(α) = λ||α||_1, where λ is a regularization parameter. Since we are working with α ≥ 0, this penalty equals λ Σ_j α_j, which is differentiable.

The subproblem (16) measures the reconstruction quality offered by a sparse nonnegative linear combination of the atoms for a given input point X. It will turn out (see experiments in Section V) that the reconstructions obtained via this model lead to significant improvements in performance over sparse coding models that ignore the rich geometry of SPD matrices. But this gain comes at a price: model (16) is nontrivial to optimize; it remains difficult even if we take into account the geodesic convexity of d_R².

While in practice this nonconvexity does not seem to hurt our model, we show below a surprising but intuitive constraint under which Problem (16) actually becomes convex. The following lemma will be useful later.

Lemma 2.

Let B, S, and A be fixed SPD matrices. Consider the function f(x) := d_R²(xB + S, A) for x ≥ 0. The derivative f′(x) is given by

f′(x) = 2 tr( Log(M(x) A^{−1}) M(x)^{−1} B ),   (17)

where M(x) := xB + S.


Introduce the shorthand M(x) := xB + S. From definition (2) and the identity ||Log(X)||_F² = tr(Log(X)²), we have

f(x) = tr( Log(A^{−1/2} M(x) A^{−1/2})² ).

The chain rule of matrix calculus then immediately yields

f′(x) = 2 tr( Log(A^{−1/2} M(x) A^{−1/2}) (A^{−1/2} M(x) A^{−1/2})^{−1} A^{−1/2} B A^{−1/2} ),

which, after cycling the trace, is nothing but (17). ∎

As a brief digression, let us mention below an interesting property of the sparse-coding problem. We do not exploit this property in our experiments, but highlight it here for its theoretical appeal.

Theorem 3.

The function φ(α) := d_R²(Bα, X) is convex on the set

A(X) := { α : α ≥ 0, Bα ⪯ X },   (18)

where ⪯ denotes the Löwner partial order.
See Appendix VI. ∎

Let us intuitively describe what Theorem 3 is saying. While sparsely encoding data we are trying to find sparse coefficients α such that in the ideal case we have Bα = X. But in general this equality cannot be satisfied, and one only has Bα ≈ X, the quality of this approximation being measured using d_R²(Bα, X) or some other desirable loss function. The loss from (16) is nonconvex, while convexity is a "unilateral" property: it lives in the world of inequalities rather than equalities [39]. And it is known that SPD matrices, in addition to forming a manifold, also enjoy a rich conic geometry endowed with the Löwner partial order. Thus, instead of seeking arbitrary approximations Bα ≈ X, if we limit our attention to those that underestimate X as in (18), we might benefit from the conic partial order. It is this intuition that Theorem 3 makes precise.

IV-B1 Optimizing Sparse Codes

Writing M := Bα and using Lemma 2, we obtain

∂φ(α)/∂α_j = tr( Log(M X^{−1}) M^{−1} B_j ) + λ, for j = 1, …, n.   (19)

Computing (19) for all α_j is the dominant cost in a gradient-based method for solving (4). We present pseudocode (Alg. 1) that efficiently implements the gradient of the first part of (19): the matrix logarithm is computed once and shared across all n atoms, so that each gradient coordinate reduces to a trace inner product, whereas a naïve implementation of (19) would require a separate matrix logarithm per atom, which is substantially more expensive.

Input: atoms B_1, …, B_n, data matrix X, current code α
M ← Σ_j α_j B_j;  K ← M^{−1} Log(M X^{−1});
for j ← 1 to n do
  g_j ← tr(K B_j)
end for
Algorithm 1 Efficient computation of gradients
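As a cross-check of the gradient expression, the following NumPy sketch (our own minimal implementation, not the paper's code) computes the gradient of φ(α) = ½ d_R²(Bα, X) + λ Σ_j α_j with a single matrix logarithm shared across atoms, and validates it against central finite differences:

```python
import numpy as np

def spd_log(M):
    """Principal matrix logarithm of an SPD matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def code_gradient(B, X, alpha, lam):
    """Gradient (19): K = M^{-1} Log(M X^{-1}) is computed once; each
    gradient coordinate is then a trace inner product with an atom."""
    M = np.einsum('j,jkl->kl', alpha, B)
    w, V = np.linalg.eigh(X)
    Xs = V @ np.diag(np.sqrt(w)) @ V.T            # X^{1/2}
    Xis = V @ np.diag(1.0 / np.sqrt(w)) @ V.T     # X^{-1/2}
    S = Xis @ M @ Xis                             # SPD, similar to M X^{-1}
    K = np.linalg.inv(M) @ Xs @ spd_log(S) @ Xis  # = M^{-1} Log(M X^{-1})
    return np.array([np.trace(K @ Bj) for Bj in B]) + lam

rng = np.random.default_rng(9)
d, n, lam = 4, 6, 0.1
atoms = []
for _ in range(n):
    A = rng.random((d, d))
    atoms.append(A @ A.T + np.eye(d))
B = np.stack(atoms)
A = rng.random((d, d)); X = A @ A.T + np.eye(d)
alpha = rng.random(n) + 0.1                       # strictly positive code

def phi(a):
    """phi(a) = 0.5 * d_R^2(B@a, X) + lam * sum(a), via eigenvalues."""
    M = np.einsum('j,jkl->kl', a, B)
    w, V = np.linalg.eigh(X)
    Xis = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    lams = np.linalg.eigvalsh(Xis @ M @ Xis)
    return 0.5 * np.sum(np.log(lams) ** 2) + lam * a.sum()

g = code_gradient(B, X, alpha, lam)
eps = 1e-6
g_fd = np.array([(phi(alpha + eps * np.eye(n)[j]) -
                  phi(alpha - eps * np.eye(n)[j])) / (2 * eps)
                 for j in range(n)])
```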

Alg. 1 in conjunction with a gradient-projection scheme essentially runs the iteration

α^{t+1} = P[α^t − γ_t ∇φ(α^t)],   (20)

where P denotes the projection operator defined as

P[α] := argmin_{α′} (1/2)||α − α′||², subject to α′ ≥ 0, Bα′ ⪯ X.   (21)

Iteration (20) has three major computational costs: (i) the stepsize γ_t; (ii) the gradient ∇φ(α^t); and (iii) the projection (21). Alg. 1 shows how to efficiently obtain the gradient. The projection task (21) is a special least-squares (dual) semidefinite program (SDP), which can be solved using any SDP solver or by designing a specialized routine. To avoid the heavy computational burden imposed by an SDP, we drop the constraint Bα′ ⪯ X; this sacrifices convexity but makes the computation vastly easier, since with this change we simply have P[α] = max(α, 0), applied componentwise.

In (20), it only remains to specify how to obtain the stepsize γ_t. There are several choices available in the nonlinear programming literature [38] for choosing γ_t, but most of them can be quite expensive. In our quest for an efficient sparse coding algorithm, we avoid expensive line-search algorithms for selecting γ_t and prefer the Barzilai-Borwein stepsizes [40], which can be computed in closed form and lead to remarkable gains in performance [40, 41]. In particular, we use the Spectral Projected Gradient (SPG) method [42] by adapting a simplified implementation of [41].

SPG runs iteration (20) using Barzilai-Borwein stepsizes with an occasional call to a nonmonotone line-search strategy to ensure convergence. Without the constraint Bα ⪯ X, we cannot guarantee anything more than a stationary point of (4), whereas if we were to use this additional constraint, we could even obtain global optimality for the iterates generated by (20).
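The projected-gradient iteration (20) with Barzilai-Borwein stepsizes is compact to implement. Since the Riemannian loss needs the gradient machinery above, the sketch below (ours, with no nonmonotone safeguard) exercises the same iteration on a stand-in objective, nonnegative least squares, to show the mechanics of the BB step and the simplified projection max(α, 0):

```python
import numpy as np

def spg_nonneg(grad, a0, iters=300):
    """Projected gradient with Barzilai-Borwein stepsizes and the
    simplified projection P[a] = max(a, 0), as in iteration (20)."""
    a = np.maximum(a0, 0.0)
    g = grad(a)
    step = 1.0
    for _ in range(iters):
        a_new = np.maximum(a - step * g, 0.0)       # gradient step + projection
        g_new = grad(a_new)
        s, y = a_new - a, g_new - g
        sy = s @ y
        step = (s @ s) / sy if sy > 1e-12 else 1.0  # closed-form BB stepsize
        a, g = a_new, g_new
    return a

# Stand-in problem: nonnegative least squares min_{a>=0} 0.5||D a - x||^2
rng = np.random.default_rng(5)
D = rng.standard_normal((20, 8))
a_true = np.maximum(rng.random(8) - 0.4, 0.0)       # sparse nonnegative code
x = D @ a_true
grad = lambda a: D.T @ (D @ a - x)
a_hat = spg_nonneg(grad, np.ones(8))
```

Since a_true is feasible and attains zero loss, the iteration recovers it; on the Riemannian loss (16) the same iteration reaches a stationary point.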

V Experiments

In this section, we provide experiments on simulated and real-world data demonstrating the effectiveness of our algorithm compared to the state-of-the-art DLSC methods on SPD valued data. First, we demonstrate results on simulated data analyzing the performance of our framework for various settings. This will precede experiments on standard benchmark datasets.

V-A Comparison Methods

In the experiments to follow, we denote dictionary learning and sparse coding algorithms by DL and SC, respectively. We compare our Riemannian (Riem) formulation against combinations of several state-of-the-art DLSC methods on SPD matrices, namely (i) the log-Euclidean (LE) metric for DLSC [30], (ii) the Frobenius norm (Frob), which discards the manifold structure, and (iii) kernel methods such as the Stein kernel [18] proposed in [12] and the log-Euclidean kernel [13].

V-B Simulated Experiments

In this subsection, we evaluate, in a controlled setting, some of the properties of our Riemannian sparse coding scheme. For all our simulations, we used covariances generated from data vectors sampled from a zero-mean, unit-covariance normal distribution. For each covariance sample, the number of data vectors is chosen to be ten times its dimensionality. For fairness of comparison, we adjusted the regularization parameters of the sparse coding algorithms so that the codes generated are approximately 10% sparse. The plots to follow show the performance averaged over 50 trials. Further, all the algorithms in this experiment used the SPG method to solve their respective formulations so that their performances are comparable. The intention of these timing comparisons is to empirically point out the relative computational complexity of our Riemannian scheme against the baselines rather than to report exact computational times. For example, for the comparisons against the Frob-SC method, one could vectorize the matrices and then use a vectorial sparse coding scheme; in that case, Frob-SC would be substantially faster, but incomparable to our scheme as it solves a different problem. In these experiments, we use classification accuracy as the performance metric. Our implementations are in MATLAB, and the timing comparisons used a single-core Intel 3.6GHz CPU.
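The covariance sampling protocol just described can be sketched as follows (a Python stand-in for our MATLAB setup; the function name is ours): each sample is the covariance of 10d vectors drawn from N(0, I_d):

```python
import numpy as np

def sample_covariance(d, rng):
    """A random covariance as in the protocol above: the sample covariance
    of 10*d vectors from a zero-mean, unit-covariance normal distribution."""
    Z = rng.standard_normal((10 * d, d))
    return (Z.T @ Z) / (10 * d - 1)

rng = np.random.default_rng(6)
C = sample_covariance(8, rng)
eigs = np.linalg.eigvalsh(C)   # positive w.h.p., since 10d samples >> d
```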

V-B1 Increasing Data Dimensionality

While DT-MRI applications typically use small SPD matrices (3 × 3 diffusion tensors), the dimensionality is far more diverse in other computer vision applications. For example, Gabor covariances for face recognition use much larger SPD matrices [43], and even larger covariance descriptors are becoming common [44]. The goal of this experiment is to analyze the scalability of our sparse coding setup against increasing size of the data matrices. To this end, we fixed the number of dictionary atoms at 200, while increasing the matrix dimensionality from 3 to 100. Figure 2 plots the time taken by our method against the naïve Frob-SC method (which also uses the SPG method for its solution). The plot shows that the extra computation required by our Riem-SC is not substantial compared to Frob-SC.

V-B2 Increasing Dictionary Size

In this experiment, we compare the scalability of our method to larger dictionary tensors. To this end, we fixed the data dimensionality at 10, while increasing the number of dictionary atoms from 20 to 1000. Figure 2 plots the time taken against the dictionary size. As expected, the sparse coding performance of all the kernelized schemes drops significantly for larger dictionary sizes, while our scheme scales fairly well.

V-B3 Increasing Sparsity Regularization

In this experiment, we evaluate the effect of the sparsity-promoting regularization λ in (16). To this end, we generated a dictionary of 100 atoms from covariances of Gaussian random variables. Then, 1000 SPD matrices were produced using conic combinations of randomly selected atoms, with an active set of 10 dictionary atoms for each SPD matrix. After adding random SPD noise to each matrix, we used half of them for learning the dictionary, while the other half was used for evaluating the sparsity regularization. We increased λ over a wide range in steps of 10. In Figure 3, we plot the sparsity (i.e., the number of non-zero coefficients divided by the size of the coefficient vector) for varying λ. We see that while lower values of λ do not have much influence on sparsity, once λ increases beyond a certain threshold, sparsity increases. A similar trend is seen for increasing data dimensionality. However, we find that the influence of λ starts diminishing as the dimensionality increases: for example, sparsity plateaus at a smaller λ for 5-dimensional data than for 20-dimensional data. The plateauing of sparsity is not unexpected and is directly related to the Riemannian metric that we use: our loss prevents all the sparse coefficients from going to zero simultaneously, as in such a case the objective would tend to infinity. Further, as the matrix dimensionality increases, the data matrices are more likely to become ill-conditioned. As a result, this plateauing happens much earlier than for better-conditioned matrices (as in the case of the 5-dimensional matrices in Figure 3).

In Figure 3, we contrast the sparsity pattern produced by our Riemannian sparse coding (Riem-DL + Riem-SC) scheme against that of the traditional sparse coding objective using log-Euclidean sparse coding (LE-DL + LE-SC), for 20-dimensional SPD data. As is expected, the log-Euclidean DL follows the conventional convergence patterns in which sparsity goes to zero for larger values of the regularization. Since for larger regularizations, most of the coefficients in our Riem-SC have low values, we can easily discard them by thresholding. However, we believe this difference in the sparsity patterns needs to be accounted for when choosing the regularization parameters for promoting sparsity in our setup.

V-B4 Convergence for Increasing Dimensionality

In this experiment, we evaluate the convergence properties of our dictionary learning subproblem based on the Riemannian conjugate gradient scheme. To this end, we used the same setup as in the last experiment, with data generated by a pre-defined dictionary, but of varying dimensionality. In Figure 5, we plot the dictionary learning objective against the number of iterations. As expected, smaller data dimensionality shows faster convergence. That said, even 20-dimensional data was found to converge in fewer than 50 alternating iterations of the algorithm, which is remarkable.

Fig. 2: Sparse coding time against (a) increasing matrix dimensionality and (b) increasing number of dictionary atoms. We used a maximum of 100 iterations for all the algorithms.
Fig. 3: Sparsity of coding against (a) increasing sparsity inducing regularization for various matrix dimensionality and (b) sparsity against lambda in comparison to that for log-Euclidean DL.

V-C Experiments with Public Datasets

Now let us evaluate the performance of our framework on computer vision datasets. We experimented on data available from four standard computer vision applications, namely (i) 3D object recognition on the RGBD objects dataset [45], (ii) texture recognition on the standard Brodatz dataset [46], (iii) person re-identification on the ETHZ people dataset [47], and (iv) face recognition on the Youtube faces dataset [48]. We describe these datasets below.

(a) RGBD objects
(b) Brodatz textures
(c) ETHZ appearances
(d) Youtube faces
Fig. 4: Montage of sample images from the four datasets used in our experiments. From top, samples from the RGBD object recognition dataset, Brodatz texture recognition, ETHZ people re-identification dataset, and Youtube face recognition dataset.

V-C1 Brodatz Texture

Texture recognition is one of the most successful applications of covariance descriptors [49, 1]. For this evaluation, we used the Brodatz texture dataset222http://www.ux.uis.no/~tranden/brodatz.html, from which we took 100 gray scale texture images, each of dimension . We extracted patches on a dense grid without overlap, generating 256 texture patches per image and 25600 patches in total. To generate covariance descriptors from each patch, we followed the traditional protocol, i.e., we extracted a 5-dimensional feature descriptor at each pixel location in each patch. The first two feature dimensions are the pixel coordinates relative to the top-left corner of the patch, and the last three are the image intensity and the image gradients in the x and y directions, respectively. Region covariances of size 5×5 are computed from all features in a patch.
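The protocol just described can be sketched as follows (a minimal sketch assuming numpy; the feature stack follows the 5-dimensional recipe above):

```python
import numpy as np

def region_covariance(patch):
    """5x5 region covariance of per-pixel features [x, y, I, |dI/dx|, |dI/dy|]."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]                    # pixel coordinates
    gy, gx = np.gradient(patch.astype(float))      # image gradients
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()])  # 5 x N features
    return np.cov(F)

C = region_covariance(np.random.rand(32, 32))      # one descriptor per patch
```

Each patch thus yields a single SPD descriptor regardless of the number of pixels it contains.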

V-C2 ETHZ Person Re-identification Dataset

Tracking and identifying people across multiple cameras in severely dynamic environments plays an important role in visual surveillance. The visual appearances of people in such applications are often noisy and of low resolution. Further, the appearances undergo drastic variations with respect to pose, scene illumination, and occlusions. Lately, covariance descriptors have been found to provide a robust representation for this task [12, 50]. In this experiment, we evaluate the performance of clustering people appearances on the benchmark ETHZ dataset [51]. This dataset consists of low-resolution images of tracked people from a real-world surveillance setup. The images are from 146 different individuals, with between 5 and 356 images per person and a total of 8580 images. Sample images from this dataset are shown in Figure 4.

Fig. 5: Dictionary Learning objective using Riemannian conjugate gradient descent against increasing number of iterations (alternating with the sparse coding sub-problem). We plot the convergence of the objective for various dimensionality of the data matrices.
(a) Brodatz textures
(b) RGB 3D objects
(c) ETHZ people
(d) Youtube faces
Fig. 6: Results of nearest neighbor recall@K accuracy against an increasing number of retrieved points (K). Comparisons of Riem-DL and Riem-SC against clustering-based DL schemes, while using our Riem-SC for sparse coding.
Method Accuracy (%)
Riem-DL + Riem-SC 74.9
LE-DL + LE-SC 73.4
Frob-DL + Frob-SC 23.5
Kernelized LE DLSC [13][34] 47.9
Kernelized Stein DLSC [12][34] 76.7
Tensor DL+SC [52] 37.1
GDL [15] 47.7
Random DL + Riem-SC 70.3
TABLE I: Brodatz texture dataset
Method Accuracy (%)
Riem-DL + Riem-SC 80.0
LE-DL + LE-SC 80.5
Frob-DL + Frob-SC 54.2
Kernelized LE DLSC [13][34] 86.0
Kernelized Stein DLSC [12][34] 85.7
Tensor DL+SC [52] 68.1
GDL [15] 43.0
Random DL + Riem-SC 62.0
TABLE II: RGBD objects
Method Accuracy (%)
Riem-DL + Riem-SC 80.5
LE-DL + LE-SC 80.0
Frob-DL + Frob-SC 77.6
Kernelized LE DLSC [13][34] 86.6
Kernelized Stein DLSC [12][34] 71.6
Tensor DL+SC [52] 67.4
GDL [15] 71.0
Random DL + Riem-SC 54.6
TABLE III: ETHZ People dataset
Method Accuracy (%)
Riem-DL + Riem-SC 92.4
LE-DL + LE-SC 82.6
Frob-DL + Frob-SC 82.9
Kernelized LE DLSC [13][34] 93.1
Kernelized Stein DLSC [12][34] 70.1
GDL [15] 92.0
Random DL + Riem-SC 83.9
TABLE IV: Youtube Faces Dataset

TABLES: Comparison of classification accuracy (using a linear SVM and one-against-all classification) with sparse coding when the dictionary is learned using the respective DL method. The standard deviation was less than 5% for all methods.

Our goal in this experiment is to evaluate the performance of our DLSC framework in learning generic dictionaries on covariance descriptors produced for this application. Note that some of the classes in this dataset do not have enough instances to learn class-specific dictionaries. Several types of features have been suggested in the literature for generating covariances on this dataset, with varying degrees of success, such as Gabor-wavelet-based features [50] and color-gradient-based features [12]. Rather than detailing the results on several feature combinations, we describe here the combination that worked best in our experiments. For this purpose, we used a validation set of 500 covariances and 10 true clusters from this dataset. The performance was evaluated using the log-Euclidean SC setup with a dictionary learned via log-Euclidean K-Means. We used a combination of nine features for each image, as described below:


The features comprise the x-coordinate of the pixel location, the RGB color values of the pixel, the pixel intensity in the YCbCr color space, the gray-scale pixel gradients, and the y-gradient of the pixel hue; further, we also use the gradient angle in our feature set. Each image is resized to a fixed size , and is divided into upper and lower parts. We compute a region covariance for each part, and combine the two as a block-diagonal matrix to form a single 18×18 covariance descriptor for each appearance image.
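The block-diagonal combination of the two part-wise covariances can be sketched as follows (a minimal sketch assuming numpy/scipy; feature extraction is omitted, and the small diagonal jitter is an assumption to keep the blocks positive definite):

```python
import numpy as np
from scipy.linalg import block_diag

def appearance_descriptor(feat_upper, feat_lower, eps=1e-6):
    """feat_*: 9 x N per-pixel feature arrays for the upper/lower image halves."""
    C_up = np.cov(feat_upper) + eps * np.eye(9)   # jitter keeps the block PD
    C_lo = np.cov(feat_lower) + eps * np.eye(9)
    return block_diag(C_up, C_lo)                 # single 18 x 18 SPD descriptor

D = appearance_descriptor(np.random.rand(9, 500), np.random.rand(9, 500))
```

The block-diagonal structure keeps the upper- and lower-body statistics decoupled while still producing one SPD matrix per image.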

V-C3 3D Object Recognition Dataset

The goal of this experiment is to recognize objects in 3D point clouds. To this end, we used the public RGB-D Object dataset [45], which consists of about 300 objects belonging to 51 categories and spread over about 250K frames. We used approximately 15K frames for our evaluation, with approximately 250–350 frames per object seen from three different viewpoints (30, 45, and 60 degrees above the horizon). Following the procedure suggested in [53][Chap. 5], for every frame, the object was segmented out and an 18-dimensional feature vector was generated for every 3D point in the cloud (and thus 18×18 covariance descriptors); the features we used are as follows:


where the first three dimensions are the spatial coordinates; the remaining dimensions comprise the magnitude of the intensity gradient, gradients over the depth maps, and the surface normal at the given 3D point.

V-C4 Youtube Faces Dataset

In this experiment, we evaluate the performance of the Riemannian DLSC setup on a larger dataset of high-dimensional covariance descriptors for face recognition. To this end, we used the challenging Youtube faces dataset [48], which consists of 3425 short video clips of 1595 individuals, each clip containing between 48 and 6K frames. There are significant variations in head pose, context, etc. for each person across clips, and our goal is to associate a face with its ground truth person label. We proceed by first cropping out face regions from the frames using a state-of-the-art face detector [54], which results in approximately 196K face instances. As most of the faces within a clip do not vary significantly, we subsample this set randomly to generate our dataset of 43K face patches. Next, we convolved each face image with a filter bank of 40 Gabor filters at 5 scales and 8 orientations to extract facial features for each pixel, generating 40×40 covariances.
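A hedged sketch of this feature extraction (assuming numpy/scipy; the Gabor parameterization below is illustrative, not the exact filter bank used): stacking the 40 filter responses per pixel yields one 40×40 covariance per face.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, lam):
    # Minimal real Gabor kernel; the parameter choices here are assumptions.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

face = np.random.rand(64, 64)                     # stand-in for a face crop
responses = [fftconvolve(face, gabor_kernel(11, s, k * np.pi / 8, 2 + s),
                         mode='same').ravel()
             for s in (2, 3, 4, 5, 6)             # 5 scales
             for k in range(8)]                   # 8 orientations
C = np.cov(np.stack(responses))                   # 40 x 40 covariance descriptor
```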

(a) Brodatz textures
(b) RGB 3D objects
(c) ETHZ people
(d) Youtube faces
Fig. 7: Results of k-nearest neighbor retrieval accuracy against an increasing number of retrieved points (K). Comparisons of Riem-DL and Riem-SC against the state of the art dictionary learning and sparse coding combinations.

V-D Experimental Setup

V-D1 Evaluation Techniques

We evaluated our algorithms from two perspectives: (i) nearest neighbor (NN) retrieval against a gallery set via Euclidean distances between sparse codes, and (ii) one-against-all classification using a linear SVM trained over the sparse codes. Given that computing the geodesic distance between SPD matrices is expensive, while the Frobenius distance between them yields poor accuracy, the goal of the first experiment is to evaluate how well sparse coding approximates the input data by codes belonging to the non-negative orthant of a Euclidean space. Superior performance would imply that the sparse codes provide efficient representations that bypass the Riemannian geometry and enable faster indexing schemes, such as locality sensitive hashing, for retrieval. Our second experiment evaluates the linear separability of the sparse codes; note that they are of much higher dimension than the original covariances, and we thus expect them to be linearly separable in the sparse space.
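The retrieval pipeline of experiment (i) reduces to plain Euclidean nearest neighbors in code space; a minimal sketch (assuming numpy, with toy codes):

```python
import numpy as np

def knn_retrieve(gallery_codes, query_code, K):
    """Return indices of the K gallery codes nearest to the query (Euclidean)."""
    dists = np.linalg.norm(gallery_codes - query_code, axis=1)
    return np.argsort(dists)[:K]

gallery = np.eye(5)                               # five toy sparse codes
idx = knn_retrieve(gallery, np.array([0.9, 0.0, 0.0, 0.0, 0.1]), K=2)
```

Since no geodesic computations appear here, any off-the-shelf vector index can replace the brute-force search.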

V-D2 Evaluation Metric

For classification experiments, we use the one-against-all classification accuracy as the evaluation metric. For NN retrieval experiments, we use the Recall@K accuracy, defined as follows. Given a gallery set and a query set, Recall@K computes the average accuracy when retrieving K nearest neighbors from the gallery for each instance in the query set: a query counts as correctly retrieved when the set of labels of its K retrieved neighbors intersects its set of ground truth class labels.
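The Recall@K computation just described can be sketched as follows (a minimal sketch assuming numpy; a query is a hit if any retrieved neighbor carries one of its ground-truth labels):

```python
import numpy as np

def recall_at_k(query_labels, retrieved_labels):
    """query_labels[i]: ground-truth labels of query i;
    retrieved_labels[i]: labels of its K retrieved neighbors."""
    hits = [bool(set(g) & set(r))
            for g, r in zip(query_labels, retrieved_labels)]
    return float(np.mean(hits))

acc = recall_at_k([[0], [1], [2]], [[0, 3], [4, 5], [2, 2]])  # 2 of 3 queries hit
```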


V-D3 Data Split

All the experiments used 5-fold cross-validation in which 80% of each dataset was used for training the dictionary, generating the gallery set, or training the linear SVM, and the rest served as the test/query points. We evaluate three setups for generating the dictionaries: (i) a proper dictionary learning strategy, (ii) clustering the training set via K-Means using the appropriate distance metric, and (iii) randomly sampling the training set.

V-D4 Hyperparameters

The size of the dictionary was set to twice the number of classes in the respective dataset; this scheme was used for all the compared methods as well. We experimented with larger sizes, but found that performance generally saturates. This is perhaps because the datasets we use already have a large number of classes, so the dictionary sizes generated by this heuristic are already significantly overcomplete. The other hyperparameter in our setup is the sparsity of the generated codes. As the different sparse coding methods (ours and those we compare against) have varied sensitivity to the regularization parameter, comparing all methods at several sparsities turned out to be cumbersome. We therefore fixed all methods to 10%-sparse codes and adjusted the regularization parameter of each method appropriately (on a small validation set separate from the training set). To this end, we used for textures, for the ETHZ and RGBD datasets, and 100 for the faces dataset. For the faces dataset, we found it difficult to attain the desired sparsity by tuning the regularization parameter alone; we thus used a regularization of 100 and selected the top 10% of the coefficients.
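The top-10% selection used for the faces dataset can be sketched as follows (assuming numpy; `keep_top_fraction` is an illustrative helper, not from the paper's code):

```python
import numpy as np

def keep_top_fraction(codes, frac=0.10):
    """Zero out all but the largest-magnitude `frac` fraction of coefficients."""
    k = max(1, int(round(frac * codes.size)))
    thresh = np.sort(np.abs(codes))[-k]            # k-th largest magnitude
    return np.where(np.abs(codes) >= thresh, codes, 0.0)

c = keep_top_fraction(np.array([0.01, 0.5, -0.03, 0.2, 0.02,
                                0.0, 0.4, 0.01, 0.02, 0.03]), frac=0.2)
```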

V-D5 Implementation Details

Our DLSC scheme was implemented in MATLAB. We used the Manopt Riemannian optimization toolbox [55] to implement the CG method in our DL sub-problem. As our problem is non-convex, we found that initializing the dictionary using K-Means clustering (with the Karcher mean algorithm [19]) led to faster convergence.
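The Karcher mean iteration used in this initialization can be sketched as follows (a sketch in Python rather than MATLAB, assuming numpy/scipy; a fixed iteration count stands in for a proper convergence test):

```python
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

def karcher_mean(mats, iters=20):
    """Frechet mean of SPD matrices under the affine invariant metric."""
    M = np.mean(mats, axis=0)                      # Euclidean mean as a start
    for _ in range(iters):
        S = sqrtm(M)
        iS = inv(S)
        # Average the points in the tangent space at M, then map back.
        T = np.mean([logm(iS @ X @ iS) for X in mats], axis=0)
        M = S @ expm(T) @ S
    return np.real(M)

# For commuting matrices the Karcher mean equals the matrix geometric mean:
M = karcher_mean([np.diag([1.0, 4.0]), np.diag([4.0, 1.0])])
```

K-Means then alternates between assigning each SPD matrix to its nearest centroid and recomputing each centroid with this mean.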

Method Accuracy (%)
Riem-DL + Riem-SC 74.9
LE-KMeans + Riem-SC 70.0
Frob-KMeans + Riem-SC 66.5
Riem-KMeans + Riem-SC 70.0
TABLE V: Brodatz texture dataset
Method Accuracy (%)
Riem-DL + Riem-SC 80.0
LE-KMeans + Riem-SC 66.2
Frob-KMeans + Riem-SC 61.1
Riem-KMeans + Riem-SC 67.8
TABLE VI: RGBD objects
Method Accuracy (%)
Riem-DL + Riem-SC 80.5
LE-KMeans + Riem-SC 54.9
Frob-KMeans + Riem-SC 55.5
Riem-KMeans + Riem-SC 57.5
TABLE VII: ETHZ People dataset
Method Accuracy (%)
Riem-DL + Riem-SC 92.4
LE-KMeans + Riem-SC 87.1
Frob-KMeans + Riem-SC 88.7
Riem-KMeans + Riem-SC 85.8
TABLE VIII: Youtube Faces Dataset

TABLES: Comparison of classification accuracy (using a linear SVM and one-against-all classification) using Riemannian sparse coding (Riem-SC) while the dictionary atoms are taken as the centroids of K-Means clusters. The standard deviation was less than 8% for all methods.

V-E Results

In this section, we compare the performance of our dictionary learning (Riem-DL) and sparse coding (Riem-SC) method against several state-of-the-art DLSC schemes on the four datasets described above. The comparison methods include: (i) Riemannian geometric methods such as log-Euclidean (LE-DL + LE-SC), (ii) kernelized methods using the Stein kernel (Kernelized Stein) with the framework in [12], (iii) Kernelized Stein using the recent generic framework in [34], (iv) the kernelized log-Euclidean metric proposed in [13], also within the generic framework of [34], (v) Euclidean DLSC (Frob-DL + Frob-SC), (vi) a dictionary generated by randomly sampling the dataset followed by sparse coding using our Riemannian method (Random-DL + Riem-SC), (vii) the tensor dictionary learning and sparse coding (TSC) setup [52], and (viii) generalized dictionary learning [15]. In Figure 7, we show the performance comparison for K-NN retrieval as K is increased from one to 25. In Tables I, II, III, and IV, we show the performance in the one-against-all classification setup.

An alternative to dictionary learning that is commonly adopted is to approximate the dictionary by the centroids of clusters generated by K-Means clustering of the dataset. Such a method is faster than Riemannian DL, while also demonstrating reasonable performance [21, 14]. Thus, an important experiment is to verify that proper dictionary learning outperforms this ad hoc setup. In Figure 6, we plot the K-NN retrieval performance when a clustering scheme is used to generate the dictionary. In Tables V, VI, VII, and VIII, we show the same in a classification setup.

V-F Discussion of Results

With regard to Figure 7, we found that the performance of the different methods is diverse across datasets. For example, the log-Euclidean DLSC variant (LE-DL + LE-SC) generally shows good performance across datasets, and the kernelized DLSC methods (Kernelized Stein and Kernelized LE) demonstrate superior performance on almost all of them. The most surprising result was for the Frob-DL case. It is generally assumed that using the Frobenius distance to compare SPD matrices leads to poor accuracy, as we indeed see in Figures 7(a), 7(b), and 7(c). However, for the Youtube faces dataset, we found that the SPD matrices are poorly conditioned. As a result, taking the matrix logarithm (as in the LE-DL scheme) amplifies the influence of the smallest eigenvalues, which are essentially noise. When learning a dictionary, the atoms are then fitted to this noise rather than to the signal, leading to inferior accuracy compared with Frob-DL or GDL, which do not use the matrix logarithm. We tried to circumvent this problem by tuning the small regularization added to the diagonal entries of these matrices, but that did not help. Older DLSC methods such as TSC are less accurate than the recent methods; we could not run TSC on the faces dataset, as it was too slow to sparse code the larger covariances. In comparison to all the above methods, Riem-DL + Riem-SC produced consistent, competitive (and sometimes better) performance, substantiating the usefulness of our proposed method. While running the experiments, we found that the initialization of our DL sub-problem (from K-Means) played an important role in achieving this performance. In Tables I, II, III, and IV, we show the results for classification using the sparse codes. The kernelized LE method is significantly better in this setting; however, our Riemannian scheme does demonstrate promise by being the second best on most of the datasets.
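The eigenvalue-amplification effect behind the Frob-DL observation can be demonstrated numerically (a toy sketch assuming numpy/scipy): in the raw SPD matrix the small, noisy eigenvalue is negligible, but after the matrix logarithm it dominates the descriptor.

```python
import numpy as np
from scipy.linalg import logm

X = np.diag([2.0, 1e-8])          # ill-conditioned SPD matrix; 1e-8 ~ noise
L = logm(X)                       # log-Euclidean embedding of X

frac_noise_X = X[1, 1] / np.linalg.norm(X)        # negligible in X itself
frac_noise_L = abs(L[1, 1]) / np.linalg.norm(L)   # dominant after the log
```

Any loss computed on L therefore spends most of its budget reconstructing the noise direction.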

The usefulness of our Riem-DL is further evaluated against alternative clustering-based DL schemes in Figure 6. We see that learning the dictionary using Riem-DL yields the best performance, ahead of the next best (and efficient) alternative of LE-KMeans used in [21]. Using Frob-KMeans or a random dictionary generally results in inferior performance compared to the other learning methods. In Tables V, VI, VII, and VIII, a similar trend is seen in the classification setting.

VI Conclusions

In this paper, we proposed a novel setup for dictionary learning and sparse coding of data in the form of SPD matrices. In contrast to prior methods that use proxy similarity measures to define the sparse coding approximation loss, our formulation uses a loss driven by the natural Riemannian metric on the SPD manifold (the affine invariant Riemannian metric). We proposed an efficient adaptation of the well-known non-linear conjugate gradient method for learning the dictionary in the product space of SPD manifolds, and a fast sparse coding algorithm based on the spectral projected gradient. Our experiments on simulated data and several benchmark computer vision datasets demonstrated the superior performance of our method against prior work; in particular, our results showed that learning the dictionary with our scheme leads to significantly better accuracy (in retrieval and classification) than other heuristic and approximate schemes for generating the dictionary.

Appendix

Here we prove Theorem 3.

Lemma 4.

Let and let . Then, .

Lemma 5.

The Fréchet derivative [56, see e.g., Ch. 1] of the map at a point in the direction is given by


See e.g., [56, Ch. 11]. ∎
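For reference, assuming the map in Lemma 5 is the matrix logarithm (consistent with the proof's pointer to [56, Ch. 11]), the standard integral representation of its Fréchet derivative reads:

```latex
D\log(X)[H] \;=\; \int_0^1 \bigl(s(X - I) + I\bigr)^{-1}\, H\, \bigl(s(X - I) + I\bigr)^{-1}\, ds .
```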

Corollary 6.

Consider the map , where is a map from and , . Then, for , we have

where .


Simply apply the chain-rule of calculus and use linearity of . ∎

Lemma 7.

The Fréchet derivative of the map at a point in direction is given by


We are now ready to prove Theorem 3.

Thm. 3.

We show that the Hessian on . To ease presentation, we write , , and let denote the differential operator . Applying this operator to the first-derivative given by Lemma 2 (in Section IV-B), we obtain (using the product rule) the sum

We now treat these two terms individually. To the first we apply Corollary 6, so

where the inner-product is weighted by and the map . We find a similar inner-product representation for the second term too. Starting with Lemma 7 and simplifying, we obtain

By assumption , which implies . Since is operator monotone [26], it follows that ; an application of Lemma 4 then yields . Thus, we obtain the weighted inner-product

where , whereby is a valid inner-product.

Thus, the second partial derivatives of may be ultimately written as

for some map and some corresponding inner-product (the map and the inner-product are defined by our analysis above). Thus, we have established that the Hessian is a Gram matrix, which shows that it is positive semidefinite. Moreover, if the are different (), then the Hessian is strictly positive definite. ∎


  • [1] O. Tuzel, F. Porikli, and P. Meer., “Region Covariance: A Fast Descriptor for Detection and Classification,” in ECCV, 2006.
  • [2] S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi, “Kernel methods on riemannian manifolds with gaussian rbf kernels,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
  • [3] F. Porikli and O. Tuzel, “Covariance tracker,” in Computer Vision and Pattern Recognition, June 2006.
  • [4] A. Cherian, V. Morellas, N. Papanikolopoulos, and S. J. Bedros, “Dirichlet process mixture models on symmetric positive definite matrices for appearance clustering in video surveillance applications,” in Computer Vision and Pattern Recognition.   IEEE, 2011.
  • [5] D. Fehr, A. Cherian, R. Sivalingam, S. Nickolay, V. Morellas, and N. Papanikolopoulos, “Compact covariance descriptors in 3D point clouds for object recognition,” in International conference on Robotics and Automation.   IEEE, 2012.
  • [6] B. Ma, Y. Wu, and F. Sun, “Affine object tracking using kernel-based region covariance descriptors,” in Foundations of Intelligent Systems.   Springer, 2012, pp. 613–623.
  • [7] J. Mairal, F. Bach, and J. Ponce, “Sparse modeling for image and vision processing,” Foundations and Trends in Computer Graphics and Vision, vol. 8, no. 2, pp. 85–283, 2014.
  • [8] T. Guha and R. K. Ward, “Learning sparse representations for human action recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 8, pp. 1576–1588, 2012.
  • [9] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
  • [10] A. Cherian, “Nearest neighbors using compact sparse codes,” in International Conference on Machine Learning, 2014.
  • [11] M. Harandi, C. Sanderson, C. Shen, and B. C. Lovell, “Dictionary learning and sparse coding on grassmann manifolds: An extrinsic solution,” in International Conference on Computer Vision.   IEEE, 2013.
  • [12] M. T. Harandi, C. Sanderson, R. Hartley, and B. C. Lovell, “Sparse coding and dictionary learning for symmetric positive definite matrices: A kernel approach,” in European Conference on Computer Vision.   Springer, 2012.
  • [13] P. Li, Q. Wang, W. Zuo, and L. Zhang, “Log-euclidean kernels for sparse representation and dictionary learning,” in ICCV.   IEEE, 2013.
  • [14] R. Sivalingam, D. Boley, V. Morellas, and N. Papanikolopoulos, “Tensor sparse coding for region covariances,” in ECCV.   Springer, 2010.
  • [15] S. Sra and A. Cherian, “Generalized dictionary learning for symmetric positive definite matrices with application to nearest neighbor retrieval,” in European Conference on Machine Learning.   Springer, 2011.
  • [16] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus on diffusion tensors,” Magnetic Resonance in Medicine, vol. 56, no. 2, pp. 411–421, 2006.
  • [17] S. Sra, “Positive definite matrices and the S-divergence,” arXiv preprint arXiv:1110.1773, 2011.
  • [18] M. Moakher and P. G. Batchelor, “Symmetric positive-definite matrices: From geometry to applications and visualization,” in Visualization and Processing of Tensor Fields.   Springer, 2006, pp. 285–298.
  • [19] X. Pennec, P. Fillard, and N. Ayache, “A Riemannian framework for tensor computing,” International Journal of Computer Vision, vol. 66, no. 1, pp. 41–66, 2006.
  • [20] O. S. Rothaus, “Domains of positivity,” in Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, vol. 24, no. 1.   Springer, 1960, pp. 189–235.
  • [21] A. Cherian and S. Sra, “Riemannian sparse coding for positive definite matrices,” in European Conference on Computer Vision.   Springer, 2014.
  • [22] P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on matrix manifolds.   Princeton University Press, 2009.
  • [23] G. Cheng and B. C. Vemuri, “A novel dynamic system in the space of spd matrices with applications to appearance tracking,” SIAM journal on imaging sciences, vol. 6, no. 1, pp. 592–615, 2013.
  • [24] Y. E. Nesterov and M. J. Todd, “On the Riemannian geometry defined by self-concordant barriers and interior-point methods,” Foundations of Computational Mathematics, vol. 2, no. 4, pp. 333–361, 2002.
  • [25] S. Lang, Fundamentals of differential geometry.   Springer Science & Business Media, 2012, vol. 191.
  • [26] R. Bhatia, Positive Definite Matrices.   Princeton University Press, 2007.
  • [27] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
  • [28] M. Elad and M. Aharon, “Image denoising via learned dictionaries and sparse representation,” in Computer Vision and Pattern Recognition, 2006.
  • [29] R. Sivalingam, D. Boley, V. Morellas, and N. Papanikolopoulos, “Positive definite dictionary learning for region covariances,” in International Conference on Computer Vision.   IEEE, 2011.
  • [30] K. Guo, P. Ishwar, and J. Konrad, “Action recognition using sparse representation on covariance manifolds of optical flow,” in International Conference on Advanced Video and Signal based Surveillance.   IEEE, 2010.
  • [31] J. Ho, Y. Xie, and B. Vemuri, “On a nonlinear generalization of sparse coding and dictionary learning,” in International Conference on Machine Learning, 2013.
  • [32] A. Cichocki, S. Cruces, and S.-i. Amari, “Log-determinant divergences revisited: Alpha-beta and gamma log-det divergences,” Entropy, vol. 17, no. 5, pp. 2988–3034, 2015.
  • [33] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos, “Jensen-bregman logdet divergence with application to efficient similarity search for covariance matrices,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2161–2174, 2013.
  • [34] M. Harandi and M. Salzmann, “Riemannian coding and dictionary learning: Kernels to the rescue,” Computer Vision and Pattern Recognition, 2015.
  • [35] A. Feragen, F. Lauze, and S. Hauberg, “Geodesic exponential kernels: When curvature and linearity conflict,” arXiv preprint arXiv:1411.0296, 2014.
  • [36] X. Pennec, R. Stefanescu, V. Arsigny, P. Fillard, and N. Ayache, “Riemannian elasticity: A statistical regularization framework for non-linear registration,” in International Conference on Medical Image Computing and Computer Assisted Interventions.   Springer, 2005.
  • [37] P.-A. Absil, C. G. Baker, and K. A. Gallivan, “Trust-region methods on riemannian manifolds,” Foundations of Computational Mathematics, vol. 7, no. 3, pp. 303–330, 2007.
  • [38] D. P. Bertsekas and D. P. Bertsekas, Nonlinear Programming, 2nd ed.   Athena Scientific, 1999.
  • [39] J.-B. Hiriart-Urruty and C. Lemaréchal, Fundamentals of convex analysis.   Springer, 2001.
  • [40] J. Barzilai and J. M. Borwein, “Two-Point Step Size Gradient Methods,” IMA J. Num. Analy., vol. 8, no. 1, 1988.
  • [41] M. Schmidt, E. van den Berg, M. Friedlander, and K. Murphy, “Optimizing Costly Functions with Simple Constraints: A Limited-Memory Projected Quasi-Newton Algorithm,” in International Conference on Artificial Intelligence and Statistics, 2009.
  • [42] E. G. Birgin, J. M. Martínez, and M. Raydan, “Algorithm 813: SPG - Software for Convex-constrained Optimization,” ACM Transactions on Mathematical Software, vol. 27, pp. 340–349, 2001.
  • [43] Y. Pang, Y. Yuan, and X. Li, “Gabor-based region covariance matrices for face recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 7, pp. 989–993, 2008.
  • [44] M. T. Harandi, M. Salzmann, and R. Hartley, “From manifold to manifold: Geometry-aware dimensionality reduction for spd matrices,” in European Conference on Computer Vision.   Springer, 2014.
  • [45] K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi-view RGB-D object dataset,” in International Conference on Robotics and Automation, 2011.
  • [46] T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern recognition, vol. 29, no. 1, pp. 51–59, 1996.
  • [47] A. Ess, B. Leibe, and L. V. Gool, “Depth and appearance for mobile scene analysis,” in International Conference on Computer Vision.   IEEE, 2007.
  • [48] L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity,” in Computer Vision and Pattern Recognition.   IEEE, 2011.
  • [49] R. Luis-García, R. Deriche, and C. Alberola-López, “Texture and color segmentation based on the combined use of the structure tensor and the image components,” Signal Processing, vol. 88, no. 4, pp. 776–795, 2008.
  • [50] B. Ma, Y. Su, F. Jurie et al., “Bicov: a novel image representation for person re-identification and face verification,” in BMVC, 2012.
  • [51] W. Schwartz and L. Davis, “Learning Discriminative Appearance-Based Models Using Partial Least Squares,” in Proceedings of the XXII Brazilian Symposium on Computer Graphics and Image Processing, 2009.
  • [52] R. Sivalingam, D. Boley, V. Morellas, and N. Papanikolopoulos, “Tensor dictionary learning for positive definite matrices,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4592–4601, 2015.
  • [53] D. A. Fehr, Covariance based point cloud descriptors for object detection and classification.   University Of Minnesota, 2013.
  • [54] X. Zhu and D. Ramanan, “Face detection, pose estimation, and landmark localization in the wild,” in Computer Vision and Pattern Recognition.   IEEE, 2012.
  • [55] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre, “Manopt, a matlab toolbox for optimization on manifolds,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1455–1459, 2014.
  • [56] N. Higham, Functions of Matrices: Theory and Computation.   SIAM, 2008.