# Dictionary Learning and Sparse Coding on Statistical Manifolds

In this paper, we propose a novel information theoretic framework for dictionary learning (DL) and sparse coding (SC) on a statistical manifold (the manifold of probability distributions). Unlike the traditional DL and SC framework, our new formulation does not explicitly incorporate any sparsity inducing norm in the cost function being optimized, yet it yields sparse codes. Our algorithm approximates the data points on the statistical manifold (which are probability distributions) by the weighted Kullback-Leibler center/mean (KL-center) of the dictionary atoms. The KL-center is defined as the minimizer of the maximum KL-divergence between itself and members of the set whose center is being sought. Further, we prove that the weighted KL-center is a sparse combination of the dictionary atoms. This result also holds for the case when the KL-divergence is replaced by the well known Hellinger distance. From an applications perspective, we present an extension of the aforementioned framework to the manifold of symmetric positive definite matrices (which can be identified with the manifold of zero mean Gaussian distributions), $P_n$. We present experiments involving a variety of dictionary-based reconstruction and classification problems in Computer Vision. Performance of the proposed algorithm is demonstrated by comparing it to several state-of-the-art methods in terms of reconstruction and classification accuracy as well as sparsity of the chosen representation.


## 1 Introduction

Dictionary learning and sparse coding have found wide applicability in image/signal processing, machine learning and computer vision in recent times. Example applications include, but are not limited to, image classification [33, 37], image restoration [51, 50] and many others. The traditional dictionary learning (DL) and sparse coding (SC) formulation assumes that the input data lie in a vector space, and posits a linear generative model for the data by approximating each data point with a sparse linear combination of the dictionary atoms (elements). Thus, the objective function of the DL problem typically has a data fidelity term that minimizes the "reconstruction error" in the least squares sense. Sparsity is then enforced on the weights in the linear combination via a tolerance threshold on the $\ell_0$-norm of the weight vector. This however leads to an NP-hard problem, and the most popular approach for solving it (with no convergence guarantees) is the K-SVD based approach [1]. For a fixed dictionary, a convex approximation to the $\ell_0$-norm minimization can be achieved using an $\ell_1$-norm constraint on the weight vector [9, 16, 2]. The problem of finding both the optimal dictionary and the sparse codes however remains a hard computational problem in general. For further discussion on this topic and the problem of complete dictionary recovery over a sphere, we refer the reader to [47], where the authors provide a provably convergent algorithm.

In many application domains, however, the data do not reside in a vector space; instead they reside on a Riemannian manifold such as the Grassmannian [12, 11], the hypersphere [35, 46, 40], the manifold of symmetric positive definite (SPD) matrices [36, 28, 19, 45, 52], and many others. Generalizing the DL & SC problem from the case of vector space inputs to the case when the input data reside on a Riemannian manifold is difficult because of the nonlinear structure of Riemannian manifolds [52]. One could consider embedding the Riemannian manifold into a Euclidean space, but a problem with this approach is that there does not exist a canonical embedding for a general Riemannian manifold. This motivated researchers [52, 23, 45, 29, 53] to generalize the DL and SC problem to Riemannian manifolds. Though the formulation on a Riemannian manifold involves a "reconstruction error" term analogous to the vector space case, defining a sparsity inducing constraint on a manifold is nontrivial and should be done with caution. This is because a Riemannian manifold lacks a "global" vector space structure, since it does not have the concept of a global origin. Hence, as argued in [52], one way to impose the sparsity inducing constraint is via an affine constraint, i.e., the sparsity constraint is over an affine subspace defined by the tangent space at each data point on the manifold. We now briefly review a few representative algorithms for the DL & SC problem on Riemannian manifolds.

A popular solution to the DL problem is to make use of the tangent spaces, which are linear spaces associated with each point on a Riemannian manifold. This approach essentially involves a linear approximation in a smooth neighborhood of a point. Guo et al. [21] use a Log-Euclidean framework, described at length in [5], to achieve a sparse linear representation in the tangent space at the Fréchet mean of the data. Xie et al. [52] developed a general dictionary learning formulation that can be used for data on any Riemannian manifold. In their approach, for the SC problem, the authors use the Riemannian Exponential (Exp.) and Logarithm (Log.) maps to define a generative process for each data point involving a sparse combination of the Log.-mapped dictionary atoms residing on the manifold. This sparse combination is then realized on the manifold via the Exp.-map. Their formulation is a direct generalization of the linear sparsity condition, with the exception that the origin of the linear space is at the data point. Further, they impose an affine constraint in the form of the weights in the weight vector summing to one. This constraint implies the use of affine subspaces to approximate the data. For fixed weights, however, estimating the dictionary atoms is a hard problem, and a manifold line search method is used in their approach. In another method involving DL and SC on the manifold of SPD matrices, Cherian et al. [13] proposed an efficient optimization technique to compute the sparse codes. Most recently, the authors in [41] introduced a novel nonlinear DL and SC method for histograms residing in a simplex. They use the well known Wasserstein distance along with an entropy regularization [32] to reconstruct the histograms as Wasserstein barycenter approximations of the given data (histograms). They solve the resulting optimization for both the dictionary atoms and the weights using a gradient based technique, and point out that the entropy regularization leads to a convex optimization problem. However, they did not discuss the sparsity of the ensuing Wasserstein barycenter dictionary based representation. The sparsity property is of significant importance in many applications, and the focus of our work here is on how to achieve sparsity without explicitly enforcing sparsity inducing constraints.

Several recent works report the use of kernels to accomplish dictionary learning and sparse coding on Riemannian manifolds [22, 29, 23]. In these works, the Riemannian manifold is embedded into a Reproducing Kernel Hilbert Space (RKHS), and the DL and SC problems are then formulated in the RKHS. Since the RKHS is a linear space, it is easier to derive simple and effective solutions for the DL and SC problems there. Recently, the authors in [18] presented conditions that must be strictly satisfied by geodesic exponential kernels on general Riemannian manifolds. This important and significant result provides guidelines for designing kernel based approaches for general Riemannian manifolds.

In this work, we present a novel formulation of the DL and SC problems for data residing on a statistical manifold, without explicitly enforcing a sparsity inducing constraint. The proposed formulation circumvents the difficulty of directly defining a sparsity constraint on a Riemannian manifold. Our formulation is based on an information theoretic framework and is shown to yield sparse codes. Further, we extend this framework to the manifold of SPD matrices. Note that SPD matrices can be identified with the space of zero mean Gaussian distributions, which is a statistical manifold. Several experimental results are presented that demonstrate the competitive performance of our proposed algorithm in comparison to the state-of-the-art.

The rest of the paper is organized as follows: in Section 2, we first present the conventional DL and SC problem formulation in vector spaces and motivate the need for a new formulation of the DL and SC problem on Riemannian manifolds. This is followed by a brief summary of relevant mathematical background on statistical manifolds. Following this, we summarize the mathematical results in this paper and then present the details along with our algorithm for the DL and SC problem. In Section 3, we present several experimental results and comparisons to the state-of-the-art. Finally, in Section 4, we draw conclusions.

## 2 An Information Theoretic Formulation

In the traditional SC problem, a set of data vectors $x_1, \dots, x_n \in \mathbb{R}^d$ and a collection of atoms $a_1, \dots, a_r \in \mathbb{R}^d$ are given. The goal is to express each $x_i$ as a sparse linear combination of the atoms. Let $A$ be the (overcomplete) dictionary matrix of size $d \times r$ whose $j$-th column is $a_j$. Let $W$ be an $r \times n$ matrix whose $i$-th column $w_i$ consists of the coefficients of the sparse linear combination for $x_i$. In the DL and SC problem, the goal is to minimize the following objective function:

$$\min_{A,\,w_1,\dots,w_n} \sum_{i=1}^{n} \|x_i - A w_i\|^2 + \mathrm{Sp}(w_i), \tag{1}$$

Here, $\mathrm{Sp}(\cdot)$ denotes the sparsity promoting term, which can be either an $\ell_0$-norm or an $\ell_1$-norm constraint. Since both the dictionary $A$ and the coefficient matrix $W$ are unknown in the above optimization problem, it is a hard optimization problem. As this problem is computationally intractable when the sparsity promoting term is an $\ell_0$-norm constraint, most existing approaches use a convex relaxation, replacing the $\ell_0$-norm with an $\ell_1$-norm when performing the sparse coding. Now, instead of the traditional DL & SC setup where the data as well as the atoms are vector valued, we address the problem where each data point and each atom is a probability density, i.e., an element of a statistical manifold (see the formal definition below). In this paper, we present a novel DL and SC framework for data residing on a statistical manifold. Before delving into the details, we briefly introduce some pertinent mathematical concepts from differential geometry and statistical manifolds and refer the reader to [10, 4] for details.
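To make the convex relaxation concrete, the following sketch solves the $\ell_1$-relaxed sparse coding step for a fixed dictionary using iterative shrinkage-thresholding (ISTA). This is our own illustrative implementation of the classical relaxation, not the K-SVD algorithm of [1]; all variable names and parameter values are ours:

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(x, A, lam=0.05, n_iter=500):
    """Minimize 0.5*||x - A w||^2 + lam*||w||_1 for a fixed dictionary A
    by iterative shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - x)           # gradient of the data-fidelity term
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)             # unit-norm atoms
w_true = np.zeros(50)
w_true[[3, 17, 41]] = [1.0, -0.5, 0.8]     # a 3-sparse ground-truth code
x = A @ w_true
w = ista_sparse_code(x, A)
print("nonzeros:", np.count_nonzero(np.abs(w) > 1e-3))
```

Note that here sparsity is obtained only because the $\ell_1$ penalty is explicitly present; the formulation proposed in this paper dispenses with such a penalty.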

### 2.1 Statistical Manifolds: Mathematical Preliminaries

Let $M$ be a smooth ($C^\infty$) manifold [10]. We say that $M$ is $n$-dimensional if it is locally Euclidean of dimension $n$, i.e., locally diffeomorphic to $\mathbb{R}^n$. Equipped with a Riemannian metric $g$ and an affine connection $\nabla$, the triplet $(M, g, \nabla)$ is called a statistical manifold whenever both $\nabla$ and its dual connection $\nabla^*$ (with respect to $g$) are torsion free [10, 4].

A point on an $n$-dimensional statistical manifold (from here on, we will use the symbol $M$ to denote a statistical manifold unless specifically mentioned otherwise) can be identified with a (smooth) probability distribution function on a measurable topological space $X$ [48, 4]. Here, each distribution function can be parametrized using $n$ real variables $(\theta^1, \dots, \theta^n)$. So, an open subset of a statistical manifold $M$ is a collection of probability distribution functions on $X$, and the chart map is the mapping from a distribution to its parameters in $\Theta \subset \mathbb{R}^n$. Let $\mu$ be a $\sigma$-finite additive measure defined on a $\sigma$-algebra of subsets of $X$. Let $p(x; \theta)$ be the density with respect to the measure $\mu$, and assume the densities to be smooth functions of $\theta$. Now, having this topological structure, we can define a Riemannian metric as follows. Let $\partial_j = \partial/\partial\theta^j$; then a Riemannian metric $g$ can be defined by $g_{jk}(\theta) = E_\theta\!\left[\partial_j \log p(x;\theta)\, \partial_k \log p(x;\theta)\right]$, where $E_\theta$ denotes the expectation with respect to $p(\cdot\,;\theta)$. In general, $(g_{jk})$ is symmetric and positive semi-definite. We can make it positive definite by assuming the functions $\partial_1 \log p, \dots, \partial_n \log p$ to be linearly independent. This metric is called the Fisher-Rao metric [38, 3] on $M$.

### 2.2 Summary of the mathematical results

In the next section, we propose an alternative formulation of the DL and SC problem. We first state a few theorems as background material that will be used subsequently. Then, we define the new objective function for the DL and SC problem posed on a statistical manifold in Section 2.3.1. Our key mathematical results are stated in Theorems 2.6 and 2.7 and in Corollaries 2.6.1 and 2.6.2, respectively. Using these results, we show that our DL & SC framework, which does not have an explicit sparsity constraint, yields sparse codes. Then, we extend our DL and SC framework to the manifold of SPD matrices, $P_n$, in Section 2.3.2.

### 2.3 Detailed mathematical results

Let the manifold of probability densities, hereafter denoted by $M$, be an $n$-dimensional statistical manifold, i.e., each point of $M$ is a probability density. We will use the following notation in the rest of the paper.

• Let $G = \{g_1, \dots, g_r\}$ be a dictionary with atoms $g_j$, $j = 1, \dots, r$, where each $g_j \in M$.

• Let $F = \{f_1, \dots, f_N\} \subset M$ be a set of data points.

• And let $w_{ij}$ be the nonnegative weight corresponding to the $i$-th data point and the $j$-th atom, $i = 1, \dots, N$ and $j = 1, \dots, r$.

Note that here we assume each density $f_i$ or $g_j$ is parameterized by $\theta$. There are many ways to measure the discrepancy between probability densities. One can choose an intrinsic metric and the corresponding distance on a statistical manifold to measure this discrepancy, such as the Fisher-Rao metric [38, 3], which however is expensive to compute. In this paper, we choose an extrinsic measure, namely the non-negative divergence measure called the Kullback-Leibler (KL) divergence. The KL divergence [14] between two densities $f_1$ and $f_2$ on $M$ is defined by

$$\mathrm{KL}(f_1, f_2) = \int f_1(x) \log \frac{f_1(x)}{f_2(x)}\, dx \tag{2}$$

The Hessian of the KL-divergence is the Fisher-Rao metric defined earlier. In other words, the KL-divergence between two nearby probability densities can be approximated by half of the squared geodesic distance (induced by the Fisher-Rao metric) between them [3]. The KL-divergence is not a distance, as it is neither symmetric nor does it satisfy the triangle inequality. It is a special case of a broader class of divergences called the $f$-divergences, as well as of the well known class of Bregman divergences. We refer the reader to [6, 30] for more details in this context. Given a set of densities $F = \{f_i\}_{i=1}^N$, the KL divergence from $F$ to a density $f$ can be defined by

$$\mathrm{KL}(F, f) = \max_i \mathrm{KL}(f_i, f). \tag{3}$$
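For discrete densities, the integral in Eq. 2 reduces to a finite sum and Eq. 3 to a maximum over the atoms. A minimal sketch (our own illustrative code, with sums in place of integrals and strictly positive entries assumed):

```python
import numpy as np

def kl(f1, f2):
    """KL divergence (Eq. 2) between discrete densities with positive entries."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return float(np.sum(f1 * np.log(f1 / f2)))

def kl_set(F, f):
    """KL divergence from a set of densities F to f (Eq. 3): max_i KL(f_i, f)."""
    return max(kl(fi, f) for fi in F)

f1 = np.array([0.7, 0.2, 0.1])
f2 = np.array([0.4, 0.4, 0.2])
print(kl(f1, f2), kl(f2, f1))   # nonnegative, but not symmetric
```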

We can define the KL-center of $F$, denoted by $f_m(F)$, by

$$f_m(F) = \operatorname*{argmin}_f \mathrm{KL}(F, f). \tag{4}$$

The symmetrized KL divergence, also called the Jensen-Shannon divergence (JSD) [14], between two densities $f_1$ and $f_2$ is defined by

$$\mathrm{JSD}(f_1, f_2) = \tfrac{1}{2}\mathrm{KL}(f_1, f_2) + \tfrac{1}{2}\mathrm{KL}(f_2, f_1). \tag{5}$$

In general, given the set $\{f_i\}_{i=1}^N$, define a mixture of densities as $f = \sum_i \alpha_i f_i$, with $\alpha_i \ge 0$ and $\sum_i \alpha_i = 1$. It is evident that the set of weight vectors $\alpha = (\alpha_1, \dots, \alpha_N)$ forms a simplex, which is denoted here by $\Delta_N$. Then, the JSD of the set $\{f_i\}$ with the mixture weights $\{\alpha_i\}$ is defined as

$$\mathrm{JSD}(\{f_i\}) = H\!\Big(\sum_i \alpha_i f_i\Big) - \sum_i \alpha_i H(f_i), \tag{6}$$

where $H(f)$ is the Shannon entropy of the density $f$. It is easy to see the following Lemma.

###### Lemma 2.1.

$\mathrm{JSD}(\{f_i\})$ is concave in $\alpha$, and hence the JSD attains its minimum at an extreme point of the simplex $\Delta_N$.

###### Proof.

We refer the reader to [44] for a proof of this Lemma. ∎
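Lemma 2.1 is easy to verify numerically for discrete densities. The following sketch (our own illustrative code) evaluates Eq. 6 at an extreme point and at an interior point of the simplex:

```python
import numpy as np

def entropy(f):
    """Shannon entropy of a discrete density with positive entries."""
    f = np.asarray(f, float)
    return float(-np.sum(f * np.log(f)))

def jsd(fs, alphas):
    """Generalized JSD (Eq. 6): H(sum_i a_i f_i) - sum_i a_i H(f_i)."""
    fs = np.asarray(fs, float)
    a = np.asarray(alphas, float)
    mix = a @ fs                                   # mixture density
    return entropy(mix) - float(np.dot(a, [entropy(f) for f in fs]))

fs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.3, 0.6])]
# at an extreme point of the simplex the JSD vanishes (its minimum) ...
print(jsd(fs, [1.0, 0.0]))
# ... while it is strictly positive in the interior, by concavity of entropy
print(jsd(fs, [0.5, 0.5]))
```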

In [44], it was shown that one can compute the KL-center of $F$, $f_m(F)$, in Equation 4 using the following theorem:

###### Theorem 2.2.

The KL-center of $F$, $f_m(F)$, is given by

$$f_m(F) = \sum_i \hat{\alpha}_i f_i \quad \text{where} \quad \hat{\alpha} = \operatorname*{argmax}_{\alpha \in \Delta_N} \mathrm{JSD}(\{f_i\}).$$
###### Proof.

We refer the reader to [44] for a proof of this theorem. ∎

Observe that the $\mathrm{KL}(F, f)$ defined in Eq. 3 has the positive-definiteness property, i.e., $\mathrm{KL}(F, f) \ge 0$ for any $F$ and $f$, and $\mathrm{KL}(F, f) = 0$ if and only if $f_i = f$ for all $i$. Both of these properties are evident from the definition of the KL divergence between two densities.

Coding theory interpretation: It should be noted that the above result is the same as the well known redundancy-capacity theorem of coding theory presented in [20, 15, 39, 2]. The theorem establishes the equivalence of: the minimax excess risk (i.e., redundancy) for estimating a parameter from a family/class of sources, the Bayes risk associated with the least favorable prior, and the channel capacity when viewing statistical modeling as communication via a noisy channel. In [2], a stronger result was shown, namely that the capacity is also a lower bound for "most" sources in the class. The results in [44] however approached this problem from a geometric viewpoint, i.e., one of finding barycenters of probability distributions using the KL-divergence as the "distance" measure. Our work presented here takes a similar geometric viewpoint to the problem at hand, namely the DL & SC problem.
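Given this equivalence, the maximizing weights $\hat{\alpha}$ of Theorem 2.2 are exactly the capacity-achieving input weights of the discrete channel whose rows are the densities $f_i$, so they can be computed with the classical Blahut-Arimoto iteration. The sketch below is our own illustration of this connection, not the authors' algorithm:

```python
import numpy as np

def blahut_arimoto(F, n_iter=200):
    """Capacity-achieving weights for the channel whose rows are the
    densities f_i; by the redundancy-capacity equivalence these weights
    maximize JSD({f_i}), and the KL-center is the resulting mixture."""
    F = np.asarray(F, float)
    alpha = np.full(F.shape[0], 1.0 / F.shape[0])
    for _ in range(n_iter):
        q = alpha @ F                                         # current mixture
        d = np.array([np.sum(p * np.log(p / q)) for p in F])  # KL(f_i, q)
        alpha = alpha * np.exp(d)                             # multiplicative update
        alpha /= alpha.sum()
    return alpha, alpha @ F

F = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
alpha, center = blahut_arimoto(F)
print(alpha)   # symmetric configuration -> uniform weights
```

At convergence, the atoms with positive weight are all at the same KL "distance" from the center, mirroring the minimax characterization of Eq. 4.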

Moving on, we now define a parametrized family of KL divergences, denoted by $\mathrm{KL}_p$, as:

$$\mathrm{KL}_p(F, f) = \big\| \left( \mathrm{KL}(f_1, f), \cdots, \mathrm{KL}(f_N, f) \right)^t \big\|_p, \tag{7}$$

where $\|\cdot\|_p$ is the $\ell_p$ norm of the vector and $1 \le p \le \infty$. It is easy to prove the following property of $\mathrm{KL}_p$ in the following Lemma.

###### Lemma 2.3.

$\mathrm{KL}_p$ as defined in Eq. 7 is a well-defined statistical divergence for any $1 \le p \le \infty$. Furthermore, the KL divergence $\mathrm{KL}(F, f)$ as defined in Eq. 3 is a special case of $\mathrm{KL}_p$ with $p = \infty$.

Without any loss of generality, we will assume $p = 1$ and refer to the $\mathrm{KL}_1$-center simply as the KL-center for the rest of the paper (unless mentioned otherwise). Now, given the set of densities $F$ and a set of weights $\{\alpha_i\}$ with $\alpha_i \ge 0$ and $\sum_i \alpha_i = 1$, we can define the weighted KL-center, denoted by $f_m(F, \{\alpha_i\})$, as follows:

$$f_m(F, \{\alpha_i\}) = \operatorname*{argmin}_f \sum_i \alpha_i\, \mathrm{KL}(f_i, f). \tag{8}$$

We would like to point out, however, that it is easy to see that the $\mathrm{KL}_\infty$-center cannot be generalized in this manner to a corresponding weighted KL-center. The weighted KL-center defined above has the following nice property:

###### Lemma 2.4.

The weighted KL-center as defined in Eq. 8 is a generalization of the KL-center in Eq. 4 (with $p = 1$). The KL-center can be obtained from the weighted KL-center by substituting $\alpha_i = 1/N$ for all $i$.

###### Theorem 2.5.

Given $F$ and $\{\alpha_i\}$ as above, the weighted KL-center is the mixture $f_m(F, \{\alpha_i\}) = \sum_i \alpha_i f_i$.

###### Proof.

For simplicity, assume that each $f_i$ is discrete and that the underlying random variable takes on finitely many values $x_1, x_2, \dots$. Then, consider the minimization of $\sum_i \alpha_i \mathrm{KL}(f_i, f)$ with respect to $f$, subject to the constraint that $f$ is a density, i.e., for the discrete case, $\sum_j f(x_j) = 1$. By using a Lagrange multiplier $\lambda$, we get,

$$\frac{\partial}{\partial f(x_j)} \Big\{ \sum_i \alpha_i\, \mathrm{KL}(f_i, f) + \lambda \Big( \sum_j f(x_j) - 1 \Big) \Big\} = 0, \ \forall j$$
$$\implies \lambda - \sum_i \alpha_i \frac{f_i(x_j)}{f(x_j)} = 0, \ \forall j \implies f(x_j) = \frac{\sum_i \alpha_i f_i(x_j)}{\lambda}, \ \forall j.$$

Now, summing over $j$ and using $\sum_j f(x_j) = 1$ together with $\sum_i \alpha_i = 1$, we get $\lambda = 1$. Thus, $f_m(F, \{\alpha_i\}) = \sum_i \alpha_i f_i$. We can easily extend this to the case of continuous densities by replacing summation with integration and obtain a similar result. ∎
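Theorem 2.5 can be sanity-checked numerically: for discrete densities, the mixture $\sum_i \alpha_i f_i$ should attain a value of the weighted objective no larger than that of any other candidate density. An illustrative check (our own code):

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(f1, f2):
    """KL divergence between discrete densities with positive entries."""
    return float(np.sum(f1 * np.log(f1 / f2)))

# two discrete densities and mixture weights
fs = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3]])
alpha = np.array([0.7, 0.3])

f_mix = alpha @ fs                        # claimed weighted KL-center
obj = lambda f: sum(a * kl(fi, f) for a, fi in zip(alpha, fs))

# the mixture should score no worse than any other candidate density
candidates = rng.dirichlet(np.ones(3), size=1000)
best = min(obj(f) for f in candidates)
print(obj(f_mix) <= best + 1e-12)         # True
```

This matches the Lagrangian computation above: the objective equals a constant minus the cross-entropy of $f$ under the mixture, which is minimized at $f = \sum_i \alpha_i f_i$.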

#### 2.3.1 DL and SC on a statistical manifold

Now, we will formulate the DL and SC problems on a statistical manifold. The idea is to express each data point $f_i$ as a sparse weighted combination of the dictionary atoms $g_1, \dots, g_r$. Given the above hypothesis, our objective function is given by:

$$\operatorname*{argmin}_{G^*, W^*} E = \sum_{i=1}^{N} \mathrm{KL}(f_i, \hat{f}_i) \tag{9}$$
$$\text{subject to } w_{ij} \ge 0, \ \forall i, j \tag{10}$$
$$\sum_j w_{ij} = 1, \ \forall i. \tag{11}$$

where $\hat{f}_i$ is the weighted KL-center of the dictionary $G$ with weights $\{w_{ij}\}_{j=1}^r$, i.e., by Theorem 2.5, $\hat{f}_i = \sum_j w_{ij} g_j$. The constraints $w_{ij} \ge 0$ and $\sum_j w_{ij} = 1$ are required to make $\hat{f}_i$ a probability density. Note that we can view $\hat{f}_i$ as a density reconstructed from the dictionary elements $G$ and the weights $\{w_{ij}\}$. We will now prove one of our key results, namely, that the minimization of the above objective function with respect to $W$ yields a sparse set of weights.

###### Theorem 2.6.

Let $G^*$ and $W^*$ be the solution of the objective function in Equation 9. Then,

$$(\forall j), \quad \mathrm{KL}(f_i, g_j) \ge r_i, \quad \text{where } r_i = \sum_k w_{ik}\, \mathrm{KL}(f_i, g_k).$$
###### Proof.

Consider the random variables $X_1, \dots, X_N$ with the respective densities $f_1, \dots, f_N$. Since each dictionary element $g_j$ is "derived" from $\{f_i\}$, we can view each $g_j$ as being associated with a random variable $Y_j$, i.e., $Y_j$ is a transformation of the random variables $X_1, \dots, X_N$. We now have,

$$E = \sum_{i=1}^{N} \mathrm{KL}\Big(f_i, \sum_{j=1}^{r} w_{ij} g_j\Big) = \sum_{i=1}^{N} \Big[ \int f_i(x) \log f_i(x)\, dx - \int f_i(x) \log\Big(\sum_j w_{ij} g_j(x)\Big) dx \Big]$$

Using Jensen’s inequality we have,

$$E \le \sum_{i=1}^{N} \Big[ \int f_i(x) \log f_i(x)\, dx - \int f_i(x) \sum_j w_{ij} \log g_j(x)\, dx \Big] = \sum_{i=1}^{N} E_{X_i}\Big[ \log f_i - \sum_j w_{ij} \log g_j \Big]$$

where $E_{X_i}[\cdot]$ denotes the expectation with respect to the density $f_i$ of $X_i$. So,

$$\begin{aligned} E &\le \sum_{i=1}^{N} E_{X_i}[\log f_i] - \sum_{i=1}^{N} \sum_j w_{ij}\, E_{X_i}[\log g_j] \\ &= \sum_{i=1}^{N} \sum_{j=1}^{r} w_{ij}\, E_{X_i}[\log f_i] - \sum_{i=1}^{N} \sum_{j=1}^{r} w_{ij}\, E_{X_i}[\log g_j] \\ &= \sum_{i=1}^{N} \sum_j w_{ij}\, E_{X_i}[\log f_i - \log g_j] \\ &= \sum_{i=1}^{N} \sum_j w_{ij}\, \mathrm{KL}(f_i, g_j) \end{aligned}$$

So, $E \le \widetilde{E} := \sum_{i,j} w_{ij}\, \mathrm{KL}(f_i, g_j)$. Since both $E$ and $\widetilde{E}$ attain their minima at the same value (equal to $0$), we can minimize $\widetilde{E}$ instead of $E$. Using a Lagrange multiplier $r_i$ for each constraint $\sum_j w_{ij} = 1$, and $\gamma_{ij}$ for each constraint $w_{ij} \ge 0$, we get the following function

$$\sum_{i,j} w_{ij}\, \mathrm{KL}(f_i, g_j) + \sum_{i=1}^{N} r_i \Big(1 - \sum_j w_{ij}\Big) - \sum_{i,j} \gamma_{ij} w_{ij}$$

We minimize the above objective function and add the KKT conditions

$$\gamma_{ij} w_{ij} = 0, \quad \gamma_{ij} \ge 0$$

to get,

$$\mathrm{KL}(f_i, g_j) = \begin{cases} r_i + \gamma_{ij}, & \text{if } w_{ij} = 0 \\ r_i, & \text{if } w_{ij} > 0 \end{cases} \tag{12}$$

As each $\gamma_{ij} \ge 0$, this concludes the proof. ∎

A straightforward Corollary of the above theorem is as follows:

###### Corollary 2.6.1.

The objective function $E$ is bounded above by $\sum_{i=1}^{N} r_i$, i.e., $E \le \sum_{i=1}^{N} r_i$.

###### Proof.

From Theorem 2.6, we know that $E \le \sum_{i,j} w_{ij}\, \mathrm{KL}(f_i, g_j)$. From Equation 12, we now get $\sum_{i,j} w_{ij}\, \mathrm{KL}(f_i, g_j) = \sum_i r_i \sum_j w_{ij} = \sum_i r_i$. Thus the Corollary holds. ∎

We can see that the dictionary elements $g_j$ for which the associated weights $w_{ij}$ are positive are all exactly at the same KL "distance" $r_i$ from the density $f_i$. Corollary 2.6.1 implies that solving the objective function in Equation 9 yields a "tight cluster" structure around each $f_i$, as minimizing $E$ amounts to minimizing each $r_i$.
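The weight-concentration behavior predicted by Theorem 2.6 can be observed on a toy discrete example. The sketch below minimizes $\mathrm{KL}(f, \sum_j w_j g_j)$ over the simplex by exponentiated-gradient (mirror) descent, which is our own choice of optimizer and not the authors' algorithm; note that no sparsity penalty appears anywhere:

```python
import numpy as np

def kl(f1, f2):
    """KL divergence between discrete densities with positive entries."""
    return float(np.sum(f1 * np.log(f1 / f2)))

def sparse_code(f, G, n_iter=3000, step=0.5):
    """Minimize KL(f, sum_j w_j g_j) over the probability simplex by
    exponentiated-gradient (mirror) descent; no sparsity penalty is used."""
    w = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(n_iter):
        mix = w @ G
        grad = -np.array([np.sum(f * g / mix) for g in G])  # d/dw_j of KL(f, mix)
        w = w * np.exp(-step * grad)    # multiplicative update stays on simplex
        w /= w.sum()
    return w

G = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]])   # three dictionary atoms (rows)
f = G[1]                          # a data density coinciding with one atom
w = sparse_code(f, G)
print(np.round(w, 3))             # weight concentrates on the matching atom
```

The recovered code places essentially all of its mass on the single matching atom, consistent with the "tight cluster" interpretation above.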

###### Corollary 2.6.2.

Let $f_i$ be well approximated by a single dictionary element $g_l$. Further assume that $g_l$ is a convex combination of a set of dictionary atoms, i.e., $g_l = \sum_{k=1}^{r_1} w_{ij_k} g_{j_k}$. Without loss of generality (WLOG), assume that the $g_{j_k}$ are not all identical to $g_l$. Let $r_i = \mathrm{KL}(f_i, g_l)$ and $\hat{r}_i = \sum_{k=1}^{r_1} w_{ij_k}\, \mathrm{KL}(f_i, g_{j_k})$. Then, $r_i < \hat{r}_i$.

###### Proof.

Using the hypothesis in Theorem 2.6, we have,

$$\begin{aligned} r_i &= \int f_i(x) \log f_i(x)\, dx - \int f_i(x) \log g_l(x)\, dx \\ &< \int f_i(x) \log f_i(x)\, dx - \int f_i(x) \sum_{k=1}^{r_1} w_{ij_k} \log g_{j_k}(x)\, dx \\ &= \sum_{k=1}^{r_1} w_{ij_k}\, \mathrm{KL}(f_i, g_{j_k}) = \hat{r}_i \end{aligned}$$

Hence, $r_i < \hat{r}_i$. Using Corollary 2.6.1, we can see that, in order to represent $f_i$, the objective function minimizes $\sum_i r_i$. Thus, using Corollary 2.6.1, we can say that a sparse set of weights, i.e., the one corresponding to the single atom $g_l$, is preferable over a set of non-zero weights corresponding to the set of atoms $\{g_{j_k}\}$. ∎

Now, we will state and prove the second key result, namely, a theorem which states that our proposed algorithm yields a non-zero number of atoms whose corresponding weights are zero, i.e., $w_{ij} = 0$ for some $j$.

###### Theorem 2.7.

Let $Z_i = \{j : w^*_{ij} = 0\}$. Then, with probability $1$, the cardinality of $Z_i$ is positive, i.e., $|Z_i| \ge 1$, for all $i$.

###### Proof.

Let be the probability measure on . let denote a closed ball of radius centered at , we assume that the measure is bounded, i.e., constants, and such that, , for all . Let us assume, is an -separated set for some . Furthermore, assume that

has finite variance, i.e.,

such that , , we will call this radius (closed) ball as the data ball. Let the optimum value of be , i.e., (see Figure 1). Now, for a given , consider , from Theorem 2.6, we know that if for some , , then, , else, .

Thus, can be rewritten as, . Let, be the number of s in . Then,

follows a Poisson distribution with rate

. Hence, it is easy to see that . Now,

 E[N(r∗,C)] =μ∗(B(f,ϵ/2))(2(C−r∗)ϵ)n ≥(κ1ϵ2)n(2(C−r∗)ϵ)n =(κ1(C−r∗))n

Since we are reconstructing as a convex combination of s, the only case when occurs when all s lie on the boundary of the data ball. Let , clearly, . Hence, with probability , . Now, as , we can say with probability , . Since is arbitrary, the claim holds. This comletes the proof. ∎

Theorem 2.7 states that our proposed algorithm yields $s$-sparse codes, for some $s < r$.

Comment on using the Hellinger distance: On the space of densities, one can define the Hellinger distance (denoted by $d_{L_2}$) as follows. Given $f_1, f_2$, one can use the square-root parametrization to map these densities onto the unit Hilbert sphere; let the mapped points be denoted by $\sqrt{f_1}, \sqrt{f_2}$. Then, one can define the distance between $\sqrt{f_1}$ and $\sqrt{f_2}$ (the Hellinger distance between $f_1$ and $f_2$) as $d_{L_2}(f_1, f_2) = \|\sqrt{f_1} - \sqrt{f_2}\|_2$. One can easily see that $d_{L_2}^2(f_1, f_2) = 2 - 2\int \sqrt{f_1(x) f_2(x)}\, dx$. This metric is the chordal metric on the hypersphere, and hence an extrinsic metric. We now replace the KL divergence by the Hellinger distance in our objective function in Eq. 9. The modified objective function is given in Eq. 13.

$$\operatorname*{argmin}_{G^*, W^*} E = \sum_{i=1}^{N} d_{L_2}^2(f_i, \hat{f}_i) \tag{13}$$
$$\text{subject to } w_{ij} \ge 0, \ \forall i, j \tag{14}$$
$$\sum_j w_{ij} = 1, \ \forall i. \tag{15}$$

One can easily show that the above sparsity analysis also holds when the KL divergence is replaced by the Hellinger distance (as done in Eq. 13). The following theorem (stated without proof) formalizes this result.

###### Theorem 2.8.

Let $G^*$ and $W^*$ be the solution of the objective function in Equation 13. Then,

$$(\forall j), \quad d_{L_2}^2(f_i, g_j) \ge r_i, \quad \text{where } r_i = \sum_k w_{ik}\, d_{L_2}^2(f_i, g_k)$$

Then, analogous to Corollary 2.6.2, it can easily be shown that the resulting set of weights is sparse.
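The square-root parametrization and the resulting chordal form of the Hellinger distance are easy to verify numerically (our own illustrative code):

```python
import numpy as np

f1 = np.array([0.5, 0.3, 0.2])
f2 = np.array([0.2, 0.3, 0.5])

# square-root parametrization: densities land on the unit sphere
s1, s2 = np.sqrt(f1), np.sqrt(f2)

# Hellinger distance = chordal (extrinsic) metric between the mapped points
d = np.linalg.norm(s1 - s2)

# equivalent closed form: d^2 = 2 - 2 * Bhattacharyya coefficient
d_sq = 2.0 - 2.0 * np.sum(np.sqrt(f1 * f2))
print(d, np.sqrt(d_sq))   # the two expressions agree
```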

#### 2.3.2 DL and SC on the manifold of SPD matrices

Let the manifold of $n \times n$ SPD matrices be denoted by $P_n$. We will use the following notation for the rest of the paper. On $P_n$,

• let $C = \{A_1, \dots, A_r\}$ be a dictionary with atoms $A_j$, $j = 1, \dots, r$, where each $A_j \in P_n$;

• let $X = \{X_1, \dots, X_N\} \subset P_n$ be a set of data points;

• let $w_{ij}$ be the nonnegative weight corresponding to the $i$-th data point and the $j$-th atom, $i = 1, \dots, N$ and $j = 1, \dots, r$.

We now extend the DL and SC formulation to $P_n$. Note that a point $X \in P_n$ can be identified with a Gaussian density with zero mean and covariance matrix $X$. Hence, it is natural to extend our information theoretic DL & SC framework from a statistical manifold to $P_n$. Recall that the symmetrized KL divergence between two densities $f_1$ and $f_2$ can be defined via the JSD in Equation 5. Using the square root of the JSD, one can define a "distance" between two SPD matrices on $P_n$ (the quotes on distance are used because the JSD does not satisfy the triangle inequality required of a distance measure). Similar to Equation 8, we can analogously define the symmetrized weighted KL-center as the minimizer of the weighted sum of symmetrized KL divergences. Given $X = \{X_i\}_{i=1}^N$, we can define the symmetrized KL-center of $X$ as follows [49]

$$M_{KL}(X) = \sqrt{B^{-1}}\, \sqrt{\sqrt{B}\, A\, \sqrt{B}}\, \sqrt{B^{-1}}$$

where $A = \sum_i X_i$ and $B = \sum_i X_i^{-1}$. We can extend the above result to define the symmetrized weighted KL-center via the following Lemma.

###### Lemma 2.9.

On $P_n$, given $X = \{X_i\}_{i=1}^N$ with weights $\{w_i\}$, the symmetrized weighted KL-center, $M_{KL}(X, \{w_i\})$, is defined as

$$M_{KL}(X, \{w_i\}) = \sqrt{B^{-1}}\, \sqrt{\sqrt{B}\, A\, \sqrt{B}}\, \sqrt{B^{-1}}$$

where $A = \sum_i w_i X_i$ and $B = \sum_i w_i X_i^{-1}$.

Analogous to Equation 9, we can define our formulation for DL and SC on $P_n$ as follows:

$$\operatorname*{argmin}_{C^*, W^*} E = \sum_{i=1}^{N} J(X_i, \hat{X}_i) \tag{16}$$
$$\text{where } \hat{X}_i = M_{KL}\big(C, \{w_{ij}\}_{j=1}^{r}\big) \tag{17}$$
$$\text{subject to } w_{ij} \ge 0, \ \forall i, j \tag{18}$$
$$\sum_j w_{ij} = 1, \ \forall i. \tag{19}$$

Here $J$ is the symmetrized KL divergence, also known as the J-divergence, which for zero mean Gaussians with covariances $X$ and $\hat{X}$ is given by:

$$J(X, \hat{X}) = \frac{1}{4} \operatorname{tr}\big[ X^{-1} \hat{X} + \hat{X}^{-1} X - 2 I_n \big]$$
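A small sketch of the ingredients of the $P_n$ formulation, assuming the closed form of Lemma 2.9 with $A = \sum_i w_i X_i$ and $B = \sum_i w_i X_i^{-1}$; this is our own illustrative code, and the eigendecomposition-based matrix square root is our implementation choice:

```python
import numpy as np

def sqrtm_spd(X):
    """Symmetric square root of an SPD matrix via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(X)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def j_div(X, Y):
    """J-divergence between N(0,X) and N(0,Y): (1/4) tr(X^-1 Y + Y^-1 X - 2 I)."""
    n = X.shape[0]
    return 0.25 * float(np.trace(np.linalg.inv(X) @ Y + np.linalg.inv(Y) @ X) - 2 * n)

def kl_center(Xs, w):
    """Weighted symmetrized KL-center (Lemma 2.9):
    M = B^{-1/2} (B^{1/2} A B^{1/2})^{1/2} B^{-1/2},
    with A = sum_i w_i X_i and B = sum_i w_i X_i^{-1}."""
    A = sum(wi * Xi for wi, Xi in zip(w, Xs))
    B = sum(wi * np.linalg.inv(Xi) for wi, Xi in zip(w, Xs))
    Bs = sqrtm_spd(B)
    Bsi = np.linalg.inv(Bs)
    return Bsi @ sqrtm_spd(Bs @ A @ Bs) @ Bsi

rng = np.random.default_rng(0)
Xs = []
for _ in range(3):
    L = rng.standard_normal((2, 2))
    Xs.append(L @ L.T + np.eye(2))       # random SPD matrices on P_2
w = [0.5, 0.3, 0.2]
M = kl_center(Xs, w)
print(np.linalg.eigvalsh(M).min() > 0)   # the center is again SPD
```

One can check that the returned $M$ satisfies the stationarity condition $M B M = A$ of the weighted J-divergence sum, which is what makes the closed form above the minimizer.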

Now, we present an algorithm for DL and SC on $P_n$ that will henceforth be labeled the information theoretic dictionary learning and sparse coding (SDL) algorithm. We use an alternating optimization procedure, i.e., we first learn the weights $W$ with the dictionary $C$ held fixed, and then learn $C$ with $W$ held fixed. We use the well known Nesterov accelerated gradient descent [8], adapted to Riemannian manifolds, for the optimization. The algorithm is summarized in Algorithm block 1.

In the algorithm, after the initialization steps, we perform alternating optimization between $W$ and $C$. The weights $W$ are updated using accelerated gradient descent. For the dictionary, we use Riemannian gradient descent, mapping the gradient vector onto the manifold using the Riemannian Exponential map ($\mathrm{Exp}$) [10]. We then update $C$ using Riemannian accelerated gradient descent steps by first lifting the iterate onto the tangent space (using the Riemannian inverse Exponential map, $\mathrm{Exp}^{-1}$) and then mapping it back onto the manifold using the $\mathrm{Exp}$ map. We then recompute the error using the updated $C$ and $W$, and iterate.

## 3 Experimental Results

In this section, we present experimental results on several real data sets demonstrating the performance of our algorithm, SDL. We present two sets of experiments showing the performance in terms of (1) reconstruction error and achieved sparsity on a statistical manifold and (2) classification accuracy and achieved sparsity on the manifold of SPD matrices, $P_n$. Though the objective of a DL and SC algorithm is to minimize reconstruction error, due to the common trend (in the literature) of using classification accuracy as a measure, we report classification accuracy on popular datasets for data on $P_n$. But since the main thrust of the paper is a novel DL and SC algorithm on a statistical manifold, we present reconstruction error experiments in support of the algorithm's performance. All the experimental results reported here were obtained on a desktop with a single 3.33 GHz Intel i7 CPU and 24 GB RAM. We did not compare our work with the algorithm proposed in [52] since, for moderately large data, their publicly available code makes comparisons computationally infeasible.

### 3.1 Experimental results on the statistical manifold

In order to demonstrate the performance of SDL on the MNIST data [26], we randomly chose a fixed number of images from each of the classes. We then represent each image as a probability vector as follows: we consider the image graph, treat the pixel location as a random vector, and take its probability mass function (p.m.f.) to be defined by the image. Each image is thus mapped to a probability vector (or discrete density), and we use our formulation of DL and SC to reconstruct the images. Note that the reconstruction is up to a scale factor.
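One plausible reading of this construction (the exact p.m.f. used by the authors is not reproduced in this extraction) is to normalize the pixel intensities so that each image becomes a discrete density:

```python
import numpy as np

def image_to_pmf(img, eps=1e-8):
    """Map a grayscale image to a discrete density by normalizing pixel
    intensities to sum to one (an assumed reading of the construction in
    the text, not the authors' exact p.m.f.). The eps offset keeps every
    entry strictly positive so that KL divergences are finite."""
    p = np.asarray(img, float).ravel() + eps
    return p / p.sum()

img = np.array([[0, 128], [255, 64]], dtype=np.uint8)
p = image_to_pmf(img)
print(p.sum())   # 1.0
```

Under this mapping, the "reconstruction up to a scale factor" remark is natural: normalization discards the overall intensity scale of the image.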

For comparison, we used two popular methods, namely (i) the K-SVD based method in [1] (we chose the number of atoms to be twice the number of classes and fixed the sparsity level) and (ii) the Log-Euclidean sparse coding (LE-SC) method [21]. Both of these methods assume that the data lie in a vector space. As the objective functions for these methods are different, we use the mean squared error (MSE) as the metric for reconstruction error. We also report the sparsity achieved by these methods.

From Table 1, it is evident that, though K-SVD and LE-SC perform better in terms of sparsity, SDL achieves the best reconstruction error while retaining sparse codes. Some reconstruction results are also shown in Fig. 2. The results clearly indicate that SDL gives "sharper" reconstructions compared to the two competing methods. This is because the formulation of SDL respects the geometry of the underlying data, while the other two methods do not.

### 3.2 Experimental results on $P_n$

Now, we demonstrate the effectiveness of our proposed method, SDL, compared to state-of-the-art algorithms on classification, using the sparse codes as features, for data on the manifold of SPD matrices. We report the classification accuracy to measure performance in the classification experiments. Moreover, we also report a measure of sparsity, which captures the percentage of the elements of $W^*$ that are zero. We performed comparisons with three state-of-the-art methods, namely, (i) Riemannian sparse coding for SPD matrices (Riem-SC) [13], (ii) sparse coding using the kernel defined by the symmetric Stein divergence (K-Stein-SC) [23], and (iii) Log-Euclidean sparse coding (LE-SC) [21]. For LE-SC, we used the highly cited SPAMS toolbox [34] to perform the DL and SC on the tangent space.

We tested our algorithm on three commonly used (in this context) and publicly available data sets, namely, (i) the Brodatz texture data [7], (ii) the Extended Yale B face data [27], and (iii) the ETH80 object recognition data [17]. The data sets are described in detail below. From each data set, we first extract SPD matrix valued features. Then, SDL learns the dictionary atoms and the sparse codes. For the Riem-SC and the K-Stein-SC, we used k-means on $P_n$ and took the cluster centers as the dictionary atoms. For the Log-Euclidean sparse coding, we used the Riemannian inverse Exponential map [10] at the Fréchet mean (FM) of the data and performed Euclidean DL and SC on the tangent space at the FM. For classification, we used an SVM classifier [42] with the sparse codes taken as the features. The SVM parameters are learned using a cross-validation scheme.

Brodatz texture data: This dataset contains texture images, and we used the same experimental setup as in [43]. Each image is first partitioned into non-overlapping blocks, and from each block we compute the covariance matrix of the per-pixel feature vectors, summing over the block. This covariance matrix is symmetric positive semidefinite; to make it an SPD matrix, we add $\epsilon I$ to it, where $\epsilon$ is a small positive real number. Thus, the covariance descriptor from each image lies on P_n. For this data, we consider each image as a class, resulting in a multi-class classification problem. As DLM is computationally very expensive, classification over all classes is infeasible using that method, hence we also randomly selected a subset of the texture images and performed classification on this reduced set of classes to facilitate the comparison. The number of dictionary atoms was chosen separately for the full and the reduced class settings.
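A minimal sketch of the covariance-descriptor construction with the $\epsilon I$ regularization described above; the feature layout and the value of $\epsilon$ are illustrative assumptions:

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Covariance descriptor of a block of per-pixel feature vectors.

    `features` is (num_pixels, d). The sample covariance is only
    positive semidefinite, so eps * I is added (the regularization
    described above) to obtain an SPD matrix; eps is an assumed small
    positive constant.
    """
    F = np.asarray(features, dtype=float)
    C = np.cov(F, rowvar=False)            # d x d, symmetric PSD
    return C + eps * np.eye(C.shape[0])    # symmetric positive definite

# A degenerate block (rank-deficient features) still yields an SPD matrix.
feats = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # 2nd dim = 2 * 1st
C = covariance_descriptor(feats)
print(np.all(np.linalg.eigvalsh(C) > 0))  # True
```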

Yale face data: The Extended Yale B face data set contains face images acquired from human subjects under varying pose and illumination conditions. We randomly fixed a pose and, for that pose, considered all the illuminations, yielding one face image per illumination for each subject. We used an experimental setup similar to the one described in [12]. From each face image, we construct a SIFT descriptor [31] and take its first few principal vectors; each image is thereby identified with a point on a Grassmann manifold of appropriate dimension. Then, inspired by the isometric mapping between the Grassmannian and P_n [24], we construct the covariance descriptor from the aforementioned principal vectors.
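The construction above can be sketched as follows: take the leading principal (left singular) vectors of the per-image descriptor matrix to obtain a Grassmann point, then form an SPD matrix from them. The projection-style embedding `U U^T + eps I`, the choice of `p`, and the regularizer are illustrative assumptions, not the paper's exact construction from [24]:

```python
import numpy as np

def grassmann_point_and_descriptor(descriptor_matrix, p=3, eps=1e-6):
    """From a matrix of local descriptors (one per column), take the
    first p principal (left singular) vectors, identifying the input
    with a point on a Grassmann manifold, then form an SPD descriptor
    via a projection-style embedding U U^T + eps * I (an assumption
    for illustration).
    """
    U, _, _ = np.linalg.svd(descriptor_matrix, full_matrices=False)
    Up = U[:, :p]                        # orthonormal basis: Grassmann point
    return Up, Up @ Up.T + eps * np.eye(U.shape[0])

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 20))         # hypothetical descriptor matrix
Up, S = grassmann_point_and_descriptor(D, p=3)
print(np.allclose(Up.T @ Up, np.eye(3)))  # True
```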

ETH80 object recognition data: This dataset contains several object categories, each with multiple instances imaged from different views. We first segment the objects from each image using the provided ground truth. We used both texture and edge features to construct the covariance matrix: for the texture features, we used three texture filters from [25], and in addition we used the image intensity gradient and the response magnitude of a Laplacian-of-Gaussian filter applied to the image.
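The per-pixel feature stack feeding the covariance matrix can be sketched as below. The exact Laws texture filters [25] used in the experiments are not reproduced here; generic derivative filters stand in for them, so treat the feature set as an assumption:

```python
import numpy as np
from scipy import ndimage

def pixel_features(image, sigma=1.0):
    """Stack per-pixel features analogous to those described above:
    intensity, gradient components and magnitude, and the magnitude of
    a Laplacian-of-Gaussian response. Generic Sobel/LoG filters are
    used as stand-ins for the paper's texture filter bank.
    """
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)                      # horizontal gradient
    gy = ndimage.sobel(img, axis=0)                      # vertical gradient
    log_mag = np.abs(ndimage.gaussian_laplace(img, sigma=sigma))
    feats = np.stack([img, gx, gy, np.hypot(gx, gy), log_mag], axis=-1)
    return feats.reshape(-1, feats.shape[-1])            # (num_pixels, d)

# Features of a small test image; their covariance (plus eps * I)
# gives the region's SPD descriptor.
img = np.arange(64, dtype=float).reshape(8, 8)
F = pixel_features(img)
print(F.shape)  # (64, 5)
```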

Performance comparisons are presented in Tables 2 and 3. Three of the methods, SDL, Riem-SC, and K-Stein-SC, are intrinsic, i.e., the DL and SC are tailored to the underlying manifold P_n. To compute the reconstruction error, we used the intrinsic affine-invariant metric on P_n. From the tables, we can see that SDL yields the best sparsity amongst the three manifold-valued methods (excluding LE-SC). Furthermore, on the Yale face data set, SDL is the computationally most efficient algorithm compared to Riem-SC and K-Stein-SC. In terms of reconstruction error, our proposed method outperforms its competitors. Note that for K-Stein-SC, computing the reconstruction error is not meaningful, as that method solves the DL and SC problem in a Hilbert space after a kernel mapping.
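The affine-invariant reconstruction error relies on the geodesic distance on the SPD manifold, d(X, Y) = ||logm(X^{-1/2} Y X^{-1/2})||_F. A minimal sketch using SciPy's matrix functions in place of a dedicated manifold library:

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def affine_invariant_dist(X, Y):
    """GL(n)-invariant geodesic distance between SPD matrices,
    d(X, Y) = || logm(X^{-1/2} Y X^{-1/2}) ||_F, used to measure
    reconstruction error intrinsically on the SPD manifold."""
    Xinv_sqrt = fractional_matrix_power(X, -0.5)
    M = Xinv_sqrt @ Y @ Xinv_sqrt
    return np.linalg.norm(logm(M).real, 'fro')

X = np.diag([1.0, 4.0])
Y = np.diag([2.0, 2.0])
# For commuting SPD matrices the distance reduces to the Euclidean norm
# of the log-eigenvalue differences: sqrt(log(2)^2 + log(1/2)^2).
print(np.isclose(affine_invariant_dist(X, Y), np.sqrt(2) * np.log(2)))  # True
```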

We also depict the comparative performance as a function of the number of dictionary atoms for the four algorithms in Fig. 6 (Brodatz data) and Fig. 7 (Yale face data). Here, we show the comparative performance in terms of classification accuracy, reconstruction error, and required CPU time. For both data sets, SDL shows superior performance over its competitors in terms of classification accuracy and sparsity. As the objective of any DL algorithm is to reconstruct the samples, we also show the reconstruction error, which depicts the competitive performance of SDL relative to the other algorithms.

## 4 Conclusions

In this paper, we presented an information theoretic dictionary learning and sparse coding algorithm for data residing on a statistical manifold. In the traditional dictionary learning approach on a vector space, the goal is to express each data point as a sparse linear combination of the dictionary atoms. This is typically achieved via the use of a data fidelity term and a term to induce sparsity on the coefficients of the linear combination. In this paper, we proposed an alternative formulation of the DL and SC problem for data residing on statistical manifolds, in which the objective function contains no explicit sparsity constraint. Our algorithm, SDL, expresses each data point, which is a probability distribution, as a weighted KL-center of the dictionary atoms. We presented a proof that our formulation yields sparsity without explicit enforcement of this constraint, and that this result also holds when the KL-divergence is replaced by the Hellinger distance between probability densities. Further, we presented an extension of this formulation to data residing on P_n. A Riemannian accelerated gradient descent algorithm was employed to learn the dictionary atoms, and an accelerated gradient descent algorithm was employed to learn the sparse weights, in a two-stage alternating optimization framework. The experimental results demonstrate the effectiveness of the SDL algorithm in terms of reconstruction and classification accuracy as well as sparsity.
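To illustrate the KL-center idea concretely: the paper's KL-center is defined via a min-max criterion, but for the simpler weighted-sum objective min_p Σ_i w_i KL(p ‖ q_i) over discrete distributions there is a closed form, the normalized weighted geometric mean of the atoms. The sketch below implements only this simplified variant (the choice of divergence direction and the epsilon guard are our assumptions), not the full SDL optimization:

```python
import numpy as np

def weighted_kl_center(atoms, w, eps=1e-12):
    """Weighted KL-center of discrete distributions (rows of `atoms`)
    for the objective sum_i w_i KL(p || q_i): the minimizer is the
    normalized weighted geometric mean of the atoms. A simplified
    closed-form sketch; the paper's min-max KL-center and the learning
    of the weights w are not reproduced here.
    """
    Q = np.asarray(atoms, dtype=float) + eps   # guard against log(0)
    w = np.asarray(w, dtype=float)
    log_center = w @ np.log(Q)                 # sum_i w_i * log q_i
    p = np.exp(log_center)
    return p / p.sum()                         # renormalize onto the simplex

# Two atoms with equal weights: the center lies symmetrically between them.
q1 = np.array([0.8, 0.1, 0.1])
q2 = np.array([0.1, 0.8, 0.1])
p = weighted_kl_center(np.stack([q1, q2]), [0.5, 0.5])
print(np.isclose(p.sum(), 1.0))  # True
```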

## Acknowledgements

This research was funded in part by the NSF grants IIS-1525431 and IIS-1724174 to BCV. We thank Dr. Shun-ichi Amari for his insightful comments on a preliminary draft of this manuscript.

## References

• [1] Aharon, M., Elad, M., Bruckstein, A.M.: K-SVD and its non-negative variant for dictionary design. In: Optics & Photonics 2005. pp. 591411–591411. International Society for Optics and Photonics (2005)
• [2] Akhtar, N., Shafait, F., Mian, A.: Discriminative bayesian dictionary learning for classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 38(12), 2374–2388 (2016)
• [3] Amari, S.i.: Differential-geometrical methods in statistics, vol. 28. Springer Science & Business Media (2012)
• [4] Amari, S.I., Barndorff-Nielsen, O.E., Kass, R., Lauritzen, S., Rao, C.: Differential geometry in statistical inference. Lecture Notes-Monograph Series pp. i–240 (1987)
• [5] Arsigny, V., Fillard, P., Pennec, X., Ayache, N.: Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIMAX 29(1) (2007)
• [6] Basu, A., Harris, I.R., Hjort, N.L., Jones, M.: Robust and efficient estimation by minimising a density power divergence. Biometrika 85(3), 549–559 (1998)
• [7] Brodatz, P.: Textures: a photographic album for artists and designers (1966)
• [8] Bubeck, S.: Theory of convex optimization for machine learning. arXiv:1405.4980 (2014)
• [9] Candes, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics 59(8), 1207–1223 (2006)
• [10] do Carmo Valero, M.P.: Riemannian geometry (1992)
• [11] Cetingul, H., Vidal, R.: Intrinsic mean shift for clustering on stiefel and grassmann manifolds. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. pp. 1896–1902 (2009)
• [12] Chakraborty, R., Vemuri, B.C.: Recursive fréchet mean computation on grassmannian and its applications to computer vision. ICCV (2015)
• [13] Cherian, A., Sra, S.: Riemannian Sparse Coding for Positive Definite Matrices. European Conference on Computer Vision 8692, 299–314 (2014)
• [14] Cover, T.M., Thomas, J.A.: Elements of information theory (2012)
• [15] Davisson, L., Leon-Garcia, A.: A source matching approach to finding minimax codes. IEEE Transactions on Information Theory 26(2), 166–174 (1980)
• [16] Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.L.: Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Transactions on Information Theory 58(2), 1094–1121 (2012)
• [17] Leibe, B., Schiele, B.: Analyzing appearance and contour based methods for object categorization. In: CVPR (2003)
• [18] Feragen, A., Lauze, F., Hauberg, S.: Geodesic exponential kernels: When curvature and linearity conflict. In: CVPR. pp. 3032–3042 (2015)
• [19] Fletcher, P.T., Joshi, S.: Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Processing 87(2), 250–262 (2007)
• [20] Gallager, R.G.: Information theory and reliable communication, vol. 2. Springer (1968)
• [21] Guo, K., Ishwar, P., Konrad, J.: Action recognition using sparse representation on covariance manifolds of optical flow. In: AVSS. pp. 188–195 (2010)
• [22] Harandi, M.: Riemannian Coding and Dictionary Learning : Kernels to the Rescue. CVPR pp. 3926–3935 (2015)
• [23] Harandi, M.T., Sanderson, C., Hartley, R., Lovell, B.C.: Sparse Coding and Dictionary Learning for Symmetric Positive Definite Matrices: A Kernel Approach. ECCV 7573, 216–229 (2012)
• [24] Huang, Z., Wang, R., Shan, S., Chen, X.: Projection metric learning on grassmann manifold with application to video based face recognition. In: CVPR. pp. 140–149 (2015)
• [25] Laws, K.I.: Rapid texture identification. In: Annual technical symposium (1980)
• [26] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)
• [27] Lee, K., Ho, J., Kriegman, D.: Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(5), 684–698 (2005)
• [28] Lenglet, C., Rousson, M., Deriche, R., Faugeras, O.D.: Statistics on the manifold of multivariate normal distributions: Theory and application to diffusion tensor MRI processing. Journal of Mathematical Imaging and Vision 25(3), 423–444 (2006)
• [29] Li, P., Wang, Q., Zuo, W., Zhang, L.: Log-Euclidean Kernels for Sparse Representation and Dictionary Learning. ICCV pp. 1601–1608 (2013)
• [30] Liese, F., Vajda, I.: On divergences and informations in statistics and information theory. IEEE Transactions on Information Theory 52(10), 4394–4412 (2006)
• [31] Lowe, D.G.: Object recognition from local scale-invariant features. In: CVPR. vol. 2, pp. 1150–1157 (1999)
• [32] Cuturi, M.: Sinkhorn distances: Lightspeed computation of optimal transport. NIPS pp. 2292–2300 (2013)
• [33] Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A.: Supervised dictionary learning. NIPS pp. 1033–1040 (2009)
• [34] Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online dictionary learning for sparse coding. In: International Conference on Machine Learning (ICML). pp. 689–696 (2009)
• [35] Mardia, K.V., Jupp, P.E.: Directional Statistics. John Wiley and Sons LTD (2000)
• [36] Moakher, M.: A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM Journal of Matrix Analysis Applications 26(3), 735–747 (2005)
• [37] Qiu, Q., Patel, V.M., Chellappa, R.: Information-theoretic dictionary learning for image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(11), 2173–2184 (2014)
• [38] Rao, C.R.: Fisher-rao metric. Scholarpedia 4(2), 7085 (2009)
• [39] Ryabko, B.Y.: Fast and efficient coding of information sources. IEEE Transactions on Information Theory 40(1), 96–99 (1994)
• [40] Salehian, H., Chakraborty, R., Ofori, E., Vaillancourt, D., Vemuri, B.C.: An efficient recursive estimator of the fréchet mean on a hypersphere with applications to medical image analysis. MICCAI sponsored workshop MFCA (2015)
• [41] Schmitz, M.A., Heitz, M., Bonneel, N., Mboula, F.M.N., Coeurjolly, D., Cuturi, M., Peyré, G., Starck, J.L.: Wasserstein dictionary learning: Optimal transport-based unsupervised non-linear dictionary learning. arXiv preprint arXiv:1708.01955 (2017)
• [42] Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press (2002)
• [43] Sivalingam, R., Boley, D., Morellas, V., Papanikolopoulos, N.: Tensor sparse coding for region covariances. In: ECCV, pp. 722–735 (2010)
• [44] Spellman, E.: Fusing probability distributions with information theoretic centers and its application to data retrieval. Ph.D. thesis, University of Florida (2005)
• [45] Sra, S., Cherian, A.: Generalized dictionary learning for symmetric positive definite matrices with application to nn retrieval. In: ECML-PKDD (2011)
• [46] Srivastava, A., Jermyn, I., Joshi, S.: Riemannian analysis of probability density functions with applications in vision. In: CVPR. pp. 1–8 (2007)
• [47] Sun, J., Qu, Q., Wright, J.: Complete dictionary recovery over the sphere i: Overview and the geometric picture. IEEE Transactions on Information Theory 63(2), 853–884 (2017)
• [48] Suzuki, M.: Information geometry and statistical manifold. arXiv preprint arXiv:1410.3369 (2014)
• [49] Wang, Z., Vemuri, B.C.: Dti segmentation using an information theoretic tensor dissimilarity measure. IEEE Transaction on Medical Imaging 24(10), 1267–1277 (2005)
• [50] Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE TPAMI 31(2), 210–227 (2009)
• [51] Wright, J., Ma, Y., Mairal, J., Sapiro, G., Huang, T.S., Yan, S.: Sparse representation for computer vision and pattern recognition. Proceedings of IEEE, Special Issue on Applications of Compressive Sensing & Sparse Representation 98(6), 1031–1044 (2010)
• [52] Xie, Y., Ho, J., Vemuri, B.: On A Nonlinear Generalization of Sparse Coding and Dictionary Learning. ICML 28, 1480–1488 (2013)
• [53] Zhang, F., Cen, Y., Zhao, R., Wang, H., Cen, Y., Cui, L., Hu, S.: Analytic separable dictionary learning based on oblique manifold. Neurocomputing 236, 32–38 (2017)