Positive definite matrices and the S-divergence

10/08/2011, by Suvrit Sra et al., Max Planck Society

Positive definite matrices abound in a dazzling variety of applications. This ubiquity can be in part attributed to their rich geometric structure: positive definite matrices form a self-dual convex cone whose strict interior is a Riemannian manifold. The manifold view is endowed with a "natural" distance function while the conic view is not. Nevertheless, drawing motivation from the conic view, we introduce the S-Divergence as a "natural" distance-like function on the open cone of positive definite matrices. We motivate the S-divergence via a sequence of results that connect it to the Riemannian distance. In particular, we show that (a) this divergence is the square of a distance; and (b) that it has several geometric properties similar to those of the Riemannian distance, though without being computationally as demanding. The S-divergence is even more intriguing: although nonconvex, we can still compute matrix means and medians using it to global optimality. We complement our results with some numerical experiments illustrating our theorems and our optimization algorithm for computing matrix medians.

1 Introduction

Hermitian positive definite (HPD) matrices offer a noncommutative generalization of the positive reals. No wonder they abound in a multitude of applications and exhibit striking theoretical properties. For instance, they form a differentiable Riemannian (and also a Finsler) manifold (Bhatia, 2007; Hiai and Petz, 2009) that is the most studied example of a manifold of nonpositive curvature (Bridson and Haefliger, 1999, Ch.10). HPD matrices possess even more structure: (i) they form a closed, self-dual convex cone; and (ii) they serve as a canonical higher-rank symmetric space (Terras, 1988). The conic view enjoys great importance in convex optimization (Nesterov and Nemirovskii, 1987; Ben-Tal and Nemirovski, 2001; Nesterov and Todd, 2002); symmetric spaces are important in algebra and analysis (Terras, 1988; Helgason, 2008) and in optimization (Nesterov and Nemirovskii, 1987; Wolkowicz et al., 2000); and the manifold view plays diverse roles; see (Bhatia, 2007, Ch.6) for an excellent overview.

The manifold view of HPD matrices comes with a “natural” distance function while the conic view does not. Nevertheless, drawing motivation from the convex conic view, we introduce the S-Divergence as a “natural” distance-like function on the open cone of positive definite matrices. Indeed, we prove a sequence of results connecting the S-Divergence to the Riemannian metric. We show that (a) this divergence is the square of a metric; and (b) that it has several geometric properties in common with the Riemannian metric, without being numerically as burdensome.

Let us begin by fixing notation. The letter $\mathcal{H}$ denotes some Hilbert space, though usually just $\mathbb{C}^n$. The inner product between two vectors $x$ and $y$ in $\mathcal{H}$ is written as $\langle x, y\rangle := x^*y$ (where $x^*$ denotes 'conjugate transpose'). The set of $n\times n$ Hermitian matrices is denoted as $\mathbb{H}_n$. A matrix $A \in \mathbb{H}_n$ is called positive definite if

(1.1)    $\langle x, Ax\rangle > 0 \qquad \text{for all } x \neq 0,$

which we also denote by writing $A \succ 0$. The set of all such matrices is denoted by $\mathbb{P}_n$. We may also speak of positive semidefinite matrices, for which $\langle x, Ax\rangle \ge 0$ for all $x$; denoted by $A \succeq 0$. The operator inequality $A \succeq B$ is the usual Löwner order and means $A - B \succeq 0$. The Frobenius norm of a matrix $A$ is defined as $\|A\|_F := \sqrt{\operatorname{tr}(A^*A)}$, and $\|A\|$ denotes the operator 2-norm. Let $f$ be an analytic function on $\mathbb{C}$; for a matrix $A = U\Lambda U^*$, where $U$ is unitary, $f(A)$ equals $Uf(\Lambda)U^*$.

The set $\mathbb{P}_n$ forms a highly studied differentiable Riemannian manifold with the distance function (see e.g., (Bhatia, 2007, Ch. 6)):

(1.2)    $\delta_R(X, Y) := \big\|\log\big(Y^{-1/2} X Y^{-1/2}\big)\big\|_F, \qquad X, Y \in \mathbb{P}_n,$

where $\log(\cdot)$ denotes the matrix logarithm. As a counterpart to (1.2), we introduce the key function of this paper, the Symmetric Stein Divergence (S-Divergence), so called because although it is nonnegative, definite, and symmetric, it is not a metric:

(1.3)    $S(X, Y) := \log\det\Big(\frac{X+Y}{2}\Big) - \frac{1}{2}\log\det(XY), \qquad X, Y \in \mathbb{P}_n.$

We advocate (1.3) as an alternative to (1.2), and also study several of its properties of independent interest. The simplicity of (1.3) is a major reason for using it as an alternative to (1.2): it is cheaper to compute, as is its derivative, and certain basic algorithms involving it run much faster than corresponding ones that use $\delta_R$.
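To make the cost comparison concrete, here is a minimal NumPy/SciPy sketch (not the authors' code; the helper names s_div and riem_dist and the test sizes are our own assumptions) that evaluates $S$ via three Cholesky factorizations and $\delta_R$ via the generalized eigenvalues of the pair $(Y, X)$.

```python
import numpy as np
from scipy.linalg import cholesky, eigvalsh

def logdet(A):
    # log det A = 2 * sum(log diag(L)), where A = L L^T (Cholesky)
    L = cholesky(A, lower=True)
    return 2.0 * np.sum(np.log(np.diag(L)))

def s_div(X, Y):
    # S(X, Y) = log det((X+Y)/2) - 1/2 log det X - 1/2 log det Y
    return logdet((X + Y) / 2) - 0.5 * logdet(X) - 0.5 * logdet(Y)

def riem_dist(X, Y):
    # delta_R(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F, obtained from the
    # generalized eigenvalues of the pencil (Y, X)
    lam = eigvalsh(Y, X)            # solves Y v = lam X v
    return np.linalg.norm(np.log(lam))

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); X = A @ A.T + 50 * np.eye(50)
B = rng.standard_normal((50, 50)); Y = B @ B.T + 50 * np.eye(50)
# the two quantities are different functions; we print both for comparison
print(s_div(X, Y), riem_dist(X, Y) ** 2)
```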

This line of thought originates in Cherian et al. (2011), where for an image search task based on nearest neighbors, $S$ is used instead of $\delta_R$ for measuring nearness, and is shown to yield large speedups without hurting the quality of search results. Although exact details of this search application lie outside the scope of this paper, let us highlight below the two core speedups that were crucial to Cherian et al. (2011, 2012).

The first speedup is shown in the left panel of Fig. 1, which compares times taken to compute $S$ and $\delta_R$. For computing the latter, we used the dist.m function in the Matrix Means Toolbox (MMT), downloaded from http://bezout.dm.unipi.it/software/mmtoolbox/. The second, more dramatic speedup is shown in the right panel, which shows the time taken to compute the matrix means $GM(A_1,\dots,A_m)$ (the minimizer of $\sum_i \delta_R^2(X, A_i)$ over $X \succ 0$) and the S-mean (the minimizer of $\sum_i S(X, A_i)$), where $A_1,\dots,A_m$ are HPD matrices. For details on the S-mean see Section 4.2; the geometric mean $GM$ is also known as the "Karcher mean", and was computed using the MMT via the rich.m script, which implements a state-of-the-art method Bini and Iannazzo (2011).

Figure 1: Left panel: times taken to compute $S$ and $\delta_R$. For $\delta_R$, we show times for the implementation in the MMT, which uses Schur factorization. The results are averaged over 10 runs to reduce variance. The plot indicates that $S$ can be up to 50 times faster to compute than $\delta_R$.
Right panel: times taken to compute the geometric mean $GM$ and the S-mean. The former was computed using the method of Bini and Iannazzo (2011), while the latter was obtained via a fixed-point iteration. The differences are huge: the S-mean can be obtained up to 1000 times faster!

We mention here that several other alternatives to $\delta_R$ are also possible, for instance the popular "log-Euclidean" distance Arsigny et al. (2008), given by

(1.4)    $\delta_{le}(X, Y) := \|\log X - \log Y\|_F.$

Notice that to compute $\delta_{le}$ we require two eigenvector decompositions; this makes it more expensive than $S$, which requires only three Cholesky factorizations. Even though the matrix mean under $\delta_{le}$ can be computed in closed form, its dependency on matrix logarithms and exponentials can make it slower to obtain than the S-mean. More importantly, for the application in Cherian et al. (2011), $\delta_{le}$ and other alternatives to $\delta_R$ proved to be less competitive than (1.3), so we limit our focus to $S$; for more extensive empirical comparisons with other distances, we refer the reader to Cherian et al. (2011).
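For completeness, here is a similar sketch (again ours, under the same assumptions as before) of the log-Euclidean distance (1.4) and its closed-form mean; each evaluation uses one eigendecomposition per matrix, which is the cost referred to above.

```python
import numpy as np

def logm_spd(A):
    # matrix logarithm of an SPD matrix via one eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def expm_spd(A):
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def le_dist(X, Y):
    # delta_le(X, Y) = || log X - log Y ||_F
    return np.linalg.norm(logm_spd(X) - logm_spd(Y))

def le_mean(mats):
    # closed-form (unweighted) log-Euclidean mean: exp( (1/m) sum_i log A_i )
    return expm_spd(sum(logm_spd(A) for A in mats) / len(mats))
```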

While our paper was under review, we became aware of a concurrent paper of Chebbi and Moakher (CM) Chebbi and Moakher (2012), who consider a one-parameter family of divergences, of which (1.3) is a special case. We summarize below how our work differs from CM:

  • CM prove $\sqrt{S}$ to be a metric only for commuting matrices. As per Remark 15, the commuting case essentially reduces to metricity for scalars. The general noncommuting case is much harder and was left as an open problem in Chebbi and Moakher (2012). We solve the general noncommuting case, fully independently of CM.

  • We establish several theorems connecting $S$ to the Riemannian metric $\delta_R$. These connections have not been made either by CM or elsewhere.

  • A question closely related to metricity of $\sqrt{S}$ is whether the matrix $\big[e^{-\beta S(X_i, X_j)}\big]_{i,j=1}^{m}$ is positive semidefinite for every integer $m$ and scalar $\beta > 0$. CM consider only special cases of this question. We provide a complete characterization of the values of $\beta$ necessary and sufficient for the above matrix to be semidefinite.

  • CM analyze the “matrix-means” problem (4.5), whose solution they obtain by solving the first-order condition (4.6). They make a simple but crucial oversight by claiming global optimality of this solution, whereas from their arguments only stationarity follows. We prove global optimality.

2 The S-Divergence

We proceed now to formally introduce the S-Divergence. We follow the viewpoint of Bregman divergences. Consider, therefore, a differentiable strictly convex function $f$; then, $f(x) \ge f(y) + \langle \nabla f(y),\ x - y\rangle$, with equality if and only if $x = y$. The difference between the two sides of this inequality defines the Bregman Divergence (Bregman divergences over scalars and vectors have been well studied, see e.g. Censor and Zenios (1997); Banerjee et al. (2004); they are called divergences because they are not distances, though they often behave like squared distances, in a sense that can be made precise for certain choices of $f$)

(2.1)    $D_f(x, y) := f(x) - f(y) - \langle \nabla f(y),\ x - y\rangle.$

The scalar divergence (2.1) can be extended to Hermitian matrices, as shown below.

Proposition 1.

Let $f$ be differentiable and strictly convex on a convex set of Hermitian matrices; let $X, Y$ in this set be arbitrary. Then, we have the inequality

(2.2)    $D_f(X, Y) := f(X) - f(Y) - \langle \nabla f(Y),\ X - Y\rangle \ \ge\ 0.$

Inequality (2.2) defines a matrix Bregman Divergence (see also Dhillon and Tropp (2007)). By construction $D_f(X, Y)$ is nonnegative, strictly convex in $X$, and equals zero if and only if $X = Y$. It is typically asymmetric, and may be viewed as a measure of “directed” dissimilarity.

Example 1.

Let $f(X) = \|X\|_F^2$. Then, for $X, Y \in \mathbb{H}_n$, $\nabla f(Y) = 2Y$, and (2.2) reduces to the squared Frobenius norm

$D_f(X, Y) = \|X - Y\|_F^2.$

If $f(X) = \operatorname{tr}(X\log X - X)$ on $\mathbb{P}_n$, then $\nabla f(Y) = \log Y$, and (2.2) yields the (unnormalized) von Neumann Divergence of quantum information theory Nielsen (2009):

$D_{vn}(X, Y) = \operatorname{tr}\big(X\log X - X\log Y - X + Y\big).$

For $f(X) = -\log\det X$ on $\mathbb{P}_n$, $\nabla f(Y) = -Y^{-1}$, and we obtain the divergence

$D_{\ell d}(X, Y) = \operatorname{tr}(XY^{-1}) - \log\det(XY^{-1}) - n,$

which is known as the LogDet Divergence Kulis et al. (2009), or more classically as Stein’s loss Stein (1956).

The divergence $D_{\ell d}$ is of key importance to our paper, so we mention some additional noteworthy contexts where it occurs: (i) information theory Cover and Thomas (1991), as the relative entropy between two multivariate Gaussians with the same mean; (ii) optimization, when deriving the famous Broyden-Fletcher-Goldfarb-Shanno (BFGS) updates Nocedal and Wright (1999); (iii) matrix divergence theory Bauschke and Borwein (1997); Dhillon and Tropp (2007); (iv) kernel and metric learning Kulis et al. (2009).
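As a concrete reference point, here is a small sketch (ours; the helper name logdet_div is an assumption) that evaluates this divergence in the standard form $\operatorname{tr}(XY^{-1}) - \log\det(XY^{-1}) - n$ and illustrates its asymmetry.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def logdet_div(X, Y):
    # LogDet (Stein) divergence: tr(X Y^{-1}) - log det(X Y^{-1}) - n
    n = X.shape[0]
    c, low = cho_factor(Y)
    YinvX = cho_solve((c, low), X)          # Y^{-1} X
    sign, logabsdet = np.linalg.slogdet(YinvX)
    return np.trace(YinvX) - logabsdet - n

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); X = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); Y = B @ B.T + 5 * np.eye(5)
print(logdet_div(X, Y), logdet_div(Y, X))   # generally unequal: asymmetric
```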

Despite the broad applicability of Bregman divergences, their asymmetry is sometimes undesirable. This drawback prompted researchers to consider symmetric divergences, among which the most popular is the “Jensen-Shannon” divergence (this symmetrization has been largely studied only for divergences over scalars or vectors)

(2.3)    $S_f(x, y) := \tfrac{1}{2}\Big[D_f\big(x, \tfrac{x+y}{2}\big) + D_f\big(y, \tfrac{x+y}{2}\big)\Big].$

This divergence has two attractive and perhaps more useful representations:

(2.4)    $S_f(x, y) = \tfrac{1}{2}\big(f(x) + f(y)\big) - f\big(\tfrac{x+y}{2}\big),$
(2.5)

Compare (2.2) with (2.4): both formulas define divergence as departure from linearity; the former uses derivatives, while the latter is stated using midpoint convexity. As a result, representation (2.4) has an advantage over (2.2), (2.3), and (2.5), because unlike them, it does not need to assume differentiability of $f$.

The reader must have by now realized that the S-Divergence (1.3) is nothing but the symmetrized divergence (2.3) generated by $f(X) = -\log\det X$. Alternatively, the S-Divergence may be essentially viewed as the Jensen-Shannon divergence between two multivariate Gaussians Cover and Thomas (1991) or as the Bhattacharyya distance between them Bhattacharyya (1943).
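This identification is easy to check numerically; the following sketch (ours) verifies that (1.3) agrees with the symmetrization (2.3) generated by $-\log\det$, i.e., with the average of two LogDet divergences taken at the midpoint $(X+Y)/2$.

```python
import numpy as np

def logdet(A):
    return np.linalg.slogdet(A)[1]

def s_div(X, Y):
    return logdet((X + Y) / 2) - 0.5 * logdet(X) - 0.5 * logdet(Y)

def logdet_div(X, Y):
    n = X.shape[0]
    M = np.linalg.solve(Y, X)               # Y^{-1} X
    return np.trace(M) - logdet(M) - n

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); X = A @ A.T + 6 * np.eye(6)
B = rng.standard_normal((6, 6)); Y = B @ B.T + 6 * np.eye(6)
M = (X + Y) / 2
lhs = s_div(X, Y)
rhs = 0.5 * (logdet_div(X, M) + logdet_div(Y, M))
print(abs(lhs - rhs))                        # agreement up to rounding error
```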

Let us now list a few basic properties of $S$.

Lemma 2.

Let $\lambda(X)$ be the vector of eigenvalues of $X$, and let $\mathrm{Eig}(X)$ be a diagonal matrix with $\lambda(X)$ as its diagonal. Let $X, Y \in \mathbb{P}_n$. Then,

  1. ;

  2. , where , ;

  3. ; and

  4. .

Proof.

(i) follows from the equality .
(ii) Follows upon observing that

(iii) follows after writing

(iv) follows from , and .∎

The most useful corollary to Lemma 2 is congruence invariance of $S$.

Corollary 3.

Let $X, Y \in \mathbb{P}_n$, and let $P$ be any invertible matrix. Then,

$S(X, Y) = S(P^*XP,\ P^*YP).$

The next result reaffirms that $S$ is a divergence; it also shows that $S$ enjoys some limited convexity and concavity.

Proposition 4.

Let . Then, (i) with equality if and only if ; (ii) for fixed , is convex in for , while for , it is concave.

Proof.

Since $S$ is a sum of Bregman divergences, property (i) follows from definition (2.3). Alternatively, (i) follows since $\det\big(\tfrac{X+Y}{2}\big) \ge \det(X)^{1/2}\det(Y)^{1/2}$, with equality if and only if $X = Y$. Part (ii) follows upon analyzing the Hessian $\nabla_X^2 S(X, Y)$. This Hessian can be identified with the matrix

(2.6)

where $\otimes$ is the usual Kronecker product. The matrix (2.6) is positive definite for and negative definite for , proving (ii). ∎

Below we show that $S$ is richer than a divergence: its square root is actually a metric on $\mathbb{P}_n$. This is the first main result of our paper. Previous authors Cherian et al. (2011); Chebbi and Moakher (2012) conjectured this result but could not establish it, perhaps because both ultimately sought to map $\sqrt{S}$ to a Hilbert space metric. This approach fails because HPD matrices do not form even a (multiplicative) semigroup, which renders the powerful theory of harmonic analysis on semigroups Berg et al. (1984) inapplicable to $\mathbb{P}_n$. This difficulty necessitates a different path to proving metricity of $\sqrt{S}$.

3 The metric $\delta_S$

In this section we prove the following main theorem.

Theorem 5.

Define $\delta_S(X, Y) := \sqrt{S(X, Y)}$ for $X, Y \in \mathbb{P}_n$. Then, $\delta_S$ is a metric on $\mathbb{P}_n$.

The proof of Theorem 5 depends on several results, which we first establish.

Definition 6 ((Berg et al., 1984, Def. 1.1)).

Let $\mathcal{X}$ be a nonempty set. A function $\psi : \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ is said to be negative definite if $\psi(x, y) = \psi(y, x)$ for all $x, y \in \mathcal{X}$, and the inequality

$\sum\nolimits_{i,j=1}^{m} c_i c_j\, \psi(x_i, x_j) \ \le\ 0$

holds for all integers $m \ge 2$, and subsets $\{x_1, \dots, x_m\} \subseteq \mathcal{X}$, $\{c_1, \dots, c_m\} \subset \mathbb{R}$ with $\sum\nolimits_i c_i = 0$.

Theorem 7 ((Berg et al., 1984, Prop. 3.2, Ch. 3)).

Let $\psi : \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ be negative definite and satisfy $\psi(x, x) = 0$ for all $x \in \mathcal{X}$. Then, there is a Hilbert space $\mathcal{H}$ and a mapping $\varphi$ from $\mathcal{X}$ to $\mathcal{H}$ such that one has the relation

(3.1)    $\psi(x, y) = \|\varphi(x) - \varphi(y)\|^2.$

Moreover, negative definiteness of $\psi$ is necessary for such a mapping to exist.

Theorem 7 helps prove the triangle inequality for the scalar case.

Lemma 8.

Define the scalar version of $\delta_S$ as

$\delta_s(x, y) := \Big(\log\tfrac{x+y}{2} - \tfrac{1}{2}\log(xy)\Big)^{1/2} = \Big(\log\tfrac{x+y}{2\sqrt{xy}}\Big)^{1/2}, \qquad x, y > 0.$

Then, $\delta_s$ satisfies the triangle inequality, i.e.,

(3.2)    $\delta_s(x, y) \ \le\ \delta_s(x, z) + \delta_s(z, y) \qquad \text{for all } x, y, z > 0.$
Proof.

We show that $\delta_s^2$ is negative definite. Since $\delta_s^2(x, x) = 0$, Theorem 7 then immediately implies the triangle inequality (3.2). To prove that $\delta_s^2$ is negative definite, by (Berg et al., 1984, Thm. 2.2, Ch. 3) we may equivalently show that $e^{-\beta\delta_s^2(x, y)}$ is a positive definite function for $\beta > 0$ and $x, y > 0$. To that end, it suffices to show that the matrix

$H = [h_{ij}], \qquad h_{ij} = \Big(\tfrac{2\sqrt{x_i x_j}}{x_i + x_j}\Big)^{\beta},$

is HPD for every integer $m$, and positive numbers $x_1, \dots, x_m$. Now, observe that

(3.3)    $\frac{1}{(x_i + x_j)^{\beta}} = \frac{1}{\Gamma(\beta)}\int_0^{\infty} t^{\beta - 1} e^{-t(x_i + x_j)}\, dt,$

where $\Gamma(\cdot)$ is the Gamma function. Thus, with $f_i(t) := (2x_i)^{\beta/2}\, t^{(\beta-1)/2} e^{-t x_i} / \sqrt{\Gamma(\beta)}$, we see that $H$ equals the Gram matrix $\big[\langle f_i, f_j\rangle_{L^2(0,\infty)}\big]$, whereby $H \succ 0$ (the functions $f_i$ are linearly independent for distinct $x_i$).∎
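The scalar argument above is easy to illustrate numerically: the kernel matrix with entries $\big(2\sqrt{x_i x_j}/(x_i + x_j)\big)^{\beta}$ should have no negative eigenvalues for positive scalars and any $\beta > 0$. A short sketch (ours, with arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.1, 10.0, size=20)
for beta in (0.3, 1.0, 2.5):
    # K[i, j] = exp(-beta * delta_s^2(x_i, x_j)) = (2*sqrt(x_i*x_j)/(x_i+x_j))^beta
    K = (2 * np.sqrt(np.outer(x, x)) / np.add.outer(x, x)) ** beta
    print(beta, np.linalg.eigvalsh(K).min())   # smallest eigenvalue stays >= -1e-12
```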

Using Lemma 8 we obtain the following “Minkowski” inequality for $\delta_s$.

Corollary 9.

Let ; and let . Then,

(3.4)
Proof.

Lemma 8 implies that for positive scalars $x_i$, $y_i$, and $z_i$, we have

Exponentiate, sum, and invoke Minkowski’s inequality to conclude (3.4).∎

Theorem 10.

Let $X, Y, Z \in \mathbb{P}_n$ be diagonal matrices. Then,

(3.5)    $\delta_S(X, Y) \ \le\ \delta_S(X, Z) + \delta_S(Z, Y).$
Proof.

For diagonal matrices $X$ and $Y$, it is easy to verify that

$\delta_S^2(X, Y) = \sum\nolimits_{i} \delta_s^2(x_{ii}, y_{ii}).$

Now invoke Corollary 9 with the diagonal entries of $X$, $Y$, and $Z$.∎

Next, we recall an important determinantal inequality for positive matrices.

Theorem 11 ((Bhatia, 1997, Exercise VI.7.2)).

Let $A, B \in \mathbb{P}_n$. Let $\lambda^{\downarrow}(A)$ denote the vector of eigenvalues of $A$ sorted in decreasing order; define $\lambda^{\downarrow}(B)$ likewise. Then,

(3.6)
Corollary 12.

Let $X, Y \in \mathbb{P}_n$. Let $\mathrm{Eig}^{\downarrow}(X)$ denote the diagonal matrix with $\lambda^{\downarrow}(X)$ as its diagonal; define $\mathrm{Eig}^{\downarrow}(Y)$ likewise. Then,

Proof.

Scale and by 2, divide each term in (3.6) by , and note that is invariant to permutations of ; take logarithms to conclude.∎

The final result we need is well-known in linear algebra (we provide a proof).

Lemma 13.

Let $A \in \mathbb{P}_n$, and let $B$ be Hermitian. There is an invertible matrix $P$ for which

(3.7)    $P^*AP = I \quad\text{and}\quad P^*BP = D, \ \text{where } D \text{ is diagonal.}$
Proof.

Let $A = A^{1/2}A^{1/2}$, and define $C := A^{-1/2}BA^{-1/2}$. Then the matrix $C$ is Hermitian; so let the unitary $U$ diagonalize it to $U^*CU = D$. Set $P = A^{-1/2}U$ to obtain

$P^*BP = U^*A^{-1/2}BA^{-1/2}U = U^*CU = D,$

and by construction, it follows that $P^*AP = U^*A^{-1/2}AA^{-1/2}U = I$.∎

Accoutered with the above results, we can finally prove Theorem 5.

Proof.

(Theorem 5).   We need to show that $\delta_S$ is symmetric, nonnegative, definite, and that it satisfies the triangle inequality. Symmetry is obvious. Nonnegativity and definiteness were shown in Prop. 4. The only hard part is to prove the triangle inequality, a result that has eluded previous attempts (Cherian et al., 2011; Chebbi and Moakher, 2012).

Let $X, Y, Z \in \mathbb{P}_n$ be arbitrary. From Lemma 13 we know that there is an invertible matrix $P$ such that $P^*XP = I$ and $P^*YP = D$ for some diagonal matrix $D$. Since $Z$ is arbitrary, and congruence preserves positive definiteness, we may write just $Z$ instead of $P^*ZP$. Also, since $\delta_S(X, Y) = \delta_S(P^*XP, P^*YP)$ (see Lemma 2), proving the triangle inequality reduces to showing that

(3.8)    $\delta_S(I, D) \ \le\ \delta_S(I, Z) + \delta_S(Z, D).$

Consider now the diagonal matrices and . Theorem 10 asserts

(3.9)

Lemma 2(i) implies that and , while Corollary 12 shows that . Combining these inequalities, we obtain (3.8). ∎
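A quick numerical sanity check of Theorem 5 (ours; it illustrates, but of course does not prove, the triangle inequality for $\delta_S = \sqrt{S}$):

```python
import numpy as np

def logdet(A):
    return np.linalg.slogdet(A)[1]

def delta_s(X, Y):
    # delta_S(X, Y) = sqrt(S(X, Y))
    return np.sqrt(logdet((X + Y) / 2) - 0.5 * logdet(X) - 0.5 * logdet(Y))

def rand_spd(n, rng):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

rng = np.random.default_rng(4)
worst = np.inf
for _ in range(1000):
    X, Y, Z = (rand_spd(8, rng) for _ in range(3))
    worst = min(worst, delta_s(X, Z) + delta_s(Z, Y) - delta_s(X, Y))
print(worst)    # stays nonnegative, consistent with the triangle inequality
```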

After discussing metric properties of $\delta_S$, we turn our attention to a connection of direct importance to machine learning and related areas: kernel functions arising from $S$. Indeed, some of these connections (e.g., Thm. 14) have already been successfully applied very recently in computer vision (Harandi et al., 2012).

3.1 Hilbert space embedding

Since $\delta_S$ is a metric, and Lemma 8 shows that for scalars $\delta_s$ embeds isometrically into a Hilbert space, one may ask if $\delta_S$ also admits such an embedding. But as mentioned previously, it is the lack of such an embedding that necessitated a different proof of metricity. Let us look more carefully at what goes wrong, and what kind of Hilbert space embeddability $\delta_S$ does admit.

Theorem 7 implies that a Hilbert space embedding exists if and only if $\delta_S^2$ is a negative definite kernel; equivalently, if and only if the map (cf. Lemma 8)

$(X, Y) \ \mapsto\ e^{-\beta\, \delta_S^2(X, Y)} = e^{-\beta\, S(X, Y)}$

is a positive definite kernel for all $\beta > 0$. It suffices to check whether the matrix

(3.10)    $H_\beta := \big[e^{-\beta\, S(X_i, X_j)}\big]_{i,j=1}^{m}$

is positive definite for every integer $m \ge 2$ and HPD matrices $X_1, \dots, X_m$.

Unfortunately, a quick numerical experiment reveals that $H_\beta$ can be indefinite. A counterexample is given by the following positive definite matrices

(3.11)

and a suitable choice of $\beta$, for which $H_\beta$ has a negative eigenvalue. This counterexample destroys hopes of embedding the metric space $(\mathbb{P}_n, \delta_S)$ isometrically into a Hilbert space.

Although matrix (3.10) is not HPD in general, we might ask: For what choices of $\beta$ is $H_\beta$ HPD? Theorem 14 answers this question for $H_\beta$ formed from symmetric real positive definite matrices, and characterizes the values of $\beta$ necessary and sufficient for $H_\beta$ to be positive definite. We note here that a special case was essentially treated in (Cuturi et al., 2005), in the context of semigroup kernels on measures.

Theorem 14.

Let $X_1, \dots, X_m$ be real symmetric matrices in $\mathbb{P}_n$. The matrix $H_\beta$ defined by (3.10) is positive definite if and only if $\beta$ satisfies

(3.12)    $\beta \in \Big\{\tfrac{j}{2} : j \in \mathbb{N},\ 1 \le j \le n-1\Big\} \cup \Big[\tfrac{n-1}{2},\ \infty\Big).$
Proof.

We first prove the “if” part. Recall therefore, the Gaussian integral

Define the map , where the inner-product is given by

Thus, it follows that $H_{1/2}$ is a Gram matrix and therefore positive definite. The Schur product theorem says that the elementwise product of two positive matrices is again positive. So, in particular, $H_\beta$ is positive whenever $\beta$ is an integer multiple of $1/2$. To extend the result to all $\beta$ covered by (3.12), we invoke another integral representation: the multivariate Gamma function, defined as (Muirhead, 1982, §2.1.2)

(3.13)

Let , for some constant ; then, compute the inner product

which exists if $2\beta > n - 1$. Thus, $H_\beta$ is positive definite for all $\beta$ defined by (3.12).

The converse is a deeper result that leads into the theory of symmetric cones. More specifically, since the positive matrices form a symmetric cone, and the function is decreasing on this cone (as for all ), an appeal to Theorem VII.3.1 of Faraut and Korányi (1994) allows us to conclude our claim.
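In practice one can test the positive definiteness of $H_\beta$ for a given $\beta$ and a given collection of SPD matrices directly; the following helper (ours; the name kernel_is_psd is an assumption, and the admissible set (3.12) is not restated in code) does exactly that.

```python
import numpy as np

def logdet(A):
    return np.linalg.slogdet(A)[1]

def s_div(X, Y):
    return logdet((X + Y) / 2) - 0.5 * logdet(X) - 0.5 * logdet(Y)

def kernel_is_psd(mats, beta, tol=1e-10):
    # builds H_beta = [exp(-beta * S(X_i, X_j))] and checks its smallest eigenvalue
    m = len(mats)
    H = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            H[i, j] = np.exp(-beta * s_div(mats[i], mats[j]))
    return np.linalg.eigvalsh(H).min() >= -tol
```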

Remark 15.

Let $\mathcal{X}$ be a set of HPD matrices that commute with each other. Then, $(\mathcal{X}, \delta_S)$ can be isometrically embedded into a Hilbert space. This claim follows because a commuting set of matrices can be simultaneously diagonalized, and for diagonal matrices $X$ and $Y$, $S(X, Y) = \sum_i \delta_s^2(x_{ii}, y_{ii})$, which is a nonnegative sum of negative definite kernels and is therefore itself negative definite.
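The decomposition used in the remark is easy to confirm numerically; a short sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0.1, 5.0, size=10)
y = rng.uniform(0.1, 5.0, size=10)
X, Y = np.diag(x), np.diag(y)
# S(X, Y) for the diagonal matrices ...
lhs = (np.linalg.slogdet((X + Y) / 2)[1]
       - 0.5 * (np.linalg.slogdet(X)[1] + np.linalg.slogdet(Y)[1]))
# ... equals the sum of scalar terms delta_s^2(x_i, y_i)
rhs = np.sum(np.log((x + y) / 2) - 0.5 * np.log(x * y))
print(abs(lhs - rhs))    # agreement up to rounding error
```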

Theorem 14 shows that $e^{-\beta S}$ is not a kernel for all $\beta > 0$, while Remark 15 mentions an extreme case for which it is always a positive definite kernel. This prompts us to pose the following problem.

Open problem 1. Determine necessary and sufficient conditions on a set $\mathcal{X} \subseteq \mathbb{P}_n$, so that $e^{-\beta S(X, Y)}$ is a kernel function on $\mathcal{X} \times \mathcal{X}$ for all $\beta > 0$.

4 Connections with $\delta_R$

This section returns to our original motivation: using $S$ as an alternative to the Riemannian metric $\delta_R$. In particular, we present a sequence of results that highlight similarities between $S$ and $\delta_R$. Table 1 lists these to provide a quick summary. Thereafter, we develop the details.

Table 1: Similarities between $\delta_R$ and $S$ at a glance. All matrices are assumed to be in $\mathbb{P}_n$. Each row of the table pairs a property of the Riemannian metric, with references (Bhatia, 2007, Ch. 6, Ex. 6.5.4, Th. 6.1.6, Th. 6.1.2) and (Bougerol, 1993), against its S-Divergence analogue, established in Lemma 2 and Theorems 16, 21, 22, 23, 24, and 26 of this paper. An 'E' indicates an easily verifiable result.

4.1 Geometric mean

We begin by studying an object that connects $\delta_R$ and $S$ most intimately: the matrix geometric mean (GM). For HPD matrices $A$ and $B$, the GM is denoted by $A\sharp B$, and is given by the formula

(4.1)    $A\sharp B := A^{1/2}\big(A^{-1/2}BA^{-1/2}\big)^{1/2}A^{1/2}.$

The GM (4.1) has numerous attractive properties (see for instance Ando (1979)), among which the following variational characterization is very important Bhatia and Holbrook (2006):

(4.2)    $A\sharp B = \operatorname{argmin}_{X \succ 0}\ \big[\delta_R^2(X, A) + \delta_R^2(X, B)\big].$

We show that, amazingly, the GM enjoys a similar characterization even with $S$ in place of $\delta_R^2$.

Theorem 16.

Let $A, B \in \mathbb{P}_n$. Then,

(4.3)    $A\sharp B = \operatorname{argmin}_{X \succ 0}\ \big[S(X, A) + S(X, B)\big].$

Moreover, $A\sharp B$ is equidistant from $A$ and $B$, i.e., $\delta_S(A\sharp B, A) = \delta_S(A\sharp B, B)$.

Proof.

If $A = B$, then clearly $X = A$ minimizes $h(X) := S(X, A) + S(X, B)$. Assume therefore that $A \neq B$. Ignoring the constraint $X \succ 0$ for the moment, we see that any stationary point of $h$ must satisfy $\big(\tfrac{X+A}{2}\big)^{-1} + \big(\tfrac{X+B}{2}\big)^{-1} = 2X^{-1}$. This condition translates into

(4.4)    $XA^{-1}X = B.$

The last equation is a Riccati equation whose unique positive definite solution is the geometric mean $X = A\sharp B$ (see (Bhatia, 2007, Prop 1.2.13)).

Next, we show that this stationary point is a local minimum, not a local maximum or saddle point. To that end, we show that the Hessian is positive definite at the stationary point $X = A\sharp B$. The Hessian of $h$ is given by

Writing , and , upon using (4.4) we obtain

Thus, $A\sharp B$ is a strict local minimum of (4.3). Moreover, this local minimum is actually global, because the positive definite solution to (4.4) is unique.

To prove the equidistance, recall that $(A\sharp B)A^{-1}(A\sharp B) = B$; then observe that $\det(A\sharp B + B) = \det(A\sharp B)\det(A)^{-1}\det(A\sharp B + A)$ and $\det(B) = \det(A\sharp B)^2/\det(A)$, whence $S(A\sharp B, A) = S(A\sharp B, B)$.∎
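Theorem 16 is also easy to confirm numerically; the sketch below (ours; it uses scipy.linalg.sqrtm to form $A\sharp B$) checks the equidistance property for a random pair.

```python
import numpy as np
from scipy.linalg import sqrtm

def logdet(A):
    return np.linalg.slogdet(A)[1]

def s_div(X, Y):
    return logdet((X + Y) / 2) - 0.5 * logdet(X) - 0.5 * logdet(Y)

def geometric_mean(A, B):
    # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    As = np.real(sqrtm(A))
    Asi = np.linalg.inv(As)
    return As @ np.real(sqrtm(Asi @ B @ Asi)) @ As

rng = np.random.default_rng(5)
M = rng.standard_normal((6, 6)); A = M @ M.T + 6 * np.eye(6)
N = rng.standard_normal((6, 6)); B = N @ N.T + 6 * np.eye(6)
G = geometric_mean(A, B)
print(s_div(G, A), s_div(G, B))   # the two values agree up to rounding error
```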

4.2 S-Divergence mean for more than 2 matrices

The arguments presented above can be extended to compute means of more than two matrices. The optimization problem here is

(4.5)    $\min_{X \succ 0}\ h(X) := \sum\nolimits_{i=1}^{m} w_i\, S(X, A_i),$

where $A_1, \dots, A_m \in \mathbb{P}_n$, and the weights $w_i \ge 0$ satisfy $\sum_i w_i = 1$. This problem was essentially studied in Cherian et al. (2011), and more thoroughly investigated in Chebbi and Moakher (2012). Both papers essentially considered solving the first-order necessary condition

(4.6)    $\sum\nolimits_{i=1}^{m} w_i \Big(\tfrac{X + A_i}{2}\Big)^{-1} = X^{-1},$

and both made a critical oversight by claiming the unique positive definite solution to (4.6) to be the global minimum of (4.5), whereas their proofs established only stationarity, neither global nor local optimality. We fix this oversight below.
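For reference, here is a minimal sketch (ours) of one natural fixed-point scheme suggested by (4.6), namely $X \leftarrow \big[\sum_i w_i ((X + A_i)/2)^{-1}\big]^{-1}$ with equal weights; we do not claim this is the exact iteration used in the experiments of Figure 1.

```python
import numpy as np

def s_mean(mats, iters=100, tol=1e-10):
    # fixed-point iteration for the S-divergence mean (equal weights assumed)
    m = len(mats)
    X = sum(mats) / m                    # arithmetic mean as a starting point
    for _ in range(iters):
        T = sum(np.linalg.inv((X + A) / 2) for A in mats) / m
        X_new = np.linalg.inv(T)
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X):
            return X_new
        X = X_new
    return X
```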

Lemma 17 (Uniqueness).

Equation (4.6) has a unique HPD solution.

Proof.

See (Chebbi and Moakher, 2012, Prop. 3.10); we include a version of their proof (in our notation) in the appendix for completeness. ∎

Lemma 18.

If and , , then .

Proof.

Easy exercise; but we provide details for completeness. We must prove that for any complex vector (of suitable dimension), it holds that . Equivalently, we may prove that for an arbitrary complex matrix , we have

(4.7)

Since , whenever , it suffices to prove that

(4.8)

But for , , for any . Thus, (4.8) follows immediately. ∎

Theorem 19.

Let $X$ be an HPD matrix satisfying (4.6). Then, it is the unique positive definite global minimizer of (4.5).

Proof.

Lemma 17 shows that $h$ has a unique positive definite stationary point, say $X_*$. We show that $X_*$ is actually a local minimum. Global optimality is then immediate from the uniqueness of $X_*$.

To show local optimality, we prove that the Hessian $\nabla^2 h(X_*) \succ 0$. Ignoring constants, showing this inequality reduces to proving that

(4.9)

Now replace in (4.9) using the condition (4.6); therewith (4.9) turns into