# Fast (1+ε)-approximation of the Löwner extremal matrices of high-dimensional symmetric matrices

Matrix data sets are nowadays common, for example in biomedical imaging, where the Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) modality produces data sets of 3D symmetric positive definite matrices anchored at voxel positions, capturing the anisotropic diffusion properties of water molecules in biological tissues. The space of symmetric matrices can be partially ordered using the Löwner ordering, and computing extremal matrices dominating a given set of matrices is a basic primitive used in matrix-valued signal processing. In this letter, we design a fast and easy-to-implement iterative algorithm to approximate arbitrarily finely these extremal matrices. Finally, we discuss extensions to matrix clustering.


## 1 Introduction: Löwner extremal matrices and their applications

Let M(d, ℝ) denote the space of square d×d matrices with real-valued coefficients, and Sym(d, ℝ) ⊂ M(d, ℝ) the matrix vector space of symmetric matrices. (Although addition preserves symmetry, beware that the product of two symmetric matrices may not be symmetric.) A matrix P ∈ Sym(d, ℝ) is said to be Symmetric Positive Definite [1] (SPD, denoted by P ≻ 0) iff x⊤Px > 0 for all x ≠ 0, and only Symmetric Positive Semi-Definite (SPSD, denoted by P ⪰ 0) when we relax the strict inequality to x⊤Px ≥ 0. (Those definitions extend to Hermitian matrices.) Let Sym⁺(d, ℝ) denote the space of positive semi-definite matrices, and Sym⁺⁺(d, ℝ) the space of positive definite matrices. A symmetric matrix is defined by D = d(d+1)/2 real coefficients, and so is a SPD or a SPSD matrix. Although Sym(d, ℝ) is a vector space, the SPSD matrix space Sym⁺(d, ℝ) does not have the vector space structure but is rather an abstract pointed convex cone with apex the zero matrix, since λP ⪰ 0 for any λ ≥ 0 and P ⪰ 0. Symmetric matrices can be partially ordered using the Löwner ordering (also often written Loewner in the literature, e.g., see [2]): P ⪰ Q ⟺ P − Q ⪰ 0, and P ≻ Q ⟺ P − Q ≻ 0. When P ⪰ Q, matrix P is said to dominate matrix Q, or equivalently matrix Q is said to be dominated by matrix P. Note that the difference of two SPSD matrices may not be a SPSD matrix. (For example, consider P = diag(1, 0) and Q = diag(0, 1): then neither P − Q = diag(1, −1) nor Q − P = diag(−1, 1) is SPSD.) A non-SPSD symmetric matrix S can be dominated by a SPSD matrix P when P − S ⪰ 0. (For example, S = diag(1, −1) is dominated by P = diag(1, 1), obtained by taking the absolute values of the eigenvalues of S.)
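As a concrete check of the ordering, the 2×2 case admits a closed-form test: A ⪰ B iff the difference A − B has non-negative trace and non-negative determinant (a PSD criterion valid for 2×2 symmetric matrices). The following pure-Python sketch (the function name is ours) illustrates it:

```python
def loewner_dominates_2x2(A, B):
    """Check A >= B in the Loewner order for 2x2 symmetric matrices.

    A - B is PSD iff its trace and determinant are both non-negative
    (this characterization is specific to the 2x2 symmetric case).
    """
    d00 = A[0][0] - B[0][0]
    d01 = A[0][1] - B[0][1]
    d11 = A[1][1] - B[1][1]
    trace = d00 + d11
    det = d00 * d11 - d01 * d01
    return trace >= 0 and det >= 0

# The order is only partial: diag(1, 0) and diag(0, 1) are incomparable.
# loewner_dominates_2x2([[1, 0], [0, 0]], [[0, 0], [0, 1]]) -> False
# loewner_dominates_2x2([[0, 0], [0, 1]], [[1, 0], [0, 0]]) -> False
```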

The supremum operator is defined on symmetric matrices (not necessarily SPSDs) as follows:

###### Problem 1 (Löwner maximal matrices)
 ¯S = inf{X ∈ Sym(d, ℝ) : ∀ i ∈ [n], X ⪰ S_i}, (1)

where [n] = {1, …, n}.

This matrix ¯S is indeed the “smallest” one, meaning the tightest upper bound, since by definition there does not exist another symmetric matrix dominating all the S_i's while being dominated by ¯S. Trivially, when there exists a matrix S_j ∈ S = {S_1, …, S_n} that dominates all the others, then the supremum of that set is S_j. Similarly, we define the minimal/infimum matrix S̲ as the tightest lower bound. Since matrix inversion reverses the Löwner ordering (P ⪰ Q ≻ 0 ⟹ Q⁻¹ ⪰ P⁻¹), we link those extremal supremum/infimum matrices when considering sets of invertible symmetric matrices as follows: min(S_1, …, S_n) = (max(S_1⁻¹, …, S_n⁻¹))⁻¹. Extremal matrices are rotationally invariant: max(OS_1O⊤, …, OS_nO⊤) = O max(S_1, …, S_n) O⊤, where O is any orthogonal matrix (OO⊤ = O⊤O = I). This property is important in DT-MRI processing, which should be invariant to the chosen reference frame.

Computing Löwner extremal matrices is useful in many applications: for example, in matrix-valued imaging [3, 4] (morphological operations, filtering, denoising, or image pyramid representations), in formal software verification [5], in statistical inference with domain constraints [6, 7], in computing structure tensors in computer vision [8] (Förstner-like operators), etc.

This letter is organized as follows: Section 2 explains how to transform the extremal matrix problem into an equivalent geometric minimum enclosing ball of balls. Section 3 presents a fast iterative approximation algorithm that scales well in high-dimensions. Section 4 concludes by hinting at further perspectives.

## 2 Equivalent geometric covering problems

We build on top of [9] to show that solving for the d-dimensional Löwner maximal matrix amounts either to (1) finding the minimal covering Löwner matrix cone (wrt. set containment ⊆) of a corresponding set of D-dimensional cones (with D = d(d+1)/2), or (2) finding the minimal enclosing ball of a set of corresponding (D−1)-dimensional “matrix balls”, which we cast into a geometric vector ball covering problem for amenable computations.

### 2.1 Minimal matrix/vector cone covering problems

Let L = {S ∈ Sym(d, ℝ) : S ⪰ 0} denote the Löwner ordering cone, and C(S) the reverted and translated dominance cone (termed the penumbra cone in [9]) with apex S, embedded in the space of symmetric matrices, that represents all the symmetric matrices dominated by S: C(S) = S ⊖ L, where ⊖ denotes the Minkowski set subtraction operator: S ⊖ L = {S − L : L ∈ L} (hence, S ∈ C(S)). A matrix X dominates S iff. C(S) ⊆ C(X). In plain words, X dominates a set of matrices S_1, …, S_n iff. its associated dominance cone covers all the dominance cones C(S_i) for i ∈ [n]. The dominance cones are “abstract” cones defined in the symmetric matrix space that can be “visualized” as equivalent vector cones in dimension D = d(d+1)/2 using half-vectorization: for a symmetric matrix S = [s_{i,j}], we stack the D elements of the lower-triangular part of S (with j ≤ i): vec(S) = (s_{1,1}, s_{2,1}, s_{2,2}, s_{3,1}, …, s_{d,d})⊤. Note that this is not the unique way to half-vectorize symmetric matrices, but it is enough for geometric containment purposes. Later, we shall enforce that the ℓ2-norm of the vectors matches the Frobenius matrix norm ‖S‖_F = √(Σ_{i,j} s_{i,j}²).

Let vec(L) = {vec(S) : S ∈ L} denote the vectorized matrix Löwner ordering cone, and vec(C(S)) = {vec(X) : X ∈ C(S)} denote the vector dominance cone. Next, we further transform these minimum D-dimensional matrix/vector cone covering problems into equivalent Minimum Enclosing Ball (MEB) problems of (D−1)-dimensional matrix/vector balls.

### 2.2 Minimum enclosing ball of ball problems

A basis of a convex cone C anchored at the origin is a convex subset B ⊂ C such that any x ∈ C, x ≠ 0, can be uniquely decomposed as x = λb with b ∈ B and λ > 0. For example, {S ⪰ 0 : tr(S) = 1} is a basis of the Löwner cone L. Informally speaking, a basis of a cone can be interpreted as a compact cross-section of the cone. The Löwner cone L is a smooth convex cone with its interior denoting the space of positive definite (full-rank) matrices, and its border the rank-deficient symmetric positive semi-definite matrices (with apex the zero matrix, of rank 0). A point x is an extreme element of a convex set C iff. C \ {x} remains convex. It follows from Minkowski's theorem that every compact convex set C in a finite-dimensional vector space can be reconstructed as the convex combinations of its extreme points ext(C): that is, the compact convex set is the closed convex hull of its extreme points, C = CH(ext(C)).

A face of a closed convex cone C is a subcone F ⊆ C such that x, y ∈ C and x + y ∈ F imply x, y ∈ F. The 1-dimensional faces are the extremal rays of the cone. The basis of the Löwner ordering cone is [10] the convex hull of the unit-trace rank-1 matrices vv⊤ (with v ∈ ℝ^d, ‖v‖ = 1). All other rank-deficient or full-rank matrices of the basis can be constructed by convex combinations of these rank-1 matrices, which generate the extremal rays.

For any square matrix S, the trace operator is defined by tr(S) = Σ_{i=1}^d s_{i,i}, the sum of the diagonal elements of the matrix. The trace also amounts to the sum of the eigenvalues of matrix S: tr(S) = Σ_{i=1}^d λ_i(S). The basis of a dominance cone C(S) is B(S) = {X ∈ C(S) : tr(X) = 0}. Note that all the bases of the dominance cones lie in the subspace of symmetric matrices with zero trace. Let ⟨P, Q⟩ = tr(P⊤Q) denote the matrix inner product and ‖S‖_F = √⟨S, S⟩ the matrix Frobenius norm. Two matrices P and Q are orthogonal (or perpendicular) iff. ⟨P, Q⟩ = 0. It can be checked that the identity matrix I is perpendicular to any zero-trace matrix S since ⟨I, S⟩ = tr(S) = 0. The center of the ball basis of the dominance cone C(S) is obtained as the orthogonal projection of the apex S onto the zero-trace subspace: σ(S) = S − (tr(S)/d) I. The dominance cone basis is a matrix ball since for any extreme point S − tr(S)vv⊤ (with ‖v‖ = 1), we have the radius:

 r(S) = ‖(S − tr(S)vv⊤) − σ(S)‖_F = tr(S)√(1 − 1/d), (2)

that is non-negative since we assumed that tr(S) ≥ 0. Reciprocally, to a basis ball Ball(σ, r), we can associate the apex of its corresponding dominance cone: S = σ + (r/(d√(1 − 1/d))) I. Figure 1 illustrates the notations and the representation of a cone by its corresponding basis and apex. Thus we associate to each dominance cone C(S_i) its corresponding ball basis on the subspace of zero-trace matrices: B_i = Ball(σ_i, r_i), with σ_i = σ(S_i) and r_i = r(S_i). We have the following containment relationships: X ⪰ S_i ⟺ C(S_i) ⊆ C(X) ⟺ B_i ⊆ Ball(σ(X), r(X)).
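The projection center σ(S), the radius of Eq. 2, and the reciprocal apex map can be sketched in a few lines of pure Python (function names are ours; tr(S) ≥ 0 is assumed):

```python
import math

def trace(S):
    return sum(S[i][i] for i in range(len(S)))

def ball_basis(S):
    """Center and radius of the ball basis of the dominance cone of S.

    The center is the orthogonal projection of the apex S onto the
    zero-trace subspace; the radius follows Eq. (2). Assumes tr(S) >= 0.
    """
    d = len(S)
    t = trace(S)
    sigma = [[S[i][j] - (t / d if i == j else 0.0) for j in range(d)]
             for i in range(d)]
    r = t * math.sqrt(1.0 - 1.0 / d)
    return sigma, r

def apex_from_ball(sigma, r):
    """Reciprocal map: recover the dominance-cone apex from its ball basis."""
    d = len(sigma)
    shift = r / (d * math.sqrt(1.0 - 1.0 / d))
    return [[sigma[i][j] + (shift if i == j else 0.0) for j in range(d)]
            for i in range(d)]
```

The two maps are inverse of each other, so a symmetric matrix can be carried to its ball representation and back without loss.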

Finally, we transform this minimum enclosing matrix ball problem into a minimum enclosing vector ball problem using a half-vectorization that preserves the notion of distance, i.e., using an isometry between the space of symmetric matrices and the space of half-vectorized matrices. The ℓ2-norm of the vectorized matrix should match the matrix Frobenius norm: ‖vec⁺(S)‖_2 = ‖S‖_F. Since ‖S‖_F² = Σ_i s_{i,i}² + 2 Σ_{i>j} s_{i,j}², it follows that the off-diagonal entries must be rescaled by √2: vec⁺(S) = (s_{1,1}, √2 s_{2,1}, s_{2,2}, √2 s_{3,1}, √2 s_{3,2}, s_{3,3}, …, s_{d,d})⊤. We can convert back a vector v into a corresponding symmetric matrix S = mat(v) by undoing this rescaling.
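The norm-preserving half-vectorization and its inverse may be sketched as follows (pure Python; the names vec_plus and mat are ours):

```python
import math

def vec_plus(S):
    """Half-vectorize a symmetric matrix, scaling off-diagonal entries
    by sqrt(2) so that the l2-norm equals the Frobenius norm."""
    d = len(S)
    v = []
    for i in range(d):
        for j in range(i + 1):
            v.append(S[i][j] if i == j else math.sqrt(2.0) * S[i][j])
    return v

def mat(v):
    """Inverse map: rebuild the symmetric matrix from its vec+ vector."""
    # Recover d from D = d(d+1)/2 entries.
    D = len(v)
    d = (math.isqrt(8 * D + 1) - 1) // 2
    S = [[0.0] * d for _ in range(d)]
    k = 0
    for i in range(d):
        for j in range(i + 1):
            x = v[k] if i == j else v[k] / math.sqrt(2.0)
            S[i][j] = S[j][i] = x
            k += 1
    return S
```

For S = [[1, 2], [2, 3]], vec_plus(S) = (1, 2√2, 3), whose squared ℓ2-norm 1 + 8 + 9 = 18 equals ‖S‖_F² = 1 + 4 + 4 + 9.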

Since we have considered all dominance cones with basis rooted on the zero-trace subspace in order to compute the ball bases as orthogonal projections, we need to pre-process the symmetric matrices to ensure non-negative traces as follows: Let t = min{tr(S_1), …, tr(S_n)} denote the minimal trace of the input set of symmetric matrices, and define S′_i = S_i − (t/d) I for i ∈ [n], where I denotes the d×d identity matrix. Recall that tr(S′_i) = tr(S_i) − t. By construction, the transformed input set satisfies tr(S′_i) ≥ 0. Furthermore, observe that X ⪰ S_i iff. X′ ⪰ S′_i where X′ = X − (t/d) I, so that ¯S = ¯S′ + (t/d) I.

As a side note, let us point out that the reverse basis-sphere-to-cone mapping has been used to compute the convex hull of d-dimensional spheres (convex homothets) from the convex hull of (d+1)-dimensional equivalent points [11, 12].

Finally, let us notice that there are several ways to majorize/minorize matrices: for example, one can seek extremal matrices that are invariant up to an invertible transformation [5], a stronger requirement than invariance by orthogonal transformation. In that case, it amounts to geometrically computing the Minimum Volume Enclosing Ellipsoid of Ellipsoids (MVEEE) [5, 13].

### 2.3 Defining (1+ϵ)-approximations of ¯S

First, let us summarize the algorithm for computing the Löwner maximal matrix ¯S of a set of symmetric matrices S_1, …, S_n as follows:

1. Normalize the matrices so that they all have non-negative traces:

 S′_i = S_i − (t/d) I, t = min{tr(S_1), …, tr(S_n)}.
2. Compute the vector ball representations of the dominance cones:

 B_i = Ball(σ_i, r_i)

with

 σ_i = vec⁺(S′_i − (tr(S′_i)/d) I)

and

 r_i = tr(S′_i)√(1 − 1/d).
3. Compute the small(est) enclosing ball Ball(σ′, r′) of the basis balls (either exactly or an approximation):

 B′ = Small(est)EnclosingBall(B_1, …, B_n).
4. Convert back the small(est) enclosing ball to its dominance cone, and recover the apex:

 ¯S′ = mat(σ′) + (r′/(d√(1 − 1/d))) I.
5. Adjust back the matrix trace:

 ¯S = ¯S′ + (t/d) I.
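The five steps above can be sketched end-to-end in pure Python (a sketch under our naming; the enclosing ball of step 3 is approximated here with the Bădoiu–Clarkson iteration discussed in Section 3):

```python
import math

def loewner_supremum(matrices, T=20000):
    """Approximate the Loewner maximal matrix of symmetric matrices
    (lists of lists), following the five-step pipeline."""
    d = len(matrices[0])
    D = d * (d + 1) // 2
    root2 = math.sqrt(2.0)
    w = math.sqrt(1.0 - 1.0 / d)

    def tr(S):
        return sum(S[i][i] for i in range(d))

    def vec_plus(S):
        # Norm-preserving half-vectorization (off-diagonals scaled by sqrt(2)).
        return [S[i][j] if i == j else root2 * S[i][j]
                for i in range(d) for j in range(i + 1)]

    # Step 1: shift all matrices so that every trace becomes non-negative.
    t = min(tr(S) for S in matrices)
    shifted = [[[S[i][j] - (t / d if i == j else 0.0) for j in range(d)]
                for i in range(d)] for S in matrices]

    # Step 2: ball basis of each dominance cone (center and radius).
    centers = [vec_plus([[S[i][j] - (tr(S) / d if i == j else 0.0)
                          for j in range(d)] for i in range(d)])
               for S in shifted]
    radii = [tr(S) * w for S in shifted]

    # Step 3: approximate the minimum enclosing ball (Badoiu-Clarkson).
    n = len(centers)
    c = centers[0][:]
    for it in range(1, T + 1):
        k = max(range(n), key=lambda q: math.dist(c, centers[q]) + radii[q])
        dist = math.dist(c, centers[k])
        if dist > 0.0:
            # Farthest point of ball k as seen from c.
            f = [centers[k][j] + (radii[k] / dist) * (centers[k][j] - c[j])
                 for j in range(D)]
        else:
            f = c[:]
            f[0] += radii[k]
        c = [c[j] + (f[j] - c[j]) / (it + 1) for j in range(D)]
    r = max(math.dist(c, centers[k]) + radii[k] for k in range(n))

    # Step 4: convert the enclosing ball back to its dominance-cone apex.
    S = [[0.0] * d for _ in range(d)]
    idx = 0
    for i in range(d):
        for j in range(i + 1):
            x = c[idx] if i == j else c[idx] / root2
            S[i][j] = S[j][i] = x
            idx += 1
    shift = r / (d * w)
    for i in range(d):
        # Step 5: add the apex offset and undo the trace normalization.
        S[i][i] += shift + t / d
    return S
```

On the pair diag(3, 1) and diag(1, 2), the pipeline converges to the expected supremum diag(3, 2).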

Computing exactly the extremal Löwner matrices suffers from the curse of dimensionality of computing MEBs [14]. In [9], Burgeth et al. proceed by discretizing the basis spheres, sampling the extreme points S − tr(S)vv⊤ for unit vectors v (in 2D, v = (cos θ, sin θ)⊤ for sampled angles θ; in 3D, v is sampled using spherical coordinates). This discretization yields an approximation term, requires more computation, and, even worse, the method does not scale [15] in high dimensions. Thus, in order to handle the high-dimensional matrices met in formal software verification [5] or in computer vision (structure tensors [8]), we consider (1+ε)-approximations of the extremal Löwner matrices. The notion of tightness of the approximation (the epsilon) is imported straightforwardly from the definition of the tightness of the geometric covering problems: a (1+ε)-approximation of ¯S is a dominating matrix S̃ whose associated basis ball has radius at most (1+ε) times that of the smallest enclosing ball. It follows from Eq. 2 that a (1+ε)-approximation satisfies tr(S̃′) ≤ (1+ε) tr(¯S′).

We now present a fast algorithm with guaranteed approximation for the minimum enclosing ball of a set of balls (or, more generally, of sets of compact geometric objects).

## 3 Approximating the minimum enclosing ball of objects and balls

We extend the incremental algorithm of Bădoiu and Clarkson [16] (BC), designed for finite point sets, to ball sets or compact object sets, so that it works in large dimensions. Let B = {B_1, …, B_n} denote a set of n balls. For an object O and a query point p, denote by d_f(p, O) = max_{x∈O} ‖p − x‖ the farthest distance from p to O, and let f(p, O) denote the farthest point of O from p. The generalized BC [16] algorithm for approximating the circumcenter of the minimum volume enclosing ball of objects (MVBO) is summarized as follows:

• Let c_1 be a point of B_1 (say, its center) and t = 1.

• Repeat T times:

• Find the farthest object to the current center: F_t = argmax_{B∈B} d_f(c_t, B).

• Update the circumcenter: c_{t+1} = c_t + (f(c_t, F_t) − c_t)/(t+1).

• t ← t + 1.

When considering balls as objects, the farthest distance of a point p to a ball B = Ball(σ, r) is d_f(p, B) = ‖p − σ‖ + r, with farthest point f(p, B) = σ + r (σ − p)/‖σ − p‖, so the circumcenter updating rule becomes: c_{t+1} = c_t + (f(c_t, F_t) − c_t)/(t+1). See Figure 2 and the online video for an illustration. (MVBO can also be used to approximate the MEB of ellipsoids.) It is proved in [17] that at iteration t, we have ‖c_t − c*‖ ≤ r*/√t, where Ball(c*, r*) is the unique smallest enclosing ball. Hence the radius of the ball centered at c_t is bounded by (1 + 1/√t) r*. To get a (1+ε)-approximation, we need ⌈1/ε²⌉ iterations. It follows that a (1+ε)-approximation of the smallest enclosing ball of n D-dimensional balls can be computed in O(nD/ε²)-time [17], and since D = d(d+1)/2, we get:
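A minimal pure-Python sketch of this generalized BC iteration for balls (centers given as coordinate lists; the name meb_of_balls is ours):

```python
import math

def meb_of_balls(centers, radii, T=10000):
    """Badoiu-Clarkson iteration generalized to balls: returns an
    approximate circumcenter and radius of the minimum enclosing ball."""
    D = len(centers[0])
    c = centers[0][:]
    for t in range(1, T + 1):
        # Farthest ball from c: farthest distance is ||c - sigma_k|| + r_k.
        k = max(range(len(centers)),
                key=lambda q: math.dist(c, centers[q]) + radii[q])
        dist = math.dist(c, centers[k])
        if dist > 0.0:
            # Farthest point of ball k lies on its far side, seen from c.
            f = [centers[k][j] + (radii[k] / dist) * (centers[k][j] - c[j])
                 for j in range(D)]
        else:
            f = c[:]
            f[0] += radii[k]
        # Incremental circumcenter update: step length shrinks as 1/(t+1).
        c = [c[j] + (f[j] - c[j]) / (t + 1) for j in range(D)]
    r = max(math.dist(c, centers[k]) + radii[k] for k in range(len(centers)))
    return c, r
```

For the two planar balls Ball((−1, 0), 0.5) and Ball((1, 0), 0.5), the iteration converges toward the exact MEB, Ball((0, 0), 1.5).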

###### Theorem 1

The Löwner maximal matrix ¯S of a set of n d-dimensional symmetric matrices can be approximated by a dominating matrix S̃ such that r(S̃) ≤ (1+ε) r(¯S′) in O(nd²/ε²)-time.

Interestingly, this shows that the approximation of Löwner supremum matrices admits core-sets [17]: the subset C ⊆ S of farthest balls chosen during the iterations, of size |C| = O(1/ε²), is such that the Löwner maximal matrix of C is a (1+ε)-approximation of the Löwner maximal matrix of the full set S. See [18] for other MEB approximation algorithms.

To a symmetric matrix S, we associate the quadratic form q_S(x) = x⊤Sx, which is a strictly convex function when S is SPD. Therefore, we may visualize the SPSD matrices in 2D/3D as ellipsoids (potentially degenerate flat ellipsoids for rank-deficient matrices). More precisely, we associate to each positive definite matrix S a geometric ellipsoid E(S) = {x : x⊤S⁻¹x ≤ l}, where l is a prescribed constant (usually set to 1, Figure 3). From the SVD decomposition S = U diag(λ_1, …, λ_d) U⊤, we recover the rotation matrix U, and the semi-radii of the ellipsoid are the square-root eigenvalues √λ_i. It follows that S_1 ⪯ S_2 iff. E(S_1) ⊆ E(S_2). To handle degenerate flat ellipsoids that are not fully dimensional (rank-deficient matrix S), we define E(S) = {√l S^{1/2} u : ‖u‖ ≤ 1}. Note that those ellipsoids are all centered at the origin, and may also conceptually be thought of as centered Gaussian distributions (or covariance matrices denoting the concentration ellipsoids of estimators [2] in statistics). We can also visualize the Löwner ordering cone and the dominance cones for 2×2 matrices embedded in the vectorized 3D space of symmetric matrices (Figure 3), and the corresponding half-vectorized ball bases (Figure 3).
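For the 2×2 case, the ellipsoid parameters admit a closed form, which the following pure-Python sketch computes (the function name is ours; the level constant l is set to 1 so the semi-radii are the square-root eigenvalues):

```python
import math

def ellipse_of_spd_2x2(S):
    """Semi-radii and major-axis rotation angle of the ellipse
    associated with a 2x2 SPD matrix (level constant l = 1)."""
    a, b, c = S[0][0], S[0][1], S[1][1]
    m = 0.5 * (a + c)                        # mean of the two eigenvalues
    delta = math.hypot(0.5 * (a - c), b)     # half of the eigenvalue gap
    lam1, lam2 = m + delta, m - delta        # eigenvalues, lam1 >= lam2
    theta = 0.5 * math.atan2(2.0 * b, a - c) # rotation angle of major axis
    return math.sqrt(lam1), math.sqrt(lam2), theta
```

For instance, diag(4, 1) yields semi-radii 2 and 1 with no rotation, while [[2, 1], [1, 2]] yields semi-radii √3 and 1 with a π/4 rotation.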

## 4 Concluding remarks

Our novel extremal matrix approximation method allows one to leverage further results on core-sets [16] for dealing with high-dimensional extremal matrices. For example, we may consider clustering PSD matrices with respect to the Löwner order and use the k-center clustering technique with guaranteed approximation [19, 20]. A Java™ implementation of our method is available for reproducible research.

## Acknowledgements

This work was carried out during the Matrix Information Geometry (MIG) workshop [21], organized at École Polytechnique, France in February 2011 (https://www.sonycsl.co.jp/person/nielsen/infogeo/MIG/). Frank Nielsen dedicates this work to the memory of his late father Gudmund Liebach Nielsen who passed away during the last day of the workshop.

## References

• [1] R. Bhatia, Positive definite matrices.   Princeton university press, 2009.
• [2] M. Siotani, “Some applications of Loewner’s ordering on symmetric matrices,” Annals of the Institute of Statistical Mathematics, vol. 19, no. 1, pp. 245–259, 1967.
• [3] J. Angulo, “Supremum/infimum and nonlinear averaging of positive definite symmetric matrices,” Matrix Information Geometry, pp. 3–33, 2013.
• [4] B. Burgeth, A. Bruhn, N. Papenberg, M. Welk, and J. Weickert, “Mathematical morphology for matrix fields induced by the Loewner ordering in higher dimensions,” Signal Processing, vol. 87, 2007.
• [5] X. Allamigeon, S. Gaubert, E. Goubault, S. Putot, and N. Stott, “A scalable algebraic method to infer quadratic invariants of switched systems,” in Embedded Software (EMSOFT), 2015 International Conference on, Oct 2015, pp. 75–84.
• [6] J. A. Calvin and R. L. Dykstra, “Maximum likelihood estimation of a set of covariance matrices under Löwner order restrictions with applications to balanced multivariate variance components models,” The Annals of Statistics, pp. 850–869, 1991.
• [7] M.-T. Tsai, “Maximum likelihood estimation of Wishart mean matrices under Löwner order restrictions,” Journal of Multivariate Analysis, vol. 98, no. 5, pp. 932–944, 2007.
• [8] W. Förstner, “A Feature Based Correspondence Algorithm for Image Matching,” Int. Arch. of Photogrammetry and Remote Sensing, vol. 26, no. 3, pp. 150–166, 1986.
• [9] B. Burgeth, A. Bruhn, S. Didas, J. Weickert, and M. Welk, “Morphology for matrix data: Ordering versus PDE-based approach,” Image and Vision Computing, vol. 25, no. 4, pp. 496–511, 2007.
• [10] R. D. Hill and S. R. Waters, “On the cone of positive semidefinite matrices,” Linear Algebra and its Applications, vol. 90, pp. 81–88, 1987.
• [11] J.-D. Boissonnat, A. Cérézo, O. Devillers, J. Duquesne, and M. Yvinec, “An algorithm for constructing the convex hull of a set of spheres in dimension d,” Computational Geometry, vol. 6, no. 2, pp. 123–130, 1996.
• [12] J.-D. Boissonnat and M. I. Karavelas, “On the combinatorial complexity of Euclidean Voronoi cells and convex hulls of d-dimensional spheres,” in Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms.   Society for Industrial and Applied Mathematics, 2003, pp. 305–312.
• [13] S. Jambawalikar and P. Kumar, “A note on approximate minimum volume enclosing ellipsoid of ellipsoids,” in Computational Sciences and Its Applications, 2008. ICCSA’08. International Conference on.   IEEE, 2008, pp. 478–487.
• [14] K. Fischer, B. Gärtner, and M. Kutz, “Fast smallest-enclosing-ball computation in high dimensions,” in Algorithms-ESA 2003.   Springer, 2003, pp. 630–641.
• [15] K. Fischer and B. Gärtner, “The smallest enclosing ball of balls: combinatorial structure and algorithms,” International Journal of Computational Geometry & Applications, vol. 14, no. 04n05, pp. 341–378, 2004.
• [16] M. Bădoiu and K. L. Clarkson, “Optimal core-sets for balls,” Computational Geometry, vol. 40, no. 1, pp. 14–22, 2008.
• [17] ——, “Smaller core-sets for balls,” in Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ser. SODA ’03.   Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2003, pp. 801–802. [Online]. Available: http://dl.acm.org/citation.cfm?id=644108.644240
• [18] P. Kumar, J. S. Mitchell, and E. A. Yildirim, “Approximate minimum enclosing balls in high dimensions using core-sets,” Journal of Experimental Algorithmics (JEA), vol. 8, pp. 1–1, 2003.
• [19] J. Mihelic and B. Robic, “Approximation algorithms for the k-center problem: An experimental evaluation,” in Selected papers of the International Conference on Operations Research (SOR 2002).   Springer, 2003, p. 371.
• [20] K. Chen, “On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications,” SIAM Journal on Computing, vol. 39, no. 3, pp. 923–947, 2009.
• [21] F. Nielsen and R. Bhatia (Eds.), Matrix Information Geometry.   Springer, 2013. http://www.springer.com/fr/book/9783642302312