1 Introduction: Löwner extremal matrices and their applications
Let M(d) denote the space of d × d square matrices with real-valued coefficients, and Sym(d) = {M ∈ M(d) : M = M^T} the matrix vector space of symmetric matrices (although addition preserves symmetry, beware that the product of two symmetric matrices may not be symmetric). A matrix P ∈ Sym(d) is said to be Symmetric Positive Definite [1] (SPD, denoted by P ≻ 0) if and only if x^T P x > 0 for all x ≠ 0, and Symmetric Positive Semi-Definite (SPSD, denoted by P ⪰ 0) when we relax the strict inequality to x^T P x ≥ 0 (those definitions extend to Hermitian matrices). Let Sym^+(d) denote the space of positive semi-definite matrices, and Sym^++(d) the space of positive definite matrices. A matrix of Sym(d) is defined by d(d+1)/2 real coefficients, and so is an SPD or an SPSD matrix. Although Sym(d) is a vector space, the SPSD matrix space Sym^+(d) does not have a vector space structure but is rather an abstract pointed convex cone with apex the zero matrix, since λP + μQ ⪰ 0 for any P, Q ⪰ 0 and λ, μ ≥ 0. Symmetric matrices can be partially ordered using the Löwner ordering (also often written Loewner in the literature, e.g., see [2]): P ⪰ Q iff P − Q ⪰ 0, and P ≻ Q iff P − Q ≻ 0. When P ⪰ Q, matrix P is said to dominate matrix Q, or equivalently matrix Q is said to be dominated by matrix P. Note that the difference of two SPSD matrices may not be an SPSD matrix: for example, P = diag(2, 1) and Q = diag(1, 2) are both SPD, but P − Q = diag(1, −1) is not SPSD. A non-SPSD symmetric matrix Q can be dominated by an SPSD matrix: for example, Q = U diag(λ_1, …, λ_d) U^T is dominated by |Q| = U diag(|λ_1|, …, |λ_d|) U^T, obtained by taking the absolute values of the eigenvalues of Q. The supremum operator is defined on symmetric matrices (not necessarily SPSD) as follows:
Problem 1 (Löwner maximal matrices)

P̄ = max(P_1, …, P_n): a matrix S with S ⪰ P_i for all i ∈ {1, …, n} that is minimal with respect to the Löwner order,   (1)

where P_1, …, P_n ∈ Sym(d) are the given input matrices.
This matrix P̄ is indeed the “smallest”, meaning the tightest upper bound: by definition there does not exist another symmetric matrix dominating all the P_i's and dominated by P̄. Trivially, when there exists a matrix P_j that dominates all others of a set {P_1, …, P_n}, then the supremum of that set is the matrix P_j. Similarly, we define the minimal/infimum matrix min(P_1, …, P_n) as the tightest lower bound. Since matrix inversion reverses the Löwner ordering (P ⪰ Q ≻ 0 implies Q^{-1} ⪰ P^{-1}), we link those extremal supremum/infimum matrices, when considering sets of invertible symmetric matrices, as follows: min(P_1, …, P_n) = (max(P_1^{-1}, …, P_n^{-1}))^{-1}. Extremal matrices are rotationally invariant: max(O P_1 O^T, …, O P_n O^T) = O max(P_1, …, P_n) O^T, where O is any orthogonal matrix (O O^T = I). This property is important in DT-MRI processing that should be invariant to the chosen reference frame. Computing Löwner extremal matrices is useful in many applications: for example, in matrix-valued imaging [3, 4] (morphological operations, filtering, denoising, or image pyramid representations), in formal software verification [5], in statistical inference with domain constraints [6, 7], in the structure tensor of computer vision [8] (Förstner-like operators), etc.

This letter is organized as follows: Section 2 explains how to transform the extremal matrix problem into an equivalent geometric minimum enclosing ball of balls problem. Section 3 presents a fast iterative approximation algorithm that scales well in high dimensions. Section 4 concludes by hinting at further perspectives.
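To make the ordering concrete, checking P ⪰ Q amounts to verifying that P − Q has no negative eigenvalue. A minimal sketch (assuming NumPy; the `dominates` helper is ours, for illustration only):

```python
import numpy as np

def dominates(P, Q, tol=1e-9):
    """Return True iff P dominates Q in the Loewner order, i.e. P - Q is SPSD."""
    # P - Q is symmetric, so eigvalsh applies and returns real eigenvalues.
    return np.linalg.eigvalsh(P - Q).min() >= -tol

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])

print(dominates(A, B))      # False: A - B = diag(1, -1) is indefinite
print(dominates(B, A))      # False: the two matrices are incomparable
print(dominates(A + B, A))  # True: (A + B) - A = B is SPSD
```

Since A and B above are incomparable, this pair already illustrates that the supremum of a set of symmetric matrices is in general none of its elements.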
2 Equivalent geometric covering problems
We build on top of [9] to prove that solving the d-dimensional Löwner maximal matrix problem amounts either to (1) finding the minimal covering Löwner matrix cone (with respect to set containment) of a corresponding set of cones in the D-dimensional space of symmetric matrices (with D = d(d+1)/2), or (2) finding the minimal enclosing ball of a set of corresponding (D−1)-dimensional “matrix balls”, which we cast into a geometric vector ball covering problem for amenable computation.
2.1 Minimal matrix/vector cone covering problems
Let L = {P ∈ Sym(d) : P ⪰ 0} denote the Löwner ordering cone, and Pen(P) the reverted and translated dominance cone (termed the penumbra cone in [9]) with apex P, embedded in the space of symmetric matrices, that represents all the symmetric matrices dominated by P: Pen(P) = {Q ∈ Sym(d) : Q ⪯ P} = {P} ⊖ L, where ⊖ denotes the Minkowski set subtraction operator: A ⊖ B = {a − b : a ∈ A, b ∈ B} (hence, Pen(P) = {P − S : S ⪰ 0}). A matrix P dominates Q iff Pen(Q) ⊆ Pen(P). In plain words, P dominates a set of matrices iff its associated dominance cone covers all the dominance cones Pen(P_i) for i ∈ {1, …, n}. The dominance cones are “abstract” cones defined in the symmetric matrix space that can be “visualized” as equivalent vector cones in dimension D = d(d+1)/2 using half-vectorization: for a symmetric matrix P = [p_ij], we stack the elements of the lower-triangular matrix part of P (with i ≥ j). Note that this is not the unique way to half-vectorize symmetric matrices, but it is enough for geometric containment purposes. Later, we shall enforce that the ℓ2-norm of the vectors matches the Frobenius matrix norm ‖P‖_F.
Let L̃ = {vec(P) : P ⪰ 0} denote the vectorized matrix Löwner ordering cone, and P̃en(P) = {vec(Q) : Q ⪯ P} denote the vector dominance cone. Next, we further transform these minimum D-dimensional matrix/vector cone covering problems into equivalent Minimum Enclosing Ball (MEB) problems of (D−1)-dimensional matrix/vector balls.
2.2 Minimum enclosing ball of ball problems
A basis of a convex cone C anchored at the origin is a convex subset B ⊂ C so that there exists a unique decomposition P = λB for every nonzero P ∈ C, with B ∈ B and λ > 0. For example, {P ⪰ 0 : tr(P) = 1} is a basis of the Löwner cone L. Informally speaking, a basis of a cone can be interpreted as a compact cross-section of the cone. The Löwner cone L is a smooth convex cone with its interior denoting the space of positive definite matrices (full-rank matrices), and its border the rank-deficient symmetric positive semi-definite matrices (with apex the zero matrix, of rank 0). A point E is an extreme element of a convex set S iff S \ {E} remains convex. It follows from Minkowski's theorem that every compact convex set in a finite-dimensional vector space can be reconstructed as convex combinations of its extreme points: that is, the compact convex set is the closed convex hull of its extreme points.
A face F of a closed cone C is a subcone such that P + Q ∈ F with P, Q ∈ C implies P, Q ∈ F. The 1-dimensional faces are the extremal rays of the cone. The extreme elements of the basis of the Löwner ordering cone are the unit-trace rank-1 matrices u u^T with ‖u‖ = 1 [10]. Other rank-deficient or full-rank matrices can be constructed by convex combinations of these rank-1 matrices, the extremal rays.
For any square matrix P, the trace operator is defined by tr(P) = Σ_i p_ii, the sum of the diagonal elements of the matrix. The trace also amounts to the sum of the eigenvalues of matrix P: tr(P) = Σ_i λ_i(P). The basis of a dominance cone Pen(P) is B(P) = {Q ⪯ P : tr(Q) = 0}. Note that all the bases of the dominance cones lie in the subspace of symmetric matrices with zero trace. Let ⟨P, Q⟩ = tr(P^T Q) denote the matrix inner product and ‖P‖_F = √⟨P, P⟩ the matrix Frobenius norm. Two matrices P and Q are orthogonal (or perpendicular) iff ⟨P, Q⟩ = 0. It can be checked that the identity matrix I is perpendicular to any zero-trace matrix Q since ⟨I, Q⟩ = tr(Q) = 0. The center C(P) of the ball basis of the dominance cone Pen(P) is obtained as the orthogonal projection of the apex P onto the zero-trace subspace: C(P) = P − (tr(P)/d) I. The dominance cone basis is a matrix ball since for any rank-1 matrix u u^T with ‖u‖ = 1 (yielding an extreme point P − tr(P) u u^T), we have the radius:

r(P) = ‖P − tr(P) u u^T − C(P)‖_F = tr(P) ‖u u^T − (1/d) I‖_F = tr(P) √(1 − 1/d),   (2)

that is nonnegative since we assumed that tr(P) ≥ 0. Reciprocally, to a basis ball Ball(C, r), we can associate the apex of its corresponding dominance cone: P = C + (r/(d √(1 − 1/d))) I. Figure 1 illustrates the notations and the representation of a cone by its corresponding basis and apex. Thus we associate to each dominance cone Pen(P_i) its corresponding ball basis on the subspace of zero-trace matrices: C_i = P_i − (tr(P_i)/d) I, r_i = tr(P_i) √(1 − 1/d). We have the following containment relationships: Pen(P_i) ⊆ Pen(P) iff P_i ⪯ P, and P_i ⪯ P implies Ball(C_i, r_i) ⊆ Ball(C(P), r(P)).
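The center and radius just described can be computed directly; a short sketch (NumPy assumed; `dominance_ball` is our illustrative name, and the input is assumed to have nonnegative trace):

```python
import numpy as np

def dominance_ball(P):
    """Center and radius of the ball basis of the dominance cone of P.

    The center is the orthogonal projection of the apex P onto the
    zero-trace subspace; the radius follows the construction in the text.
    Assumes tr(P) >= 0 (see the trace normalization step).
    """
    d = P.shape[0]
    t = np.trace(P)
    center = P - (t / d) * np.eye(d)   # zero-trace projection
    radius = t * np.sqrt(1.0 - 1.0 / d)
    return center, radius

P = np.array([[3.0, 1.0], [1.0, 2.0]])
C, r = dominance_ball(P)
print(abs(np.trace(C)) < 1e-12)  # True: the center lies in the zero-trace subspace
# An extreme point P - tr(P) u u^T must lie at distance r from the center:
u = np.array([1.0, 0.0])
E = P - np.trace(P) * np.outer(u, u)
print(np.isclose(np.linalg.norm(E - C, 'fro'), r))  # True
```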
Finally, we transform this minimum enclosing matrix ball problem into a minimum enclosing vector ball problem using a half-vectorization that preserves the notion of distances, i.e., using an isomorphism between the space of symmetric matrices and the space of half-vectorized matrices. The ℓ2-norm of the vectorized matrix should match the matrix Frobenius norm: ‖vec(P)‖_2 = ‖P‖_F. Since ‖P‖_F^2 = Σ_i p_ii^2 + 2 Σ_{i>j} p_ij^2, it follows that the off-diagonal entries of the half-vectorization must be scaled by √2. We can convert back a vector into a corresponding symmetric matrix.
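A sketch of such a norm-preserving half-vectorization (NumPy assumed; `vec` and `unvec` are our illustrative names):

```python
import numpy as np

def vec(P):
    """Half-vectorize symmetric P so the l2 norm matches the Frobenius norm.

    Diagonal entries are kept as-is; each off-diagonal entry appears once,
    scaled by sqrt(2) since it is counted twice in ||P||_F^2.
    """
    d = P.shape[0]
    i, j = np.tril_indices(d)
    v = P[i, j].astype(float).copy()
    v[i != j] *= np.sqrt(2.0)
    return v

def unvec(v, d):
    """Inverse mapping: rebuild the symmetric matrix from its vectorization."""
    P = np.zeros((d, d))
    i, j = np.tril_indices(d)
    w = v.copy()
    w[i != j] /= np.sqrt(2.0)
    P[i, j] = w
    P[j, i] = w
    return P

P = np.array([[3.0, 1.0], [1.0, 2.0]])
print(np.isclose(np.linalg.norm(vec(P)), np.linalg.norm(P, 'fro')))  # True
print(np.allclose(unvec(vec(P), 2), P))                              # True
```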
Since we have considered all dominance cones with basis rooted on the zero-trace subspace in order to compute the ball bases as orthogonal projections, we need to preprocess the symmetric matrices to ensure the nonnegative-trace property as follows: Let t = min_i tr(P_i) denote the minimal trace of the input set of symmetric matrices {P_1, …, P_n}, and define P_i' = P_i − (t/d) I for i ∈ {1, …, n}, where I denotes the identity matrix. Recall that tr(P_i') = tr(P_i) − t. By construction, the transformed input set satisfies tr(P_i') ≥ 0. Furthermore, observe that S ⪰ P_i iff S' ⪰ P_i' where S' = S − (t/d) I, so that max(P_1, …, P_n) = max(P_1', …, P_n') + (t/d) I.
As a side note, let us point out that the reverse basis-sphere-to-cone mapping has been used to compute the convex hull of d-dimensional spheres (convex homothets) from the convex hull of (d+1)-dimensional equivalent points [11, 12].
Finally, let us notice that there are several ways to majorize/minorize matrices: for example, one can seek extremal matrices that are invariant up to an invertible transformation [5], a stronger requirement than invariance under orthogonal transformations. In the latter case, it amounts to geometrically computing the Minimum Volume Enclosing Ellipsoid of Ellipsoids (MVEEE) [5, 13].
2.3 Defining approximations of max(P_1, …, P_n)
First, let us summarize the algorithm for computing the Löwner maximal matrix max(P_1, …, P_n) of a set of n symmetric matrices as follows:

1. Normalize the matrices so that they all have nonnegative traces: P_i' = P_i − (t/d) I with t = min_i tr(P_i).

2. Compute the vector ball representations Ball(c_i, r_i) of the dominance cones: c_i = vec(P_i' − (tr(P_i')/d) I) with r_i = tr(P_i') √(1 − 1/d).

3. Compute the small(est) enclosing ball Ball(c*, r*) of the basis balls (either exactly or an approximation).

4. Convert back the small(est) enclosing ball to the dominance cone, and recover its apex: S' = mat(c*) + (r*/(d √(1 − 1/d))) I, where mat(·) denotes the inverse of the half-vectorization vec(·).

5. Adjust back the matrix trace: max(P_1, …, P_n) ≈ S' + (t/d) I.
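The steps above can be sketched end-to-end as follows (NumPy assumed; the inner enclosing-ball loop uses the incremental updates of Section 3, and the helper names are ours, not the authors' Java implementation; the test case is 2 × 2, where the ball containment matches the Löwner order exactly):

```python
import numpy as np

def loewner_max(mats, iters=3000):
    """Approximate max(P_1, ..., P_n) in the Loewner order via the
    ball-covering reduction: (1) trace normalization, (2) dominance
    balls in the half-vectorized zero-trace subspace, (3) approximate
    minimum enclosing ball, (4) apex recovery, (5) trace adjustment."""
    d = mats[0].shape[0]
    I = np.eye(d)
    il, jl = np.tril_indices(d)
    off = il != jl

    def vec(P):  # half-vectorization matching the Frobenius norm
        v = P[il, jl].astype(float).copy()
        v[off] *= np.sqrt(2.0)
        return v

    def unvec(v):
        P = np.zeros((d, d))
        w = v.copy()
        w[off] /= np.sqrt(2.0)
        P[il, jl] = w
        P[jl, il] = w
        return P

    t = min(np.trace(P) for P in mats)            # (1) normalize traces
    balls = []
    for P in mats:                                # (2) dominance balls
        Q = P - (t / d) * I
        tq = np.trace(Q)
        balls.append((vec(Q - (tq / d) * I), tq * np.sqrt(1.0 - 1.0 / d)))

    e = balls[0][0].copy()                        # (3) approximate MEB
    for i in range(1, iters + 1):
        c, r = max(balls, key=lambda b: np.linalg.norm(e - b[0]) + b[1])
        g = c - e
        n = np.linalg.norm(g)
        far = c + (r / n) * g if n > 1e-12 else e
        e = e + (far - e) / (i + 1)
    r_star = max(np.linalg.norm(e - c) + r for c, r in balls)

    S = unvec(e) + (r_star / (np.sqrt(1.0 - 1.0 / d) * d)) * I  # (4) apex
    return S + (t / d) * I                        # (5) adjust trace

A = np.diag([2.0, 0.0])
B = np.diag([0.0, 2.0])
S = loewner_max([A, B])  # the true maximum here is 2 * I
print(np.linalg.eigvalsh(S - A).min() > -1e-6)   # True: S dominates A
print(np.linalg.eigvalsh(S - B).min() > -1e-6)   # True: S dominates B
```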
Computing exactly the extremal Löwner matrices suffers from the curse of dimensionality of computing MEBs [14]. In [9], Burgeth et al. proceed by discretizing the basis spheres, sampling the extreme points u u^T (in 2D, sampling unit vectors u = (cos θ, sin θ) at regularly spaced angles θ; in 3D, using spherical coordinates). This yields an approximation error that depends on the sampling resolution, requires more computation, and, even worse, the method does not scale [15] in high dimensions. Thus, in order to handle the high-dimensional matrices met in software formal verification [5] or in computer vision (structure tensors [8]), we consider approximations of the extremal Löwner matrices. The notion of tightness of the approximation (the epsilon) is imported straightforwardly from the tightness of the geometric covering problems: a (1 + ε)-approximation of max(P_1, …, P_n) is a matrix whose associated covering ball has radius at most (1 + ε) times the radius of the optimal covering ball. It follows from Eq. (2) that a (1 + ε)-approximation inflates the trace by at most a factor (1 + ε).
We present a fast guaranteed approximation algorithm for approximating the minimum enclosing ball of a set of balls (or more generally, for sets of compact geometric objects).
3 Approximating the minimum enclosing ball of objects and balls
We extend the incremental algorithm of Bădoiu and Clarkson [16] (BC), designed for finite point sets, to sets of balls or compact objects in large dimensions. Let B = {B_1, …, B_n} denote a set of n balls. For an object O and a query point e, denote by d_F(e, O) = max_{q ∈ O} ‖e − q‖ the farthest distance from e to O, and let F(e, O) denote the farthest point of O from e. The generalized BC [16] algorithm for approximating the circumcenter of the minimum volume enclosing ball of objects (MVBO) is summarized as follows:

1. Initialize the center e_1 with a point of an object (say, the center of B_1), and set i ← 1.

2. Repeat T times:

(a) Find the farthest object to the current center: O_j = argmax_{O ∈ B} d_F(e_i, O).

(b) Update the circumcenter: e_{i+1} = e_i + (F(e_i, O_j) − e_i)/(i + 1).

(c) i ← i + 1.

When considering balls as objects, the farthest distance of a point e to a ball Ball(c, r) is d_F(e, Ball(c, r)) = ‖e − c‖ + r, and the circumcenter updating rule uses the farthest point F(e, Ball(c, r)) = c + r (c − e)/‖c − e‖. See Figure 2 and the online video (https://www.youtube.com/watch?v=w1ULgGAK6vc) for an illustration. (MVBO can also be used to approximate the MEB of ellipsoids.) It is proved in [17] that at iteration i, we have ‖e_i − e*‖ ≤ r*/√i, where Ball(e*, r*) is the unique smallest enclosing ball. Hence the radius of the ball centered at e_i is bounded by (1 + 1/√i) r*. To get a (1 + ε)-approximation, we need ⌈1/ε²⌉ iterations. It follows that a (1 + ε)-approximation of the smallest enclosing ball of n D-dimensional balls can be computed in O(nD/ε²) time [17], and since D = d(d+1)/2, we get:
Theorem 1
The Löwner maximal matrix of a set of n d-dimensional symmetric matrices can be approximated within a factor (1 + ε), in the ball covering sense of Section 2, in O((d²/ε²) n) time.
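The ball-as-object rules described above can be sketched compactly (NumPy assumed; `meb_of_balls` is our illustrative name, and the 1/(i+1) step is the BC averaging rule):

```python
import numpy as np

def meb_of_balls(balls, iters=1000):
    """Approximate the minimum enclosing ball of a set of balls (c, r)
    with Badoiu-Clarkson updates generalized to balls as objects."""
    e = balls[0][0].astype(float).copy()
    for i in range(1, iters + 1):
        # Farthest ball from the current center e: distance ||e - c|| + r.
        c, r = max(balls, key=lambda b: np.linalg.norm(e - b[0]) + b[1])
        g = c - e
        n = np.linalg.norm(g)
        far = c + (r / n) * g if n > 1e-12 else e  # farthest point of that ball
        e = e + (far - e) / (i + 1)                # BC averaging step
    radius = max(np.linalg.norm(e - c) + r for c, r in balls)
    return e, radius

# Two unit balls centered at (-2, 0) and (2, 0): the MEB is Ball((0, 0), 3).
balls = [(np.array([-2.0, 0.0]), 1.0), (np.array([2.0, 0.0]), 1.0)]
center, radius = meb_of_balls(balls)
print(np.allclose(center, [0.0, 0.0], atol=0.1))  # True
print(abs(radius - 3.0) < 0.1)                    # True
```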
Interestingly, this shows that the approximation of Löwner supremum matrices admits coresets [17] (the subset of farthest balls chosen during the iterations): the smallest enclosing ball of the coreset alone is a (1 + ε)-approximation of the smallest enclosing ball of the full set, with a coreset size depending only on ε (and not on n or d). See [18] for other MEB approximation algorithms.
To a symmetric matrix P, we associate the quadratic form q_P(x) = x^T P x, a strictly convex function when P is SPD. Therefore, we may visualize the SPSD matrices in 2D/3D as ellipsoids (potentially degenerate flat ellipsoids for rank-deficient matrices). More precisely, we associate to each positive definite matrix P a geometric ellipsoid E(P) = {x : x^T P^{-1} x = l}, where l is a prescribed constant (usually set to 1, Figure 3). From the SVD decomposition P = U diag(λ_1, …, λ_d) U^T, we recover the rotation matrix U, and the semi-radii of the ellipsoid are the square roots √λ_i of the eigenvalues. It follows that P ⪰ Q implies E(Q) ⊆ E(P). To handle degenerate flat ellipsoids that are not full-dimensional (rank-deficient matrix P), we define E(P) as the limit of the ellipsoids E(P + εI) as ε → 0. Note that those ellipsoids are all centered at the origin, and may also conceptually be thought of as centered Gaussian distributions (or covariance matrices denoting the concentration ellipsoids of estimators [2] in statistics). We can also visualize the Löwner ordering cone and the dominance cones for 2 × 2 matrices embedded in the vectorized 3D space of symmetric matrices, and the corresponding half-vectorized ball bases (Figure 3).
4 Concluding remarks
Our novel extremal matrix approximation method allows one to leverage further results related to coresets [16] for dealing with high-dimensional extremal matrices. For example, we may consider clustering PSD matrices with respect to the Löwner order and use the k-center clustering technique with guaranteed approximation [19, 20]. A Java™ code of our method is available for reproducible research.
Acknowledgements
This work was carried out during the Matrix Information Geometry (MIG) workshop [21], organized at École Polytechnique, France in February 2011 (https://www.sonycsl.co.jp/person/nielsen/infogeo/MIG/). Frank Nielsen dedicates this work to the memory of his late father Gudmund Liebach Nielsen who passed away during the last day of the workshop.
References
 [1] R. Bhatia, Positive Definite Matrices. Princeton University Press, 2009.
 [2] M. Siotani, “Some applications of Loewner’s ordering on symmetric matrices,” Annals of the Institute of Statistical Mathematics, vol. 19, no. 1, pp. 245–259, 1967.
 [3] J. Angulo, “Supremum/infimum and nonlinear averaging of positive definite symmetric matrices,” Matrix Information Geometry, pp. 3–33, 2013.
 [4] B. Burgeth, A. Bruhn, N. Papenberg, M. Welk, and J. Weickert, “Mathematical morphology for matrix fields induced by the Loewner ordering in higher dimensions,” Signal Processing, vol. 87, 2007.
 [5] X. Allamigeon, S. Gaubert, E. Goubault, S. Putot, and N. Stott, “A scalable algebraic method to infer quadratic invariants of switched systems,” in Embedded Software (EMSOFT), 2015 International Conference on, Oct 2015, pp. 75–84.

 [6] J. A. Calvin and R. L. Dykstra, “Maximum likelihood estimation of a set of covariance matrices under Löwner order restrictions with applications to balanced multivariate variance components models,” The Annals of Statistics, pp. 850–869, 1991.
 [7] M.-T. Tsai, “Maximum likelihood estimation of Wishart mean matrices under Löwner order restrictions,” Journal of Multivariate Analysis, vol. 98, no. 5, pp. 932–944, 2007.
 [8] W. Förstner, “A Feature Based Correspondence Algorithm for Image Matching,” Int. Arch. of Photogrammetry and Remote Sensing, vol. 26, no. 3, pp. 150–166, 1986.
 [9] B. Burgeth, A. Bruhn, S. Didas, J. Weickert, and M. Welk, “Morphology for matrix data: Ordering versus PDEbased approach,” Image and Vision Computing, vol. 25, no. 4, pp. 496–511, 2007.
 [10] R. D. Hill and S. R. Waters, “On the cone of positive semidefinite matrices,” Linear Algebra and its Applications, vol. 90, pp. 81–88, 1987.
 [11] J.-D. Boissonnat, A. Cérézo, O. Devillers, J. Duquesne, and M. Yvinec, “An algorithm for constructing the convex hull of a set of spheres in dimension d,” Computational Geometry, vol. 6, no. 2, pp. 123–130, 1996.
 [12] J.-D. Boissonnat and M. I. Karavelas, “On the combinatorial complexity of Euclidean Voronoi cells and convex hulls of d-dimensional spheres,” in Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2003, pp. 305–312.
 [13] S. Jambawalikar and P. Kumar, “A note on approximate minimum volume enclosing ellipsoid of ellipsoids,” in Computational Sciences and Its Applications, 2008. ICCSA’08. International Conference on. IEEE, 2008, pp. 478–487.
 [14] K. Fischer, B. Gärtner, and M. Kutz, “Fast smallest-enclosing-ball computation in high dimensions,” in Algorithms – ESA 2003. Springer, 2003, pp. 630–641.
 [15] K. Fischer and B. Gärtner, “The smallest enclosing ball of balls: combinatorial structure and algorithms,” International Journal of Computational Geometry & Applications, vol. 14, no. 04n05, pp. 341–378, 2004.
 [16] M. Bădoiu and K. L. Clarkson, “Optimal coresets for balls,” Computational Geometry, vol. 40, no. 1, pp. 14–22, 2008.
 [17] ——, “Smaller coresets for balls,” in Proceedings of the Fourteenth Annual ACMSIAM Symposium on Discrete Algorithms, ser. SODA ’03. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2003, pp. 801–802. [Online]. Available: http://dl.acm.org/citation.cfm?id=644108.644240
 [18] P. Kumar, J. S. Mitchell, and E. A. Yildirim, “Approximate minimum enclosing balls in high dimensions using coresets,” Journal of Experimental Algorithmics (JEA), vol. 8, pp. 1–1, 2003.
 [19] J. Mihelic and B. Robic, “Approximation algorithms for the k-center problem: An experimental evaluation,” in Selected Papers of the International Conference on Operations Research (SOR 2002). Springer, 2003, p. 371.
 [20] K. Chen, “On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications,” SIAM Journal on Computing, vol. 39, no. 3, pp. 923–947, 2009.
 [21] F. Nielsen and R. Bhatia (Eds.), Matrix Information Geometry. Springer, 2013. http://www.springer.com/fr/book/9783642302312