Brownian motion tree models are toric

02/26/2019 · by Bernd Sturmfels, et al. · MIT · Max Planck Society · Universitat Pompeu Fabra

Felsenstein's classical model for Gaussian distributions on a phylogenetic tree is shown to be a toric variety in the space of concentration matrices. We present an exact semialgebraic characterization of this model, and we demonstrate how the toric structure leads to exact methods for maximum likelihood estimation. Our results also give new insights into the geometry of ultrametric matrices.


1. Introduction

Brownian motion tree models are classical statistical models for phylogenetic trees. They were introduced by Felsenstein [Fel73] to examine continuous measurements of phenotypes in evolutionary biology. The vertices of the tree represent real-valued random variables, whose joint distribution obeys a Gaussian law.

Let $T$ be a tree with $n+1$ leaves, labelled $0, 1, \ldots, n$, and with no vertices of degree two. The rooted tree is obtained from $T$ by directing all edges away from the leaf $0$. The set of non-root vertices of the rooted tree is in natural bijection with the set of edges of $T$. A vertex $b$ is a descendant of $a$ if there is a directed path from $a$ to $b$. The set of all leaves of the rooted tree that are descendants of $a$ is denoted by $\mathrm{de}(a)$. We fix a total order on the non-root vertices such that $a < b$ whenever $a$ is a descendant of $b$. Given two leaves $i$ and $j$, we write $\mathrm{lca}(i,j)$ for their most recent common ancestor. Figure 1 shows our running example.

Figure 1. A tree with five leaves $0,1,\ldots,4$, seven edges, and its matrix representation.

In the space $\mathbb{S}^n$ of symmetric $n \times n$ matrices we consider the subspace

$L_T \;=\; \{\, \Sigma \in \mathbb{S}^n \;:\; \sigma_{ij} = \sigma_{kl} \ \text{whenever}\ \mathrm{lca}(i,j) = \mathrm{lca}(k,l) \,\}.$

Using parameters $t_a$, one for each non-root vertex $a$, the matrices $\Sigma$ in $L_T$ satisfy $\sigma_{ij} = t_{\mathrm{lca}(i,j)}$ for $1 \le i \le j \le n$. This furnishes a representation of the tree by an $n \times n$ matrix, as shown in Figure 1.

We are interested in Gaussian distributions on $\mathbb{R}^n$ with covariance matrix $\Sigma$ in $L_T$. Their concentration matrices $K = \Sigma^{-1}$ form an algebraic variety $L_T^{-1}$ whose dimension equals that of $L_T$.

We identify $L_T^{-1}$ with its Zariski closure in the projective space $\mathbb{P}^{\binom{n+1}{2}-1}$ of symmetric matrices. In this paper we show that the variety $L_T^{-1}$ is linearly isomorphic to a toric variety in that projective space. In tropical geometry [MS15, Remark 4.3.11] and algebraic combinatorics [BFF18, Theorem 4.6], one associates a toric ideal $I_T$ with the unrooted tree $T$ as follows. The ideal $I_T$ has the quadratic generators $p_{ik}p_{jl} - p_{il}p_{jk}$, where $\{i,j\}$ and $\{k,l\}$ are the cherries in the induced four-leaf subtree on any quadruple $\{i,j,k,l\} \subseteq \{0,1,\ldots,n\}$.

To reveal the toric structure, we introduce a change of coordinates on $\mathbb{S}^n$, applied to the concentration matrix $K = (k_{ij})$, as follows:

(1)   $p_{ij} \;=\; -\,k_{ij}$ for $1 \le i < j \le n$, and $p_{0i} \;=\; k_{i1} + k_{i2} + \cdots + k_{in}$ for $i = 1, \ldots, n$.

With this, the concentration matrix $K$ is the reduced Laplacian of the complete graph on the vertices $0, 1, \ldots, n$ with edge labels $p_{ij}$. See [MSUZ16, Example 4.9], where such a matrix is shown in equation (4.6). Here is the same scenario for $n = 4$:

$K \;=\; \begin{pmatrix} p_{01}{+}p_{12}{+}p_{13}{+}p_{14} & -p_{12} & -p_{13} & -p_{14} \\ -p_{12} & p_{02}{+}p_{12}{+}p_{23}{+}p_{24} & -p_{23} & -p_{24} \\ -p_{13} & -p_{23} & p_{03}{+}p_{13}{+}p_{23}{+}p_{34} & -p_{34} \\ -p_{14} & -p_{24} & -p_{34} & p_{04}{+}p_{14}{+}p_{24}{+}p_{34} \end{pmatrix}.$

Example 1.1.

We fix coordinates $p_{01}, p_{02}, \ldots, p_{34}$ on $\mathbb{P}^9$ by setting $p_{0i} = k_{i1}+k_{i2}+k_{i3}+k_{i4}$ and $p_{ij} = -k_{ij}$, as in (1). Fix the tree in Figure 1. The toric variety of $I_T$ in $\mathbb{P}^9$ is defined by the five quadrics

$p_{01}p_{23} - p_{02}p_{13}, \quad p_{01}p_{24} - p_{02}p_{14}, \quad p_{03}p_{14} - p_{04}p_{13}, \quad p_{03}p_{24} - p_{04}p_{23}, \quad p_{13}p_{24} - p_{14}p_{23}.$

These quadrics vanish for the inverse of any positive definite matrix $\Sigma$ with the structure in Figure 1. ∎
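
The toric relations can also be checked numerically. Here is a minimal sketch, assuming Python with numpy; the parameter values $t_1=\cdots=t_4=3$, $t_5=t_6=2$, $t_7=1$ are an arbitrary choice of a positive definite matrix with the structure in Figure 1.

import numpy as np

# Covariance matrix with the Figure 1 pattern sigma_ij = t_{lca(i,j)}.
# The parameter values below are an assumed, illustrative choice.
t = {1: 3.0, 2: 3.0, 3: 3.0, 4: 3.0, 5: 2.0, 6: 2.0, 7: 1.0}
Sigma = np.array([[t[1], t[5], t[7], t[7]],
                  [t[5], t[2], t[7], t[7]],
                  [t[7], t[7], t[3], t[6]],
                  [t[7], t[7], t[6], t[4]]])
K = np.linalg.inv(Sigma)

# Coordinates (1): p_0i is the i-th row sum of K, and p_ij = -k_ij for i < j.
p = {}
for i in range(4):
    p[(0, i + 1)] = K[i].sum()
    for j in range(i + 1, 4):
        p[(i + 1, j + 1)] = -K[i, j]

# The five quadratic binomials from Example 1.1, one per quadruple of leaves.
binomials = [
    p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)],
    p[(0, 1)] * p[(2, 4)] - p[(0, 2)] * p[(1, 4)],
    p[(0, 3)] * p[(1, 4)] - p[(0, 4)] * p[(1, 3)],
    p[(0, 3)] * p[(2, 4)] - p[(0, 4)] * p[(2, 3)],
    p[(1, 3)] * p[(2, 4)] - p[(1, 4)] * p[(2, 3)],
]
print(binomials)   # all five values are zero up to floating point error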

The title of this paper is an abridged version of the following statement:

Theorem 1.2.

The variety $L_T^{-1}$ of concentration matrices in the Brownian motion tree model, written in the coordinates (1), coincides with the toric variety defined by the ideal $I_T$.

The proof of this theorem will be given in Section 3. First, however, in Section 2, we offer an introduction to the statistical model and its phylogenetic applications. Our statistical models correspond to semialgebraic subsets of $L_T$ or $L_T^{-1}$. We are interested in two subsets of $L_T$, namely the spectrahedron obtained by intersecting $L_T$ with the cone of positive definite matrices, and the polyhedral cone of covariance matrices that arise from the Brownian motion tree model; the latter cone is described explicitly in Section 2.

We shall see that this polyhedral cone is a simplicial cone, contained in the spectrahedron.

Matrices in this polyhedral cone play an important role in statistics. By Proposition 3.14 in [DMSM14], every matrix in the cone is an ultrametric matrix, i.e. it satisfies $\sigma_{ij} \ge \min\{\sigma_{ik}, \sigma_{jk}\}$ for all $i, j, k$. By Theorem 3.16, every ultrametric matrix lies in such a cone for some tree $T$. Ultrametric matrices appear in the potential theory of finite state Markov chains, which is the context of [DMSM14]. Our motivation came from phylogenetics [Fel73] and Gaussian maximum likelihood estimation [ZUR17].

Every matrix in the spectrahedron represents a Gaussian distribution on $\mathbb{R}^n$. Both the spectrahedron and the polyhedral cone belong to the class of linear Gaussian covariance models [And70, ZUR17].

The main result of this paper is Theorem 2.6. This is an extension of Theorem 1.2 which features toric inequalities in addition to the quadratic binomial equations in $I_T$. It offers an exact semialgebraic description of the model in terms of the nonnegative coordinates $p_{ij}$. The proof of this result is presented in Section 5. It rests on formulas that express the entries of the concentration matrix in terms of treks, as in [STD10].

Section 4 is about fitting Brownian motion tree models to data, given by a sample covariance matrix $S$. We do so by maximizing the log-likelihood function

(2)   $\ell(\Sigma) \;=\; -\log\det\Sigma \;-\; \mathrm{trace}(S\,\Sigma^{-1}),$

written here up to an additive constant and a positive factor. This function is not concave in $\Sigma$. The same expression in terms of the concentration matrix $K = \Sigma^{-1}$ equals

(3)   $\ell(K) \;=\; \log\det K \;-\; \mathrm{trace}(S\,K).$

This function is concave in $K$, which motivates analyzing maximum likelihood estimation for Brownian motion tree models as an optimization problem over concentration matrices. As we will show in Section 4, in this parametrization maximum likelihood estimation boils down to solving a system of polynomial equations on the toric variety. The paper concludes with a brief discussion of how Theorem 2.6 might be applied to likelihood inference.
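
As a quick sanity check on this equivalence, the following sketch, assuming Python with numpy, evaluates (2) at a positive definite matrix $\Sigma$ and (3) at $K = \Sigma^{-1}$ for the same sample covariance matrix and confirms that the two values coincide; the test matrices are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 50
X = rng.standard_normal((n, N))
S = X @ X.T / N                      # sample covariance of mean-zero samples

A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)      # an arbitrary positive definite matrix
K = np.linalg.inv(Sigma)

loglik_cov = -np.log(np.linalg.det(Sigma)) - np.trace(S @ np.linalg.inv(Sigma))   # (2)
loglik_con = np.log(np.linalg.det(K)) - np.trace(S @ K)                           # (3)
print(loglik_cov, loglik_con)        # equal up to rounding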

2. Tree models and their parameters

Brownian motion is a stochastic process that characterizes the random motion of particles. It is a Wiener process $(W_t)_{t \ge 0}$ satisfying $W_0 = 0$, with independent increments, and such that for $s < t$ the increment $W_t - W_s$ has a Gaussian distribution with mean zero and variance $t - s$. Brownian motion on a rooted binary tree can also be described using the Wiener process. The process starts at the root $0$. At the time attached to an inner vertex, the process splits into two, and each of the two new processes starts evolving independently from the value attained at the splitting point. It again proceeds according to the Wiener process until another splitting event occurs. We think about this process as evolving along the rooted tree, where the parameters $t_a$ for inner vertices $a$ represent the times of splitting events. This construction is a continuous interpretation of the Gaussian structural equation model (4) discussed next.

Given a rooted tree $T$ with root $0$, we define a Gaussian distribution on its leaves as follows. First, set $X_0 = 0$. Then to each non-root vertex $a$ we associate independently a Gaussian random variable $\varepsilon_a$ with mean zero and variance $\theta_a$. The corresponding Markov process on the tree is a collection of real-valued random variables $X_a$, one for each vertex $a$. They satisfy

(4)   $X_a \;=\; X_{\mathrm{pa}(a)} \,+\, \varepsilon_a,$

where $\mathrm{pa}(a)$ denotes the parent of $a$. Since a linear transformation of a Gaussian vector is also Gaussian, we conclude that the random vector $X = (X_1, \ldots, X_n)$ of leaf variables is Gaussian. The set of covariance matrices of the marginal distributions on the leaf variables is the polyhedral cone from the Introduction.

Proposition 2.1.

The random vector $X = (X_1, \ldots, X_n)$ is normally distributed with mean zero, and the entries $\sigma_{ij}$ of its covariance matrix are

(5)   $\sigma_{ij} \;=\; \sum_{a \,:\, \{i,j\} \subseteq \mathrm{de}(a)} \theta_a \qquad \text{for } 1 \le i \le j \le n.$

The resulting Gaussians on $\mathbb{R}^n$ are precisely those with covariance matrices in the polyhedral cone.

Proof. Using (4) recursively, we can write each leaf variable in terms of the error terms as $X_i = \sum_{a} \varepsilon_a$, where the sum is over all non-root vertices $a$ with $i \in \mathrm{de}(a)$. Equation (5) follows from this and the fact that all $\varepsilon_a$'s are mutually independent. The linear inequalities that define the polyhedral cone inside the linear space $L_T$ are equivalent to the requirement that the $\theta_a$'s be nonnegative. ∎

Example 2.2.

Consider the tree in Figure 1. The random variables for the inner vertices of the tree are $X_7 = \varepsilon_7$, $X_5 = \varepsilon_5 + \varepsilon_7$ and $X_6 = \varepsilon_6 + \varepsilon_7$, and we have $X_i = X_{\mathrm{pa}(i)} + \varepsilon_i$ for the leaves $i = 1,2,3,4$.

The $\varepsilon_a$ are independent univariate Gaussians with mean $0$ and variance $\theta_a$. Hence the marginal distribution of $(X_1, X_2, X_3, X_4)$ is Gaussian with the covariance matrix

(6)   $\Sigma \;=\; \begin{pmatrix} \theta_1{+}\theta_5{+}\theta_7 & \theta_5{+}\theta_7 & \theta_7 & \theta_7 \\ \theta_5{+}\theta_7 & \theta_2{+}\theta_5{+}\theta_7 & \theta_7 & \theta_7 \\ \theta_7 & \theta_7 & \theta_3{+}\theta_6{+}\theta_7 & \theta_6{+}\theta_7 \\ \theta_7 & \theta_7 & \theta_6{+}\theta_7 & \theta_4{+}\theta_6{+}\theta_7 \end{pmatrix}.$

This is the matrix in (5) and in Figure 1, where $t_1 = \theta_1{+}\theta_5{+}\theta_7$, $t_5 = \theta_5{+}\theta_7$, $t_7 = \theta_7$, and similarly for the other entries. The constraint that the $\theta_a$ are nonnegative translates into the inequalities $t_1, t_2 \ge t_5$ and $t_3, t_4 \ge t_6$ and $t_5, t_6 \ge t_7 \ge 0$. ∎
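
The structural equations (4) are straightforward to simulate. Here is a minimal sketch, assuming Python with numpy and taking all edge variances $\theta_a$ equal to one (an arbitrary choice); the empirical covariance of the simulated leaf variables approximates the matrix (6), which for these values has $t_1=\cdots=t_4=3$, $t_5=t_6=2$ and $t_7=1$.

import numpy as np

rng = np.random.default_rng(1)

# Rooted tree of Figure 1: parent of each non-root vertex (0 is the root).
parent = {7: 0, 5: 7, 6: 7, 1: 5, 2: 5, 3: 6, 4: 6}
theta = {v: 1.0 for v in parent}     # assumed edge variances theta_a

def sample_leaves(num_samples):
    """Draw samples of (X_1, X_2, X_3, X_4) via the recursion (4)."""
    X = {0: np.zeros(num_samples)}
    for v in [7, 5, 6, 1, 2, 3, 4]:  # parents before children
        X[v] = X[parent[v]] + rng.normal(0.0, np.sqrt(theta[v]), num_samples)
    return np.vstack([X[i] for i in [1, 2, 3, 4]])

Z = sample_leaves(200_000)
print(np.round(np.cov(Z), 2))
# Expected pattern sigma_ij = t_{lca(i,j)}: diagonal 3, cherry pairs 2, all others 1.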

The extreme rays of the polyhedral cone are as follows. For a non-root vertex $a$, let $e_a \in \{0,1\}^n$ be the vector whose $i$th entry is $1$ if $i \in \mathrm{de}(a)$ and $0$ otherwise. The corresponding rank one matrices $e_a e_a^T$ form a basis for the linear space $L_T$. In fact, the matrix $\Sigma$ in (5) equals

(7)   $\Sigma \;=\; \sum_{a} \theta_a \, e_a e_a^{T},$

where the sum ranges over all non-root vertices $a$ of the rooted tree.
Corollary 2.3.

The polyhedral cone is a simplicial cone, spanned by the rank one matrices $e_a e_a^T$ associated with the non-root vertices $a$. It is contained in the spectrahedral cone of positive semidefinite matrices in $L_T$.

Note that this inclusion is strict. For instance, the matrix in (6) remains positive definite for small negative values of $\theta_5$ if we set $\theta_1 = \theta_2 = \theta_3 = \theta_4 = 1$ and $\theta_6 = \theta_7 = 0$. This means that the linear covariance model given by $L_T$ is strictly larger than the Brownian motion tree model.

We next interpret our model in the context of distance-based phylogenetics. Using the natural bijection between non-root vertices and edges, we label each edge of $T$ with the parameter $\theta_a$ of the non-root vertex $a$ at its lower end. This is shown in the tree on the right in Figure 1. We think of $\theta_a$ as the length of the associated edge. We compute the distance between any two leaves of $T$ by summing the lengths of the edges on the unique path joining them. The collection of resulting distances $d_{ij}$ for $0 \le i < j \le n$ is a tree metric on $\{0,1,\ldots,n\}$.

The correspondence between ultrametric matrices and tree metrics on $n+1$ taxa is known in phylogenetics as the Farris transform. The formulas are

$d_{0i} \;=\; \sigma_{ii} \qquad \text{and} \qquad d_{ij} \;=\; \sigma_{ii} + \sigma_{jj} - 2\sigma_{ij} \quad \text{for } 1 \le i < j \le n,$

and these are equivalent to (5). The inverse of the Farris transform is given by

$\sigma_{ij} \;=\; \tfrac{1}{2}\,\bigl( d_{0i} + d_{0j} - d_{ij} \bigr) \quad \text{for } 1 \le i \le j \le n.$

Proposition 2.4.

The Brownian motion tree model, viewed as the polyhedral cone of covariance matrices, is identified with the cone of tree metrics on $T$ via the Farris transform. The parameters $\theta_a$ are the lengths of the edges.

Proof. The diagonal entry $\sigma_{ii}$ of the covariance matrix is the sum of the lengths of the incoming edges for all vertices on the path from the root $0$ to the leaf $i$. Therefore, $\sigma_{ii}$ is the distance from $0$ to $i$ in the unrooted tree $T$. Each off-diagonal entry $\sigma_{ij}$ is the length of the path from the root to $\mathrm{lca}(i,j)$. Hence $\sigma_{ii} - \sigma_{ij}$ is the length of the path from $\mathrm{lca}(i,j)$ to the leaf $i$. We conclude that $d_{ij} = \sigma_{ii} + \sigma_{jj} - 2\sigma_{ij}$ is the length of the path from leaf $i$ to leaf $j$ in $T$. Since the Farris transform is an invertible linear transformation, it identifies the two simplicial cones above. ∎
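
The Farris transform and its inverse are easy to implement. The sketch below, assuming Python with numpy, maps a covariance matrix of the Figure 1 form to the corresponding tree metric on $\{0,1,\ldots,4\}$ and back; the distances to the root $0$ are read off the diagonal. The input matrix is the same arbitrary example used earlier.

import numpy as np

def farris(Sigma):
    """Tree metric d on {0,1,...,n} obtained from a model covariance matrix."""
    n = Sigma.shape[0]
    d = np.zeros((n + 1, n + 1))
    for i in range(n):
        d[0, i + 1] = d[i + 1, 0] = Sigma[i, i]          # d_0i = sigma_ii
        for j in range(n):
            d[i + 1, j + 1] = Sigma[i, i] + Sigma[j, j] - 2 * Sigma[i, j]
    return d

def inverse_farris(d):
    """Recover sigma_ij = (d_0i + d_0j - d_ij) / 2 from the tree metric."""
    n = d.shape[0] - 1
    Sigma = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Sigma[i, j] = (d[0, i + 1] + d[0, j + 1] - d[i + 1, j + 1]) / 2
    return Sigma

Sigma = np.array([[3., 2., 1., 1.], [2., 3., 1., 1.], [1., 1., 3., 2.], [1., 1., 2., 3.]])
d = farris(Sigma)
assert np.allclose(inverse_farris(d), Sigma)
print(d)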

We next turn to the space of all tree metrics, which is a key object in phylogenetics. A classical result of Buneman [Bun71] states that a metric $d$ on $\{0,1,\ldots,n\}$ is a tree metric (for some tree) if and only if it satisfies the four point condition:

(8)   $d_{ij} + d_{kl} \;\le\; \max\{\, d_{ik}+d_{jl},\; d_{il}+d_{jk} \,\}$ for all $i,j,k,l$.

If $d$ is a tree metric on the tree $T$ then the following additional equation holds:

(9)   $d_{ik} + d_{jl} \;=\; d_{il} + d_{jk}$ whenever $\{i,j\}$ and $\{k,l\}$ are cherries in the subtree of $T$ induced on $\{i,j,k,l\}$.
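
Condition (8) is simple to test in code. The helper below is a small sketch, assuming Python with numpy; it uses the equivalent phrasing that, for every quadruple, the maximum of the three pairwise sums is attained at least twice, and it is applied to the tree metric of Figure 1 with all edge lengths equal to one (a hypothetical choice).

from itertools import combinations
import numpy as np

def four_point_condition(d, tol=1e-9):
    """Check that the two largest of the three sums d_ij+d_kl, d_ik+d_jl,
    d_il+d_jk agree, for every quadruple of taxa."""
    m = d.shape[0]
    for i, j, k, l in combinations(range(m), 4):
        sums = sorted([d[i, j] + d[k, l], d[i, k] + d[j, l], d[i, l] + d[j, k]])
        if sums[2] - sums[1] > tol:
            return False
    return True

# Tree metric on {0,1,2,3,4} for the tree in Figure 1, all edge lengths 1.
d = np.array([[0, 3, 3, 3, 3],
              [3, 0, 2, 4, 4],
              [3, 2, 0, 4, 4],
              [3, 4, 4, 0, 2],
              [3, 4, 4, 2, 0]], dtype=float)
print(four_point_condition(d))   # True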

The constraints (8) and (9) are also well known in tropical geometry [MS15, §4.3], where one identifies the space of tree metrics with the tropical Grassmannian that parametrizes tropical lines in tropical projective space. This is related to Theorem 1.2 as follows.

Remark 2.5.

If we set $p_{ij} = \exp(-d_{ij})$ then the linear relations (9) that hold for tree metrics on $T$ turn into precisely the binomial equations that define the toric ideal $I_T$.

We now state our main result. It augments Theorem 1.2 by incorporating the inequalities in (8). The unrooted tree obtained from $T$ by restricting to any four leaves is called a quartet of $T$. If equality holds in (8) then this four-leaf tree is a star quartet. If the inequality in (8) is strict then we call it a trivalent quartet.

Theorem 2.6.

Given any rooted tree $T$, the set of concentration matrices in the Brownian motion tree model on $T$ is the set of positive definite matrices $K$ whose coordinates (1) satisfy

(10)   $p_{ij} \ge 0$ for all $0 \le i < j \le n$; $p_{ik}\,p_{jl} = p_{il}\,p_{jk} \le p_{ij}\,p_{kl}$ for every quadruple whose quartet in $T$ is trivalent with cherries $\{i,j\}$ and $\{k,l\}$; and $p_{ij}p_{kl} = p_{ik}p_{jl} = p_{il}p_{jk}$ for every quadruple whose quartet is a star.
Remark 2.7.

These inequalities are satisfied by $p_{ij} = \exp(-d_{ij})$, where $d$ is any tree metric on $T$. Thus, the set of these models, as $T$ ranges over all rooted trees on $n$ leaves, is a multiplicative realization of the space of phylogenetic trees. This is reminiscent of the space of phylogenetic oranges studied by Moulton and Steel [MS04].

We illustrate the contents of Theorem 2.6 for our running example.

Example 2.8.

Fix the tree in Figure 1 with covariance matrix $\Sigma$ in (6), and set $K = \Sigma^{-1}$. Writing the concentration matrix $K$ as in Example 1.1, we obtain the coordinates $p_{ij}$ as explicit rational expressions in the parameters of (6).

The five quadratic binomials in $I_T$ are zero for these $p_{ij}$. Assuming this, Theorem 2.6 says that the expressions in (10) are nonnegative if and only if the parameters $\theta_a$ are all nonnegative, i.e. if and only if $\Sigma$ lies in the Brownian motion tree model. ∎

3. Toric ideals from trees

In this section we prove Theorem 1.2. The proof of Theorem 2.6 is given in Section 5. The following code in Macaulay2 [M2] provides the quadratic generators for our running example. It also shows that the rooted tree need not be binary.

Example 3.1.

Example 1.1 can be verified in Macaulay2 [M2] by running this code:

R = QQ[t1,t2,t3,t4,t5,t6,t7,p01,p02,p03,p04,p12,p13,p14,p23,p24,p34];
S = matrix {{t1,t5,t7,t7},
            {t5,t2,t7,t7},
            {t7,t7,t3,t6},
            {t7,t7,t6,t4}};
K = matrix {{p01+p12+p13+p14, -p12, -p13, -p14},
            {-p12, p02+p12+p23+p24, -p23, -p24},
            {-p13, -p23, p03+p13+p23+p34, -p34},
            {-p14, -p24, -p34, p04+p14+p24+p34}};
id4 = matrix {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
I = eliminate({t1,t2,t3,t4,t5,t6,t7},minors(1,S*K-id4))
codim I, degree I, betti mingens I

As claimed, the toric ideal $I_T$ has codimension 3, degree 5, and five quadratic generators.

We now examine non-binary trees. First we replace the two occurrences of t6 by t7 in the covariance matrix S. The resulting tree has six edges. By running the modified Macaulay2 code, we see that the ideal is still toric. It has codimension 4, and it is generated by 7 quadrics. Finally, we replace both t5 and t6 with t7. Now the unrooted tree has five edges. It is the star tree with leaves $0,1,2,3,4$. Its toric ideal is the ideal of the second hypersimplex $\Delta(2,5)$. It has codimension 5 and degree 11, with 10 quadratic generators. Modifying the code confirms these data. ∎

Proof of Theorem 1.2.

We use the following parametric representation for the toric variety of the ideal $I_T$ associated with the unrooted tree $T$. It is given by Laurent monomials in the entries of the matrix representation of the rooted tree:

(11)

The ideal $I_T$ is the kernel of the ring homomorphism given by (11).

The variety $L_T^{-1}$ is a cone in the space of symmetric matrices, given parametrically by mapping a covariance matrix $\Sigma \in L_T$ to its inverse $K = \Sigma^{-1}$. Since the parametrization is homogeneous, we may replace the inverse by the adjoint. By slight abuse of notation we set $K = \mathrm{adj}(\Sigma)$. The entries of the matrix $K$ are homogeneous polynomials of degree $n-1$ in the parameters $t_a$, one for each non-root vertex $a$. The same holds for the coordinates $p_{ij}$ in (1). We write $P_{0i}$ and $P_{ij}$ for these homogeneous polynomials. Our claim states that the toric ideal $I_T$ coincides with the kernel of the ring homomorphism $p_{0i} \mapsto P_{0i}$, $p_{ij} \mapsto P_{ij}$.

To prove this, we examine the initial monomials and the irreducible factorizations of the polynomials $P_{0i}$ and $P_{ij}$. Here we fix a degree reverse lexicographic order on the parameters $t_a$ that is compatible with the total order on the vertices fixed in the Introduction. For $1 \le i < j \le n$, the polynomial $P_{ij}$ is equal (up to sign) to the determinant of the submatrix of $\Sigma$ that is obtained by deleting row $i$ and column $j$. The initial monomial is the product of the entries of that submatrix which appear along the main diagonal.

The polynomial $P_{0i}$ is the determinant of the matrix obtained from $\Sigma$ by replacing the $i$th row with the all-ones vector $(1,1,\ldots,1)$. Its initial monomial is again the product of the diagonal entries of that matrix.

Hence, by (11), the relations among these initial monomials are precisely the relations given by $I_T$. We claim that each of the quadratic binomial relations among the above Laurent monomials lifts to exactly the same relation among the full polynomials $P_{0i}$ and $P_{ij}$. We shall prove this by examining the factorizations of these polynomials.

In what follows we first assume that $T$ is a binary tree, i.e. every inner vertex of the rooted tree has precisely two children. At the end of the proof, we shall derive Theorem 1.2 for non-binary trees from the same statement for binary trees.

For any inner vertex $a$ in the rooted binary tree, let $T_a$ denote the rooted tree obtained by deleting all edges and vertices below $a$. Thus $T_a$ is a rooted tree whose leaves are $a$ together with the original leaves that are not descendants of $a$. Let $D_a$ denote the determinant of its covariance matrix; this is a homogeneous polynomial in the parameters $t$. For any directed edge $e$ of the rooted tree, we also consider a submatrix of $\Sigma$ with one more column than rows, whose row and column sets are determined by $e$ together with one further fixed leaf; the matrix does not depend on which leaf is chosen. We make it square by placing the all-ones vector into the first row, and we write $D_e$ for the determinant of that square matrix. This is again a homogeneous polynomial in $t$. By convention, $D_e = 1$ for the root edge.

Consider the path between any two leaves $i$ and $j$ in the unrooted tree $T$. Each vertex in the interior of such a path has a unique child in the rooted tree that is not on the path. Here we are using the assumption that $T$ is a binary tree. The only exception is the top vertex $\mathrm{lca}(i,j)$ on the path between $i$ and $j$.

We find that each of the polynomials $P_{0i}$ and $P_{ij}$ factors as a product of the determinants introduced above, taken along the path in $T$ between the corresponding pair of leaves. One verifies this by examining for which parameter values these expressions vanish, and by noting that the initial monomials coincide with the products of the initial monomials of the factors.

The above factorizations of $P_{0i}$ and $P_{ij}$ into the determinants $D_a$ and $D_e$ show that each generator of $I_T$ vanishes on our variety. By our analysis of the leading monomials, there are no relations among the polynomials $P_{0i}$ and $P_{ij}$ beyond those in $I_T$. In fact, our analysis shows that these polynomials form a Khovanskii basis (cf. [KM19]) for the degree reverse lexicographic monomial order on the parameters $t_a$.

We now know that Theorem 1.2 holds for all binary trees. It remains to derive from this the same statement for all non-binary trees. The property of a rooted tree being binary translates into the property of the unrooted tree being trivalent. Let $T$ be any non-trivalent tree and let $\mathcal{T}$ be the set of all trivalent trees that are obtained by refining $T$. One verifies that the following identity among toric ideals holds:

(12)   $I_T \;=\; \sum_{T' \in \mathcal{T}} I_{T'}.$

Similarly, the linear space $L_T$ is the intersection of all the linear spaces $L_{T'}$, where $T'$ runs over $\mathcal{T}$. Since matrix inversion is a birational isomorphism, the variety $L_T^{-1}$ is the intersection of the toric varieties $L_{T'}^{-1}$, where $T'$ runs over the trivalent trees in $\mathcal{T}$. The Nullstellensatz implies that the sum of toric ideals in (12) cuts out $L_T^{-1}$ set-theoretically. This shows that $L_T^{-1}$ is a toric variety, with toric ideal as in (12). ∎

Example 3.2.

Consider the binary tree in Figure 1 and Examples 1.1 and 3.1. The special determinants defined above are explicit polynomials in the seven parameters $t_1, \ldots, t_7$. We are interested in the projective variety in $\mathbb{P}^9$ that is parametrized by these determinants. One verifies that this is the variety defined by the toric ideal $I_T$ seen in Example 1.1. Furthermore, the same toric variety is also parametrized by the initial monomials of the polynomials $P_{0i}$ and $P_{ij}$. ∎

Remark 3.3.

Tropical geometers know that the toric ideals $I_T$ are precisely the monomial-free initial ideals of the Plücker ideal that defines the Grassmannian of lines. The latter arises in a manner that is similar to our passage from covariance matrices to concentration matrices, namely by inverting matrices that have a Hankel structure. This is the content of [MSUZ16, Proposition 7.2]. We do not know whether this is related to the present paper. Is it possible to derive Theorem 1.2 by a degeneration argument from the relationship between Hankel matrices and Bézout matrices?

4. Maximum likelihood algebra

The log-likelihood function for Gaussian random variables is the function $\ell$ in (2). Here $S$ is a fixed sample covariance matrix, i.e. $S = \frac{1}{N} X X^{T}$, where $X$ is a real $n \times N$ matrix whose columns are the observed samples. Maximum likelihood estimation is concerned with maximizing the expression (2) over all covariance matrices $\Sigma$ in the model of interest. This optimization problem is equivalent to maximizing the expression (3) over all concentration matrices $K$ in the model.
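
Before developing the algebraic approach, it is useful to have a purely numerical baseline. The sketch below, assuming Python with numpy and scipy, is not the method of this paper: it simply parametrizes $\Sigma$ for the Figure 1 tree by nonnegative edge parameters $\theta_a$ as in (7) and maximizes the log-likelihood (3) with a local optimizer.

import numpy as np
from scipy.optimize import minimize

# Rank one generators e_a e_a^T of L_T for the tree in Figure 1, cf. (7).
de = {1: [1], 2: [2], 3: [3], 4: [4], 5: [1, 2], 6: [3, 4], 7: [1, 2, 3, 4]}
E = {a: np.array([[1.0 if i in leaves and j in leaves else 0.0
                   for j in range(1, 5)] for i in range(1, 5)])
     for a, leaves in de.items()}

def neg_loglik(theta, S):
    """Negative of (3) as a function of the edge parameters theta_1,...,theta_7."""
    Sigma = sum(th * E[a] for a, th in zip(sorted(E), theta))
    K = np.linalg.inv(Sigma)
    return -(np.log(np.linalg.det(K)) - np.trace(S @ K))

# Simulated data from the running example (arbitrary true parameters, all equal to 1).
rng = np.random.default_rng(2)
true_cov = [[3, 2, 1, 1], [2, 3, 1, 1], [1, 1, 3, 2], [1, 1, 2, 3]]
X = rng.multivariate_normal(np.zeros(4), true_cov, 500).T
S = X @ X.T / X.shape[1]

res = minimize(neg_loglik, x0=np.ones(7), args=(S,), bounds=[(1e-6, None)] * 7)
print(res.x)   # estimated edge parameters, close to (1,...,1) for this data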

The optimal solution to this problem is denoted by $\hat\Sigma$ or $\hat K = \hat\Sigma^{-1}$. This is called the maximum likelihood estimate (MLE) for the data $S$. Here the model is fixed but the data can vary. We therefore think of the MLE as a function of $S$.

In this section we study the MLE for the Brownian motion tree model. The idea is to take advantage of the toric structure revealed in Theorem 1.2. Thus, we use the coordinate change (1) that writes the concentration matrix $K$ as the reduced Laplacian of the complete graph on the vertices $0,1,\ldots,n$ with edge labels $p_{ij}$. With this, the expression (3) is a function of the $p_{ij}$, subject to the toric constraints in $I_T$. This gives us the flexibility to choose a convenient parametrization of the toric variety.

In algebraic statistics, one distinguishes two kinds of polynomial constraints for a statistical model, namely equations and inequalities. It is customary to first focus on the equations and examine the MLE in that setting before incorporating inequalities.

In our paper, the model is given by a semialgebraic set of concentration matrices. This set satisfies the inequalities in Theorem 2.6. For the discussion of MLE in the current section, we ignore the inequality constraints and identify the set with its Zariski closure, which is the toric variety of $I_T$. The critical points of the likelihood function on that variety are defined by a system of polynomial equations, known in statistics as the likelihood equations. These can be derived by using Lagrange multipliers, or via a monomial parametrization of the toric variety.

The maximum likelihood degree of the model is, by definition, the number of complex solutions to the likelihood equations for generic data $S$. This number is an algebraic invariant of the ideal $I_T$. To compute it we take $S$ to be a general symmetric matrix of full rank and we count all complex critical points of the likelihood function on the toric variety.

Proposition 4.1.

The maximum likelihood degree of the Brownian motion tree model on a binary tree with leaves is equal to .

Proof. This result was found by symbolic computation, namely using the Gröbner basis package in the computer algebra system Maple. For the larger trees the computation was carried out over a finite field. All combinatorial types of trees were considered. See Example 4.4 for an illustration of the case $n = 4$. ∎

This result is complementary to the usual approach in computational statistics, where one maximizes the likelihood function using a local numerical method such as the Newton–Raphson algorithm. Local methods perform best in a regime where the likelihood function is concave. Such a regime was identified in [ZUR17], where concavity was shown to hold with high probability when the dimension $n$ is small relative to the sample size $N$. In that analysis it was essential to use all constraints of the model, i.e., not just the equations but also the inequalities.

The maximum likelihood degree being equal to one means that the MLE can be written as a rational function of the data. Proposition 4.1 says that this happens for our model when $n = 2$ and $n = 3$. We next present the formulas for these two cases.

Example 4.2 ($n = 2$).

The toric ideal $I_T$ is the zero ideal, so our model is the full Gaussian family. This means that the MLE equals the sample covariance matrix: $\hat\Sigma = S$.

Since the MLE of the parameters is $\hat t_1 = s_{11}$, $\hat t_2 = s_{22}$, $\hat t_3 = s_{12}$, this leads to valid parameters for the Brownian motion tree model if $s_{11} \ge s_{12}$, $s_{22} \ge s_{12}$, and $s_{12} \ge 0$. ∎

Example 4.3 ($n = 3$).

We label the rooted tree so that $\{1,2\}$ is a clade. Hence $\{1,2\}$ and $\{0,3\}$ are the cherries in the unrooted tree $T$. Our toric ideal is principal:

$I_T \;=\; \langle\, p_{01}\,p_{23} \,-\, p_{02}\,p_{13} \,\rangle.$

This is equivalent to setting $\sigma_{13} = \sigma_{23}$ in the covariance matrix $\Sigma$. The MLE is a rational function of the entries $s_{ij}$ of the sample covariance matrix $S$.

The entries of the estimated covariance matrix satisfy and

The remaining two matrix entries must be equal:

The following two linear forms are preserved when passing from data to MLE:

Writing for the sample concentration matrix, we note that is a rank matrix which depends only on , , and . Also,

Example 4.4 ($n = 4$).

We consider the tree in Figure 1. Its toric variety was discussed in Examples 1.1, 3.1 and 3.2. We shall prove that the MLE for this model cannot be expressed in radicals. For this, we fix the parametrization

(13)