Non-bifurcating phylogenetic tree inference via the adaptive LASSO

05/28/2018 · Cheng Zhang, et al.

Phylogenetic tree inference using deep DNA sequencing is reshaping our understanding of rapidly evolving systems, such as the within-host battle between viruses and the immune system. Densely sampled phylogenetic trees can contain special features, including "sampled ancestors" in which we sequence a genotype along with its direct descendants, and "polytomies" in which multiple descendants arise simultaneously. These features are apparent after identifying zero-length branches in the tree. However, current maximum-likelihood based approaches are not capable of revealing such zero-length branches. In this paper, we find these zero-length branches by introducing adaptive-LASSO-type regularization estimators to phylogenetics, deriving their properties, and showing regularization to be a practically useful approach for phylogenetics.


1. Introduction

Phylogenetic methods, originally developed to infer evolutionary relationships among species separated by millions of years, are now widely used in biomedicine to investigate very short-time-scale evolutionary history. For example, mutations in viral genomes can inform us about patterns of infection and evolutionary dynamics as they evolve in their hosts on a time-scale of years (Grenfell et al., 2004). Antibody-making B cells diversify in just a few weeks, with a mutation rate around a million times higher than the typical mutation rate for cell division (Kleinstein et al., 2003). Although general-purpose phylogenetic methods have proven useful in these biomedical settings, the basic assumption that evolutionary trees follow a bifurcating pattern need not hold. Our goal is to develop a penalized maximum-likelihood approach to infer non-bifurcating trees (Figure 1).

Although our practical interests concern inference for finite-length sequence data, some situations in biology will lead to non-bifurcating phylogenetic trees, even in the theoretical limit of infinite sequence information. For example, a retrovirus such as HIV incorporates a copy of its genetic material into the host cell upon infection. This genetic material is then used for many copies of the virus, and when more than two descendants from this infected cell are then sampled for sequencing, the correct phylogenetic tree forms a multifurcation from these multiple descendants (a.k.a. a polytomy). In other situations we may sample an ancestor along with a descendant cell, which will appear as a node with a single descendant edge (Figure 1). For example, antibody-making B cells evolve within host in dense accretions of cells called germinal centers in order to better bind foreign molecules (Victora and Nussenzweig, 2012). In such settings it is possible to sample a cell along with its direct descendant. Indeed, upon DNA replication in cell division, one cell inherits the original DNA of the coding strand, while the other inherits a copy which may contain a mutation from the original. If we sequence both of these cells, the first cell is the genetic ancestor of the second cell for this coding region. In this case the correct configuration of the two genotypes is that the first cell is a sampled ancestor of the second cell.

Figure 1. (a) A cartoon evolutionary scenario, with sampled ancestors (gray dots) and a multifurcation (dashed box). (b) A corresponding standard maximum likelihood phylogenetic inference, without regularization or thresholding.

However, DNA sequences are finite and often rather short, limiting the amount of information available for inferring phylogenetic trees. Even though entire genomes are large, the segment of interest for a phylogenetic analysis is frequently small. For example, B cells evolve rapidly only in the hundreds of DNA sites used to encode antibodies, and thus sequencing is typically applied only to this region (Georgiou et al., 2014). Similarly, modern applications of pathogen outbreak analysis using sequencing (Gardy et al., 2015) frequently observe the same sequence multiple times, indicating that sampling is dense relative to mutation rates. Because genetic recombination and processes such as viral reassortment (Chen and Holmes, 2008) break the assumption that genetic data have evolved according to a single tree, practitioners often restrict analysis to an even shorter region that they believe has evolved according to a single process.

These shorter sequences further motivate non-bifurcating tree inference. Indeed, even if a collection of sequences did in fact diverge in a bifurcating fashion, if no mutations happened in the sequenced region during this diversification (i.e., a zero-length branch) then a non-bifurcating representation is appropriate. We thus expect multifurcations and sampled ancestors whenever the time between bifurcations is short compared to the expected time between mutations in the sequenced region.

Non-bifurcating tree inference has thus far been done via Bayesian phylogenetics, with the two deviations from bifurcation handled in two separate lines of work. For multifurcations, Lewis et al. (2005, 2015) develop a prior on phylogenetic trees with positive mass on multifurcating trees, and then perform tree estimation using reversible jump MCMC (rjMCMC) moves between trees. For sampled ancestors, Gavryushkina et al. (2014, 2016) introduce a prior on trees with sampled ancestors and then also use rjMCMC for inference. To our knowledge, no priors have been defined that place mass on trees with both multifurcations and sampled ancestors.

Current biomedical applications require a more computationally efficient alternative to these Bayesian techniques. Indeed, current methods for real-time phylogenetics in the course of a viral outbreak use maximum likelihood (Neher and Bedford, 2015; Libin et al., 2017), which is orders of magnitude faster than Bayesian analyses. This is essential because the time between new sequences being added to the database is shorter than the execution time required for a Bayesian analysis. However, to our knowledge a maximum-likelihood alternative to such rjMCMC phylogenetic inference for multifurcating trees does not yet exist.

Elsewhere in statistics, researchers find zero sets of parameters via penalized maximum likelihood inference, commonly maximizing a log-likelihood function minus a penalty term. When the penalty term has a nonzero slope as a variable approaches zero, the penalty has the effect of “shrinking” that variable to exactly zero when the likelihood function does not provide substantial evidence that it should be nonzero. There is now a substantial literature on such estimators, of which penalized estimators such as the LASSO (Tibshirani, 1996) are the most popular.
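The zero-producing effect of a penalty with nonzero slope at the origin can be seen in one dimension, where both the LASSO penalty and a ridge (squared) penalty admit closed-form minimizers. This is an illustrative sketch, not from the paper: only the ℓ1 penalty returns exactly zero when the unpenalized estimate is small.

```python
import numpy as np

def lasso_1d(z, lam):
    """argmin_x 0.5*(x - z)**2 + lam*|x|: soft thresholding.

    The l1 penalty has slope lam at the origin, so the minimizer is
    exactly zero whenever |z| <= lam."""
    return float(np.sign(z) * max(abs(z) - lam, 0.0))

def ridge_1d(z, lam):
    """argmin_x 0.5*(x - z)**2 + lam*x**2: shrinks toward zero but
    (for z != 0) never reaches it, since the penalty is flat at the origin."""
    return z / (1.0 + 2.0 * lam)
```

For example, with penalty weight 0.5 an unpenalized estimate of 0.3 is sent exactly to zero by the ℓ1 penalty, while the ridge penalty merely shrinks it.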

In this paper, we introduce such regularization estimators into phylogenetics, derive their properties, and show regularization to be a practically-useful approach for phylogenetics via new algorithms and experiments. Specifically, we first show consistency: that the LASSO and its adaptive variants find all zero-length branches in the limit of long sequences with an appropriate penalty weight. We also derive new algorithms for phylogenetic LASSO and show them to be effective via simulation experiments and application to a Dengue virus data set.

Phylogenetic LASSO is challenging and requires new techniques beyond those for the classical LASSO. First, the phylogenetic log-likelihood function is non-linear and non-convex. More importantly, unlike standard model-selection settings where the variables can take both positive and negative values, the branch lengths of a tree are non-negative. Thus, the objective function of phylogenetic LASSO can only be defined on a constrained compact space, for which the “true parameter” lies on the boundary of the domain. Furthermore, the behavior of the phylogenetic log-likelihood on this boundary is untamed: when multiple branch lengths of a tree approach zero at the same time, the log-likelihood function may diverge to negative infinity, even though it is analytic in the interior of the domain of definition. The geometry of the subset of the boundary where these singularities happen is non-trivial, especially in the presence of randomness in the data. All of these issues combine to make theoretical analysis and practical implementation of these estimators an interesting challenge.

2. Mathematical framework

2.1. Phylogenetic tree

A phylogenetic tree (τ, q) is a tree graph τ such that each leaf has a unique name, and such that each edge e of the tree is associated with a non-negative number q_e. We will denote by E and V the set of edges and the set of vertices of the tree, respectively. We will refer to τ and q as the tree topology and the vector of branch lengths, respectively. Any edge adjacent to a leaf is called a pendant edge, and any other edge is called an internal edge. A pendant edge with zero branch length leads to a sampled ancestor, while an internal edge with zero branch length is part of a polytomy.

Throughout this paper, we assume that the topology τ of the tree is known and we are interested in reconstructing the vector of branch lengths. We allow individual branch lengths to be zero, which enables us to consider non-bifurcating trees. Since the tree topology is fixed, the tree is completely represented by the vector of branch lengths q. We will consider the set of all phylogenetic trees with topology τ and branch lengths bounded from above by some constant g > 0, that is, q ∈ [0, g]^E. This arbitrary upper bound on branch lengths is for mathematical convenience and does not represent a real constraint for the short-term evolutionary setting of interest here.

2.2. Phylogenetic likelihood

We will follow the most common setting for likelihood-based phylogenetics: a reversible continuous-time Markov chain model of substitution that is IID across sites. Briefly, let Ω denote the set of states and let m = |Ω|; for convenience, we assume the states have indices 1 to m. We assume that mutation events occur according to a continuous-time Markov chain on the states Ω. Specifically, the probability of ending in state j after time t, given that the site started in state i, is given by the (i, j)-th entry of P(t), where P is the matrix-valued function P(t) = exp(tQ), and the matrix Q is the instantaneous rate matrix of the evolutionary model. The branch length of a given edge represents the time during which the mutation process operates. We assume that the rate matrix Q is reversible with respect to a stationary distribution π on the set of states Ω.

We will use the term state assignment to refer to a single-site labelling of the leaves of the tree by characters in Ω. For a fixed vector of branch lengths q, the phylogenetic likelihood is defined as follows and will be denoted by L(q | D). Let D = (D_1, …, D_k) be the observed sequences (with characters in Ω) of length k over the leaves (i.e., each of the D_i's is a state assignment). The likelihood of observing D given the tree has the form

L(q | D) = ∏_{i=1}^{k} ∑_{a ⊃ D_i} π(a_ρ) ∏_{(u,v) ∈ E} P_{a_u a_v}(q_{(u,v)}),

where ρ is any internal node of the tree, a ranges over all extensions of D_i to the internal nodes of the tree, a_u denotes the state assigned to node u by a, P_{xy}(t) denotes the transition probability from character x to character y across an edge of length t defined by the given evolutionary model, and π is the stationary distribution of this evolutionary model. The value of the likelihood does not depend on the choice of ρ due to the reversibility assumption.

We will also denote ℓ(q | D) = log L(q | D) and refer to it as the log-likelihood function given the observed sequences. We allow the likelihood of a tree given data to be zero, and thus ℓ is defined on [0, g]^E with values in the extended real line [−∞, ∞). We note that ℓ is continuous: for any vector of branch lengths q, we have ℓ(q′ | D) → ℓ(q | D) as q′ → q, even if ℓ(q | D) = −∞.
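In practice this likelihood is computed with Felsenstein's pruning algorithm rather than by the explicit sum over extensions. The following minimal sketch (not from the paper; the nested-tuple tree encoding and function names are our own) computes a single site's likelihood under the Jukes–Cantor model as an assumed concrete choice of the substitution model:

```python
import numpy as np

def jc_matrix(t):
    """Jukes-Cantor transition probabilities P(t) for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.full((4, 4), diff) + np.eye(4) * (same - diff)

def partial(node):
    """Felsenstein pruning. A node is either a state index 0-3 (leaf)
    or a list of (child, branch_length) pairs (internal node)."""
    if isinstance(node, int):          # leaf: one-hot partial likelihood
        v = np.zeros(4)
        v[node] = 1.0
        return v
    out = np.ones(4)
    for child, t in node:              # multiply messages from children
        out *= jc_matrix(t) @ partial(child)
    return out

def site_likelihood(tree):
    """Sum the root partials against the uniform stationary distribution."""
    return float(0.25 * partial(tree).sum())
```

Note that two distinct leaf states joined by zero-length branches give likelihood exactly zero, i.e., log-likelihood −∞, which is the boundary behavior discussed above.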

Each vector of branch lengths q generates a distribution on the state assignments of the leaves, hereafter denoted by P_q. We will make the following assumptions:

Assumption 2.1 (Model identifiability).

If P_q = P_{q′}, then q = q′.

Assumption 2.2.

The data D are generated on a tree topology τ with vector of branch lengths q* according to the above Markov process, where some components of q* might be zero. We assume further that the tree distance (the sum of branch lengths along the connecting path) between any pair of leaves of the true tree is strictly positive.

The second assumption ensures that no two leaves will be labeled with identical sequences as the sequence length becomes long.

2.3. Regularized estimators for phylogenetic inference

Throughout the paper, we consider regularization-type estimators, which are defined as minimizers of the negative phylogenetic log-likelihood penalized with various penalty functions p:

q̂ = argmin_{q ∈ [0, g]^E}  −ℓ(q | D) + λ p(q).   (2.1)

Here p denotes the penalty function and λ > 0 is the regularization parameter that controls how strongly the penalty impacts the estimates. Different forms of the penalty function lead to different statistical estimators of the generating tree.

The existence of a minimizer as in (2.1) is guaranteed by the following lemma (proof in Appendix):

Lemma 2.3.

If the penalty p is continuous on [0, g]^E, then for any λ > 0 and observed sequences D, there exists a branch length vector q̂ minimizing (2.1).

We are especially interested in the ability of the estimators to detect polytomies and sampled ancestors. This leads us to the following definition of topological consistency, which in the usual variable selection setting is sometimes called sparsistency.

Definition 2.4.

For any vector of branch lengths q, we denote the index set of zero entries by Z(q) = {e ∈ E : q_e = 0}.

We say a regularized estimator q̂ with penalty function p is topologically consistent if, for all data-generating branch lengths q*, we have Z(q̂) = Z(q*) with probability converging to one as the sequence length increases.

Definition 2.5 (Phylogenetic LASSO).

The phylogenetic LASSO estimator is (2.1) with the standard LASSO penalty p(q) = ‖q‖₁, which in our setting of non-negative branch lengths is p(q) = Σ_{e ∈ E} q_e.

We will use q̂^lasso to denote the phylogenetic LASSO estimate, namely the minimizer of (2.1) with this penalty.

Definition 2.6 (Adaptive LASSO (Zou, 2006)).

The phylogenetic adaptive LASSO estimator is (2.1) with penalty function

p(q) = Σ_{e ∈ E} q_e / (q̂^lasso_e)^γ

for some γ > 0, where q̂^lasso is the phylogenetic LASSO estimate.

Definition 2.7 (Multiple-step adaptive LASSO (Bühlmann and Meier, 2008)).

The phylogenetic multiple-step LASSO is defined recursively, with the phylogenetic LASSO estimator as the base case, and the penalty function in (2.1) at step s + 1 being

p^(s+1)(q) = Σ_{e ∈ E} q_e / (q̂^(s)_e)^γ,

where γ > 0 and q̂^(s) is the s-step regularized estimator with penalty function p^(s).
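Numerically, the adaptive penalties above amount to reweighting each branch by an inverse power of the previous step's estimate, so branches the pilot fit already judged short are penalized more heavily. A small sketch (our own helper names; the eps guard against division by an exactly-zero pilot estimate is an implementation choice, not from the paper):

```python
import numpy as np

def adaptive_weights(q_prev, gamma=1.0, eps=1e-12):
    """Per-branch weights 1 / q_prev**gamma from the previous-step estimate."""
    return 1.0 / (np.asarray(q_prev, dtype=float) + eps) ** gamma

def adaptive_penalty(q, q_prev, lam, gamma=1.0):
    """lam * sum_e q_e / q_prev_e**gamma; branch lengths are non-negative,
    so no absolute values are needed."""
    return lam * float(np.sum(adaptive_weights(q_prev, gamma) * np.asarray(q, dtype=float)))
```

A branch with pilot estimate 0.1 thus receives ten times the weight of a branch with pilot estimate 1.0 (for γ = 1), which is what drives short branches to exactly zero in subsequent steps.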

3. Theoretical properties of LASSO-type regularized estimators for phylogenetic inference

We next show convergence and topological consistency of the LASSO-type phylogenetic estimators introduced in the previous section. As described in the introduction, phylogenetic LASSO is a non-convex regularization problem for which the true estimates lie on the boundary of a space on which the likelihood function is untamed. To circumvent these problems, we take a minor departure from the standard approach to the analysis of non-convex regularization: instead of imposing regularity conditions directly on the empirical log-likelihood function, we study the expected per-site log-likelihood and investigate its regularity. This function enables us to isolate the singular points and derive a local regularity condition similar to the Restricted Strong Convexity condition (Loh and Wainwright, 2013; Loh, 2017). This leads us to study the fast-rate generalization of the empirical log-likelihood in a PAC learning framework (Van Erven et al., 2015; Dinh et al., 2016).

3.1. Definitions and lemmas

We begin by setting the stage with needed definitions and lemmas. Most proofs are deferred to the Appendix.

Definition 3.1.

We define the expected per-site log-likelihood ℓ̄(q) = E[ ℓ(q | D₁) ] for any vector of branch lengths q, where the expectation is taken over a single site D₁ generated under the true branch lengths q*.

Definition 3.2.

For any δ > 0, we denote by V_δ the set of all branch length vectors q such that L(q | ψ) ≥ δ for all state assignments ψ to the leaves.

We have the following result, where ‖·‖ denotes the Euclidean norm on branch length vectors.

Lemma 3.3 (Limit likelihood).

The vector q* is the unique maximizer of ℓ̄, and

(1/k) ℓ(q | D) → ℓ̄(q) almost surely as k → ∞.   (3.1)

Moreover, there exist c > 0 and α > 0 depending on q* such that

ℓ̄(q*) − ℓ̄(q) ≥ c ‖q − q*‖^α.   (3.2)
Proof.

The first statement follows from the identifiability assumption, and (3.1) is a direct consequence of the Law of Large Numbers. Equation (3.2) follows from the Łojasiewicz inequality (Ji et al., 1992) for ℓ̄, which applies because ℓ̄ is an analytic function defined on a compact set with q* as its only maximizer. ∎

Group-based DNA sequence evolution models are a class of relatively simple models that have transition matrix structure compatible with an algebraic group (Evans and Speed, 1993). From Lemma 6.1 of Dinh et al. (2016), we have

Remark 3.4.

For group-based models, the exponent in (3.2) can be taken to be α = 2.

For any δ > 0, we also have the following estimates showing local Lipschitzness of the log-likelihood functions, recalling that k is the number of sites.

Lemma 3.5.

For any δ > 0, there exists a constant L_δ > 0 such that

|(1/k) ℓ(q | D) − (1/k) ℓ(q′ | D)| ≤ L_δ ‖q − q′‖   (3.3)

and

|ℓ̄(q) − ℓ̄(q′)| ≤ L_δ ‖q − q′‖   (3.4)

for all q, q′ ∈ V_δ.

Fix an arbitrary . For any we consider the excess loss

and derive a PAC lower bound on the deviation of the excess loss from its expected value on . First note that since the sites are independent and identically distributed, we have

Moreover, from Lemma 3.5, we have . Applying this in the case and noting that is the average of IID copies of , we have that . This implies by (3.2) that

(3.5)

for all .

Lemma 3.6.

Let be the set of all branch length vectors such that . Let be the constant in Lemma 3.3. For any and previously specified variables there exists (independent of ) such that for any , we have:

with probability greater than .

We also need the following preliminary lemma from Dinh et al. (2016).

Lemma 3.7.

Given , there exist constants depending only on such that for all , if then .

3.2. Convergence and topological consistency of regularized phylogenetics

We now show convergence and topological consistency of , the regularized estimator (2.1) for various choices of penalty as the sequence length increases. For convenience, we will assume throughout this section that the parameters and (defined in the previous section) are fixed.

3.2.1. Convergence

We first have the following two lemmas guaranteeing that if is carefully chosen, a neighborhood of and the regularized estimator lie inside with high probability.

Lemma 3.8.

There exist and an open neighborhood of in such that .

Lemma 3.9.

If the sequence is bounded, then for any , there exist and such that for all , with probability at least .

We are now ready to prove a series of theorems establishing consistency and topological consistency of phylogenetic adaptive and multi-step adaptive LASSO. As part of this development we will first use as a hypothesis and then establish the technical condition that there exists a independent of such that

(3.6)

This will form an essential part of our recursive proof. As the first step in this project, choosing to satisfy these lemmas, we can use the deviation bound of Lemma 3.6 to prove

Theorem 3.10.

If then converges to almost surely.

Moreover, letting be the constant in Lemma 3.3, for any there exist and such that for all , with probability at least we have

(3.7)

If we assume further that there exists a independent of satisfying (3.6) then there exists such that for all ,

with probability at least .

Proof.

By definition of the estimator, we have

which is equivalent to .

We have with probability at least from Lemma 3.9 for sufficiently large. Therefore by Lemma 3.6,

with probability at least . The second case implies that

while for the first case, we have

since and . This demonstrates (3.7).

If the additional assumption (3.6) is satisfied, we also have

Using Lemma 3.7 with

we obtain

which implies

This completes the proof. ∎

3.2.2. Topological consistency

The goal of this section is to prove that the phylogenetic LASSO is able to detect zero edges, which then give polytomies and sampled ancestors. Since the estimators are defined recursively, we will establish these properties of adaptive and multi-step phylogenetic LASSO through an inductive argument. Throughout this section, we will continue to use to denote the regularized estimator (2.1). We will use to denote the corresponding adaptive estimator where and for some . We will use as the regularizing parameter for the second step (regularizing with ) and keep as the parameter for the first step. These two need not be equal.

For positive sequences , we will use the notation to mean that . We have the following result showing consistency of adaptive LASSO, and setting the stage to show topological consistency of adaptive LASSO.

Theorem 3.11.

Assume that , and that

We have

  • and the estimator is consistent.

  • If there exists independent of satisfying (3.6) then the estimator is topologically consistent.

Proof.

We first note that by Theorem 3.10, the estimator is consistent, which guarantees almost surely. Thus

The hypotheses of this theorem then apply, and thus by Theorem 3.10 we deduce that the adaptive estimator is also consistent. This establishes part (i).

To establish topological consistency under (ii), we divide the proof into two steps.

As the first step, we prove that If for some , then from Theorem 3.10, we have

with probability at least . By the definition of , we have

which goes to infinity since by the hypotheses of the Theorem

Since is arbitrary, we deduce that with probability one.

Now for any branch length vector , we define as the vector obtained from by setting the component of to 0. By definition of the estimator , we have

or equivalently

Lemma 3.8 establishes that there exist, and a neighborhood of in such that . Since the estimator is consistent and , we can assume that both and belong to with large enough. Thus, from Lemma 3.5, we have

If , we deduce that is bounded from above by , which is a contradiction. This implies that , and we conclude that

As the second step, we prove that . Indeed, the consistency of guarantees that

almost surely. Therefore, if for some , then for large enough. In other words, we have .

Combining step 1 and step 2, we deduce that the adaptive estimator is topologically consistent. ∎

Lemma 3.12.

If is topologically consistent and is consistent, then there exists a independent of such that

Proof.

Since is topologically consistent and is consistent, we have

with probability one for sufficiently large . Defining , we have

via the Cauchy–Schwarz inequality, which completes the proof. ∎

Theorem 3.13.

If

and

(3.8)

then

  • The adaptive LASSO and the -step LASSO are topologically consistent for all .

  • For all , the -step LASSO (including the phylogenetic LASSO and adaptive LASSO) are consistent. Moreover, for all and , there exists such that for all ,

    with probability at least . In other words, the convergence of -step LASSO is of order

    where denotes big-O-in-probability.

Proof.

We note that for the LASSO estimator, is uniformly bounded from above. Hence, the LASSO estimator is consistent. We can then use this as the base case to prove, by induction, that adaptive LASSO and the multiple-step LASSO are consistent via Theorem 3.11 (part (i)). Moreover, is uniformly Lipschitz and satisfies (3.6), so using part (ii) of Theorem 3.11, we deduce that adaptive LASSO (i.e., the estimator with penalty function ) is topologically consistent.

We will prove that the multiple-step LASSOs are topologically consistent by induction. Assume that is topologically consistent, and that is consistent. From Lemma 3.12, we deduce that there exists independent of such that

(3.9)

This enables us to use part (ii) of Theorem 3.11 to conclude that is topologically consistent. This inductive argument proves part (i) of the Theorem. We can now use (3.9) and Theorem 3.10 to derive the convergence rate of the estimators. ∎

Remark 3.14.

If we further assume that , then the results of Theorem 3.13 are valid if is independent of . This enables us to keep the regularizing parameters unchanged through successive applications of the multi-step estimator.

Similarly, the Theorem applies if and

for all .

Remark 3.15.

Consider the case (for example, for group-based models), and . If we choose (independent of ) such that

then the convergence of -step LASSO is of order

4. Algorithms

In this section, we aim to design a robust solver for the phylogenetic LASSO problem. Many efficient algorithms have been proposed for LASSO minimization problems of the form

min_x  f(x) + λ‖x‖₁   (4.1)

for a variety of objective functions f. When f is a least-squares objective, Efron et al. (2004) introduced least angle regression (LARS), which computes not only the estimates but also the entire solution path efficiently. In more general settings, the iterative shrinkage-thresholding algorithm (ISTA) is a typical proximal gradient method that applies an efficient, sparsity-promoting proximal mapping operator (also known as the soft-thresholding operator) in each iteration. Adopting Nesterov's acceleration technique, Beck and Teboulle (2009) proposed a fast ISTA (FISTA) that has been proved to significantly improve the convergence rate.

These previous algorithms do not directly apply to the phylogenetic LASSO. LARS is designed for regression and does not apply here. Classical proximal gradient methods are not directly applicable for the following reasons: (i) Non-convexity. The negative phylogenetic log-likelihood is usually non-convex, so the convergence analysis (described briefly in Section 4.1 below) may not hold; non-convexity also makes it much harder to adapt to local smoothness, which can lead to slow convergence. (ii) Bounded domain. ISTA and FISTA assume there are no constraints, while in phylogenetic inference we need the branch lengths to be non-negative: q ≥ 0. (iii) Regions of infinite cost. Unlike typical cost functions, the negative phylogenetic log-likelihood can be infinite, especially when q is sparse, as shown in the following proposition.

Proposition 4.1.

Let ψ be an observed character vector on one site. If there are two leaves u and v with ψ_u ≠ ψ_v that are connected by a path of edges all having zero branch length, then L(q | ψ) = 0.

Proof.

Let a be any extension of ψ to the internal nodes. Since the states at the two ends of the zero-length path differ, there must be some edge (w, w′) on the path with a_w ≠ a_{w′} and q_{(w,w′)} = 0, giving transition probability P_{a_w a_{w′}}(0) = 0. Therefore every term in the sum over extensions vanishes, and L(q | ψ) = 0. ∎

In what follows, we briefly review the proximal gradient methods (ISTA) and their accelerations (FISTA), and provide an extension of FISTA to accommodate the above issues.

4.1. Proximal Gradient Methods

Consider the nonsmooth regularized problem (4.1). Gradient descent generally does not work due to the non-differentiability of the ℓ1 norm. The key insight of the proximal gradient method is to view the gradient descent update as the minimization of a local linear approximation to f plus a quadratic proximity term. This suggests the following update strategy

x_{k+1} = prox_{sλ‖·‖₁}( x_k − s ∇f(x_k) ),   (4.2)

where s is the step size and prox_h in (4.2) denotes the proximal map of h, defined as follows:

prox_h(z) = argmin_x { h(x) + (1/2) ‖x − z‖² }.   (4.3)

(4.3) is usually easy to solve if the regularization function is simple. For example, in the case h = λ‖·‖₁ it is solved componentwise by the soft-thresholding operator

S_λ(z)_i = sign(z_i) · max(|z_i| − λ, 0).

Applying this operator to (4.2), we get the ISTA update formula

x_{k+1} = S_{sλ}( x_k − s ∇f(x_k) ).   (4.4)
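The update (4.4) is short enough to state as code. A generic sketch (not the paper's implementation; the function names, fixed step size, and iteration count are our choices):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal map of lam*||.||_1, applied componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista(grad_f, x0, lam, step, n_iter=500):
    """ISTA: repeat x <- S_{step*lam}(x - step * grad_f(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x
```

As a sanity check, for the separable objective f(x) = 0.5‖x − b‖² with step size 1 the iteration reaches the exact minimizer S_λ(b) of (4.1) immediately.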

Let F(x) = f(x) + λ‖x‖₁. Assume f is convex and ∇f is Lipschitz continuous with Lipschitz constant L; if a constant step size s ≤ 1/L is used, then ISTA converges at the rate

F(x_k) − F(x*) ≤ ‖x₀ − x*‖² / (2sk),   (4.5)

where x* is an optimal solution. This means ISTA has sublinear convergence whenever the step size lies in (0, 1/L]. Note that ISTA can attain linear convergence if f is strongly convex. The convergence rate in (4.5) can be significantly improved using Nesterov's acceleration technique. The acceleration comes from a weighted combination of the current and previous iterates, similar to gradient descent with momentum. This leads to Algorithm 1, which is essentially equivalent to the fast iterative shrinkage-thresholding algorithm (FISTA) introduced by Beck and Teboulle (2009). Under the same conditions, FISTA enjoys the significantly faster convergence rate

F(x_k) − F(x*) ≤ 2‖x₀ − x*‖² / (s(k+1)²).   (4.6)

Notice that both of the above convergence rates require the step size s ≤ 1/L. In practice, however, the Lipschitz constant L is usually unavailable and backtracking line search is commonly used.

1: Input: step size s, regularization parameter λ, initial point x_0
2: Set x_1 = x_0, t_0 = t_1 = 1, k = 1
3: while not converged do
4:     y_k = x_k + ((t_{k-1} - 1)/t_k) (x_k - x_{k-1})      ▷ Nesterov's acceleration
5:     x_{k+1} = S_{sλ}( y_k - s ∇f(y_k) )                  ▷ soft-thresholding operator
6:     t_{k+1} = (1 + √(1 + 4 t_k²)) / 2;  k ← k + 1
7: end while
8: return x_k
Algorithm 1 Fast Iterative Shrinkage-Thresholding Algorithm (FISTA)
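A compact Python rendering of Algorithm 1 (a sketch under our own naming; the momentum sequence follows Beck and Teboulle, 2009):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal map of lam*||.||_1, applied componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def fista(grad_f, x0, lam, step, n_iter=200):
    """FISTA: ISTA steps taken from a Nesterov-extrapolated point y."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum step
        x_prev, x = x, soft_threshold(y - step * grad_f(y), step * lam)
        t = t_next
    return x
```

On the same separable least-squares test problem as for ISTA, FISTA converges to the identical sparse minimizer; its advantage appears on ill-conditioned objectives where plain ISTA's O(1/k) rate is the bottleneck.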

4.2. Projected FISTA

FISTA usually assumes no constraints on the parameters. However, in the phylogenetic case branch lengths must be non-negative (q ≥ 0). To address this issue, we combine the projected gradient method (which can itself be viewed as a proximal gradient method) with FISTA to ensure non-negative updates. We refer to this hybrid as projected FISTA (pFISTA). Note that a similar strategy has been adopted by Liu et al. (2016) in tight-frame-based magnetic resonance image reconstruction. Let C be a convex feasible set, and define the indicator function of the set C by ι_C(x) = 0 if x ∈ C, and ι_C(x) = +∞ otherwise.

With the constraint x ∈ C, we consider the following projected proximal gradient update

x_{k+1} = prox_{sλ‖·‖₁ + ι_C}( y_k − s ∇f(y_k) ),   (4.7)

where C = {x : x ≥ 0}. Using forward–backward splitting (see Combettes and Wajs, 2006), (4.7) can be approximated as

x_{k+1} = Π_C( S_{sλ}( y_k − s ∇f(y_k) ) ),   (4.8)

where Π_C is the Euclidean projection onto C. When C = {x : x ≥ 0}, we have the following pFISTA update formula

x_{k+1} = max( y_k − s ∇f(y_k) − sλ, 0 ),

with the maximum taken componentwise. Note that in this case, (4.8) is actually exact. Similarly, we can easily derive the projected ISTA (pISTA) update formula, which we omit here.
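For the non-negative orthant the shrink-then-project composition in (4.8) collapses to a single componentwise operation; a minimal sketch:

```python
import numpy as np

def projected_soft_threshold(z, lam):
    """Prox of lam*sum(x) plus the indicator of {x >= 0}:
    shrink each component by lam, then clip at zero.
    On the non-negative orthant this composition is exact, not approximate."""
    return np.maximum(np.asarray(z, dtype=float) - lam, 0.0)
```

Negative inputs and inputs below the threshold both map to exactly zero, which is what lets pFISTA report zero-length branches directly.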

4.3. Restarting

To accommodate the non-convexity and possible infinities of the phylogenetic cost function, we adopt the restarting technique introduced by O'Donoghue and Candes (2013), who used it as a heuristic means of improving the convergence rate of accelerated gradient schemes. In the phylogenetic case, due to the non-convexity of the negative phylogenetic log-likelihood, backtracking line search can fail to adapt to local smoothness, leading to inefficiently small step sizes. Moreover, the LASSO penalty will frequently push the iterates into the “forbidden” zone of infinite cost, especially when there are many short branches. We therefore adjust the restarting criteria as follows:

  • Small step size: restart whenever the backtracking step size falls below a preset threshold.

  • Infinite cost: restart whenever the objective becomes infinite at the proposed iterate.
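One way to assemble these pieces: run pFISTA, and whenever a restart criterion fires (here only the infinite-cost check; the step-size check is analogous), reset the momentum and retake a plain projected ISTA step from the last good iterate. This is our own sketch of the idea, not the paper's implementation:

```python
import numpy as np

def pfista_restart(f, grad_f, x0, lam, step, n_iter=200):
    """Projected FISTA with an infinite-cost restart: if the accelerated
    step lands where the objective is infinite, drop the momentum (t <- 1)
    and take a plain projected ISTA step from the current iterate instead."""
    prox = lambda z: np.maximum(z - step * lam, 0.0)   # shrink + project onto x >= 0
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_new = prox(y - step * grad_f(y))
        if not np.isfinite(f(x_new)):                  # restart criterion fires
            t_next = 1.0
            x_new = prox(x - step * grad_f(x))
        x_prev, x, t = x, x_new, t_next
    return x
```

On a smooth test objective the restart never triggers and the routine behaves exactly like pFISTA; the guard only matters near the boundary regions of infinite cost identified in Proposition 4.1.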