Synthetic likelihood method for reaction network inference

10/04/2018 · Daniel F. Linder, et al. · Augusta University and The Ohio State University

We propose a novel Markov chain Monte Carlo (MCMC) method for reverse engineering the topological structure of stochastic reaction networks, a notoriously challenging problem that is relevant in many modern areas of research, such as discovering gene regulatory networks or analyzing epidemic spread. The method relies on projecting the original time series trajectories onto information-rich summary statistics and constructing the appropriate synthetic likelihood function to estimate the reaction rates. The resulting estimates are consistent in the large volume limit and are obtained without employing complicated tuning strategies or the expensive resampling typically used by likelihood-free MCMC and approximate Bayesian methods. To illustrate the run time improvements that can be achieved with our approach, we present a simulation study on inferring rates in a stochastic dynamical system arising from a density dependent Markov jump process. We then apply the method to two real data examples: RNA-seq data from a zebrafish experiment and incidence data from the 1665-1666 plague outbreak at Eyam, England.


1 Introduction

Recent developments in molecular technologies have allowed us to perform complex biological experiments aimed at learning the principles of signaling networks in living organisms. For instance, knowledge of the biochemical network underlying the physiological processes of living cells offers insights into developing targeted therapies for a wide range of diseases, such as cancer and diabetes, by suggesting possible targets in gene pathways. In addition, decoding these mechanisms in organisms with regeneration capabilities, like the zebrafish, can potentially help biologists at least partially answer questions about why humans lack these abilities. However, reverse engineering biochemical networks from observed data has proven to be challenging from both statistical and computational standpoints (oates2012network; daniels2015automated). Indeed, high-throughput molecular technology pushes the boundary of the classical statistical inferential paradigm, as molecular quantification methods such as next generation sequencing, cellular flow cytometry, and fluorescence microscopy (see, e.g., Perez:2004fk; Wheeler:2008ys) provide large amounts of high dimensional, longitudinal data from partially observed and poorly understood biochemical systems.

The goal of most statistical inference problems in this setting is either structure (topology) or parameter estimation (oates2014causal). Although related, these two inference areas are often considered separately. The first addresses the structure of the underlying biochemical network, i.e., which chemical species directly alter the production rates of others. In this paradigm the object of interest is usually a directed graph or a binary adjacency matrix describing the gene regulation structure of the system. Bayesian networks (pearl1985bayesian; pearl1986fusion) and, more recently, dynamic Bayesian networks have been used for learning the topology of such models, since their graphical structure, which is encoded through distributional assumptions, is in essence the desired output (morrissey2010reverse; morrissey2011inferring).

Bayesian networks and other similar graphical models are appealing because they provide biologists with a simple visual representation of the network. However, the fundamental disadvantage of simple graphical approaches is that they reduce a detailed kinetic view of the system to a set of often unrealistic assumptions about the relationships between graph nodes (representing chemical or molecular species, such as genes). For instance, although the interactions of species in biological systems typically exhibit a high degree of nonlinearity, it is often assumed that nodes are hierarchically (acyclically) and linearly related, with conditional Gaussian distributions. A practical disadvantage of these methods is that exact structure learning is computationally hard, so model fitting typically relies on heuristics, such as greedy search, that lead to suboptimal solutions. Markov chain Monte Carlo (MCMC) routines have been shown to give moderate improvement in terms of finding the optimal network structure, but computational issues still persist.

In addition to network structure or topology estimation, the second related inference area is the estimation of network parameters (e.g., reaction rates). Typically, methods here focus on fitting detailed kinetic models to the data. Their appeal lies in the fact that they rely on more precise representations of the underlying dynamical system (fearnhead2014inference; golightly2005bayesian; golightly2012; oates2012network). Their disadvantages are numerous, however, particularly for methods based on exact likelihoods, where inference is usually not feasible for systems of relevant sizes, i.e., hundreds to thousands of species and reactions. For that reason the corresponding Bayesian frameworks based on exact likelihoods are usually not applicable to topology estimation (boys2008bayesian; choi2012inference). Methods based on approximate likelihoods, like the linear noise approximation (or LNA; see komorowski2009bayesian), fare somewhat better, but they also usually do not allow for efficient posterior sampling due to the complicated form of the likelihood approximation (see, for instance, Linder:2015aa).

In this paper, we develop a fully Bayesian inference framework for stochastic reaction networks, covering both structure and parameters, given time course trajectories from stochastic dynamical systems under mass action kinetics. The method uses summary statistics to form a synthetic likelihood, following the ideas presented by Wood:2010aa. The advantage of the proposed methodology is that the synthetic likelihood is based on detailed kinetic models, and its form permits topology and parameter estimation simultaneously, rather than separately. Our approach is Bayesian and allows efficient computation of the posterior network structure through point mass priors on parameters. An outline of the paper beyond the current section is as follows. In Section 2 we describe the types of dynamical systems under consideration. In Section 3 we describe the synthetic likelihood and the relevant priors, and outline an efficient MCMC algorithm to sample posteriors. Model fitting is performed using in silico reaction network data in Section 4 and RNA-seq data from controlled experiments in the zebrafish in Section 5. In Section 5 we also give an example of applying our framework beyond biochemical systems by analyzing historic data from the 1665-1666 plague outbreak in Eyam, England. The technical details of the MCMC algorithm derivations and the R code implementation are available in the online Supplementary Material.

2 Stochastic reaction network system

The reaction systems we consider consist of $N$ chemical species interacting through $K$ reaction channels, where commonly $K > N$. We denote the system state at time $t$ as the vector $X_t \in \mathbb{Z}_{\geq 0}^N$ of dimension $N$, containing the molecular counts of each species. The constant $\theta_k$ gives the reaction rate, or speed, of the $k$-th reaction, $k = 1, \dots, K$. When reaction $k$ occurs at time $t$, the system transitions according to the integer-valued vector $\nu_k = \nu_k^+ - \nu_k^-$, where $\nu_k^+$ is a vector of non-negative integers representing the numbers of molecules produced by reaction $k$ and $\nu_k^-$ the numbers consumed, i.e.

$$X_t = X_{t^-} + \nu_k,$$

where $X_{t^-}$ is the system state at the instant immediately before $t$. We denote by $V$ the system volume, typically the physical volume of the container (e.g., a cell) times Avogadro's number, and by $Y_k$, $k = 1, \dots, K$, independent unit Poisson processes. Our further analysis is on systems that are well-stirred and thermally equilibrated, with processes obeying the classical mass action rate laws (see, e.g., gillespie1992rigorous), corresponding combinatorially to the number of different ways one can choose the molecular substrates needed for a reaction to occur (ethier2009markov, chapter 10). Defining $|\nu_k^-| = \sum_{i=1}^N \nu_{ki}^-$, the rates are

$$\lambda_k^V(x) = \frac{\theta_k}{V^{|\nu_k^-| - 1}} \prod_{i=1}^{N} x_i (x_i - 1) \cdots (x_i - \nu_{ki}^- + 1).$$

The nonhomogeneous Poisson process with the above propensity function (see also gillespie1992rigorous) gives the system time-evolution equation

$$X_t = X_0 + \sum_{k=1}^{K} \nu_k\, Y_k\!\left( \int_0^t \lambda_k^V(X_s)\, ds \right). \qquad (1)$$

The model in Equation (1) is often considered the most accurate representation of the true system dynamics and belongs to the general class of density dependent Markov jump processes (DDMJP). While DDMJP models are often used to describe a wide variety of physical systems, like gene regulatory networks and stochastic epidemics, the corresponding inference is unfortunately complicated by highly intractable exact likelihoods.
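Equation (1) can be simulated exactly with Gillespie's algorithm, which we also use for the simulation study of Section 4. The following minimal base-R sketch illustrates the idea for a generic network; the one-species birth-death example at the end, including its rate values, is an illustrative assumption and not one of the systems analyzed later.

    ## Minimal Gillespie simulation of the jump process in Equation (1).
    ## nu: N x K matrix whose k-th column is the net stoichiometric change nu_k
    ## propensity(x): returns the K-vector of reaction intensities at state x
    gillespie <- function(x0, nu, propensity, t_end) {
      t <- 0; x <- x0
      times <- t; states <- matrix(x, nrow = 1)
      while (t < t_end) {
        a  <- propensity(x)
        a0 <- sum(a)
        if (a0 <= 0) break                       # no reaction can fire
        t <- t + rexp(1, rate = a0)              # waiting time to the next jump
        k <- sample.int(length(a), 1, prob = a)  # which reaction fires
        x <- x + nu[, k]
        times <- c(times, t); states <- rbind(states, x)
      }
      list(time = times, state = states)
    }

    ## Toy usage: one species with production (rate theta1 * V) and
    ## degradation (rate theta2 * x); all values are placeholders
    V   <- 100
    nu  <- matrix(c(1, -1), nrow = 1)
    out <- gillespie(x0 = 50, nu = nu,
                     propensity = function(x) c(1.0 * V, 0.5 * x[1]),
                     t_end = 10)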

Letting $C_t = X_t / V$, we obtain species concentrations (say, in moles per unit volume, or relative density). The asymptotic notion of a large volume limit represents the system's behavior as its volume increases to infinity while the species numbers are kept at constant concentrations. This gives the classical deterministic law of mass action ordinary differential equation (ODE), which is referred to in the chemical literature (van1992stochastic) as the reaction rate equation

$$\dot{c}(t) = \sum_{k=1}^{K} \theta_k\, \nu_k \prod_{i=1}^{N} c_i(t)^{\nu_{ki}^-}. \qquad (2)$$

The solution of the above ODE is parameterized by a vector $\rho$, consisting of linear combinations of the kinetic rates of interest, as well as by the initial condition $c(0)$. In what follows we will focus on estimation of $\rho$ under the assumption that it is a linear transformation of the $\theta_k$'s. It is well known that identifiability of reaction networks is a nontrivial problem and is only guaranteed when certain reaction vectors are linearly independent for each source complex (craciun2008identifiability). However, the reparameterization from $\theta$ to $\rho$ can be done so as to ensure that $\rho$ is identifiable, as is the case for the examples we consider below. Results in Remp12 show that the least squares estimator (LSE), $\hat\rho_V$, which minimizes the distance between the data and the solution to (2), is consistent and asymptotically normal. These asymptotic properties also hold for solutions of the so-called martingale estimating functions, which generalize the least squares estimator in this case (bibby1995martingale). Both methods produce statistics that are easily obtained from time course trajectories of the system. In what follows, we use the asymptotic properties of these statistics and their estimating equations to form a synthetic likelihood. The synthetic likelihood serves as a surrogate for the often intractable exact likelihood and may be used in the same way to perform the usual Bayesian inference.
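To make the construction of such summary statistics concrete, the hedged R sketch below computes the LSE $\hat\rho$ for a single trajectory by fitting a toy one-dimensional reaction rate equation with the deSolve package; the production-degradation model, the noise level, and all numerical values are assumptions chosen purely for illustration.

    library(deSolve)
    set.seed(1)

    ## Toy reaction rate equation (2): dc/dt = rho1 - rho2 * c
    rre <- function(t, c, rho) list(rho[1] - rho[2] * c)

    ## Synthetic observed concentrations on a discrete time grid
    tgrid  <- seq(0, 10, by = 0.5)
    c_true <- ode(y = c(c = 0.2), times = tgrid, func = rre, parms = c(1, 0.5))[, "c"]
    c_obs  <- c_true + rnorm(length(tgrid), sd = 0.02)

    ## Least squares criterion (3): distance between data and the ODE solution
    sse <- function(log_rho) {
      rho  <- exp(log_rho)    # enforce positivity of the rates
      cfit <- ode(y = c(c = c_obs[1]), times = tgrid, func = rre, parms = rho)[, "c"]
      sum((c_obs - cfit)^2)
    }

    ## The LSE summary statistic rho-hat for this trajectory
    rho_hat <- exp(optim(log(c(1, 1)), sse)$par)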

3 Synthetic likelihood

Ideally, parametric inference would be based on the likelihood function since, under the typical regularity conditions, maximum likelihood estimates enjoy good asymptotic properties, such as consistency and efficiency. The likelihood approach also gives one the ability to perform Bayesian inference. Unfortunately, the usage of exact likelihood methods for parameter estimation in stochastic biochemical networks faces some challenges due to the need for computationally demanding routines, like, for instance, particle filters (Golightly:2011aa). For that reason many authors have focused on approximate likelihood methods for reaction networks (fearnhead2014inference; golightly2005bayesian; golightly2012; oates2012network). However, a major practical limitation of many such methods, for instance approximate Bayesian computation (ABC), is their slow convergence and poor mixing in high dimensional problems (csillery2010approximate). To circumvent these technical complications we propose here an alternative method that is based on the idea of synthetic likelihood (Wood:2010aa).

To introduce some notation, consider the data for the $r$-th system trajectory, consisting of the observed counts of species $X_{t_0}^{(r)}, \dots, X_{t_{n_r}}^{(r)}$, measured at a discrete grid of timepoints $t_0 < t_1 < \dots < t_{n_r}$ (not necessarily equidistant across trajectories, and with possibly different endpoints; i.e., $n_r$ not necessarily equal to $n_{r'}$). We assume that this observed data arrives from $R$ trajectories of the process for which the system volume $V$ is fixed and known, and define the concentration values as $c_{t_j}^{(r)} = X_{t_j}^{(r)} / V$. The LSE for the $r$-th observation is then any solution of the optimization problem

$$\hat\rho^{(r)} = \operatorname*{arg\,min}_{\rho} \sum_{j=0}^{n_r} \left\| c_{t_j}^{(r)} - c(t_j; \rho) \right\|^2, \qquad (3)$$

or equivalently any solution to the following estimating equation

$$\sum_{j=0}^{n_r} \left[ \nabla_\rho\, c(t_j; \rho) \right]^\top \left( c_{t_j}^{(r)} - c(t_j; \rho) \right) = 0. \qquad (4)$$

Asymptotic properties (as $V \to \infty$) and the regularity conditions for the consistency and normality of these solutions were discussed in Remp12, with all systems under consideration here satisfying these conditions. The expression above is similar in form to generalized estimating equations (GEEs) (Zeger:1986aa). GEEs have been used extensively for correlated and longitudinal data where a parametric form of the mean is known but the likelihood function is not readily available. The appeal of GEE estimates is that they exhibit many of the same properties as maximum likelihood estimates (Zeger:1986aa), even when the correlation structure of the dependent observations is misspecified.
The ideas from GEEs were extended to discretely observed diffusion processes in bibby1995martingale by considering so-called martingale estimating functions (MEF). Defining the filtration $\mathcal{F}_j = \sigma\left( c_{t_0}, \dots, c_{t_j} \right)$ and scaling the process by $V$, consider all zero-mean $\mathcal{F}_j$-martingale estimating functions of the form

$$\sum_{j=1}^{n} g_{j-1}(\rho) \left[ c_{t_j} - \mathbb{E}_\rho\!\left( c_{t_j} \mid c_{t_{j-1}} \right) \right] = 0, \qquad (5)$$

where $g_{j-1}$ is $\mathcal{F}_{j-1}$-measurable. It was also shown in bibby1995martingale that the optimal estimating function, in the sense of the smallest asymptotic dispersion and where the data are assumed to arise from discrete observations of a diffusion, is of the form (5) with the weights $g_{j-1}^\ast$ given by the gradient of the conditional mean premultiplied by the inverse conditional covariance. The optimal estimating function may be approximated by substituting the ODE solution started from the previous observation for the conditional mean, and the corresponding asymptotic covariance for the conditional covariance. Fitting may then be done iteratively, by first updating the weights in (5) at the current value of $\rho$ and then updating $\rho$ by solving (5), repeating this process until convergence, or by replacing the conditional covariance with its empirical estimate.
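The R sketch below illustrates one possible version of this iterative scheme, reusing the toy model of the previous snippet: the conditional mean is approximated by solving the ODE forward from the previous observation, and the optimal weights are replaced by an empirical residual variance estimate. All modeling choices here are illustrative assumptions rather than the exact construction used in our analyses.

    library(deSolve)
    set.seed(2)
    rre    <- function(t, c, rho) list(rho[1] - rho[2] * c)
    tgrid  <- seq(0, 10, by = 0.5)
    c_true <- ode(y = c(c = 0.2), times = tgrid, func = rre, parms = c(1, 0.5))[, "c"]
    c_obs  <- c_true + rnorm(length(tgrid), sd = 0.02)

    ## One-step-ahead conditional mean: solve the ODE from the previous observation
    step_mean <- function(rho, c_prev, t_prev, t_next)
      ode(y = c(c = c_prev), times = c(t_prev, t_next), func = rre, parms = rho)[2, "c"]

    one_step_pred <- function(rho)
      vapply(seq_along(tgrid)[-1], function(j)
        step_mean(rho, c_obs[j - 1], tgrid[j - 1], tgrid[j]), numeric(1))

    ## MEF-type criterion: weighted one-step prediction errors
    mef_obj <- function(log_rho, w) sum(w * (c_obs[-1] - one_step_pred(exp(log_rho)))^2)

    ## Iterate: fit, re-estimate the weights from the residual variance, refit
    w <- rep(1, length(tgrid) - 1)
    log_rho <- log(c(1, 1))
    for (i in 1:3) {
      log_rho <- optim(log_rho, mef_obj, w = w)$par
      resid   <- c_obs[-1] - one_step_pred(exp(log_rho))
      w       <- rep(1 / var(resid), length(resid))   # empirical variance substitute
    }
    rho_hat_mef <- exp(log_rho)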

3.1 Form of the Synthetic Likelihood

Given the partially observed trajectory $c^{(r)}$, we denote by $\hat\rho^{(r)}$ the solution of either (4) or (5). In the mass action setting, the ODE coefficients are linear combinations of the reaction rates and can be written as $\rho = A\theta$, where $A$ is a matrix (see, e.g., Linder:2013aa). Some specific examples of $A$ and $\theta$ are given in Section 4 below. It is straightforward to show that $\hat\rho^{(r)}$ is asymptotically Gaussian, as previously mentioned (see Supplementary Material). The normality of $\hat\rho^{(r)}$ allows us to express the synthetic likelihood for the $r$-th trajectory (replicate) as

$$L_s\!\left( \theta; \hat\rho^{(r)} \right) \propto \left| \Sigma \right|^{-1/2} \exp\!\left\{ -\frac{1}{2} \left( \hat\rho^{(r)} - A\theta \right)^\top \Sigma^{-1} \left( \hat\rho^{(r)} - A\theta \right) \right\}, \qquad (6)$$

where $\Sigma$ is the corresponding limiting covariance matrix. We have maintained the standard likelihood notational convention by writing it as a function of the parameters, given the data. This is in sharp contrast to the majority of ABC type methods, where such data summaries are typically chosen in an ad-hoc fashion. Unfortunately, the Pitman-Koopman-Darmois (PKD) theorem essentially guarantees the failure of ABC type methods, since it states that the existence of finite dimensional sufficient statistics is a unique property of the exponential family. The implications of PKD are thus quite disappointing in the context of ABC methods, where typically the interesting (non-analytic) likelihoods are outside of the exponential family. Consequently, ABC methods based on finite dimensional summary statistics are typically guaranteed to suffer information loss vis-à-vis exact likelihood inference.
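Evaluating (6) is a standard multivariate Gaussian computation. A minimal R sketch is given below; the matrix A, the summary statistic, and the covariance are small illustrative placeholders, not quantities from the analyses that follow.

    ## Log synthetic likelihood (6) for one trajectory: rho_hat ~ N(A theta, Sigma)
    log_synth_lik <- function(theta, rho_hat, A, Sigma) {
      resid <- rho_hat - A %*% theta
      drop(-0.5 * (determinant(Sigma, logarithm = TRUE)$modulus +
                   t(resid) %*% solve(Sigma, resid)))
    }

    ## Toy usage: two summary coordinates, three candidate reactions
    A <- matrix(c(1, 0, 1, 1, 0, 1), nrow = 2)  # assumed 2 x 3 map rho = A theta
    log_synth_lik(theta = c(1, 0.2, 0.2), rho_hat = c(1.2, 0.4),
                  A = A, Sigma = diag(0.05, 2))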

Consider the $r$-th trajectory and the vector of stacked species concentrations $c^{(r)} = \left( c_{t_0}^{(r)\top}, \dots, c_{t_{n_r}}^{(r)\top} \right)^\top$, obtained by stacking the concentration vectors at each timepoint. The central limit theorem for DDMJP then gives $\sqrt{V} \left( c^{(r)} - c(\rho) \right) \Rightarrow N(0, \Xi)$, so that the trajectory data likelihood converges, as $V \to \infty$, to the corresponding Gaussian likelihood. The law of large numbers and consistency of $\hat\rho^{(r)}$ imply that $c(\hat\rho^{(r)}) \to c(\rho)$, so that the data likelihood approximately factorizes through the summary statistic $\hat\rho^{(r)}$.

The above approximation hints at a notion of asymptotic sufficiency (AS) in the sense of le1956asymptotic. AS is essentially an asymptotic Neyman-Fisher factorization, and it implies that, at least asymptotically, the chosen statistics contain meaningful information, thus offering some notion of efficiency, even in light of the PKD theorem. However, establishing the AS property more formally requires careful analysis of specific inferential problems on a case by case basis. For further discussion see, for instance, frazier2016.

It is important to note that in the above arguments, $\Xi$ is the process covariance matrix for the stacked concentration vector in the approximate data likelihood, and not the covariance of the summary statistics ($\Sigma$) in the synthetic likelihood. In fact, since $\Xi$ also depends on the parameters, its usage in the likelihood function above would break the conjugacy, and the efficient MCMC via Gibbs sampling would no longer be available. Thus, when we have a single trajectory, as in the Eyam plague example below, we use the asymptotic covariance matrix of the summary statistic to form the synthetic likelihood, although we have suppressed the explicit subscript notation for simplicity. When multiple trajectories are available, we additionally have the ability to assess the between-trajectory variation of the summary statistics. See below for details.

The transition from the original likelihood to the synthetic likelihood shifts our analysis into the setting of a classical linear model. Thus, this approach is vastly different from most of the currently used ones. Specifically, methods based on the reaction rate equation use ODE solutions as the means of corresponding Gaussian likelihoods (girolami2008bayesian), for instance those based on the diffusion approximation and the LNA (komorowski2009bayesian; fearnhead2014inference). These approaches, while being approximations, still face serious computational challenges. The main bottleneck for inference, particularly Bayesian inference, in these models is non-conjugacy, since the ODE means are not linear in the parameters. As such, each iteration of MCMC requires solving complex systems of nonautonomous ODEs at each proposal value, as in girolami2008bayesian. This can make tuning proposal distributions with good acceptance properties difficult, leading to chains with poor mixing. The ABC methods do not fare much better, since they require summary statistic, distance measure, and tolerance selections that are often ad-hoc. These problems severely limit the applicability and scalability of the current approximate procedures. In contrast, our synthetic likelihood approach circumvents the need to choose distance measures and tolerance levels by using data summaries that are well understood, and allows for their analysis via standard MCMC. It also involves solving the ODE systems only once (to compute the initial summary statistics and covariances), thus avoiding iterative usage of the ODE solver. Finally, the synthetic likelihood form lends itself to an efficient formulation of the MCMC computation steps via a Metropolis-within-Gibbs procedure.

3.2 Prior Specification

The Bayesian approach to network estimation can be addressed by using specialized priors that allow coefficients to move in and out of the model during the MCMC iterations, leading to positive posterior probabilities of a zero value. Various mixtures of mutually singular distributions, each dominated by $\sigma$-finite measures, are a natural choice. Here, we assume a discrete mixture of the point mass at zero, $\delta_0$, and a continuous distribution supported on the positive reals and dominated by the Lebesgue measure $\mu$. Restricting the support of the continuous component to the positive reals is necessary to convey the fact that kinetic parameters are non-negative. It was shown in gottardo2008markov that when the prior probability of a non-zero rate for reaction $k$ is $w_k$, the corresponding prior density for $\theta_k$, taken with respect to the dominating measure $\delta_0 + \mu$, is of the form

$$\pi(\theta_k) = (1 - w_k)\, \delta_0(\theta_k) + w_k\, g(\theta_k), \qquad (7)$$

where $g$ is the density of the continuous component. Priors of the form (7) lead to minimax rates of estimation and posterior contraction on sparse sets, provided the tails of $g$ are exponential or heavier (see, e.g., johnstone2004needles; castillo2012needles). Thus, priors of the form (7) are optimal under certain criteria and are usually considered the theoretical gold standard for variable selection in the Bayesian setting. To that end, we assume that $g(\theta_k) = \lambda e^{-\lambda \theta_k}$, $\theta_k > 0$, which is the exponential, or one-sided Laplace, distribution. Our choice of the exponential is motivated by several important factors: it naturally restricts the support of $\theta_k$ to the positive reals, it satisfies the tail requirements mentioned previously, and from an information theoretic perspective it is the maximum entropy prior with mean $1/\lambda$ (see robert2007bayesian, Chapter 3). It may also be rewritten hierarchically as $\theta_k \mid \tau_k^2 \sim N^+(0, \tau_k^2)$ with $\tau_k^2 \sim \mathrm{Exp}(\lambda^2/2)$, via the classical normal scale-mixture identity (west1987scale; andrews1974scale). By writing the prior as a scale mixture of truncated normals, the form of the full conditional of $\theta_k$ becomes analytically tractable, as is demonstrated in the Supplementary Material. This gives a significant computational advantage to our framework, since priors like (7) often lead to non-conjugacy, in which case full Metropolis-Hastings (MH) sampling is required. Although adaptive MH-based MCMC algorithms have been designed to produce chains with desirable acceptance rates (ji2013adaptive), when the likelihood is complicated or not analytic, as in the present situation, such tuning is not straightforward. When appropriate tuning cannot be done, the resulting chains may exhibit poor mixing and require extremely long run times to sufficiently explore the parameter space. In what follows, we detail the Gibbs sampling procedure, which performs tuning automatically, for posterior sampling under our hierarchical model. Our primary interest is in obtaining the posterior probability that reactions are true, which allows one to infer the reaction network structure. In order to accommodate varying experimental conditions, such as differences in measurement error or experiments with data collected at different timepoints, we place a Wishart prior on the covariance matrix $\Sigma$. This is in contrast to the common assumption of an inverse-Wishart prior; see daniels1999nonconjugate; bouriga2013estimation; alvarez2014bayesian for some examples. Our approach is similar to chung2015weakly, in that we assume a Wishart (not inverse-Wishart) prior for $\Sigma$ that leads to desirable modal properties of the posterior. We select the scale hyperparameter of the Wishart prior to be the empirical covariance of the summary statistics, and when this estimate is not of full rank we add a small regularization term to its diagonal. The hierarchical model under consideration is then

$$\hat\rho^{(r)} \mid \theta, \Sigma \sim N(A\theta, \Sigma), \qquad \theta_k \sim (1 - w_k)\,\delta_0 + w_k\,\mathrm{Exp}(\lambda), \qquad \Sigma \sim \mathrm{Wishart}, \qquad (8)$$

where $r = 1, \dots, R$ indexes the independent trajectories of the process. The model contains a covariance term, $\Sigma$, and this parameter may represent intrinsic stochastic noise, as well as measurement error, which will dominate in the large volume limit. This parameter is not of particular interest for network or kinetic rate estimation, and a clear advantage of the Bayesian framework is the ability to marginalize this nuisance parameter out of the posterior. Importantly, we show that the marginal synthetic posterior distribution is unimodal under mild conditions on the hyperparameters $w_k$ and $\lambda$ (see Supplementary Material). This is a key property of the proposed method that not only guarantees identifiability of the reaction network but also contributes to the observed rapid mixing of the MCMC procedure.
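The scale-mixture representation of the exponential slab used above can be checked numerically. The short R simulation below draws from the hierarchical half-normal/exponential form and compares the result against the one-sided Laplace (exponential) density; the value of $\lambda$ is chosen arbitrarily for the demonstration.

    ## Exponential(lambda) slab as a scale mixture of truncated (half-) normals:
    ## theta | tau2 ~ N+(0, tau2),  tau2 ~ Exp(lambda^2 / 2)
    set.seed(3)
    lambda <- 1.5
    tau2   <- rexp(1e5, rate = lambda^2 / 2)
    theta  <- abs(rnorm(1e5, mean = 0, sd = sqrt(tau2)))

    ## The sampled density should match dexp(x, rate = lambda)
    hist(theta, breaks = 100, freq = FALSE, xlim = c(0, 4), main = "")
    curve(dexp(x, rate = lambda), add = TRUE, lwd = 2)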

3.3 Posterior Computation

Here we describe the algorithm used to efficiently sample from the posterior distribution with a Metropolis-within-Gibbs sampler. To simplify notation, write $\tilde w_k$ for the conditional inclusion probability defined in Step 1 below, and recall that $w_k$ is the prior probability that reaction channel $k$ is true (non-zero). The posterior computation can then be performed with the following steps.
Algorithm 1.
Step 1. For $k = 1, \dots, K$, compute the conditional inclusion probability $\tilde w_k$, where

$$\tilde w_k = \Pr\left( \theta_k > 0 \mid \theta_{-k}, \Sigma, \hat\rho \right). \qquad (9)$$

With probability $\tilde w_k$, sample $\theta_k$ from its truncated Gaussian full conditional and then sample the latent scale $\tau_k^2$ from its inverse Gaussian full conditional. Else, set $\theta_k = 0$ with probability $1 - \tilde w_k$.
Step 2. Given the current sample $\Sigma$, propose $\Sigma^\ast$ from a Wishart distribution centered at $\Sigma$.
Step 3. Accept $\Sigma^\ast$ with the Metropolis-Hastings probability.
Step 4. Recompute the quantities in (9) that depend on $\theta$ and $\Sigma$. Return to Step 1.

In the above notation, the truncated Gaussian in Step 1 is a Gaussian random variate with the appropriate full conditional mean and variance, restricted to the positive reals; the latent scale update is an inverse Gaussian random variate with the corresponding full conditional mean and scale; and the inclusion probability in (9) involves the Gaussian cumulative distribution function evaluated at the full conditional mean. In Step 3, the Wishart proposal density is evaluated at the proposed matrix, with scale matrix proportional to the current value of $\Sigma$ and a fixed proposal degrees of freedom; we have found that a moderate choice of the proposal degrees of freedom gives relatively good acceptance rates in our empirical studies. Derivations of the full conditionals may be found in the Supplementary Material. Hence, sampling from the full conditional of $\theta$ is done by sampling each $\theta_k$ from the continuous mixture component with probability $\tilde w_k$ and from the degenerate component with the corresponding probability $1 - \tilde w_k$. The expressions for the individual parameters' and weights' full conditionals allow for sampling from the target distribution by local parameter-wise updates. Although global moves can lead to optimal acceptance rates, tuning proposals that must be absolutely continuous with respect to mixed dominating measures like $\delta_0 + \mu$ is not straightforward, and even less so for likelihood-free methods. Additionally, the scheme allows inference about posterior reaction probabilities to be improved via Rao-Blackwellization (gottardo2008markov). In the remaining sections we illustrate the usage and performance of Algorithm 1 with both simulated and real data examples.
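To fix ideas, the sketch below implements a simplified version of Step 1 of Algorithm 1 in R, conditioning on a fixed covariance matrix (as in the single-trajectory Eyam analysis of Section 5.2) and therefore omitting the Wishart Metropolis steps 2-4. The full conditionals are derived from the scale-mixture representation of Section 3.2; since the exact expressions used in the paper appear only in the Supplementary Material, this code should be read as an assumed reconstruction, with all numerical settings illustrative.

    ## Spike-and-slab Gibbs sketch for theta given a fixed Sigma (Step 1 only).
    ## Prior: theta_k ~ (1 - w) delta_0 + w Exp(lambda),
    ## written as theta_k | tau2_k ~ N+(0, tau2_k), tau2_k ~ Exp(lambda^2 / 2).
    spike_slab_gibbs <- function(rho_hat, A, Sigma, lambda = 1, w = 0.5, n_iter = 5000) {
      K <- ncol(A)
      S <- t(A) %*% solve(Sigma, A)        # likelihood precision in theta
      m <- t(A) %*% solve(Sigma, rho_hat)
      theta <- rep(0.1, K); tau2 <- rep(1, K)
      draws <- matrix(0, n_iter, K)
      rinvgauss1 <- function(mu, shp) {    # Michael-Schucany-Haas inverse Gaussian
        z <- rnorm(1)^2
        x <- mu + mu^2 * z / (2 * shp) -
          mu / (2 * shp) * sqrt(4 * mu * shp * z + (mu * z)^2)
        if (runif(1) <= mu / (mu + x)) x else mu^2 / x
      }
      for (it in seq_len(n_iter)) {
        for (k in seq_len(K)) {
          b  <- m[k] - sum(S[k, -k] * theta[-k])
          s2 <- 1 / (S[k, k] + 1 / tau2[k])   # slab full conditional variance
          mu <- s2 * b                        # slab full conditional mean
          ## log Bayes factor of the slab (theta_k > 0) against the spike (theta_k = 0)
          lbf  <- log(2) + 0.5 * (log(s2) - log(tau2[k])) + mu^2 / (2 * s2) +
            pnorm(mu / sqrt(s2), log.p = TRUE)
          p_in <- 1 / (1 + (1 - w) / w * exp(-lbf))   # a version of (9)
          if (runif(1) < p_in) {
            repeat {                          # naive truncated normal draw; fine here
              theta[k] <- rnorm(1, mu, sqrt(s2)); if (theta[k] > 0) break
            }
            tau2[k] <- 1 / rinvgauss1(lambda / theta[k], lambda^2)
          } else {
            theta[k] <- 0
            tau2[k]  <- rexp(1, rate = lambda^2 / 2)  # refresh latent scale from prior
          }
        }
        draws[it, ] <- theta
      }
      draws                                  # no burn-in removed; sketch only
    }

    ## Toy usage: posterior inclusion probabilities for three candidate reactions
    A    <- matrix(c(1, 0, 1, 1, 0, 1), nrow = 2)
    post <- spike_slab_gibbs(rho_hat = c(1.2, 0.4), A = A, Sigma = diag(0.05, 2))
    colMeans(post > 0)

In the full algorithm, the Wishart proposal for $\Sigma$ would be drawn with, e.g., stats::rWishart and accepted or rejected in an outer Metropolis step, with the quantities S and m recomputed after each accepted move.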

4 Simulation Study

To illustrate network topology estimation using the proposed synthetic likelihood approach, we consider a molecular reaction network partially motivated by the heat shock response. Heat shock transcription factors and protein chaperones are critical to ensure that proteins fold into specific three-dimensional structures. Newly formed proteins, and proteins within cells that have been challenged with damage, risk protein misfoldings that may affect their functional activity (hartl2011molecular). Accumulation of such toxic species (misfolded proteins) has been implicated in the progression of certain neurodegenerative diseases (neef2011heat; hartl2011molecular), and has led to research into the development of therapeutic targets that restore proteostasis (calamini2012small). Hence, modeling the cell's ability to employ this chaperone machinery to achieve proteostasis may reveal therapeutic targets. As a toy in silico example, we consider the following reaction network, which has transcriptional and chaperone components, along with redundant reactions, to compare the proposed methodology with existing ones via simulation.

(10)

Here two of the species represent proteins and the third represents gene (RNA) expression, with transitions in/out of the empty set indicating loss/creation of a molecule. For the system of reactions (10), the mass action ODE (2) parameterized by $\rho$ specializes to

(11)

We note that in this particular case $\rho = A\theta$, where $A$ is an explicitly computable matrix determined by the stoichiometry of (10). For our simulation study we generated trajectories from the pure jump process of the system of reactions (10) via Gillespie's algorithm (see, e.g., van1992stochastic), with the true kinetic parameters listed in the "Truth" rows of Tables 1-4 and initial molecular copy numbers of 50 for each of the three species. Note that under this set of kinetic parameters, one of the proteins enters the system from an external source and acts as a transcription factor for the RNA species and a chaperone for the other protein, which we model by reactions 1, 3, and 6 (these are labeled by their respective subscripts in (10)). The other protein acts as a suppressor of transcription through reaction 9, and all species have a natural degradation rate through reactions 10, 11, and 12, respectively. All other reactions are superfluous.

We calculated the required LSE- and MEF-based statistics by fitting the mass action ODE in (11) to the simulated stochastic trajectories from (10). We set the prior inclusion probabilities to $w_k = 1/2$, for equal a priori probability that a reaction channel is true or false, and chose the degrees of freedom hyperparameter of the Wishart prior and the rate $\lambda$ so that, in combination, a unimodal posterior is guaranteed. For comparison, we perform an analysis using the adaptive MCMC routine of vihola2012robust with the LNA likelihood approximation and uniform priors on the logarithm of the parameter values (finkenstadt2013quantifying). Additionally, we implemented the particle filtering routine of Golightly:2011aa, which computes unbiased likelihood estimates within MCMC using 100 particles generated via Gillespie's algorithm, again with uniform priors on the logarithm of the parameter values. Tables 1 and 2 contain the posterior median estimates from chains of 50,000 MCMC samples for the LSE-based and MEF-based synthetic likelihood methods, respectively. For the class of point mass mixture priors, the posterior median has been proven to be a legitimate thresholding rule (see johnstone2004needles), so it may be used for both variable selection and estimation simultaneously in our setup. Tables 3 and 4 give posterior means from the LNA analysis with 50,000 samples and from 10,000 samples of the particle marginal Metropolis-Hastings algorithm, respectively. All algorithms are coded in R and run on a personal desktop computer with a 2.7 GHz clock speed.

        θ1    θ2    θ3    θ4    θ5    θ6    θ7    θ8    θ9    θ10   θ11   θ12   Time (s)
Truth   1     0     1     0     0     1     0     0     0.5   1     1     1
R = 2   0.38  0.00  0.63  1.12  0.00  8.56  0.00  0.00  1.96  0.47  0.21  0.00  47.33
R = 3   0.78  0.00  0.70  1.03  0.00  4.34  0.00  0.00  1.24  0.60  0.89  0.00  49.91
R = 4   0.60  0.00  0.86  0.68  0.00  6.61  0.00  0.00  1.57  0.83  0.38  0.29  50.20
R = 5   0.74  0.00  0.92  0.41  0.00  5.50  0.00  0.00  1.14  0.88  0.58  0.58  52.23
Table 1: Posterior median of θ from 50,000 MCMC samples for R = 2, …, 5 trajectories using LSE.
        θ1    θ2    θ3    θ4    θ5    θ6    θ7    θ8    θ9    θ10   θ11   θ12   Time (s)
Truth   1     0     1     0     0     1     0     0     0.5   1     1     1
R = 2   1.10  0.00  0.75  1.89  0.00  1.09  0.00  0.01  0.93  0.70  1.31  2.84  46.97
R = 3   1.04  0.00  0.85  0.95  0.00  1.04  0.00  0.00  0.73  0.82  1.13  2.13  53.17
R = 4   1.02  0.00  0.90  0.63  0.00  1.02  0.00  0.00  0.67  0.88  1.08  1.82  52.55
R = 5   1.07  0.00  0.93  0.39  0.00  1.04  0.00  0.07  0.60  1.64  1.15  1.49  53.44
Table 2: Posterior median of θ from 50,000 MCMC samples for R = 2, …, 5 trajectories using MEF.
        θ1    θ2    θ3    θ4    θ5    θ6    θ7    θ8    θ9    θ10   θ11   θ12   Time (s)
Truth   1     0     1     0     0     1     0     0     0.5   1     1     1
R = 2   0.95  0.00  0.79  1.52  0.00  0.40  0.00  1.11  0.02  1.85  1.02  0.01  998356.82
R = 3   1.06  0.00  0.84  1.33  0.00  0.12  0.00  0.00  0.25  0.82  1.22  0.01  1002223.98
R = 4   1.01  0.00  0.93  1.30  0.00  0.10  0.00  0.00  0.17  0.96  1.02  0.00  998999.92
R = 5   1.05  0.00  0.90  1.19  0.00  0.54  0.00  0.00  0.29  0.91  1.14  0.01  998811.77
Table 3: Posterior mean of θ from 50,000 MCMC samples for R = 2, …, 5 trajectories using LNA.
        θ1    θ2    θ3    θ4    θ5    θ6    θ7    θ8    θ9    θ10   θ11   θ12   Time (s)
Truth   1     0     1     0     0     1     0     0     0.5   1     1     1
R = 2   0.62  0.01  0.83  0.71  0.01  6.05  0.01  0.15  3.01  0.89  0.43  1.08  1518414.34
R = 3   1.02  0.01  0.94  0.05  0.02  0.24  0.02  0.05  7.23  0.91  1.04  1.56  1546878.77
R = 4   0.95  0.00  0.99  0.31  0.00  0.29  0.01  0.08  3.54  1.07  0.83  0.93  1546693.39
R = 5   0.63  0.00  0.90  0.07  0.01  7.30  0.01  0.05  3.30  0.87  0.40  1.13  1517892.47
Table 4: Posterior mean of θ from 10,000 MCMC samples for R = 2, …, 5 trajectories using particle filtering.

The results in Tables 1 and 2 indicate that both synthetic likelihood methods performed reasonably well in this example, with the MEF-based synthetic likelihood analysis performing best. The improvement of the MEF-based synthetic likelihood over the LSE-based one is likely due to the generally better efficiency of MEF relative to LSE. There is also apparently some degree of bias in the LSE estimate of $\theta_6$, even with an increasing number of trajectories. The MEF-based likelihood analysis assigned non-zero posterior medians to the true reactions for all sample sizes, while assigning zero posterior medians to nearly all the false reactions for two or more trajectories, with the exception of $\theta_4$. MEF-based analysis performed better than LSE-based analysis for all reactions and for each number of trajectories. Both LSE- and MEF-based analyses gave high posterior probability to the first protein being a transcription factor for the RNA species (reaction 3), as well as to the second protein acting as a suppressor of transcription (reaction 9), which was indeed the case in the simulation. Since in actual experiments such reactions often indicate drug targets, the fact that our method was able to correctly identify them is of practical relevance. The results for both the LNA and particle filtering, located in Tables 3 and 4, show that their corresponding run times are already unacceptable in this moderately sized system. Indeed, we found that obtaining just 10,000 samples from the posterior distribution with the particle marginal Metropolis-Hastings implementation required more than two weeks of CPU time. Collecting 50,000 samples from the posterior distribution using the LNA, with adaptive MCMC targeting the optimal acceptance rate of 0.234, took approximately 12 days of CPU time. The computational bottleneck for particle filtering is that unbiased likelihood estimates require sampling many trajectories, in our case 100, for each likelihood evaluation. For the LNA-based analysis, each likelihood evaluation requires solving a system of non-autonomous ODEs. The particular examples in Tables 3 and 4 illustrate the generally accepted view that, at least until now, most of the current methods that rely on detailed system modeling do not scale well. Not only did these methods perform poorly in terms of long run times, they also produced estimates that appear biased away from the true values of certain parameters, even as the number of trajectories increases. For the particle filtering routine, the estimates of $\theta_6$ and $\theta_9$ have a high degree of bias, whereas the LNA-based analysis appears to incorrectly infer $\theta_4$ as true (non-zero) and $\theta_{12}$ as false (zero), even with all 5 trajectories provided. The synthetic likelihood methods perform better, in terms of both inference and computation, by projecting a high dimensional noisy trajectory onto a lower dimensional statistic that captures the important dynamical information with less noise. For the full Metropolis type methods, variable selection priors like the point mass ones used in our synthetic likelihood methods would likely pose even greater computational challenges than the continuous priors applied in our examples here, since routines for tuning the necessary proposals are not straightforward for the LNA, the particle filter, or any Metropolis type sampling with intractable likelihoods.

Figure 1: Auto-correlation plots of the MCMC output from the MEF-based synthetic likelihood (black), LSE-based synthetic likelihood (red), LNA (blue), and particle marginal Metropolis-Hastings (green) samplers.

The plots in Figure 1 indicate that the chains resulting from the LNA and particle filtering have a much higher degree of auto-correlation compared to the synthetic likelihood methods. Thus, in addition to the increased computational time per MCMC sample, more samples are required for the LNA and particle filter to sufficiently explore the parameter space in our current setting. We conclude from these plots that, although adaptive MCMC was used, at least for the LNA likelihood approximation, the resulting chains exhibit poor mixing. Although theoretically Metropolis-Hastings type samplers can be tuned to produce optimal acceptance properties, the results here highlight the general difficulty of tuning in the presence of complicated or intractable likelihoods.

5 Data Examples

5.1 RNA-Seq Data

We now compare the performance of our synthetic method to that of the algebraic statistical model (ASM). The ASM was introduced in craciun2009algebraic to learn biochemical network topology from the empirical patterns of the reaction stoichiometries. To facilitate the comparison, we re-analyze a dataset introduced in Linder:2013aa, consisting of longitudinal RNA-seq measurements from the retinal tissue of the zebrafish (Danio rerio). The study was performed to probe the regenerative properties of the zebrafish retina after it sustained cell-specific damage. One interest of the study was in analyzing a particular sub-system, consisting of the following species: heat shock protein 70 (Hsp70), signal transducer and activator of transcription 3 (Stat3), and the suppressor of cytokine signaling 3 (Socs3). For more details on the experiment, see Linder:2013aa. The network of interest has the form

(12)

In the above network, we are especially interested in the possible activation of the heat shock response via Stat3. A detailed analysis via ASM based on all 8 trajectories of the experiment was presented in Linder:2013aa, where the topology of the conic (i.e., single-source) sub-network in Figure 2 was learned. We may thus compare the proposed synthetic method's results based on the LSE with the results based on ASM for the same dataset. As previously mentioned, the proposed method also allows for the computation of posterior probabilities via empirical estimates of the posterior weights, since the dominating measure is $\delta_0 + \mu$ and not merely $\mu$. To this end, we compute and report the posterior probabilities by simulating 50,000 MCMC samples from the hierarchical model of Section 3.2 under the same hyperparameter assumptions as in the previous section.

Figure 2: Stat3 Conic Network. Note that the only source for 4 different products is Stat3.

The results in Table 5 indicate that reactions 4, 5, and 6 are likely true, while reaction 8 may only occur on a much longer time scale. Since both methods produce similar network topologies, we mention some advantages of the proposed model over ASM. While the appeal of the ASM is that it exploits the geometry of the stoichiometric matrix, the proposed method based on synthetic likelihood does so as well, in a sense, through the entries of the matrix $A$. A practical limitation of the ASM is that it enforces the cone-wise assumption that a fixed number of reactions are true, which will typically not be the case. Similarly to the synthetic method, the ASM also tacitly assumes a large volume setting, $V \to \infty$; however, unlike for the synthetic method, the ASM inference problem is only asymptotically well-posed, and only at the level of posterior reaction probabilities. Thus the ASM is strictly a topology learning routine, not capable of kinetic parameter estimation. In contrast (even though we do not present the results in this section, for brevity), parameter estimation may easily be carried out with the proposed synthetic approach by analyzing the posterior distribution and selecting point estimates of $\theta$, as was done in the previous section. For illustration, we present the bivariate contour plots of the posterior distribution for the reaction rates from the sub-system of interest in Figure 3. Our main observation is that the empirical plots indeed agree with our theoretical results on the unimodality of the posterior distribution.

Reaction source | Reaction output | Synthetic likelihood | ASM
Table 5: Synthetic likelihood and ASM reaction probabilities.
Figure 3: Bivariate contour plots of posterior distribution from synthetic likelihood. Empirical density estimates increase from light gray to light blue.

5.2 The Plague at Eyam

In the seventeenth century, following the Great Plague of London, the village of Eyam, Derbyshire, England, experienced an outbreak of plague caused by the bacterium Yersinia pestis. In this section we analyze data from this outbreak, which occurred at Eyam in 1665-1666. See also whittles2016epidemiological; dean2018human for further discussion of the dataset and the relevant historic context.

Several features of the plague outbreak at Eyam make its study somewhat unique. The first of these is that the village rector, the Reverend William Mompesson, reportedly convinced the villagers to self-quarantine. Although recent evidence suggests that a few of the wealthy residents may have fled (it is reported that Mompesson sent his children away before the quarantine), we may effectively treat the plague at Eyam as an outbreak in a closed population. The names and burial dates of plague victims were recorded by Mompesson. Further, the parish records, combined with the hearth tax record for Eyam in 1664, provide detailed information on the villagers, such as their sex, approximate date of birth, date of burial, and household information. This curated version of the Eyam parish register has led to a newly revised estimate of the total village population of around 700, up from an initially reported 350.

As the account goes, a tailor at Eyam received a shipment of cloth from London that was carrying plague-infected rat fleas, and the first victim is believed to have come in contact with this cloth. As an infected flea's digestive system becomes blocked by the bacterium, the flea vomits into the bite wound, thus transmitting Y. pestis. This transmission mechanism is now medically confirmed as giving rise to the bubonic form of plague. On September 7th, 1665, the first burial due to plague was recorded, for one George Viccars. Over the next nine months a somewhat constrained outbreak occurred among the Eyam villagers, with 77 deaths attributed to plague. Around mid-May 1666 a second wave of the outbreak began to spread, and over the ensuing months, from June 1666 through October 1666, it decimated the village, killing some 257 villagers.

While the rodent-to-human transmission route via the rat flea is understood to be critical for the initial outbreak dynamics, this particular mode of transmission alone does not fully explain the observed rapidity of the various plague outbreaks throughout Europe. This was also argued, at least qualitatively, based on the empirical differences between the early outbreak dynamics and those of the latter months at Eyam (raggett1982stochastic). An apparent lack of recorded rat falls (large scale rat die-offs) during these outbreaks provides further evidence that additional transmission mechanisms were also critical for disease spread; rat falls are generally considered necessary to cause sufficient numbers of fleas to jump from rat corpses onto humans. While human-to-human contact has been recognized as a component of the plague transmission process, through plague pneumonia and, more recently, via ectoparasites such as lice and the human flea, recent analyses suggest that this transmission route may be far more important than previously recognized (whittles2016epidemiological; dean2018human).

We set out to analyze the Eyam plague data reported in raggett1982stochastic, which we have augmented to account for the more recent information on the total population reported in whittles2016epidemiological.

Figure 4: Susceptible, Infected, Recovered (SIR) compartmental model for the Eyam plague.

Figure 4 illustrates the compartmental SIR model that we consider for the analysis of the Eyam data. The $S$, $I$, and $R$ compartments represent the numbers of susceptible, infectious, and removed individuals, which we denote at time $t$ by $S_t$, $I_t$, and $R_t$. While we have labeled the third compartment removed, as is standard in SIR notation, the Eyam plague was almost universally fatal for the infected. There were only three alleged recoveries, none of which were reported in the data sources, so that $R_t$ effectively represents deaths. We represent the rodent-flea compartments and their contribution to the infectious pressure on susceptibles by an additional term, $\alpha S_t$. This is essentially an assumption that the infectious pressure from non-human interaction is proportional to the number of susceptibles. While this may not be a completely accurate description, the outbreak period that we analyze was during early to late summer, leading one to suspect that the infectious pressure from the rodent-flea compartment may have been approximately constant. According to the SIR model, susceptibles ($S$) make infectious contact and then transition to compartment $I$ at rate $\beta S_t I_t + \alpha S_t$. Finally, infected individuals die and transition to compartment $R$ at rate $\gamma I_t$.

The compartment-specific prevalence estimates reported in raggett1982stochastic were updated with the new population total and are displayed in Table 6. We have focused attention on the second phase of the outbreak, which occurred in the summer of 1666, to assess the evidence on whether a particular transmission route was more important than another. Since the Eyam data analysis assumes a closed population, one may readily recover the $R$ compartment at time $t$ by the formula $R_t = n - S_t - I_t$, with $n$ the total population size.

Date (1666)     S     I
May 18          612   1
June 18         593   7
July 19         540   22
August 19       460   20
September 19    436   8
October 20      422   0
Table 6: Plague data for Eyam, 1666 (S = susceptible, I = infectious).

The corresponding mass action ODE for the above system then has the form

$$\dot{s}(t) = -\beta\, s(t)\, i(t) - \alpha\, s(t), \qquad \dot{i}(t) = \beta\, s(t)\, i(t) + \alpha\, s(t) - \gamma\, i(t), \qquad \dot{r}(t) = \gamma\, i(t). \qquad (13)$$
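A minimal deSolve implementation of this modified SIR system is given below; the reconstructed form of (13), with the rodent-flea pressure entering as the term $\alpha s$, follows the description above, and the parameter values are placeholders for illustration rather than estimates.

    library(deSolve)

    ## Modified SIR reaction rate equation (13): human-to-human pressure beta*s*i,
    ## rodent-flea pressure alpha*s, and removal (death) at rate gamma*i
    sir_mod <- function(t, y, p) {
      with(as.list(c(y, p)), {
        ds <- -beta * s * i - alpha * s
        di <-  beta * s * i + alpha * s - gamma * i
        dr <-  gamma * i
        list(c(ds, di, dr))
      })
    }

    y0  <- c(s = 612, i = 1, r = 0)                   # May 18, 1666 prevalences
    par <- c(beta = 2e-4, alpha = 1e-5, gamma = 0.1)  # illustrative values only
    out <- ode(y = y0, times = seq(0, 150, by = 1), func = sir_mod, parms = par)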

We use the plague data in Table 6 to compute the MEF statistic using the approach described above. The MEF statistic is computed by minimizing the weighted sum of squared distances between the observed trajectory and the ODE solution of (13), weighted by the asymptotic process covariance at each timepoint. The initial condition is given by the compartment-specific prevalence estimates on May 18, 1666. For the Eyam data we have a single outbreak trajectory, i.e., no replicates, and we thus use the asymptotic covariance estimate of the MEF statistic as the fixed covariance term to construct the synthetic likelihood. Also, in this case $A$ is the identity matrix, indicating that the unknown rate parameters are directly related to the summary statistics. We collected 50,000 MCMC samples via the synthetic likelihood method described above, with a burn-in of 5,000, and obtained the posterior medians and 95% credible intervals for the parameters $\beta$, $\alpha$, and $\gamma$.

We note that the results from our analysis agree qualitatively with the results in whittles2016epidemiological; dean2018human concerning the role of human-to-human transmission of plague. While we have not made explicit assumptions about the exact form of the human-to-human transmission mechanism (i.e., plague pneumonia, ectoparasites, or some other form), the data from the latter months of the outbreak at Eyam nonetheless suggest that, in our simple modified SIR model, human-to-human contact was important. This was already suggested early on by historical accounts that the plague at Eyam could be transmitted from the cough of a plague victim, pointing to plague pneumonia transmission. Further, the posterior median of zero for $\alpha$ indicates that the corresponding link in our modified SIR model, which accounts for infectious pressure from the rodent-flea route, could be negligible, at least during the latter part of the outbreak.

There are several limitations of this analysis that should be noted here. The first is that we have restricted our attention to the latter months of the outbreak at Eyam, during which it was apparent that the dynamics had changed from those of the initial outbreak. By doing this we are potentially missing information about the nature of the initial dynamics, which may point to a different transmission route as being important early on. Indeed, results from whittles2016epidemiological, using the full outbreak data, suggest that approximately 27% of infections were caused by rodents and 73% by human-to-human transmission. This leads to another limitation, namely that we have relied on the prevalence estimates reported in raggett1982stochastic, updated with the new population totals. These compartment prevalences were estimated from the historical and death records, so they are likely subject to measurement error, which we have not accounted for. Further, we have not used data on household structure and composition, although this is part of planned future work. Finally, while the augmented SIR-type model we have used is somewhat similar to the SEIR model used in whittles2016epidemiological, it does not consider plague pneumonia and ectoparasite-driven human-to-human transmission separately, as was done in dean2018human. Hence, our analysis only adds to the evidence that some form of human-to-human transmission, which we modeled within a generic SIR framework, was important, but it does not distinguish between particular forms of this transmission.

6 Conclusions

We have described a method that can be used to perform estimation of biochemical networks, as often considered in the context of dynamic gene regulatory networks and stochastic epidemic models. It is well established that this is a notoriously difficult problem, due to the intractability of the likelihood under partially observed trajectories. The underlying theme in most of the popular approaches in this area is to use likelihood approximations to perform approximate inference, as in golightly2005bayesian; girolami2008bayesian; komorowski2009bayesian; golightly2012; finkenstadt2013quantifying; fearnhead2014inference. While our approach adopts this theme, it is fundamentally different from the standard approximate and likelihood-free inferential techniques. The most important of these differences is that the data summary statistics used here (the LSE or MEF estimates) have properties that are well understood and are directly related to the unknown kinetic parameters. These properties justify, via asymptotic normality, a parametric form for the synthetic likelihood, in the spirit of Wood:2010aa, that is linear in the parameters of interest. Hence, we have demonstrated that projecting the species' trajectories onto sets of dynamically informative statistics allows for highly efficient posterior sampling and yields a procedure that should scale well to large systems.

Acknowledgements

The first author would like to thank the Mathematical Biosciences Institute (MBI) at The Ohio State University for partially supporting this research through an Early Career Award. MBI receives its funding through the National Science Foundation grant DMS 1440386.

Appendix A Proofs of Propositions

Proposition 1.

The statistics $\hat\rho_V$ of Equations (4) and (5), computed from a trajectory of data arising from the DDMJP in Equation (1), are asymptotically sufficient for $\rho$ as $V \to \infty$.

Proof.

Let $\{X_t\}_{0 \le t \le T}$ be the trajectory arising from the DDMJP in Equation (1). Assume that $\theta_0$, and hence $\rho_0 = A\theta_0$, is the true parameter, and let $\hat\rho_V$ be a solution to the estimating equation in Equation (5), with estimating function denoted $G_V(\rho)$, which by definition satisfies

$$G_V(\hat\rho_V) = 0. \qquad (14)$$

A Taylor expansion of the left hand side of Equation (14) about $\rho_0$, after scaling by $\sqrt{V}$, gives

$$0 = \sqrt{V}\, G_V(\rho_0) + \nabla_\rho G_V(\rho_0)\, \sqrt{V} \left( \hat\rho_V - \rho_0 \right) + o_P(1), \qquad (15)$$

where the higher order terms in the expansion vanish since $\hat\rho_V$ is consistent for $\rho_0$ under the regularity conditions in Remp12. Consistency and asymptotic normality of $\hat\rho_V$ then imply that $\sqrt{V} \left( \hat\rho_V - \rho_0 \right) \Rightarrow N(0, \Sigma)$, where $\Sigma$ is the limiting covariance; see Linder:2013aa; Linder:2015aa. ∎

Proposition 2.

The point mass mixture prior $\pi$ in Equation (7), with slab density $g(\theta) = \lambda e^{-\lambda\theta}$ and hyperparameters satisfying $\lambda \geq (1 - w)/w$, is logarithmically concave in $\theta$, and hence is unimodal.

Proof.

We prove this result component-wise; the result for the full vector follows. The exponential density $g(\theta) = \lambda e^{-\lambda\theta}$, $\theta > 0$, belongs to the class of log-concave densities; i.e., for any $\theta_1, \theta_2 > 0$ and $\alpha \in (0, 1)$ we have $g(\alpha\theta_1 + (1 - \alpha)\theta_2) \geq g(\theta_1)^\alpha\, g(\theta_2)^{1 - \alpha}$ (in fact with equality, since $g$ is log-linear). Thus, if $\theta_1$ and $\theta_2$ are both positive we have

$$\pi\left( \alpha\theta_1 + (1 - \alpha)\theta_2 \right) = w\, g\left( \alpha\theta_1 + (1 - \alpha)\theta_2 \right) \geq \left[ w\, g(\theta_1) \right]^\alpha \left[ w\, g(\theta_2) \right]^{1 - \alpha} = \pi(\theta_1)^\alpha\, \pi(\theta_2)^{1 - \alpha}.$$

When both $\theta_1$ and $\theta_2$ are zero we have $\pi(0) = 1 - w = \pi(0)^\alpha\, \pi(0)^{1 - \alpha}$. When only one is zero, say $\theta_2 = 0$, then

$$\pi\left( \alpha\theta_1 \right) = w \lambda e^{-\lambda\alpha\theta_1} \geq \left[ w \lambda e^{-\lambda\theta_1} \right]^\alpha (1 - w)^{1 - \alpha} = \pi(\theta_1)^\alpha\, \pi(0)^{1 - \alpha},$$

where the inequality reduces to $(w\lambda)^{1 - \alpha} \geq (1 - w)^{1 - \alpha}$ and thus holds under the assumption $\lambda \geq (1 - w)/w$. Thus, the prior is logarithmically concave in $\theta$ and hence is also unimodal. ∎

Proposition 3.

If $w \in (0, 1)$ and $\lambda \geq (1 - w)/w$, the marginal synthetic posterior distribution from the synthetic likelihood model, $\pi(\theta \mid \hat\rho)$, is unimodal.

Proof.

The synthetic posterior is proportional to the product of the Gaussian synthetic likelihood (6) and the prior (7). The Gaussian likelihood is log-concave in $\theta$, and by Proposition 2 the prior is log-concave under the stated conditions; since a product of log-concave functions is log-concave, the conditional posterior of $\theta$ given $\Sigma$ is log-concave and hence unimodal. Unimodality of the marginal posterior, with $\Sigma$ integrated against its Wishart prior, is verified in the Supplementary Material. ∎