Modeling of Missing Dynamical Systems: Deriving Parametric Models using a Nonparametric Framework

05/17/2019 · Shixiao W. Jiang, et al. · Penn State University

In this paper, we consider modeling missing dynamics with a non-Markovian transition density, constructed using the theory of kernel embedding of conditional distributions on appropriate reproducing kernel Hilbert spaces (RKHS) equipped with orthonormal basis functions. Depending on the choice of basis functions, the resulting closure from this nonparametric modeling formulation takes the form of a parametric model. This suggests that the successes of various parametric modeling approaches proposed in various domains of application can be understood through their RKHS representations. When the missing dynamical terms evolve faster than the relevant observables of interest, the proposed approach is consistent with the effective dynamics derived from the classical averaging theory. In the linear and Gaussian case without a temporal scale gap, we show that the proposed closure model using a non-Markovian transition density with a very long memory yields an accurate estimation of the nontrivial autocovariance function of the relevant variables of the full dynamics. Supporting numerical results on instructive nonlinear dynamics show that the proposed approach is able to replicate high-dimensional missing dynamical terms on problems with and without separation of temporal scales.


1 Introduction

One of the long-standing issues in modeling dynamical systems is model error arising from an incomplete understanding of the physics. Progress in tackling this problem goes under different names depending on the scientific field. In applied mathematics and the engineering sciences, some of these approaches are known as reduced-order modeling, whose ultimate goal is to derive effective models from first principles, assuming that the full dynamics is known. They include the Mori-Zwanzig formalism [42, 49, 50] and its approximations [4, 5, 11, 15, 23], as well as averaging/homogenization when there is an apparent scale separation between the relevant and irrelevant variables [9, 40, 41, 47]. In the domain sciences, various methods for subgrid-scale parameterization were proposed to handle the same problem as it arises in applications such as material science, molecular dynamics, and climate dynamics, just to name a few. They include Markov-chain-type modeling [6, 21]; stochastic parameterization [1, 6, 16, 27, 30, 32, 35, 48]; superparameterization in cloud modeling [12, 22, 34] and in combustion problems [18, 19]; and the Direct-Interaction Approximation (DIA) for parameterizing subgrid-scale processes in isotropic turbulence [24], together with its extensions [8] for modeling non-Markovian memory in inhomogeneous turbulence over topography. We should point out that this list is incomplete and that these approaches share some commonality despite being developed independently and having different implementation details. Namely, the key unifying theme in the aforementioned methods is a parametric modeling assumption with a specific class of functions/distributions and a finite number of parameters.

In this paper, we consider a nonparametric modeling framework to compensate for the missing dynamical components. In our setup, suppose that the underlying full dynamics is an ergodic system of Itô diffusions with relevant components $x$ and irrelevant components $y$. The objective is to predict the evolution of $x$ and its statistics, given only the $x$-component of the dynamics,

$$dx = a(x, y)\, dt + b(x, y)\, dW_x, \qquad (1)$$

and a historical data set $\{x_i, y_i\}$. In (1), $a$ and $b$ denote the $x$-components of the drift and diffusion terms, respectively, and $W_x$ denotes a standard Wiener process.

While the core of the problem is similar to that considered in the reduced-order modeling framework, the fact that we have no knowledge of the full dynamics prevents us from deriving an effective equation from first principles. Motivated by practical applications where the underlying dynamics are not fully understood, we will instead use the available historical data to reconstruct the missing dynamical components. We should point out that the restriction of knowing historical measurements of the irrelevant component, $y$, can be relaxed in some cases. When $x$ is the only available measurement, one can use, e.g., a maximum likelihood estimate [25, 31] in the deterministic case or an adaptive Bayesian filtering [2] (when the diffusion coefficient is constant and the training data is noisy) to extract the "identifiable" components of $y$. By identifiable components, we refer to the variables depending on $y$ that appear in $a$ and $b$, as we shall see in our numerical examples. Abusing the notation, we will also denote the identifiable components by $y$. We will clarify this notion in our numerical examples.

Our approach is to construct a nonparametric representation for the conditional density $p(y_n \mid X_n, Y_n)$ from the pair of historical time series with time lag $\Delta t$, where we have defined $X_n := (x_n, x_{n-1}, \ldots, x_{n-m})$ and $Y_n := (y_{n-1}, \ldots, y_{n-\ell})$ for some integers $m, \ell \geq 0$. When $m = 0$, $X_n$ consists only of the component $x_n$ (similarly, when $\ell = 0$, $Y_n$ is empty). Given a discrete approximation of this transition density, the proposed reduced-order model (or closure/parameterization) is given by,

$$dx = \Big(\int a(x, y)\, p(y \mid X_n, Y_n)\, dy\Big)\, dt + \Big(\int b(x, y)\, p(y \mid X_n, Y_n)\, dy\Big)\, dW_x, \qquad (2)$$

where the conditional averages are updated along the model trajectory. Notice that if the component $x$ is slow and the missing component $y$ is fast, with a scale gap denoted by a small parameter $\epsilon$, this construction is essentially the effective dynamics deduced by the averaging theory [20, 26, 43] when $p(y \mid X_n, Y_n)$ is replaced by the invariant density $\rho_x(y)$ of the fast dynamics for a fixed $x$, if such a density exists. In this specific situation (a fast-slow system), by setting $m = \ell = 0$, that is, conditioning only on the current state, our approach effectively closes the dynamics by averaging over $p^{eq}(y \mid x)$, where $p^{eq}$ denotes the invariant density of the full dynamics. We will show that averaging over $p^{eq}(y \mid x)$ is consistent with averaging over $\rho_x(y)$ up to order $\epsilon$. In the general case where there is no separation of scales, the choice of $(m, \ell)$ will be problem dependent. When $m$ and/or $\ell$ are strictly positive, we refer to the conditional density as a non-Markovian transition density. In this situation, the predictive skill of certain statistics will depend on the specific choice of $(m, \ell)$. For example, in the linear and Gaussian case without scale gap, we will show the existence of a non-Markovian transition density which allows (2) to accurately estimate one-point and two-point statistics of the $x$-components of the full dynamics, regardless of the time scale gap.
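To make the construction in (2) concrete, the following minimal sketch (in Python) integrates a scalar closure model with an Euler-Maruyama scheme. The callables `a_bar` and `b_bar` are hypothetical stand-ins for the pre-computed conditional expectations of the drift and diffusion given the recent history of $x$; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def integrate_closure(x0, a_bar, b_bar, dt, n_steps, m, rng):
    """Euler-Maruyama sketch of the closure model (2) for scalar x.

    a_bar, b_bar : callables returning the conditional averages of the
                   drift and diffusion given the history (x_n, ..., x_{n-m});
                   they stand in for the kernel-embedding estimates.
    """
    x = [float(x0)]
    for _ in range(n_steps):
        hist = np.array(x[-(m + 1):])       # X_n = (x_n, ..., x_{n-m}); shorter during warm-up
        x.append(x[-1]
                 + a_bar(hist) * dt                                     # averaged drift
                 + b_bar(hist) * np.sqrt(dt) * rng.standard_normal())   # averaged diffusion
    return np.array(x)

# toy usage with made-up conditional averages
rng = np.random.default_rng(0)
traj = integrate_closure(1.0, lambda h: -h[-1], lambda h: 0.5,
                         dt=0.01, n_steps=1000, m=2, rng=rng)
```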

The main idea in this paper is to consider a nonparametric representation for $p(y \mid X_n, Y_n)$ using the theory of kernel embedding of conditional distributions, which was introduced in the machine learning community [45, 46]. In a nutshell, the kernel embedding of conditional distributions represents the density with functions of a reproducing kernel Hilbert space (RKHS), $\mathcal{H}$. When $\mathcal{H}$ is equipped with an orthonormal basis $\{\varphi_k\}$ of an appropriate $L^2$ space, any $f \in \mathcal{H}$ can be represented as $f = \sum_k \hat{f}_k \varphi_k$, where the coefficients $\hat{f}_k$ will be pre-computed using the historical data points. In this paper, we will consider parametric orthonormal basis functions, such as the Hermite polynomials for low-dimensional conditioning variables, as well as the proper orthogonal decomposition (POD) modes for high-dimensional ones. In the latter case, we shall see that the resulting integral terms in (2) constitute a well-known parametric model, namely the linear non-autonomous autoregressive model. In general, the form of the parametric closure model depends on the ansatz for the representation of $p(y \mid X_n, Y_n)$ as a function of the conditioning variables. We should point out that one can also leave the representation entirely nonparametric by using the data-driven basis functions constructed by the diffusion maps algorithm as in [3, 17]. While this is ideal, the construction of such basis functions requires an elaborate computational effort and is limited to problems with intrinsically low-dimensional structure. In addition to constructing the basis, the main computational cost arises when we need to evaluate the estimated basis functions at new points for future-time prediction. Given these constraints, we will not explore the data-driven nonparametric basis in this paper.

The remainder of the paper is organized as follows. In Section 2, we briefly review the theory of kernel embedding of conditional distributions for estimating the transition density using an orthonormal basis representation and discuss the proposed closure models in detail. In Section 3, we provide an intuition for choosing the conditional density by discussing the missing dynamics of a linear and Gaussian system with and without temporal scale gaps. In Section 4, we numerically demonstrate the proposed approaches on two nonlinear high-dimensional test problems, where the conditioning variables are low-dimensional in the first example and very high-dimensional in the second example. In Section 5, we conclude the paper with a brief summary and discussion. We include an Appendix that shows the consistency of the proposed approach in estimating autocovariance functions in the linear and Gaussian case without scale gap.

2 A nonparametric formulation of modeling missing dynamics

In this section, we first review the kernel embedding of conditional distributions introduced in [45, 46], formulated using orthonormal bases of appropriate square-integrable function spaces as in [3, 17]. Subsequently, we present the proposed nonparametric modeling approach for missing dynamics.

2.1 Kernel embedding of conditional distribution

Let $X$ be a compact set and let $k: X \times X \to \mathbb{R}$ be a symmetric positive-definite kernel. The eigenfunctions $\varphi_k$ corresponding to the eigenvalues $\lambda_k$ of the following integral operator,

$$(Kf)(x) = \int_X k(x, x')\, f(x')\, q_X(x')\, dx', \qquad (3)$$

form an orthonormal basis of $L^2(X, q_X)$, and the kernel can be written as $k(x, x') = \sum_k \lambda_k\, \varphi_k(x)\, \varphi_k(x')$.

Let $\mathcal{H}_X$ be the reproducing kernel Hilbert space (RKHS) induced by this kernel, that is, the subspace of $L^2(X, q_X)$ with the reproducing property $f(x) = \langle f, k(x, \cdot)\rangle_{\mathcal{H}_X}$ corresponding to the inner product defined as $\langle f, g\rangle_{\mathcal{H}_X} = \sum_k \lambda_k^{-1} \hat{f}_k \hat{g}_k$, where $\hat{f}_k = \langle f, \varphi_k\rangle_{L^2(X, q_X)}$ and $\hat{g}_k = \langle g, \varphi_k\rangle_{L^2(X, q_X)}$. Then any $f \in \mathcal{H}_X$ can be represented as $f = \sum_k \hat{f}_k \varphi_k$ with the basis $\{\varphi_k\}$ of $L^2(X, q_X)$. Analogously, we define $\mathcal{H}_Y$ to be an RKHS for functions of $y \in Y$, which can be represented by an orthonormal basis $\{\psi_j\}$ of $L^2(Y, q_Y)$.

Let $U$ and $V$ be random variables on $X$ and $Y$, respectively, with conditional distribution $P(V \mid U)$. The theory of kernel embedding of conditional distributions [45, 46], implemented using the bases above [3, 17], can be described as follows. The conditional density function $p(y \mid x)$, where $x \in X$ and $y \in Y$, can be represented as,

$$p(y \mid x) = \sum_{j,k} \big[C_{YX} C_{XX}^{-1}\big]_{jk}\, \psi_j(y)\, \varphi_k(x)\, q_Y(y), \qquad (4)$$

where the components of the expansion are given as,

$$[C_{YX}]_{jk} = \mathbb{E}\big[\psi_j(V)\, \varphi_k(U)\big], \qquad [C_{XX}]_{jk} = \mathbb{E}\big[\varphi_j(U)\, \varphi_k(U)\big],$$

and the expectations are taken with respect to the sampling densities of the training dataset. Notice that this representation can be understood as a linear regression in infinite-dimensional spaces with respect to the basis functions $\{\psi_j\}$ and $\{\varphi_k\}$. The representation in (4) is nonparametric in the sense that we do not assume any particular distribution for the density.

Given pairs of data $\{(x_i, y_i)\}_{i=1}^{N}$, we can estimate these coefficients via Monte-Carlo averages:

$$[\hat{C}_{YX}]_{jk} = \frac{1}{N}\sum_{i=1}^{N} \psi_j(y_i)\, \varphi_k(x_i), \qquad (5)$$
$$[\hat{C}_{XX}]_{jk} = \frac{1}{N}\sum_{i=1}^{N} \varphi_j(x_i)\, \varphi_k(x_i). \qquad (6)$$

We should point out that if the weight $q_X$ in $L^2(X, q_X)$ is the sampling density of the data in $X$, then, since $\{\varphi_k\}$ is orthonormal under the corresponding inner product, $C_{XX}$ is an identity matrix. While a representation on this Hilbert space is desirable, finding the corresponding orthonormal basis for high-dimensional data is computationally challenging. In addition to constructing the basis, the main computational cost arises when we need to evaluate the estimated basis functions at new points for future-time prediction, as shown in the next section. To avoid these expensive computations, we will adopt simpler basis functions, namely the Hermite polynomial basis for low-dimensional conditioning variables and the proper orthogonal decomposition (POD) basis for high-dimensional ones.
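As an illustration of the Monte-Carlo estimates (5)-(6), the following sketch builds the matrices for scalar data using probabilists' Hermite polynomials normalized to be orthonormal under a standard Gaussian weight. The function names and the standardization step are our own illustrative choices, not the authors' code.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def hermite_basis(z, K):
    """First K orthonormal probabilists' Hermite polynomials He_k/sqrt(k!),
    evaluated at the (standardized) points z."""
    out = np.empty((K, len(z)))
    for k in range(K):
        c = np.zeros(k + 1)
        c[k] = 1.0 / np.sqrt(factorial(k))   # normalization for N(0,1) weight
        out[k] = hermeval(z, c)
    return out

def embedding_matrices(x, y, K):
    """Monte-Carlo estimates of C_XX and C_YX as in Eqs. (5)-(6),
    after standardizing the data so the Gaussian weight is appropriate."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    Phi = hermite_basis(xs, K)          # basis in the conditioning variable
    Psi = hermite_basis(ys, K)          # basis in the missing variable
    N = len(x)
    return Phi @ Phi.T / N, Psi @ Phi.T / N   # C_XX, C_YX

# toy usage: correlated Gaussian pairs
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = 0.8 * x + 0.6 * rng.standard_normal(5000)
C_XX, C_YX = embedding_matrices(x, y, K=6)
```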

2.2 Modeling the missing dynamics

Given the pre-computed conditional density in (4), the closure modeling approach proposed in (2) requires estimating the following statistical quantities,

$$\mathbb{E}\big[a(x, \cdot)\big] = \int a(x, y)\, p(y \mid x)\, dy, \qquad \mathbb{E}\big[b(x, \cdot)\big] = \int b(x, y)\, p(y \mid x)\, dy. \qquad (7)$$

In the discussion below, we will focus on the expectation of $a$ (the calculation for the expectation of $b$ is similar). In our formulation, we set the weight of the Hilbert space of functions of the conditioning variable to be the sampling density of the data. In particular, substituting (4) into (7), we obtain,

$$\mathbb{E}\big[a(x, \cdot)\big] \approx \sum_{k=1}^{K} c_k\, \varphi_k(x), \qquad (8)$$

where the coefficients

$$c_k := \sum_{j} \hat{a}_j \big[\hat{C}_{YX} \hat{C}_{XX}^{-1}\big]_{jk}, \qquad \hat{a}_j := \frac{1}{N}\sum_{i=1}^{N} a(x, y_i)\, \psi_j(y_i), \qquad (9)$$

can be pre-computed. In this derivation, the second line is due to a Monte-Carlo average using the data $\{y_i\}$, the fourth line uses (5), and the last line is due to truncating the summation index at order $K$ and using the fact that,

$$\int_Y a(x, y)\, \psi_j(y)\, q_Y(y)\, dy \approx \frac{1}{N}\sum_{i=1}^{N} a(x, y_i)\, \psi_j(y_i),$$

whenever $\{\psi_j\}$ is orthonormal in $L^2(Y, q_Y)$, where the weight $q_Y$ is exactly the sampling density of the $y$-data. Since the resulting coefficients in (9) are independent of the choice of $\{\psi_j\}$, in practice we only need to choose the basis $\{\varphi_k\}$.

Notice that the resulting averaged quantity in (8), which arises from the proposed nonparametric formulation in (4), is a parametric model, where the parametric ansatz is determined by how the basis functions $\varphi_k$ depend on the conditioning variable. For example, when the conditioning variable is low-dimensional, we will consider Hermite polynomial basis functions in the numerical example of Section 4.1. In this case, the resulting parametric model is a polynomial in the conditioning variable whose coefficients are directly estimated via the kernel embedding formula.

We should point out that when we use the Hermite polynomial basis, we set the weight to be a Gaussian with mean and covariance determined empirically from the training data. In our numerics, we also employ a regularization, replacing $\hat{C}_{XX}$ in (9) with $\hat{C}_{XX} + \lambda I$ for a small parameter $\lambda$, to compensate for a conditional density that is not in the RKHS (as suggested in [45, 46]). Basically, this regularization is the penalty for not building the appropriate RKHSs that respect the sampling distribution and the geometry of the data.
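Continuing the sketch above, the following is a minimal implementation of the regularized conditional expectation in (8)-(9); the value of the regularization parameter is illustrative.

```python
import numpy as np

def conditional_expectation(f_vals, Psi, C_XX, C_YX, phi_new, lam=1e-6):
    """Estimate E[f(Y) | X = x*] following (8)-(9), with the Tikhonov
    regularization C_XX -> C_XX + lam*I suggested in [45, 46].

    f_vals  : f evaluated at the training samples y_i    (length N)
    Psi     : K x N matrix of basis values psi_j(y_i)
    phi_new : length-K vector of basis values phi_k(x*)
    """
    N = Psi.shape[1]
    f_hat = Psi @ f_vals / N                # Monte-Carlo projection of f on psi_j
    K = C_XX.shape[0]
    # coefficients c_k = sum_j f_hat_j [C_YX (C_XX + lam I)^{-1}]_{jk}
    coeffs = np.linalg.solve(C_XX + lam * np.eye(K), C_YX.T) @ f_hat
    return coeffs @ phi_new                 # sum_k c_k phi_k(x*)
```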

For a high-dimensional conditioning variable, we will consider using the proper orthogonal decomposition (POD) to construct the basis. Conceptually, this choice of basis corresponds to using an empirical covariance as the kernel in (3) (see, e.g., Chapter 5 of [14] for a more detailed discussion). Computationally, define a data matrix whose rows consist of the mean-subtracted training data, so that each column sums to zero. In this case, the basis functions are defined by the columns of the orthonormal matrix in the singular value decomposition (SVD) of this data matrix,

(10)

These basis functions are called the proper orthogonal decomposition (POD) modes, or a discrete version of the Karhunen-Loève basis expansion (see, e.g., Chapter 5 of [14]).

From the orthonormality of the POD modes, the expansion coefficients simplify such that Eq. (8) can be further reduced,

(11)

where we used $K$ basis functions. Writing the expansion in terms of the training data, Eq. (11) can be equivalently rewritten in matrix form as,

(12)

where the rows of the data matrix contain the training data and the remaining factor is the basis evaluated at a new point. Substituting Eq. (10) into the conditional expectation (12), we obtain

(13)

The formula in (13) is exactly a linear regression between the observations of the missing variable and the conditioning state. This means that the nonparametric RKHS representation reduces to parametric linear regression when the POD basis is used to represent functions on the conditioning space. In the case where the conditioning state consists of the time-lagged histories of $x$ and $y$, the resulting closure model in (13) is nothing but a linear autoregressive model for the variable $y$ forced by a linear non-autonomous dependence on the history of $x$.
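The collapse of (13) to linear regression can be seen in a few lines; the data below are synthetic stand-ins, and the SVD line corresponds to the POD construction in (10).

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, D = 2000, 6, 4        # samples, conditioning-state dim, missing-term dim
X = rng.standard_normal((N, d))                     # e.g., lagged history of x
Y = X @ rng.standard_normal((d, D)) + 0.1 * rng.standard_normal((N, D))

Xc, Yc = X - X.mean(0), Y - Y.mean(0)               # center the data matrices
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # POD modes, cf. Eq. (10)

# with the full set of POD modes, the embedding estimate collapses to
# ordinary least squares: A = V diag(1/S) U^T Yc
A = Vt.T @ ((U.T @ Yc) / S[:, None])
x_new = rng.standard_normal(d)
y_pred = Y.mean(0) + (x_new - X.mean(0)) @ A        # conditional mean, cf. Eq. (13)
```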

While the POD representation is convenient for high-dimensional problems, we should point out that these basis functions may not be adequate for systems of a nonlinear and/or non-Gaussian nature. In fact, we will show in Section 4.2 that the POD basis representation is not sufficient to recover the missing terms in a nonlinear system even when the invariant density is close to Gaussian. In this case, we will find that an additional noise term can be used to compensate for the residual space (orthogonal to the POD modes).

3 A linear and Gaussian example

In this section, we provide an intuitive argument for the choice of the conditional density function used to compensate for the missing dynamical terms, as proposed in (2). Specifically, we will build our intuition for choosing the conditioning variables by studying the missing dynamics in an analytically tractable linear and Gaussian problem with and without temporal scale gaps. That is, we consider a linear multi-scale dynamical model,

$$dx = (a_{11}x + a_{12}y)\, dt + \sigma_x\, dW_x, \qquad (14)$$
$$dy = \frac{1}{\epsilon}(a_{21}x + a_{22}y)\, dt + \frac{\sigma_y}{\sqrt{\epsilon}}\, dW_y, \qquad (15)$$

for a slow variable $x$ and a fast variable $y$ [10]. Here, $W_x$ and $W_y$ are independent Wiener processes. The diagonal parameters $a_{11}, a_{22}$ and the eigenvalues of the drift matrix $A = (a_{ij})$ are strictly negative, to assure the existence of a unique invariant joint density $p^{eq}(x, y)$. The parameter $\epsilon$ characterizes the time-scale separation between the variables $x$ and $y$. Moreover, we assume the coefficient

$$\tilde{a} := a_{11} - \frac{a_{12}a_{21}}{a_{22}} < 0 \qquad (16)$$

to assure that the leading-order slow dynamics supports an invariant measure. In the limit $\epsilon \to 0$, the leading-order dynamics,

$$dX = \tilde{a} X\, dt + \sigma_x\, dW_x, \qquad (17)$$

with $\tilde{a}$ as defined in (16), is obtained by averaging the slow component of the vector field, $a_{11}x + a_{12}y$, with respect to the invariant density $\rho_x(y)$ of the fast dynamics in (15) for a fixed $x$. For this simple example, it is clear that $\rho_x(y) = \mathcal{N}\!\big(-a_{22}^{-1}a_{21}x,\; \sigma_y^2/(2|a_{22}|)\big)$. The effective equation in (17) is deduced using the averaging theory [20, 26, 43], whose basic idea is to approximate the density of the full dynamics as,

$$p(x, y, t) = \rho_x(y)\, P(x, t) + \mathcal{O}(\epsilon), \qquad (18)$$

where $P(x, t)$ denotes the evolution density corresponding to the leading-order dynamics (17). First, we should point out that when the fast dynamics in (15) is not available, we have no information about the invariant density $\rho_x(y)$ and we also cannot generate samples from this density. In our example above, $\rho_x(y)$ is not computable since $a_{21}$, $a_{22}$, and $\sigma_y$ are unknown.
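For concreteness, training data for a system of the form (14)-(15) can be generated by an Euler-Maruyama discretization, as in the following sketch; the coefficient values are illustrative choices, not those used in the experiments.

```python
import numpy as np

def simulate_fast_slow(eps, dt=1e-3, n_steps=100_000, seed=0):
    """Euler-Maruyama sample path of a linear fast-slow system of the
    form (14)-(15); a11, a12, a21, a22, sigma_x, sigma_y are stand-ins."""
    a11, a12, a21, a22 = -1.0, 1.0, 1.0, -2.0   # eigenvalues of A are negative
    sx, sy = 0.5, 1.0
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    y = np.zeros(n_steps)
    for n in range(n_steps - 1):
        dWx, dWy = np.sqrt(dt) * rng.standard_normal(2)
        x[n+1] = x[n] + (a11 * x[n] + a12 * y[n]) * dt + sx * dWx
        y[n+1] = y[n] + (a21 * x[n] + a22 * y[n]) / eps * dt + sy / np.sqrt(eps) * dWy
    return x, y

x_train, y_train = simulate_fast_slow(eps=0.1)
```

With these stand-in coefficients, the averaged drift in (16) is $\tilde{a} = -1 - (1)(1)/(-2) = -0.5 < 0$, so the leading-order dynamics (17) is well defined.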

The proposed model for the closure is motivated by the following observation. Taking $t \to \infty$ in (18), the invariant density of the full dynamics can be approximated by that of the leading-order dynamics up to order-$\epsilon$, that is, $p^{eq}(x, y) = \rho_x(y)\, \bar{P}(x) + \mathcal{O}(\epsilon)$, where $\bar{P}$ denotes the invariant density of (17). Therefore,

$$p^{eq}(y \mid x) = \rho_x(y) + \mathcal{O}(\epsilon). \qquad (19)$$

This equation basically suggests that we can approximate $\rho_x(y)$ with the conditional density $p^{eq}(y \mid x)$. For the simple linear example in (14)-(15), one can solve the Lyapunov equation of the full system for the equilibrium covariance matrix and deduce $p^{eq}(y \mid x)$ in closed form. Expanding the mean and variance statistics in terms of $\epsilon$, one finds that they agree with those of $\rho_x(y)$ up to order-$\epsilon$, which means that the order-$\epsilon$ expansion error in (19) holds in the sense of the mean and variance.

Averaging the slow equation (14) with respect to this conditional density, $p^{eq}(y \mid x)$, we obtain a closure model of the form (17) with an effective drift coefficient that agrees with $\tilde{a}$ up to an order-$\epsilon$ error, which means the proposed closure obtained by averaging over $p^{eq}(y \mid x)$ is consistent (up to order-$\epsilon$) with the reduced model obtained from the classical averaging theory. In general, such an analytical expression will not be available, and we will approximate the conditional density $p^{eq}(y \mid x)$ by applying the kernel embedding of conditional distributions discussed in the previous section to the training data set $\{x_i, y_i\}$. In this case, it is clear that conditioning on the current state $x$ alone is the natural choice. In the remainder of this paper, we will refer to this closure model as the "RKHS $p(y|x)$".

When there is no scale gap, i.e., $\epsilon$ is large, the approximation via the averaging theory is not valid and therefore averaging over $p^{eq}(y \mid x)$ will not work. In this case, let us consider conditioning on the time-lagged history $Z_n := (x_n, x_{n-1}, \ldots, x_{n-m})$, such that the closure model is an average over a non-Markovian conditional density function $p^{eq}(y_n \mid Z_n)$. That is,

(20)

where the conditional average is evaluated at the new data point $Z_n$, with time lag interval $\Delta t$, resulting from the integration of this model. Since the random variables $y_n$ and $Z_n$ are both Gaussian with mean zero and covariances $\Sigma_{yZ}$ and $\Sigma_{ZZ}$,

(21)

we can deduce that

$$\mathbb{E}\big[y_n \mid Z_n\big] = \Sigma_{yZ}\, \Sigma_{ZZ}^{-1}\, Z_n. \qquad (22)$$

When the covariance components $\Sigma_{yZ}$ and $\Sigma_{ZZ}$ are empirically estimated from the training data, notice that (22) is identical to the conditional expectation with respect to the kernel embedding conditional density formulated using the POD basis in (13). More importantly, one can analytically show that the autocovariance function (ACV) of the proposed non-Markovian model in (20) with $m \to \infty$ agrees with the ACV of the $x$-component of the full model (see Appendix A for the detailed proof of this statement). The consistency of the ACV prediction, as well as of the closure in (22) with the RKHS formulation in (13), justifies this choice of conditioning variables when $\epsilon$ is large. In the numerics below, we will verify the robustness of the resulting non-Markovian closure model in terms of the short-time prediction skill and the ACVs for any $\epsilon$.
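A minimal sketch of the estimator (22), assuming zero-mean training series and our own indexing convention; the empirical covariances below play the role of $\Sigma_{ZZ}$ and $\Sigma_{yZ}$.

```python
import numpy as np

def nonmarkov_cond_mean(x_series, y_series, m):
    """Fit E[y_n | x_n, ..., x_{n-m}] = Sigma_yZ Sigma_ZZ^{-1} Z_n (Eq. (22))
    from zero-mean training series; returns a callable on a new history Z."""
    N = len(x_series)
    # rows: Z_n = (x_n, x_{n-1}, ..., x_{n-m});  targets: y_n
    Z = np.column_stack([x_series[m - k : N - k] for k in range(m + 1)])
    y = y_series[m:]
    Szz = Z.T @ Z / len(y)          # empirical Sigma_ZZ
    Syz = y @ Z / len(y)            # empirical Sigma_yZ
    w = np.linalg.solve(Szz, Syz)
    return lambda z_new: w @ z_new

# usage with the simulated series from the earlier sketch:
# cond_mean = nonmarkov_cond_mean(x_train, y_train, m=20)
```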

Figure 1: (Color online) (a) Supremum errors between solutions of the full model (14)-(15) and solutions of the closure models, as functions of the scale-gap parameter $\epsilon$; trajectories are averaged over 100 realizations. When $\epsilon$ is small, the solutions of all the closure models converge pathwise at nearly the order of $\epsilon$. (c) Comparison of RMSEs averaged over 1000 realizations in the large-$\epsilon$ regime. Comparison of ACVs in (b) the small-$\epsilon$ regime and (d) the large-$\epsilon$ regime.

In Figure 1, we compare the proposed closure model in (22), which we will refer to as "RKHS $p(y_n|Z_n)$", with the standard averaging model in (17) and the RKHS $p(y|x)$. In this numerical simulation, we build the closure models using simulated data sampled at a discrete time step. When $\epsilon$ is small, one can observe the pathwise convergence of the solutions of the closure models to those of the full model (14)-(15) [Fig. 1(a)]. For small $\epsilon$, the ACVs of all the closure models are in good agreement with the ACV of the full model [Fig. 1(b)]. These results agree with the invariant manifold theory in the small-$\epsilon$ regime [44]. However, when $\epsilon$ is large, the predictions and ACVs differ considerably among the three closure models [Figs. 1(c) and (d)]. In terms of short-time prediction, the closure model (20) with memory terms provides a slightly better RMSE than the other two closure models [Fig. 1(c)]. In terms of long-time statistics, only the closure model (20) with long memory produces an accurate approximation of the ACV, whereas the other two closure models do not [Fig. 1(d)]. This ACV consistency can be verified explicitly, as mentioned before (see Appendix A).

The analysis of this simple example shows that the proposed modeling framework using the kernel embedding of conditional distributions provides accurate short-time predictions and consistent long-term statistical recovery in the limit of long memory. This consistency is robust whether or not the underlying full system has a temporal scale gap. Using this result as a guideline, a natural extension for compensating for missing components in nonlinear systems is to consider a conditional density that allows the missing dynamical components to also depend on the history of $y$ in addition to that of $x$. In practice, the key parameters to be determined case-by-case are the memory lengths $m$ and $\ell$, as we shall see in the nonlinear examples in the next section.

4 Nonlinear examples

In this section, we study the short-time predictions and long-time statistical properties of two nonlinear examples: the Lorenz-96 (L96) model [29] with a short memory effect and the truncated Burgers-Hopf (TBH) model [37, 33, 36, 39] with a long memory effect.

4.1 Two-layer Lorenz-96 model

Consider the two-layer Lorenz-96 (L96) model [29],

(23)

for $i = 1, \ldots, n$ and $j = 1, \ldots, J$, where each relevant variable $x_i$ is coupled to $J$ irrelevant variables $y_{j,i}$ through a coupling term $B_i$,

(24)

The indices of the variables $x_i$ and $y_{j,i}$ are cyclic. The parameters are taken as in [6]. The parameter $\epsilon$ characterizes the time-scale separation between the variables $x_i$ and $y_{j,i}$. In this example, we will show results for a small-$\epsilon$ regime and a large-$\epsilon$ regime (the large-$\epsilon$ regime was studied in [6, 7, 27, 32]). We integrate the full L96 model using a fourth-order Runge-Kutta method and observe the trajectories of the variables every 10 integration steps; these observations form the training dataset.

In the following numerical simulations, we compare the proposed RKHS closure models with the deterministic parametric formulation suggested by Wilks [48]. In particular, Wilks's deterministic parameterization scheme is a closure model obtained by fitting the data with the following fifth-order polynomial,

$$B_i \approx b_0 + b_1 x_i + b_2 x_i^2 + b_3 x_i^3 + b_4 x_i^4 + b_5 x_i^5. \qquad (25)$$

We should point out that if we are restricted to only observing $x_i$, then the coupling terms $B_i$ are the identifiable components that can be extracted, e.g., using a maximum likelihood estimate [25, 31, 48] or an adaptive Bayesian filtering [2], as we pointed out in the introduction. The key point is that we cannot extract the detailed components $y_{j,i}$ if the fast dynamics in (23) is unknown and, in fact, we are not interested in constructing the closure by averaging over a conditional density that depends directly on $y_{j,i}$. Instead, we will consider a closure model based on averaging over the Markovian conditional density $p(B_i \mid x_i)$ for small $\epsilon$. For the large-$\epsilon$ regime, we will consider a non-Markovian conditional density that also conditions on the recent history. While conditioning on other variables (e.g., spatial neighbors of $x_i$ or $B_i$, or a longer temporal history) can be considered, we did not find any meaningful improvement over the results presented below. These densities will be constructed using the kernel embedding discussed in Section 2 for each $i$; connecting to the notation of the previous section, the missing variable is $B_i$ and the conditioning variable is either $x_i$ or its augmented history. Since these densities have low-dimensional conditioning variables, we will represent the kernel embedding formula in (4) using the Hermite polynomials.
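For reference, Wilks's deterministic parameterization (25) reduces to a single polynomial fit; the training pairs below are synthetic stand-ins for $(x_i, B_i)$, not data from the L96 runs.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-5.0, 10.0, 50_000)          # resolved variable samples
B_train = (0.3 * x_train - 0.01 * x_train**3       # stand-in coupling term
           + 0.5 * rng.standard_normal(x_train.size))

# Wilks-style deterministic closure: fifth-order polynomial fit, cf. Eq. (25)
wilks = np.poly1d(np.polyfit(x_train, B_train, deg=5))

# the closure replaces the unresolved feedback by wilks(x) in the x-equation
print(wilks(2.0))
```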

Figure 2: (Color online) Long-time statistics and short-time predictions for the small-$\epsilon$ regime of the L96 model. (a) The yellow dots are the scatter plot of $B_i$ vs. $x_i$ for the full L96 model. The black dots are the fifth-order polynomial fit used for the deterministic parametrization of $B_i$ using Wilks's method [48]. The green squares are the closure model using the transition kernel $p(B_i|x_i)$. Comparison of (b) PDFs and (c) ACFs between the full L96 model and the closure models. (d) Comparison of RMSEs from ensemble averages. The number of ensembles is 1000, where each ensemble corresponds to an initial state.

To validate the proposed approach, we compare the long-time statistics and short-time predictions of the $x$-components of the full model and the closure models. In particular, we compare several standard long-time statistical quantities as in [6, 31] (a minimal sketch of the correlation estimators is given after this list):

  • The probability density function (PDF) of $x_i$.

  • The autocorrelation function (ACF) of $x_i$, computed with the temporal average over the data points.

  • The cross-correlation function (CCF) between $x_i$ and the coupling term $B_i$.

  • The mean wave amplitude of each Fourier mode of $(x_1, \ldots, x_n)$.

  • The wave variance of each Fourier mode.
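A minimal sketch of the ACF and CCF estimators via temporal averages, assuming stationary scalar series; the implementation details are our own.

```python
import numpy as np

def acf(x, max_lag):
    """Autocorrelation of a stationary scalar series via temporal averages."""
    x = np.asarray(x, float) - np.mean(x)
    v = np.mean(x * x)
    return np.array([np.mean(x[: len(x) - k] * x[k:]) / v
                     for k in range(max_lag + 1)])

def ccf(x, y, max_lag):
    """Cross-correlation between x_n and y_{n+k} for lags k >= 0."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    norm = np.sqrt(np.mean(x * x) * np.mean(y * y))
    return np.array([np.mean(x[: len(x) - k] * y[k:]) / norm
                     for k in range(max_lag + 1)])
```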

For the PDFs, ACFs, and CCFs, we plot the average over all $i$. For small $\epsilon$, we only show the results for the PDFs and ACFs. For short-time predictions, we calculate the root-mean-square error (RMSE) and the anomaly correlation (ANCR), where the RMSE measures the difference between the true trajectory and the forecast trajectory, whereas the ANCR measures the correlation between them [6]. The definitions of the RMSE and ANCR are the same as those in Ref. [6]. We take the average over ensembles of forecasts, each starting from a different initial state, over five time units.

We first study the small-$\epsilon$ regime of the L96 model. Figure 2(a) displays the scatter plot of $B_i$ vs. $x_i$ for the full L96 model, the polynomial fit (25) for the deterministic parametrization (Wilks's method), and the conditional expectation of the RKHS representation (referred to as the RKHS $p(B_i|x_i)$). For the long-time statistics, one can see from Figs. 2(b) and 2(c) that the PDFs and ACFs of $x_i$ are well reproduced by both closure models. For short-time predictions, one can see from Fig. 2(d) that the RKHS $p(B_i|x_i)$ provides a better approximation of the trajectory than Wilks's deterministic parametrization scheme. These results are to be expected given the validity of the classical averaging theory for dynamical systems with time-scale separation (the small-$\epsilon$ regime) [44].

Figure 3: (Color online) Long-time statistics and short-time predictions for the large-$\epsilon$ regime of the L96 model. (a) The yellow dots are the scatter plot of $B_i$ vs. $x_i$ for the full L96 model. The green squares are the fifth-order polynomial fit using Wilks's method [48]. The red asterisks and black crosses correspond to the closure models using the Markovian and non-Markovian transition kernels, respectively. Comparison of (b) PDFs, (c) ACFs, (d) CCFs, (e) mean wave amplitudes, and (f) wave variances among the full model and the closure models.

We now study the L96 model in the large-$\epsilon$ regime, in which there is no significant time-scale separation between the relevant variables $x_i$ and the irrelevant variables $y_{j,i}$. Comparing Figs. 2(a) and 3(a), one can see that the patterns of the scatter plots differ substantially between the small- and large-$\epsilon$ regimes. Specifically, the scatter plot for the large-$\epsilon$ regime is much broader than that for the small-$\epsilon$ regime. This indicates that when $\epsilon$ is small, the irrelevant (fast) variables depend strongly on the relevant (slow) variables; when $\epsilon$ becomes large, this dependence weakens.

For large $\epsilon$, one can observe from Fig. 3(a) that the RKHS representation with memory can nearly reproduce the scatter plot of the full model, whereas Wilks's deterministic parametrization scheme and the memoryless RKHS representation cannot. The PDFs of $x_i$ of the full model can be reproduced by all the closure models [Fig. 3(b)]. The other long-time statistics, namely the ACFs, CCFs, mean wave amplitudes, and wave variances, are well reproduced only by the closure model with the non-Markovian transition kernel [Figs. 3(c)-(f)]. Notice also the significant improvement in short-time prediction (smaller RMSE and higher ANCR) over Wilks's method and the memoryless RKHS, as shown in Fig. 4.

Figure 4: (Color online) Comparison of (a) RMSEs and (b) ANCRs for the large-$\epsilon$ regime. The number of ensembles is 1000, where each ensemble corresponds to an initial state.
Figure 5: (Color online) Rank histograms for the closure models. Ideally, the rank histogram is nearly flat. The rank histogram of the closure model with the non-Markovian kernel is close to flat, whereas the rank histograms of Wilks's method and the memoryless closure model exhibit U-shaped distributions.

To determine the reliability of the ensemble forecasts, we also calculate rank histograms from the ensemble integrations [13]. A rank histogram is obtained by repeatedly tallying the rank of the true observation relative to the sorted forecast ensemble [13]. We use the same method as in [6]. For every initial state, we integrate the closure models over the lead time, starting from the initial state plus a small random perturbation; the random perturbations are Gaussian with mean zero and a small standard deviation. We then sort the forecast values for each grid point and time across the ensemble members and compare with the full L96 model. Figure 5 displays the rank histograms for all the closure models. An ideal rank histogram is flat. One can see that the rank histogram of the RKHS closure with memory is close to flat, whereas the rank histograms of Wilks's deterministic parametrization scheme and the memoryless RKHS exhibit U-shaped distributions. Therefore, the closure model with the non-Markovian kernel performs better than the other two closure models.
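A minimal version of the rank-histogram tally described above, ignoring ties (adequate for continuous-valued forecasts):

```python
import numpy as np

def rank_histogram(truth, ensemble):
    """Tally the rank of each verifying value within its sorted K-member
    ensemble; truth has shape (T,), ensemble has shape (T, K)."""
    K = ensemble.shape[1]
    ranks = np.sum(ensemble < truth[:, None], axis=1)   # ranks in 0..K
    return np.bincount(ranks, minlength=K + 1)

# a flat histogram indicates reliable ensembles; U-shapes indicate
# under-dispersion, as seen for the memoryless closures in Fig. 5
```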

4.2 The truncated Burgers-Hopf (TBH) model

Consider the truncated Burgers-Hopf (TBH) model [37, 33, 36, 39], which is described by a quadratically nonlinear equation for complex Fourier modes $\hat{u}_k$, with $\hat{u}_{-k} = \hat{u}_k^*$,

(26)

This model is a Galerkin truncation of the inviscid Burgers equation onto finitely many Fourier modes, and we should point out that the dynamics of the truncated system is totally different from that of the inviscid Burgers equation. In particular, the TBH model exhibits intrinsically stochastic dynamics with ergodic behavior in a large deterministic system [37, 33, 36, 39]. We are interested in estimating the TBH model's first Fourier mode $\hat{u}_1$ given only the dynamical component of this mode,

(27)

where $\hat{u}_2$ denotes the second Fourier mode and $f_1$ represents the forcing component obtained by subtracting the $\hat{u}_2$-dependent terms from the right-hand side of Eq. (26), that is,

(28)

While $\hat{u}_2$ and $f_1$ may be identifiable from observing $\hat{u}_1$ alone, in our experiment below we assume that we are given the data set of $\hat{u}_1$, $\hat{u}_2$, and $f_1$. We should point out that this model satisfies equipartition of energy, that is, all of the Fourier modes in TBH have the same variance, and the first Fourier mode (which is of interest to us) possesses the longest autocorrelation time and the largest statistical memory [38], which makes this example a tough test problem.

To compensate for the missing dynamics in (27), we substitute the irrelevant variables $\hat{u}_2$ and $f_1$ with their conditional expectations. The transition kernel is a non-Markovian conditional density in which the missing variable is a real or imaginary component of $\hat{u}_2$ or $f_1$, and the conditioning variables are the time-lagged histories of the corresponding component of $\hat{u}_1$, pairing real parts with real parts and imaginary parts with imaginary parts. For the force $f_1$, an additional Gaussian noise term is added to compensate for the residual space. Since the conditioning states are high-dimensional (when the memory lengths are large), the conditional expectations over these transition densities are constructed using the RKHS formulation with the POD basis as in (13).

Figure 6: (Color online) Long-time statistics and short-time predictions for the TBH model. Comparison of PDFs of (a) the real part and (b) the imaginary part of $\hat{u}_1$. Comparison of ACVs of (c) the real part and (d) the imaginary part of $\hat{u}_1$. Comparison of (e) RMSEs and (f) ANCRs. The closure models use the non-Markovian transition kernels described in the text.