SDE_Importance_Sampling
We study a class of importance sampling methods for stochastic differential equations (SDEs). A small-noise analysis is performed, and the results suggest that a simple symmetrization procedure can significantly improve the performance of our importance sampling schemes when the noise is not too large. We demonstrate that this is indeed the case for a number of linear and nonlinear examples. Potential applications, e.g., data assimilation, are discussed.
Consider a stochastic differential equation (SDE)
(1.1) dX_t = f(X_t) dt + σ(X_t) dW_t,
where X_t takes values in R^d and W_t is a d-dimensional Brownian motion. Suppose we make noisy observations of the system at times t_n = nT (n = 1, 2, …, with T fixed), obtaining a sequence of measurements y_n = h(X_{t_n}) + η_n, where h is the quantity being measured (the “observable”) and the η_n are independent identically-distributed (IID) random variables modeling measurement errors. What is the conditional distribution of X_{t_n} given y_1, …, y_n? This problem of “nonlinear filtering” or “data assimilation” arises in many applications; see, e.g.,
[7, 8, 5, 27]. A variety of algorithms have been developed to address it, but efficient data assimilation, especially in high-dimensional non-gaussian problems, remains a challenge [25]. This paper concerns an approach to data assimilation known as “particle filtering” (see, e.g., [8] for more details) based on sampling
the conditional distributions. We present an asymptotic analysis of certain sampling algorithms designed to improve the efficiency of particle filtering, and, based on this analysis, we propose a general way to improve their performance. The analysis relies on taking a small-noise limit, but the algorithms do not require a small noise to operate (though they may be less efficient when the noise is not small). We focus on
one step of the filtering problem, i.e., we set n = 1 in the above, as this is sufficient to capture the computational difficulty we wish to address. For simplicity, we assume σ(x) = σ I, where σ is a scalar and I is the identity matrix; we also assume the observable h is scalar. These assumptions can be relaxed if needed. To take one step of particle filtering, one begins by discretizing Eq. (1.1) using, e.g., the Euler scheme, to obtain
(1.2) x_{n+1} = x_n + f(x_n) Δt + σ √Δt ξ_{n+1},
where Δt is the time step and the ξ_n are IID standard normal random variables. A straightforward application of Bayes’s Theorem tells us that the conditional distribution of interest satisfies
(1.3) p(x_1, …, x_N | y) ∝ exp( −Σ_{n=0}^{N−1} |x_{n+1} − x_n − f(x_n)Δt|² / (2σ²Δt) − (y − h(x_N))² / (2s²) ),
One then tries to design a Monte Carlo algorithm to generate discrete-time sample paths from Eq. (1.3), conditioned on the observation y. We refer to the distribution in Eq. (1.3) as the target distribution. It is the discrete-time analog of the conditional distributions introduced above, with a single observation.
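As a concrete sketch, the Euler recursion of Eq. (1.2) can be simulated as follows. The drift, noise level, dimensions, and parameter values here are illustrative placeholders, not any specific model from this paper.

```python
import numpy as np

def euler_paths(f, x0, sigma, dt, n_steps, n_paths, rng):
    """Simulate sample paths of x_{n+1} = x_n + f(x_n)*dt + sigma*sqrt(dt)*xi_n,
    with xi_n IID standard normal vectors, as in Eq. (1.2)."""
    d = np.atleast_1d(x0).shape[0]
    x = np.tile(np.atleast_1d(x0).astype(float), (n_paths, 1))
    path = [x.copy()]
    for _ in range(n_steps):
        xi = rng.standard_normal((n_paths, d))      # IID N(0, I) increments
        x = x + f(x) * dt + sigma * np.sqrt(dt) * xi
        path.append(x.copy())
    return np.stack(path, axis=1)                    # (n_paths, n_steps+1, d)

# Example: an Ornstein-Uhlenbeck-type drift f(x) = -x (illustrative only).
rng = np.random.default_rng(0)
paths = euler_paths(lambda x: -x, np.array([1.0]), sigma=0.5,
                    dt=0.01, n_steps=100, n_paths=2000, rng=rng)
```

Running the recursion forward like this samples the discretized path distribution; the difficulty addressed below is sampling it conditioned on an observation.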
Without the last term in the exponent in Eq. (1.3), the target distribution is just the distribution of the discretized SDE, and one can generate sample paths by carrying out the recursion in Eq. (1.2). When the last term is included, however, it is generally not feasible to sample directly from the target distribution. A solution to this problem is importance sampling: instead of drawing samples from the target distribution, we draw sample paths from an approximation q, usually called the “proposal distribution”. Any statistics we compute based on sample paths from q will be biased. We compensate for this bias by associating a weight w^{(i)} to the i-th sample path x^{(i)}, with w^{(i)} proportional to the ratio of target to proposal density, so that the weighted sample paths again have the correct statistics (in a sense we make precise later).
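The mechanics of weighted (self-normalized) importance sampling can be sketched on a toy one-dimensional problem; the gaussian target and proposal below are stand-ins, not the path distributions of this section.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalized log-densities: target p = N(1, 1), proposal q = N(0, 4).
def log_p(x):
    return -0.5 * (x - 1.0) ** 2

def log_q(x):
    return -0.5 * (x / 2.0) ** 2

x = 2.0 * rng.standard_normal(200_000)       # samples from the proposal q
logw = log_p(x) - log_q(x)                   # log-weights, up to a constant
w = np.exp(logw - logw.max())                # constant cancels on normalization

# Self-normalized estimate of E_p[X]; the weights correct the proposal bias.
est = np.sum(w * x) / np.sum(w)
```

The unknown normalization constants of p and q drop out of the ratio, which is why the algorithms below only need the target up to a constant.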
Weare and Vanden-Eijnden [28, 29] proposed an algorithm for sampling distributions like Eq. (1.3). They showed that their algorithm is efficient in the sense that, in the limit of small dynamical and observation noise, the relative variance of the weights vanishes (see
[29] for precise definitions and statements). The basic idea of the sampler is to look for the most likely sample path of the target distribution (1.3) and use this information to modify the dynamics so that samples from the proposal remain close to the target distribution. In this paper, by a combination of formal asymptotic analysis and numerical examples, we show that a symmetrization procedure proposed in [17] can be applied to SDEs to improve the efficiency of importance samplers. The symmetrization and “small noise analysis” have also been discussed in the context of implicit sampling [6, 23]; see [17]. While our primary motivation here is data assimilation for SDEs, our symmetrization procedure may be effective for sequential Monte Carlo sampling of more general types of systems. As well, the class of importance sampling algorithms studied here is closely related to algorithms proposed in [12, 13, 10, 9, 11] and in [28] for sampling “rare events” in SDEs, though there are some significant differences between the two applications. We plan to explore some of these connections in future work.
The remainder of this paper is organized as follows. We state our main results in Section 2. Section 3 briefly reviews the linear map method and its symmetrization, as well as the small noise theory (see [17]). We explain a new sampling method, the dynamic linear map, in Section 4. We study its efficiency in the small noise regime and show how to use symmetrization to improve its efficiency in small noise problems. Several numerical examples are provided in Section 5 that illustrate our asymptotic results as well as the efficiency of our dynamic approach in multimodal problems. The continuous time limit of the dynamic linear map is discussed in Section 6 and we present conclusions in Section 7.
We now formulate the problem more precisely and summarize our key findings. We consider a discretized SDE in the small noise regime
(2.1) x_{n+1} = F(x_n) + √(ε Δt) ξ_{n+1},
where F corresponds to a numerical discretization of the drift f (for most of this paper, we assume the Euler discretization F(x) = x + f(x) Δt), and ε > 0 is the “small noise parameter”. Throughout this paper we assume that the d-dimensional vector field f is smooth, and that the process starts at a given initial position x_0 and proceeds for N time steps of size Δt each. The transitions are made with independent gaussian samples ξ_n. We denote the path as x, a sequence of positions (x_1, …, x_N), with its likelihood in the process given by the path distribution p(x). The observation of the state at time NΔt gives rise to the likelihood
(2.2) ℓ(x_N) = exp( −φ(x_N) / ε ),
where φ is assumed to be a smooth, nonnegative function. For example, for gaussian observations y = h(x_N) + η with η ∼ N(0, ε s²), we have φ(x) = (y − h(x))² / (2s²). Hereafter we will sometimes refer to φ as the “log-likelihood,” in a slight abuse of standard terminology. By Bayes’s Theorem, the target distribution then has the form
(2.3) π(x) ∝ p(x) exp( −φ(x_N) / ε ).
Importance sampling methods generate samples x^{(i)} using a proposal distribution q, and attach weights
(2.4) w^{(i)} = π(x^{(i)}) / q(x^{(i)})
to each sample, so that the weighted samples can be used to compute unbiased statistical estimates with respect to the target distribution. To measure the efficiency of the sampling methods, we evaluate the relative variance of the weights
(2.5) R = Var(w) / (E[w])².
Here the expected values are computed with respect to the proposal distribution q. This relative variance is connected to a standard heuristic called the “effective sample size,” defined by
(2.6) N_e = N / (1 + R),
where N is the number of weighted samples (see, e.g., [4, 21, 8]). The effective sample size is meant to measure the size of an unweighted ensemble that is equivalent to the weighted ensemble of size N. All else being equal, the smaller the R, the more efficient the importance sampling algorithm, and if the samples were independent draws from the target, we would have R = 0 and N_e = N. The quantity R is convenient because it is not tied to any specific observable; recent work (see [1]) has also given it a more precise meaning. Other quantities that can assess effective sample sizes are discussed in [22]. We note that in practice, π and q are only known up to a constant. The algorithms we describe do not require knowing the normalization constants. Likewise, R is invariant under rescaling of π or q by a constant.
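In code, R and the effective sample size follow directly from a set of (possibly unnormalized) weights; the formula N_e = N/(1 + R) below is the standard heuristic referred to in the text.

```python
import numpy as np

def relative_variance(w):
    """R = Var(w) / E[w]^2; invariant under rescaling the weights."""
    w = np.asarray(w, dtype=float)
    return w.var() / w.mean() ** 2

def effective_sample_size(w):
    """N_e = N / (1 + R); equals N when all weights are equal."""
    return len(w) / (1.0 + relative_variance(w))

# Equal weights give R = 0 and N_e = N; one dominant weight collapses N_e.
r_equal = relative_variance([1.0, 1.0, 1.0, 1.0])         # 0.0
ess_skewed = effective_sample_size([1.0, 0.0, 0.0, 0.0])  # 1.0
```

Note that the value returned is unchanged if all weights are multiplied by a constant, consistent with the remark that π and q need only be known up to normalization.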
We study two types of importance sampling methods in this paper. The first method, called the “linear map” (LM), uses a gaussian proposal distribution centered at the most likely path. The second method, called the dynamic linear map (DLM), reapplies the linear map after each time step, given the previous moves. Note that the linear map can be viewed as a version of implicit sampling [6, 23] applied to the path distribution of an SDE. The dynamic linear map applies this implicit sampling step repeatedly to transition densities and is also closely linked to the continuous time control method of Weare and Vanden-Eijnden [28, 29] (see also Section 6). For each method, we perform a symmetrization and exploit symmetries of the proposal distributions to increase sampling efficiency. Symmetrization was previously studied for the LM in a more general context in [17]. Here we adapt this procedure to problems involving SDEs and to the dynamic linear map. Following the approach taken in [17], we show that under suitable assumptions (see Section 4), the relative variances of the various methods are as follows:
Method  scaling of R
Linear Map (LM)  O(ε)
Symmetrized LM (SLM)  O(ε²)
Dynamic LM (DLM)  O(ε)
Symmetrized DLM (SDLM)  O(ε²)
We also present examples showing that the leading coefficient of R for the DLM can be smaller than that of LM, suggesting that DLM may be more effective in some situations (see Section 5). We discuss the continuous time limit of LM and DLM for scalar SDEs, and calculate the leading coefficient of R in an asymptotic expansion in ε. In doing so, we show that, under additional assumptions, the sampling method discussed in [28] is recovered in the limit of the DLM (see Section 6).
The expansions we will consider are formally justified as the relevant quantities, e.g., relative weight variance, are gaussian integrals.
The insertion of the small noise parameter into the problem is mainly to enable asymptotic analysis. In specific problems, there is not always an identifiable small parameter, and in any case our methods do not require a small parameter to operate.
We simplify notation and consider the small noise target distribution defined in (2.3), which can be written as π(x) ∝ exp(−F(x)/ε), where
(3.1) 
for φ a scalar function as in (2.2). If we assume that F has a unique, nondegenerate minimum, and let
(3.2) 
i.e., x^* is the optimal path with prescribed initial condition x_0, we can employ Laplace asymptotics to expand the target distribution around x^*. (See, e.g., [24] for a general formulation of Laplace asymptotics.) After a change of variables
(3.3) 
the expansion is
(3.4) 
where H is the Hessian of F evaluated at x^*, and the remaining terms are the higher order terms in the Taylor series. Here and below, we use shorthand notation for quantities evaluated at the optimal path. Note that while we will continue to refer to the rescaled variable as a “path” after the change of coordinates, it is the original variable that solves Eq. (2.1).
The small noise analysis of LM, and of the other methods to follow, will make frequent use of this expansion, as well as of the “variance lemma” (see [17]).
(Variance Lemma) For a function w that can be expanded in ε at least to the terms
(3.5) 
the relative variance of w with respect to a probability density q is
(3.6) 
The proposal distribution of the linear map (LM) sampling method, summarized in Algorithm 1, is gaussian and proportional to
(3.7) 
The weights are the ratio of target and proposal distribution, and can be expanded as
(3.8) 
Using the variance lemma we thus find that
(3.9) 
i.e., the relative variance of the weights is linear in ε (see [17] for more details).
It is shown in [17] that the linear map can be “symmetrized” to improve the scaling of R from linear to quadratic in ε. This stems from the observation that the leading order term in the weight is an odd function of the underlying gaussian random variable, whose probability density is even. The symmetrized sampler uses a proposal distribution which reweights equally likely samples from the gaussian distribution of the linear map such that the resulting weights have even symmetry. The odd leading order terms in the weight expansions then cancel, which results in a quadratic scaling of R in ε. Specifically, the symmetrized linear map draws a sample x from the proposal distribution q. It returns x with probability α, and the equally likely reflected sample with probability 1 − α, where
(3.10) 
Samples generated in this way have a non-symmetric distribution, but even weights:
(3.11) 
The Taylor expansion of the symmetrized weight is
(3.12) 
which, together with the variance lemma, shows that
(3.13) 
The symmetrization therefore improves the linear scaling in ε of R for LM to a quadratic scaling for SLM (see [17] for more details).
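The symmetrization step can be sketched in one dimension as follows. The acceptance rule below (choose between a sample and its reflection about the proposal mean with probability proportional to their weights, then report the averaged weight) is our reading of the procedure; the precise form of Eq. (3.10) may differ in detail.

```python
import numpy as np

def symmetrized_draw(mu, sigma, log_p, rng):
    """One draw from a symmetrized N(mu, sigma^2) proposal (1-d sketch).

    x and its reflection 2*mu - x are equally likely under the proposal; we
    pick one with probability proportional to its importance weight and return
    the even weight (w(x) + w(2*mu - x))/2, so the odd leading-order term in
    the weight expansion cancels."""
    def w(y):                                   # target/proposal ratio
        return np.exp(log_p(y) + 0.5 * ((y - mu) / sigma) ** 2)
    x = mu + sigma * rng.standard_normal()
    x_ref = 2.0 * mu - x                        # equally likely reflection
    wx, wr = w(x), w(x_ref)
    y = x if rng.uniform() < wx / (wx + wr) else x_ref
    return y, 0.5 * (wx + wr)                   # symmetrized (even) weight

# Sanity check: if the target equals the proposal, every weight is exactly 1.
rng = np.random.default_rng(2)
draws = [symmetrized_draw(0.0, 1.0, lambda y: -0.5 * y**2, rng)
         for _ in range(1000)]
weights = np.array([wt for _, wt in draws])
```

The returned weight is unchanged if x and its reflection are swapped, which is the even symmetry the analysis exploits.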
The linear map can be efficient when the hypotheses underlying its derivation are satisfied, i.e., when the path-space distribution is unimodal and a gaussian approximation is appropriate. However, when there are multiple modes, LM can become inefficient. To see how this might happen, consider the simple random walk
(4.1) x_{n+1} = x_n + √(ε Δt) ξ_{n+1},
i.e., x_n = √ε W_{nΔt}, where W is a standard Wiener process. Suppose we have a bimodal likelihood function whose graph is as shown in Figure 1; this type of situation can arise when multiple states can give the same measurement, so that observations may have ambiguous interpretations. In this case, the high probability paths will be those that reach the vicinity of one of the two bumps at the final step; effectively, the high probability paths are sample paths of Brownian motion, conditioned to be near a bump at the final time. The probability of this occurring by chance is exponentially small as ε → 0, and direct sampling is unlikely to ever produce such a path.
A straightforward calculation shows that the optimal path approaches a straight line in the (t, x) plane as ε → 0, going to the right bump if x_0 > 0 and to the left if x_0 < 0 (and undefined if x_0 = 0). With a bimodal likelihood function, the target distribution is bimodal as well. If the initial condition is sufficiently far to the right of 0, one of the two modes will dominate, and LM can be expected to be effective. As x_0 moves closer to 0, however, the other mode will begin to make a greater contribution; at x_0 = 0, the two modes carry exactly the same weight. But LM will always pick the mode on the right when x_0 > 0, no matter how close x_0 is to 0. So LM will produce essentially no sample paths going to the left, leading to a large weight variance. See Section 5 for detailed numerical results.
This is a well-known problem with importance sampling algorithms. Similar issues arise in rare event simulation, and a standard solution is to dynamically recompute the optimal path. See, e.g., the discussion of Siegmund’s algorithm in [2]. In our context, this leads to an algorithm we call the dynamic linear map, which is similar to the algorithms proposed in [28, 13]. We will also discuss symmetrization in this context.
Roughly speaking, the dynamic linear map (DLM) consists of computing the optimal path starting from the current state x_n, taking one step of the resulting gaussian proposal (generating x_{n+1}), then repeating. See Algorithm 2 for details. The DLM thus requires redoing LM at every step, and is therefore more expensive.[1] However, it can avoid some of the issues arising from multimodal target distributions. One can see this heuristically in the above example (Section 4.1): suppose we start with x_0 slightly to the right of 0, so that the optimal path goes to the right bump. After a few steps, we may end up in a state closer to the left bump. At this point, the DLM would start steering the sample path towards the left bump. Unlike LM, repeated sampling using DLM would yield sample paths that end at both the left and the right bumps (see Section 5.1).
[1] LM requires a single optimization over the whole path, while DLM requires a fresh optimization at each of the N steps; all else being equal, DLM therefore costs roughly N times as much per sample path.
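A skeleton of the DLM recursion looks like the following; the three callables are problem-specific stand-ins for the optimization and Hessian computations of Algorithm 2, which we do not reproduce here.

```python
import numpy as np

def dynamic_linear_map(x0, n_steps, optimal_next, step_std, incr_logw, rng):
    """Sketch of DLM (1-d): at each step, re-optimize from the current state,
    take one gaussian step toward the new optimal path, and accumulate the
    incremental log-weight.

    optimal_next(x, k): first point of the optimal path started at x, step k
    step_std(x, k):     proposal standard deviation for that step
    incr_logw(x, y, k): incremental log-weight for the move x -> y
    """
    x, logw, path = float(x0), 0.0, [float(x0)]
    for k in range(n_steps):
        mu = optimal_next(x, k)                 # redo the optimization each step
        y = mu + step_std(x, k) * rng.standard_normal()
        logw += incr_logw(x, y, k)
        x = y
        path.append(x)
    return np.array(path), logw

# Degenerate example: no steering and unit steps reduce DLM to a random walk.
rng = np.random.default_rng(3)
path, logw = dynamic_linear_map(0.0, 10,
                                optimal_next=lambda x, k: x,
                                step_std=lambda x, k: 1.0,
                                incr_logw=lambda x, y, k: 0.0,
                                rng=rng)
```

The key structural point is that `optimal_next` is re-evaluated at every step, which is what lets the sampler change its mind about which mode to steer toward.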
To make use of DLM, we need an expression for the associated weights. This, in turn, requires an expression for the proposal distribution associated with DLM, which one can derive by first noting that in general, transition densities are marginals of the pathspace distribution:
(Here we abuse notation slightly and use p and q to denote both path-space distributions as well as their marginals.) The DLM transition density arises from making a gaussian approximation of the target distribution at each step, then taking its marginal. This leads to
(4.2)  
Here x^* is the optimal path from step n to step N, and we omit its dependence on the current state for readability of the equations; we also remind the reader that x^* is a path. We denote the Hessian of the cost evaluated at the optimal path by H. We view a path from step n to step N as a point in R^{(N−n)d}, arranged in blocks of d entries. Accordingly, the matrix H can be viewed as an element of R^{(N−n)d × (N−n)d} and can be subdivided into blocks of dimension d × d each. The matrix in Eq. (4.2) is the first d × d block of the inverse of the Hessian (after rescaling).
In Algorithm 2, going from step n to step n + 1 requires optimizing over the remaining variables in the path. This is done independently at every step and for every sample path. The weights for the proposal distribution of DLM can be calculated as described in Algorithm 2, or as the product of the incremental weights
(4.3) 
Relation to Hamilton-Jacobi equations and regularity of “value functions”. In the definitions above, it is assumed that the optimal path is well-defined for every state. This is actually not always the case. To see this, consider again the example from Section 4.1. If x_k = 0 at some step k, there are two optimal paths pointing in opposite directions. At this point, because there is not a single optimal path, the proposal is undefined. This behavior is actually rather common, and not at all confined to the Brownian motion example. It is closely connected with the regularity of solutions of a partial differential equation of Hamilton-Jacobi (HJ) type. As we do not make use of the theory of HJ equations in this paper, we do not go into details here. Instead, we provide a brief summary below, and refer interested readers to, e.g., [29] or [12, 13, 10, 9], for more information. In the DLM method, the optimal path minimizes a version of the function F in Eq. (3.1), but starting with state x at time t rather than always at time 0. In the limit as Δt → 0, the value function U(x, t) achieved with initial condition x at time t solves a HJ equation whose Hamiltonian H(x, p) = f(x)·p + |p|²/2 is the Legendre transformation of the Freidlin-Wentzell Lagrangian [14]. For the HJ equation to be well-posed, one prescribes the final condition U(x, T) = φ(x), where φ is the log-likelihood in Eq. (2.2) and T = NΔt. The HJ equation is then solved backwards in time. The time derivative of the optimal path starting at position x and time t is determined by the gradient of U where U is differentiable. At locations where there are multiple optimal paths, the value function is generally continuous but not differentiable. At such singular points, ∇U has jump discontinuities (as x varies) and the optimal path is therefore undefined.
Though very much relevant to the efficacy of the type of methods discussed in this paper, the analysis of singularities of HJ equations can be highly nontrivial. As our main goal is to assess whether some version of the symmetrization procedure proposed in [17] can be extended to SDEs, we have opted to focus on the simplest possible setting, leaving more general analysis to future work. For the remainder of the paper, we make the following standing assumption:
The optimal path is defined everywhere, and is as smooth as needed.
The analytical results described below should therefore be interpreted as a best-case scenario. We also note that while the numerical algorithm is unlikely to produce a state exactly in the set of singular points in actual practice, the presence of singularities does mean that the performance of the algorithm may be worse than predicted by our analysis. We have therefore designed our numerical examples to test the extent to which the algorithms behave as predicted even when the value function is not differentiable everywhere.
To find the scaling of the relative variance of the weights of DLM with the small noise parameter ε, we apply the same change of variables as in Eq. (3.3) to each transition density and expand the incremental weights as
(4.4) 
where
(4.5)  
(4.6) 
noting that Eq. (4.4) relies strongly on our standing assumption that the value function is differentiable. Since the weight of a sample is the product of the incremental weights, we have
where
(4.7) 
The scaling of R in ε now follows from the variance lemma:
(4.8) 
Thus, the relative variance of DLM scales linearly in ε, the same asymptotic scaling as LM. However, we will show in numerical examples below that the dynamic approach can be more effective in practice than LM, especially when the target distribution has multiple modes.
The leading order term in the weight for DLM has an odd symmetry, just like that of LM, and a symmetrization procedure can be applied to DLM to improve the scaling of R in ε. The reason is that, at each time step, x_{n+1} is generated from the previous state x_n and a new gaussian sample ξ_{n+1}. While this procedure leads to a proposal distribution that is not necessarily even, the paths are constructed incrementally from gaussian samples whose distributions are even.
More specifically, the recursive composition forms a map from the Nd-dimensional gaussian sample ξ to the path x, and for every sampled path, the path built from −ξ is equally likely. Following the algorithm described in Algorithm 3, we sample one of this pair with complementary probabilities proportional to their weights; the resulting proposal is a “symmetrized” distribution with even weights (see Eq. (3.11)).
The symmetrized weights can be written in terms of the map as
(4.9) 
Recall the expansion of the weights in (4.4), and note that
since the most likely path can be written in terms of the map as .
If the optimal path is unique (at each time step), the weight can be expanded around the most likely path as
(4.10)  
(4.11) 
We thus have that
(4.12)  
(4.13) 
which results in the cancellation of the leading order term of the symmetrized weight
(4.14) 
Applying the variance lemma completes the proof of the quadratic scaling of R in ε:
(4.15) 
We now examine a number of concrete examples, both to illustrate the scaling of the proposed algorithms and to test their limitations. The source code for all examples in this section can be found at https://github.com/AndrewLeach/SDE_Importance_Sampling .
We begin with the Brownian motion example from Section 4.1:
(5.1) 
with initial condition x_0 and with likelihood exp(−φ(x_N)/ε) for two different choices of φ. We first consider the case of a unimodal target distribution for which the assumptions made during the small noise analysis are satisfied. We then violate the assumption of a unique optimal path to indicate limitations of DLM and of our small noise analysis. For the examples below, the time step is held fixed, and the observation is collected at the final step N. Computing the optimal paths is straightforward to do analytically, and we use the analytic formulas in our implementation of the various samplers.
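To see why plain (unweighted-dynamics) sampling struggles in this regime, one can weight direct random-walk samples by a final-time gaussian likelihood and watch the effective sample size collapse as ε shrinks. The parameter values below are illustrative, not those used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(4)

# Direct sampling of the random walk x_{n+1} = x_n + sqrt(eps*dt)*xi, weighted
# by a gaussian likelihood exp(-(x_N - y)^2 / (2*eps)) at the final step.
dt, n_steps, N, y_obs = 0.05, 20, 20_000, 1.0
ess_fraction = []
for eps in (1.0, 0.1, 0.01):
    incr = rng.standard_normal((N, n_steps))
    x_N = np.sqrt(eps * dt) * incr.sum(axis=1)        # paths start at x_0 = 0
    logw = -(x_N - y_obs) ** 2 / (2.0 * eps)
    w = np.exp(logw - logw.max())
    ess_fraction.append(w.sum() ** 2 / (w ** 2).sum() / N)
# ess_fraction drops sharply as eps -> 0: almost no direct samples land near
# the observation, which is what the importance samplers are designed to fix.
```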
We first consider a likelihood defined by
The likelihood is asymmetric in x and leads to a non-gaussian, unimodal target distribution. In this example, the assumptions made in our small noise analysis are satisfied.
We apply LM, SLM, DLM, and SDLM to sample the target distribution over a wide range of ε, and compute the relative variance R for each of these methods. For each ε and each method (LM, SLM, DLM and SDLM), we draw samples. The results are shown in Figure 2.
As can be seen, the results show the predicted scalings of R over a wide range of ε for all four methods: both LM and DLM are O(ε), while SLM and SDLM are both O(ε²). Perhaps this is no surprise, as all assumptions behind the small noise theory are valid in this example. We also see that the dynamic methods (DLM and SDLM) have smaller relative variance at each value of ε, though they also cost more per sample.
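Scalings like those in Figure 2 can be checked numerically by fitting the slope of log R against log ε; the R values in this snippet are synthetic, for illustration only.

```python
import numpy as np

# Least-squares slope of log(R) vs. log(eps) estimates the order s in R = O(eps^s).
eps = np.array([1e-3, 1e-2, 1e-1])
R_lm = 2.0 * eps            # a method with linear scaling, like LM or DLM
R_slm = 0.5 * eps ** 2      # a method with quadratic scaling, like SLM or SDLM
slope_lm = np.polyfit(np.log(eps), np.log(R_lm), 1)[0]
slope_slm = np.polyfit(np.log(eps), np.log(R_slm), 1)[0]
```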
Next, we examine
As explained in Section 4.1, this leads to a bimodal target distribution. We fix ε, and leave all other parameters as above. We apply LM and DLM to compute the final-time distribution of x_N, using weighted samples. The results are shown in Figure 3, along with the target distribution.



Figure 3. Final-time distributions: (a) LM; (b) DLM.
As expected, LM essentially ignores one of the two modes, while DLM captures both modes. As explained before, even though both samplers should reproduce the target distribution in the large-sample-size limit, in practice LM produces almost no sample paths that go to the left bump. In contrast, DLM readily generates sample paths ending at both bumps, leading to a more effective sampling of the target distribution. We have experimented with increasing the sample size for LM, but even the largest sample sizes we considered did not lead to weighted samples that represent both modes.
Finally, note that empirical estimates of R are insufficient to detect this problem: even though the true value of R for LM should be quite large in this case, empirical estimates of R for LM are actually quite small, because none of the sample paths go to the left bump. Indeed, for Figure 3, the empirical R for LM is deceptively small. The example thus shows that for non-gaussian and possibly multimodal distributions, DLM can be more reliable despite the same asymptotic scaling of R.
The scaling arguments for DLM and its symmetrized version rely on the assumption that the most likely path is unique at every time step. We now consider an example for the DLM in which we deliberately violate this assumption. The model is
(5.2) 
This is the Euler discretization of an overdamped Langevin equation. We use the log-likelihood
As in the previous example, the optimal path goes to the right bump when the state lies on one side of a critical point, and to the left bump when it lies on the other; at the critical point there is no unique optimal path.
The linear drift makes it likely that DLM sample paths encounter the critical set, and the small noise results may not hold in this case. To illustrate the behavior and efficiency of the methods in this situation, we perform experiments with varying values of ε and x_0. Specifically, for a fixed ε, we take time steps with DLM, starting from a range of initial conditions. We compute the average number of crossings for each experiment. Figure 4 shows the results as well as the computed values of R.



Figure 4. (a) relative variance R; (b) average number of crossings.
As can be seen in Figure 4(a), the predicted asymptotic scaling of R only emerges for small ε; the critical value of ε at which the curve crosses over into the asymptotic regime decreases as the initial condition approaches the critical point, making crossings more likely. Comparing Figures 4(a) and 4(b), we see that the asymptotic regime corresponds to values of ε small enough that the average number of crossings per sample is near zero. Closer examination of the data suggests that this critical ε scales roughly linearly with the distance of the initial condition to the critical point. The example thus suggests that the efficiency of DLM may suffer if one encounters non-unique optimal paths while constructing the proposal distribution sequentially, but the predicted scaling again holds if ε is small enough.
Finally, we note that even in the preasymptotic regime, the values of R are such that the effective sample size still represents a significant improvement over direct sampling.
Our second example is a stochastic version of an idealized geomagnetic pole reversal model due to Gissinger [16]:
(5.3) 
(In this section, x_i refers to the i-th component of a vector x.) The corresponding system of ordinary differential equations has 3 unstable fixed points. It has a chaotic attractor on which trajectories circulate around either of two of these fixed points many times before making a quick transition to the other fixed point. See Figure 5. Following [16], we refer to these transitions as “pole reversals,” since the second component can be thought of as a proxy for the geomagnetic dipole field, and it changes sign at these transitions. Here, we consider Eq. (5.3) in the small noise regime. We start with an initial condition near one of these two fixed points, and after a number of steps make an observation with gaussian log-likelihood φ centered at a measured state y. We view y as the outcome of a “measurement” made at the observation step.
We consider two cases:
The measured value y is near the other fixed point, i.e., on the opposite “lobe” from the initial condition;
y is near the same fixed point, i.e., on the same “lobe” as the initial condition.
Figure 5 illustrates the initial conditions, data, and optimal paths for the two cases. Shown are trajectories of the deterministic model (light gray), representing the chaotic attractor. The dashed line is the most likely path connecting its marked initial condition to the marked measured state; this trajectory undergoes a “pole reversal” (Case (a)). The solid blue line represents the most likely path for the second pair of initial condition and observation, and does not exhibit a pole reversal (Case (b)).
To see how the two cases differ, we fix ε and apply the LM and DLM to generate sample paths in each case, and plot marginals of the proposal distributions at two different times. In Case (a), we plot histograms of the marginal distributions at the time marked in Figure 5; in Case (b), we do the same at a second marked time. For each method, the resulting “triangle plot” consists of histograms of the one-dimensional marginals and of the two-dimensional marginals of the proposal distributions. The triangle plots are shown in Figure 6. In each panel, the diagonal plots are the one-dimensional marginal distributions. The lower-triangular parts of each panel are the two-dimensional marginal distributions generated by LM, while the upper-triangular parts show marginals generated by DLM.



Figure 6. Left: Case (a); right: Case (b).
In Case (a), the marginal distributions of the DLM proposal are multimodal, possibly related to the underlying geometry of the strange attractor. In contrast, the LM proposal distribution misses this complexity altogether (as one might expect). Moving now to Case (b), which involves starting and end points on the same lobe connected by a shorter optimal path, the marginals are unimodal, and LM and DLM give more similar answers (though there is still significant deviation from gaussianity in the DLM proposal distribution).
Finally, we vary ε in Cases (a) and (b) and apply LM, SLM, DLM and SDLM. For each value of ε, we estimate R for each of the 4 methods. The results are shown in Figure 7. Not surprisingly, LM breaks down for Case (a), in which the target distribution is likely multimodal. In contrast, both DLM and SDLM exhibit the predicted scaling. For Case (b), because the target distribution is unimodal, all four methods behave as predicted by the small noise theory.



Figure 7. Left: Case (a); right: Case (b).
The Gissinger model requires some care in its numerical implementation when we compute its statistics. We describe our implementation in detail below.
Timestepping. The Euler scheme for the Gissinger model requires small time steps because of numerical instabilities. To improve stability, we discretize the drift part of Eq. (5.3) using a standard 4th-order Runge-Kutta (RK4) method, then add IID normal random vectors at each step. This yields a model of the form (2.1), where F now represents one step of the RK4 scheme. In all the examples shown above, the time step is held fixed.
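The splitting just described (deterministic RK4 step for the drift, then an additive gaussian increment) can be sketched as follows; the `noise_amp` argument stands in for √(εΔt) in Eq. (2.1), and the drift `f` is user-supplied rather than the Gissinger vector field itself.

```python
import numpy as np

def rk4_noise_step(f, x, dt, noise_amp, rng):
    """One step of the scheme used for the Gissinger model: classical RK4 for
    the drift, then an IID gaussian increment (noise_amp plays the role of
    sqrt(eps*dt))."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    x_det = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x_det + noise_amp * rng.standard_normal(np.shape(x))

# With zero noise this is plain RK4: one step of x' = -x from x = 1.
rng = np.random.default_rng(5)
x1 = rk4_noise_step(lambda x: -x, np.array([1.0]), dt=0.01, noise_amp=0.0,
                    rng=rng)
```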
Estimation of R. In Figure 7, because of their different variances, we use different numbers of sample paths to estimate R for DLM and SDLM than for LM and SLM.
Computing optimal paths. Our methods require computing optimal paths. For the Gissinger model, we use Newton’s method. Since explicit analytical expressions for the gradient and the Hessian are available, this is relatively straightforward to program. To reduce the (fairly significant) computational cost of re-optimizing at each time step, we “guess” a good initialization for the optimization procedure by propagating the solution from the previous time step with the linearized dynamics. See [20] for details.
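A bare-bones Newton iteration for this kind of path optimization might look like the following; the analytic gradient and Hessian callables are placeholders for the expressions used for the Gissinger model, and no globalization (line search, trust region) is included.

```python
import numpy as np

def newton_minimize(grad, hess, z0, tol=1e-10, max_iter=50):
    """Newton's method for minimizing a smooth cost: solve hess(z) dz = -grad(z)
    and update until the step is small.  A good initial guess z0 (e.g., the
    propagated solution from the previous time step) matters in practice."""
    z = np.atleast_1d(np.asarray(z0, dtype=float))
    for _ in range(max_iter):
        step = np.linalg.solve(np.atleast_2d(hess(z)), -np.atleast_1d(grad(z)))
        z = z + step
        if np.linalg.norm(step) < tol:
            break
    return z

# Quadratic test cost F(z) = 0.5*(z - 3)^2: one Newton step lands on z = 3.
z_star = newton_minimize(lambda z: z - 3.0, lambda z: np.array([[1.0]]), z0=0.0)
```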
So far, we have focused on time discretizations of SDEs. A natural question is what happens to the proposed algorithms in the limit Δt → 0. In this section, we sketch some analytical arguments aimed at addressing this question for scalar SDEs. Though restrictive, we believe these results yield useful insights. A more complete and rigorous analysis is left for future work, as it is expected to be more involved.
For scalar SDEs, the DLM can be defined through the recursion
(6.1) 
where x^* is the optimal path (3.2) with prescribed initial condition given by the current state, H is the relevant entry of the Hessian of the cost (see Eqs. (3.1) and (4.2)), and the ξ_n are independent standard normal random variables. Keeping this in mind, the above can be written as
(6.2) 
Our goal in this subsection is to sketch an argument suggesting that as , solutions of (6.2) converge weakly [19] to those of
(6.3) 
with appropriate initial condition. Since we consider “continuous-time” and “discrete-time” cases, we mark the discrete-time case by a superscript (in this section, the function F in Eq. (3.1) carries such a superscript). In Eq. (6.3), the optimal path minimizes the action functional [14]
(6.4) 
This is the continuoustime analog of Eq. (3.1).
Eq. (6.3) was derived in [28] as the proposal for an importance sampling algorithm. This was later used in [29] for data assimilation in the small-noise regime. We assume minimizers of the action functional are twice-differentiable in the time parameter and satisfy the Euler-Lagrange equations; this can be justified via standard results from the calculus of variations (see, e.g., Section 3.1 of [15]). In what follows, we also assume that the action functional has a single global minimum for all initial positions and initial times. This unique-optimal-paths assumption (the continuous-time analog of the unimodality of F) implies that the optimal path is defined everywhere. Without unique optimal paths, any analysis will require more care; see, e.g., [28] and references therein for a discussion of these and related issues. The assumption is natural for linear systems with unimodal likelihood functions, and may hold (approximately) in nonlinear systems when the noise is small.
We now sketch our argument. We begin by recalling that a numerical approximation of an SDE converges weakly with weak order if for all test functions with at most polynomial growth,
(6.5) 
as . By standard results in the numerical analysis of SDEs, weak convergence is implied by “weak consistency” plus some mild polynomial growth conditions; see, e.g., Section 14.5 in [19] for details.
In the present context, consistency means that the drift and diffusion factors in Eq. (6.2) approximate the corresponding factors in Eq. (6.3). We now prove these claims.
Under the unique optimal path assumption, we have
(6.6) 
for all x and t, and
(6.7) 
We begin by proving that the continuous and discrete optimal paths satisfy the first variational equations for the action functional and for F, respectively (see Eqs. (6.4) and (3.1)). Without loss of generality, set the initial time to 0 and fix the initial position x_0. Then the first variational equation of the action functional is the boundary value problem
(6.8)  
(6.9)  
(6.10) 
and the first variational equation for is