A defensive marginal particle filtering method for data assimilation

Particle filtering (PF) is often used to estimate the states of dynamical systems in a Bayesian framework. A major limitation of the standard PF method is that the dimensionality of the state space grows as time proceeds, which may eventually cause degeneracy of the algorithm. A possible way to alleviate the degeneracy issue is to compute the marginal posterior distribution at each time step, which leads to the so-called marginal PF method. In this work we propose a defensive marginal PF algorithm that constructs a sampling distribution in the marginal space by combining the standard PF and the ensemble Kalman filter (EnKF) methods. With numerical examples we demonstrate that the proposed method has competitive performance against many existing algorithms.


1 Introduction

Assimilation of data into mathematical models is an essential task in many fields of science and engineering, ranging from meteorology ghil1991data to robotics thrun . Simply speaking, data assimilation seeks the optimal prediction based on both the output of the mathematical model, which is only an approximation of the real-world system, and observations that are subject to measurement noise. Many commonly used data assimilation methods, most notably the Kalman filter bishop2001introduction ; musoff2009fundamentals , are based on linear control theory and optimization, and their application to nonlinear systems is usually challenging: they often require linearization or approximation procedures, e.g. the extended Kalman filter jazwinski or the ensemble Kalman filter evensen2009data , and sometimes they can even fail kuzn ; salman when strong nonlinearity is present.

On the other hand, the sequential Monte Carlo (SMC) method (see e.g. SMC ), also known as particle filtering (PF), can deal with strongly nonlinear models without any linearization or approximation. The basic idea of PF is the following. Suppose that the mathematical model is a nonlinear stochastic dynamical system, and our goal is to estimate the hidden states of the system from noisy partial observations of the system. This can be done with the so-called Bayes filter (also known as the optimal filter), where the posterior probability density function (PDF) of the hidden states is estimated recursively via Bayes' rule doucet . As the posterior distribution usually does not admit an analytical form, the PF method approximates the posterior distribution with Monte Carlo sampling (hence the name SMC). That is, the PF method employs a number of independent random realizations, called particles, sampled directly from the state space to represent the posterior probability: namely, at each time $t$, the method first generates particles and then updates the weight of each particle according to the observation $y_t$. For further discussions on the PF method and its applications, we refer to SMC ; arulampalam2002tutorial ; doucet2009tutorial ; cappe2007overview and the references therein.

The PF method in its very basic form can be understood as drawing weighted samples from the joint distribution $\pi(x_{0:t}\mid y_{0:t})$ using the importance sampling (IS) technique. When $t$ is large, the method thus performs IS simulations in a high-dimensional state space, which may result in degeneracy of the particles (the IS weights becoming zero for all but one particle) doucet2009tutorial . On the other hand, in practice one is often only interested in the marginal distribution $\pi(x_t\mid y_{0:t})$, which implies that it is unnecessary to sample the high-dimensional joint distribution. Instead, one can perform IS only in the marginal space of $x_t$, and based on this idea a method called the marginal particle filter (MPF) was proposed in klaas2005toward to alleviate the degeneracy issue. The method has later found applications in the estimation of the filter derivative poyiadjis2005particle and in robot localization martinez2007analysis . A key step in this kind of method is to construct an IS distribution that well approximates the marginal posterior at each time step. To this end, the MPF method in klaas2005toward requires a kernel density estimation of the marginal posterior from the weighted samples at each time step, which can be a rather challenging task when the number of particles is limited. The main purpose of this work is to provide an alternative approach to constructing the IS distribution in the marginal space. In particular, we consider the special case where the observation operator is linear and the observation noise is Gaussian, and we propose a robust and efficient method to compute the marginal IS distribution based on the ensemble Kalman filter (EnKF). A limitation of the EnKF-based IS distribution is that (just like the EnKF method itself) it may perform poorly when the posteriors are strongly non-Gaussian. To address the issue, we introduce a defensive scheme that constructs the IS distribution by combining the EnKF and the standard PF methods, and with numerical examples we demonstrate that the new method performs well even when the posteriors deviate significantly from a Gaussian distribution.

The rest of the paper is arranged as follows. In Section 2, we first introduce the basic setup of the filtering problem for dynamical models and then discuss the standard PF and EnKF methods for solving this type of problem. In Section 3, we present our defensive MPF method in detail. Numerical examples are provided in Section 4 to compare the performance of the proposed method with that of existing ones, and finally Section 5 offers some concluding remarks.

2 The PF and the EnKF methods

We give a brief overview of the formulation of the PF and the EnKF methods in this section.

2.1 State-Space Models

We consider the filtering problem in a dynamic state-space form:

(2.1a)  $x_t = M(x_{t-1}) + \alpha_t,$
(2.1b)  $y_t = H(x_t) + \beta_t,$

where $x_t$ denotes the state vector at time $t$, and $y_t$ represents the observed data at time $t$. In addition, $\alpha_t$ and $\beta_t$ are the propagation and the observation noise respectively, $M$ is the propagation model, and $H$ is the observation operator. In a filtering problem, the observation $y_t$ arrives sequentially in time and the goal is to estimate the true state $x_t$, based on the prediction by (2.1a) and the measurement (2.1b). Finally we emphasize here that the dynamic model (2.1a) is Markovian, in that any future state $x_{t+1}$ is independent of the past given the present state $x_t$:

(2.2)  $\pi(x_{t+1}\mid x_{0:t}) = \pi(x_{t+1}\mid x_t),$

which will be used in the derivation of the SMC method.

2.2 The particle filter

In general, we can formulate the filtering problem in a Bayesian inference framework: i.e., we try to infer the states $x_{0:t}$ from the data $y_{0:t}$ for some positive integer $t$, and ideally we can compute the posterior distribution using Bayes' formula:

$\pi(x_{0:t}\mid y_{0:t}) = \frac{\pi(y_{0:t}\mid x_{0:t})\,\pi(x_{0:t})}{\pi(y_{0:t})}.$

As is mentioned earlier, the posterior distribution usually does not admit an analytical form, and the sequential Monte Carlo method can be used to address the issue. Simply speaking, SMC allows one to generate (weighted) samples, called particles, from the posterior distribution $\pi(x_{0:t}\mid y_{0:t})$, which can be used to evaluate any quantity of interest associated with the posterior.

We now give a brief overview of the SMC method, and it is easiest to start with a standard MC estimation. Suppose that there is a real-valued function $g(x)$ and we are interested in the expectation

$\mathbb{E}_{\pi}[g] = \int g(x)\,\pi(x)\,dx,$

which can be estimated with a MC estimator:

$\hat g = \frac{1}{N}\sum_{j=1}^{N} g(x^j),$

where $\{x^j\}_{j=1}^N$ are samples drawn from $\pi$. It should be clear that the MC estimator $\hat g$ is an unbiased estimator of $\mathbb{E}_{\pi}[g]$. In many practical problems, drawing samples from the target distribution $\pi$ can be a challenging task, and in this case we can use the technique of importance sampling (IS). The IS method introduces an importance distribution $q$ and rewrites

$\mathbb{E}_{\pi}[g] = \int g(x)\,w(x)\,q(x)\,dx,$

where $w(x) = \pi(x)/q(x)$ is the IS weight. This yields directly an IS estimator of $\mathbb{E}_{\pi}[g]$:

$\hat g_{\mathrm{IS}} = \frac{1}{N}\sum_{j=1}^{N} g(x^j)\,w(x^j),$

where the samples $\{x^j\}_{j=1}^N$ are drawn from the importance distribution $q$, and it can be verified that the IS estimator is also unbiased for $\mathbb{E}_{\pi}[g]$. In the filtering problem, IS requires drawing the joint samples $x_{0:t}$ from a joint importance distribution $q(x_{0:t}\mid y_{0:t})$. Using the Markovian property in Eq. (2.2), we can write the posterior distribution in the form of

(2.3)  $\pi(x_{0:t}\mid y_{0:t}) = \frac{1}{Z_t}\,\pi(x_0)\,\pi(y_0\mid x_0)\prod_{s=1}^{t}\pi(x_s\mid x_{s-1})\,\pi(y_s\mid x_s),$

where $Z_t$ is the normalization constant. Similarly, we can assume that the importance distribution is also given in such a sequential form:

$q(x_{0:t}\mid y_{0:t}) = q_0(x_0\mid y_0)\prod_{s=1}^{t} q_s(x_s\mid x_{0:s-1}, y_{0:s}),$

and the resulting IS weight function is

(2.4)  $w_0(x_0) = \frac{\pi(y_0\mid x_0)\,\pi(x_0)}{Z_0\,q_0(x_0\mid y_0)},$

and

(2.5)  $w_t(x_{0:t}) = w_{t-1}(x_{0:t-1})\,a_t(x_{0:t})$

for $t \geq 1$, where $a_t$ is the incremental weight function:

(2.6)  $a_t(x_{0:t}) = \frac{\pi(y_t\mid x_t)\,\pi(x_t\mid x_{t-1})}{q_t(x_t\mid x_{0:t-1}, y_{0:t})}.$

We note that, in the formulation above, we do not have knowledge of the normalization constant $Z_t$. In the implementation, however, we can simply set the normalization constant to one and renormalize the computed weights. Namely, suppose that we draw a set of samples $\{x_{0:t}^j\}_{j=1}^N$ from the IS distribution $q(x_{0:t}\mid y_{0:t})$, and we compute the weights $\{w_t^j\}_{j=1}^N$ of the samples using Eqs. (2.4)-(2.6) (taking $Z_t = 1$); we then renormalize the weights as

(2.7)  $\hat w_t^j = \frac{w_t^j}{\sum_{i=1}^{N} w_t^i}, \quad j = 1,\dots,N.$
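To make the renormalization concrete, here is a minimal self-normalized IS computation in Python; the unnormalized Gaussian target and the wider Gaussian proposal are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 100_000
q = norm(0.0, 2.0)                    # importance distribution q
x = q.rvs(size=N, random_state=rng)   # samples from q
w = np.exp(-0.5 * x**2) / q.pdf(x)    # unnormalized weights (Z set to 1)
w_hat = w / w.sum()                   # renormalization as in Eq. (2.7)
print(w_hat @ x**2)                   # estimates E[x^2] = 1 under N(0, 1)
```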

The SMC algorithm performs the procedure described above in a recursive manner:

  • At $t = 0$, sample $x_0^j \sim q_0(x_0\mid y_0)$ for $j = 1,\dots,N$, and compute $w_0^j$ using Eq. (2.4); renormalize the weights: $\hat w_0^j = w_0^j / \sum_{i=1}^{N} w_0^i$.

  • At $t \geq 1$:

    • prediction step: for each $j$, draw $x_t^j \sim q_t(x_t\mid x_{0:t-1}^j, y_{0:t})$;

    • updating step: for each $j$, compute the incremental weight $a_t^j$ from Eq. (2.6); update the weights $w_t^j = \hat w_{t-1}^j\, a_t^j$, and renormalize them as $\hat w_t^j = w_t^j / \sum_{i=1}^{N} w_t^i$.

In the standard SMC method, one simply takes $q_0(x_0\mid y_0) = \pi(x_0)$ and

$q_t(x_t\mid x_{0:t-1}, y_{0:t}) = \pi(x_t\mid x_{t-1}),$

and as a result, $w_0 \propto \pi(y_0\mid x_0)$, and the incremental weight function becomes

(2.8)  $a_t(x_{0:t}) = \pi(y_t\mid x_t).$

In the SMC algorithm, the variance of the importance weights will increase over time, and thus as time increases the IS weights become negligibly small for all but one sample, an issue known as particle degeneracy. To address the issue, a resampling step is often performed to obtain a set of equally weighted particles: namely, one first draws an index $k^j \in \{1,\dots,N\}$ with probability $P(k^j = i) = \hat w_t^i$ (e.g. by drawing $u^j$ from the uniform distribution $U[0,1]$ and inverting the cumulative sum of the normalized weights) and then defines $\tilde x_{0:t}^j = x_{0:t}^{k^j}$ for $j = 1,\dots,N$.
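For concreteness, the following is a minimal sketch of this bootstrap SMC recursion (with multinomial resampling at every step) in Python; `x0_sampler`, `propagate` and `likelihood` are assumed user-supplied callables whose names are illustrative, not from the paper.

```python
import numpy as np

def bootstrap_pf(x0_sampler, propagate, likelihood, ys, N):
    """Minimal bootstrap particle filter (cf. Eqs. (2.7) and (2.8)).

    x0_sampler(N)    : returns (N, d) initial particles from pi(x_0)
    propagate(x)     : returns (N, d) samples from pi(x_t | x_{t-1})
    likelihood(y, x) : returns (N,) values of pi(y_t | x_t)
    ys               : sequence of observations y_0, ..., y_T
    """
    x = x0_sampler(N)
    means = []
    for y in ys:
        w = likelihood(y, x)                       # incremental weight, Eq. (2.8)
        w_hat = w / w.sum()                        # renormalize, Eq. (2.7)
        means.append(w_hat @ x)                    # posterior-mean estimate
        idx = np.random.choice(N, size=N, p=w_hat) # multinomial resampling
        x = propagate(x[idx])                      # prediction for the next step
    return np.array(means)
```

Resampling at every step is the simplest choice; in practice one often resamples only when an effective-sample-size criterion falls below a threshold.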

2.3 The ensemble Kalman filter

As is mentioned in Section 1, we consider in this work a special case where the observation operator $H$ is linear and the observation noise is Gaussian. In this case, the marginal posterior distributions can be approximated by the EnKF method. The basic idea of the EnKF method is the following. Suppose that at time $t$, the observation noise is $\beta_t \sim N(0, R_t)$ and the prior $\pi(x_t\mid y_{0:t-1})$ can be approximated by a Gaussian distribution with mean $\mu_t$ and covariance $\Sigma_t$. It follows that the posterior distribution is also Gaussian and its mean $\mu_t^a$ and covariance $\Sigma_t^a$ can be obtained analytically:

(2.9)  $\mu_t^a = \mu_t + K_t\,(y_t - H\mu_t), \qquad \Sigma_t^a = (I - K_t H)\,\Sigma_t,$

where $I$ is the identity matrix and

(2.10)  $K_t = \Sigma_t H^{\mathsf{T}}\,(H\Sigma_t H^{\mathsf{T}} + R_t)^{-1}$

is the so-called Kalman gain matrix. Moreover, when the prior is exactly Gaussian, this formulation becomes the standard Kalman filter.

In the EnKF method, one avoids computing the mean and the covariance directly at each step. Instead, both the prior and the posterior distributions are represented with a set of samples, known as an ensemble. Specifically, let $\{\hat x_t^j\}_{j=1}^N$ be a set of samples drawn from the prior distribution $\pi(x_t\mid y_{0:t-1})$, from which we compute a Gaussian approximation of the prior. Namely, we estimate the mean and the covariance of the prior from the samples:

(2.11)  $\hat\mu_t = \frac{1}{N}\sum_{j=1}^{N}\hat x_t^j, \qquad \hat\Sigma_t = \frac{1}{N-1}\sum_{j=1}^{N}(\hat x_t^j - \hat\mu_t)(\hat x_t^j - \hat\mu_t)^{\mathsf{T}},$

and as mentioned earlier, the prior distribution can be approximated by $N(\hat\mu_t, \hat\Sigma_t)$. It follows immediately that the posterior distribution is also Gaussian, with mean and covariance given by Eq. (2.9). Moreover it can be verified that the samples

(2.12)  $x_t^j = \hat x_t^j + K_t\,\big(y_t + \epsilon_t^j - H\hat x_t^j\big)$

follow the distribution $N(\mu_t^a, \Sigma_t^a)$, provided that $\epsilon_t^j \sim N(0, R_t)$ for all $j$. That is, $\{x_t^j\}_{j=1}^N$ is the (posterior) ensemble at step $t$. Given the ensemble $\{x_{t-1}^j\}_{j=1}^N$ at time $t-1$, the EnKF algorithm performs the following two steps at time $t$:

  • prediction step: for each $j$, draw $\hat x_t^j \sim \pi(x_t\mid x_{t-1}^j)$;

  • updating step: for each $j$, compute $x_t^j$ via Eq. (2.12).

Finally we should note that, as the dynamical model is generally nonlinear, the EnKF method can only provide an approximation of the true posterior distribution, no matter how large the sample size is, which is certainly a significant limitation of the method.
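The analysis step of Eqs. (2.10)-(2.12) can be sketched as follows for a linear observation operator $H$; this is a minimal stochastic (perturbed-observation) implementation with illustrative names.

```python
import numpy as np

def enkf_update(X, y, H, R):
    """One EnKF analysis step, Eqs. (2.10)-(2.12).

    X : (N, d) prior (forecast) ensemble
    y : (m,)   observation
    H : (m, d) linear observation operator
    R : (m, m) observation-noise covariance
    """
    N = X.shape[0]
    C = np.cov(X, rowvar=False)                   # sample covariance, Eq. (2.11)
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)  # Kalman gain, Eq. (2.10)
    eps = np.random.multivariate_normal(np.zeros(len(y)), R, size=N)
    return X + (y + eps - X @ H.T) @ K.T          # perturbed-obs update, Eq. (2.12)
```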

3 The defensive marginal PF algorithm

As is discussed in Section 2.2, the standard SMC method aims to perform IS for the joint posterior distribution $\pi(x_{0:t}\mid y_{0:t})$, where the dimensionality of the state space grows as $t$ increases. On the other hand, in many practical filtering problems, one is often only interested in the marginal posterior distribution at each step, $\pi(x_t\mid y_{0:t})$, rather than in the whole joint distribution. This yields a simple idea: if we perform IS in the marginal space, the dimensionality of the problem is fixed and much smaller than that of the joint parameter space. For any time $t$, suppose that there is a function $g_t$ defined on the marginal space, and we are interested in the posterior expectation of $g_t$:

$\mathbb{E}[g_t(x_t)\mid y_{0:t}] = \int g_t(x_t)\,\pi(x_t\mid y_{0:t})\,dx_t.$

We shall construct an IS distribution $q_t(x_t)$ in the marginal space, and estimate the expectation as

(3.1)  $\hat g_t = \sum_{j=1}^{N} \hat w_t^j\, g_t(x_t^j),$

where $\{x_t^j\}_{j=1}^N$ are drawn from $q_t$ and $\hat w_t^j \propto \pi(x_t^j\mid y_{0:t})/q_t(x_t^j)$ are the renormalized IS weights. The key issue here is certainly to find a good IS distribution $q_t$, and ideally this IS distribution should approximate the marginal posterior $\pi(x_t\mid y_{0:t})$. In klaas2005toward , a kernel-based IS distribution is suggested, in which the IS density is obtained using a weighted kernel density estimation (KDE) from the particles at the previous time step. As a result, the method requires performing a weighted KDE procedure at each time step, which can be computationally intensive even with fast KDE algorithms (e.g. the dual-tree methods). We also note that another special choice of the IS distribution is

(3.2)  $q_t(x_t) = \sum_{j=1}^{N} \hat w_{t-1}^j\,\pi(x_t\mid \tilde x_{t-1}^j),$

and it should be clear that the associated weight becomes $w_t(x_t) \propto \pi(y_t\mid x_t)$, so that the algorithm is essentially equivalent to the standard PF method.

In our setting, the EnKF method can naturally yield such an approximate marginal posterior distribution in a very efficient and effective manner. Loosely speaking, at a given time we first compute an ensemble of the marginal posterior distribution using the EnKF scheme, estimate the associated Gaussian approximation from the ensemble, and use it as the IS distribution in the marginal SMC. Specifically, let $\{x_t^j\}_{j=1}^N$ be the posterior ensemble at time $t$ obtained with the EnKF formulation; we use the following procedure to compute the IS distribution:

  1. estimate the mean $\hat\mu_t$ and covariance $\hat\Sigma_t$ from the posterior ensemble $\{x_t^j\}_{j=1}^N$:

    (3.3)  $\hat\mu_t = \frac{1}{N}\sum_{j=1}^{N} x_t^j, \qquad \hat\Sigma_t = \frac{1}{N-1}\sum_{j=1}^{N}(x_t^j - \hat\mu_t)(x_t^j - \hat\mu_t)^{\mathsf{T}};$

    let $\phi_t = N(\hat\mu_t, \hat\Sigma_t)$;

  2. draw $N$ samples $\{z_t^j\}_{j=1}^N$ from $\phi_t$, and compute the (renormalized) weight of each sample, $v_t^j \propto \pi(z_t^j\mid y_{0:t})/\phi_t(z_t^j)$, where the unnormalized marginal posterior is evaluated as $\pi(y_t\mid z_t^j)$ times the mixture approximation of $\pi(z_t^j\mid y_{0:t-1})$ (cf. Eq. (3.8));

  3. estimate the mean $\tilde\mu_t$ and covariance $\tilde\Sigma_t$ of the weighted ensemble $\{z_t^j, v_t^j\}_{j=1}^N$:

    (3.4)  $\tilde\mu_t = \sum_{j=1}^{N} v_t^j z_t^j, \qquad \tilde\Sigma_t = \sum_{j=1}^{N} v_t^j (z_t^j - \tilde\mu_t)(z_t^j - \tilde\mu_t)^{\mathsf{T}};$

    let $\phi_t = N(\tilde\mu_t, \tilde\Sigma_t)$.

Algorithm 1 Estimating the IS distribution $\phi_t$ from the EnKF ensemble

It is worth noting that the EnKF ensemble does not exactly follow the posterior distribution; therefore, in the procedure above, instead of using $N(\hat\mu_t, \hat\Sigma_t)$, i.e. the Gaussian approximation estimated directly from the EnKF ensemble, we introduce an additional step in which we first generate a weighted ensemble according to the true posterior, and then update the Gaussian approximation according to this weighted ensemble. By doing so we ensure that the Gaussian approximation is constructed with respect to the true posterior.
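A minimal Python sketch of Algorithm 1 might look as follows; here `log_post` is an assumed user-supplied callable returning the unnormalized log marginal posterior (evaluated in practice via Eq. (3.8)), and drawing exactly $N$ samples in step 2 is an illustrative choice.

```python
import numpy as np
from scipy.stats import multivariate_normal

def estimate_is_distribution(ensemble, log_post, N):
    """Algorithm 1: Gaussian IS distribution from an EnKF ensemble.

    ensemble : (N, d) posterior EnKF ensemble at the current time
    log_post : callable, unnormalized log marginal posterior (cf. Eq. (3.8))
    """
    # Step 1: Gaussian fit to the EnKF ensemble, Eq. (3.3)
    mu = ensemble.mean(axis=0)
    C = np.cov(ensemble, rowvar=False)
    phi = multivariate_normal(mu, C)
    # Step 2: reweight samples from phi toward the true posterior
    z = phi.rvs(size=N)
    v = np.exp(log_post(z) - phi.logpdf(z))
    v /= v.sum()
    # Step 3: weighted mean/covariance, Eq. (3.4)
    mu2 = v @ z
    D = z - mu2
    C2 = (v[:, None] * D).T @ D
    return multivariate_normal(mu2, C2)
```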

A well-known issue with the EnKF method is that, due to the nonlinearity of the model, the ensemble computed by the method may become increasingly inaccurate as time increases; as a result, the IS distribution obtained with the EnKF method may deviate significantly from the true marginal posterior, leading to poor performance or even failure of the IS estimator in Eq. (3.1). To address the issue, we use the idea of defensive importance sampling (DIS) hesterberg1995weighted . The basic idea behind DIS is quite straightforward: to prevent the failure of the IS distribution computed with the EnKF method, one uses a mixture of the Gaussian approximation computed by the EnKF and a safe or defensive distribution, which in our case is the standard PF distribution. Namely, our defensive IS distribution is:

(3.5)  $q_t(x_t) = \alpha\,\phi_t(x_t) + (1 - \alpha)\,q_t^{\mathrm{PF}}(x_t),$

where $\phi_t$ is the Gaussian distribution computed with the EnKF procedure described above, $q_t^{\mathrm{PF}}$ is the distribution given by Eq. (3.2), which, as discussed earlier, is equivalent to the standard PF, and $\alpha \in (0, 1)$ is the weight of the EnKF component. An important issue here is how to compute the IS weight of each sample. It is easy to see that the weight function is

(3.6)  $w_t(x_t) = \frac{\pi(x_t\mid y_{0:t})}{q_t(x_t)},$

where

(3.7)  $\pi(x_t\mid y_{0:t}) \propto \pi(y_t\mid x_t)\,\pi(x_t\mid y_{0:t-1}).$

Computing $\pi(y_t\mid x_t)$ is rather straightforward, but computing $\pi(x_t\mid y_{0:t-1})$ involves the evaluation of the integral:

(3.8)  $\pi(x_t\mid y_{0:t-1}) = \int \pi(x_t\mid x_{t-1})\,\pi(x_{t-1}\mid y_{0:t-1})\,dx_{t-1}.$

In practice, this integral is approximated by

$\pi(x_t\mid y_{0:t-1}) \approx \sum_{j=1}^{N} \hat w_{t-1}^j\,\pi(x_t\mid \tilde x_{t-1}^j),$

where $\{\tilde x_{t-1}^j\}_{j=1}^N$ are the samples generated in the previous step and $\hat w_{t-1}^j$ is the associated weight of each sample (namely, the weighted ensemble $\{\tilde x_{t-1}^j, \hat w_{t-1}^j\}_{j=1}^N$ follows the distribution $\pi(x_{t-1}\mid y_{0:t-1})$). Finally we provide the complete defensive marginal PF (DMPF) algorithm in Algorithm 2.

At $t = 0$:
  Prediction: sample $\hat x_0^j$ from $\pi(x_0)$ for $j = 1,\dots,N$;
  Updating: compute the EnKF ensemble $\{x_0^j\}_{j=1}^N$ via Eq. (2.12);
  Compute $\phi_0$ using Algorithm 1 and the particles $\{x_0^j\}_{j=1}^N$;
  Draw $N$ particles from $q_0$ given by Eq. (3.5), and compute the weights using Eqs. (3.6)-(3.7), yielding $\{\tilde x_0^j, \hat w_0^j\}_{j=1}^N$;
for $t = 1,\dots,T$ do
  Prediction: for each $j$, draw $\hat x_t^j \sim \pi(x_t\mid x_{t-1}^j)$;
  Updating: compute the EnKF ensemble $\{x_t^j\}_{j=1}^N$ via Eq. (2.12);
  Compute $\phi_t$ using Algorithm 1 and the particles $\{x_t^j\}_{j=1}^N$;
  Draw $N$ particles from $q_t$ given by Eq. (3.5), and compute the weights using Eqs. (3.6)-(3.7), yielding $\{\tilde x_t^j, \hat w_t^j\}_{j=1}^N$;
end for
Algorithm 2 The DMPF algorithm
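As an illustration of the sampling and weighting steps inside Algorithm 2 (Eqs. (3.5)-(3.8)), consider the following sketch; all names are illustrative stand-ins, and we use the fact that the PF proposal (3.2) coincides with the mixture approximation of the predictive density in Eq. (3.8).

```python
import numpy as np

def defensive_step(phi, x_prev, w_prev, trans_sample, trans_pdf, lik, y,
                   alpha, N):
    """Sample from the mixture (3.5) and compute the weights (3.6)-(3.8).

    phi          : frozen Gaussian from Algorithm 1 (with .rvs and .pdf)
    x_prev       : (Np, d) particles from the previous step
    w_prev       : (Np,)   their normalized weights
    trans_sample : x_{t-1} -> one sample of pi(x_t | x_{t-1})
    trans_pdf    : (x, xp) -> pi(x_t = x | x_{t-1} = xp), vectorized in xp
    lik          : (y, x)  -> pi(y_t | x_t), vectorized in x
    alpha        : mixture weight of the EnKF component
    """
    d = x_prev.shape[1]
    use_enkf = np.random.rand(N) < alpha               # choose mixture component
    x = np.empty((N, d))
    n1 = int(use_enkf.sum())
    x[use_enkf] = phi.rvs(size=n1).reshape(n1, d)      # EnKF Gaussian component
    idx = np.random.choice(len(w_prev), size=N - n1, p=w_prev)
    x[~use_enkf] = np.array([trans_sample(xp) for xp in x_prev[idx]])
    # predictive density pi(x_t | y_{0:t-1}), approximated as in Eq. (3.8);
    # it also equals the PF proposal (3.2), hence it appears twice below
    pred = np.array([np.dot(w_prev, trans_pdf(xi, x_prev)) for xi in x])
    q = alpha * phi.pdf(x) + (1.0 - alpha) * pred      # mixture density (3.5)
    w = lik(y, x) * pred / q                           # weights, Eqs. (3.6)-(3.8)
    return x, w / w.sum()
```

Note that evaluating the predictive density for all $N$ samples costs $O(N^2)$ per time step, the cost discussed further in Section 5.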

4 Numerical examples

In this section we provide several numerical examples to demonstrate the performance of the proposed DMPF method. In all these examples, we also implement the standard PF and the EnKF method for comparison purposes.

4.1 Lorenz 63 system

Our first example is the classical Lorenz 63 system lorenz1963deterministic , an often used benchmark problem for testing data assimilation algorithms. Specifically, the system is described by

(4.1)  $\dot x_1 = \sigma(x_2 - x_1), \qquad \dot x_2 = x_1(\rho - x_3) - x_2, \qquad \dot x_3 = x_1 x_2 - \beta x_3.$

For simplicity, we consider a discrete-time version of the system with additive noise:

(4.2)  $x_{i,t+1} = x_{i,t} + \delta\, f_i(x_{1,t}, x_{2,t}, x_{3,t}) + \alpha_{i,t}, \qquad i = 1, 2, 3,$

where $f_i$ denotes the $i$-th component of the right-hand side of Eq. (4.1) and $\delta$ is the discrete-time step size. Here we assume that the model noise terms $\alpha_{1,t}$, $\alpha_{2,t}$ and $\alpha_{3,t}$ are all i.i.d. zero-mean Gaussian with standard deviation $\sigma_\alpha$. Moreover, at each time the observed data are taken to be $y_{i,t} = x_{i,t} + \beta_{i,t}$ for $i = 1, 2, 3$, where the observation noise terms $\beta_{i,t}$ are once again assumed to be i.i.d. zero-mean Gaussian with standard deviation $\sigma_\beta$.

In our numerical tests we fix the model parameters $(\sigma, \rho, \beta)$, the step size $\delta$, the initial condition, and the noise standard deviations $\sigma_\alpha$ and $\sigma_\beta$. We generated a true state trajectory and the associated data points from the model, which are shown in Fig. 1.
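For concreteness, a sketch of simulating the discrete-time model (4.2) in Python; the classical Lorenz parameter values ($\sigma = 10$, $\rho = 28$, $\beta = 8/3$), the step size, the noise levels and the initial condition below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def lorenz63_step(x, delta=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                  sig_alpha=1.0):
    """One noisy Euler step of the Lorenz 63 model, Eq. (4.2).
    Parameter values are illustrative, not the paper's exact settings."""
    x1, x2, x3 = x
    drift = np.array([sigma * (x2 - x1),
                      x1 * (rho - x3) - x2,
                      x1 * x2 - beta * x3])
    return x + delta * drift + sig_alpha * np.random.randn(3)

# simulate a trajectory and noisy observations y_t = x_t + beta_t
T, sig_beta = 100, 1.0
xs = [np.array([1.0, 1.0, 1.0])]          # illustrative initial condition
for _ in range(T):
    xs.append(lorenz63_step(xs[-1]))
ys = np.array(xs) + sig_beta * np.random.randn(T + 1, 3)
```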

As is mentioned at the beginning of Section 4, we estimate the states from the simulated data with the proposed DMPF method, as well as with the standard PF and the EnKF methods. We note that the same mixture weight $\alpha$ is used in the DMPF method in all three examples. With each method we generate the same number of particles, and we also perform a PF simulation with a much larger number of particles, regarding its results as the true posterior. We first compare the posterior means computed with the different methods in Fig. 2; all these results agree so well with each other that the plots in Fig. 2 are indistinguishable. We then compare the posterior variances computed by all the methods and show the results in Fig. 3. One can see from the figure that the variances of both the EnKF and the DMPF methods agree reasonably well with the true posterior variance, while those of the standard PF are subject to much stronger fluctuations. These results suggest that, for problems where the posterior distribution can be well approximated by the EnKF approximation, our DMPF method, which utilises this approximation, can substantially improve the performance over the standard PF.

Figure 1: The true state (dashed lines) and the simulated observations (dots) of the Lorenz 63 model.

Figure 2: The posterior means computed by the different methods for the Lorenz 63 model.

Figure 3: The posterior variances computed by the different methods for the Lorenz 63 model.

4.2 Bernoulli model

Our second example is the Bernoulli equation,

(4.3)  $\dot x = x - x^3,$

which admits an analytical solution,

(4.4)  $x(t) = \psi(x_0, t) = \frac{x_0}{\sqrt{x_0^2 + (1 - x_0^2)\,e^{-2t}}},$

where $x_0 = x(0)$. Here for simplicity we use the analytical solution to construct the discrete-time model:

(4.5)  $x_t = \psi(x_{t-1}, \delta) + \alpha_t, \qquad y_t = x_t + \beta_t,$

where $\alpha_t$ and $\beta_t$ are the model and the observation noise respectively. In this example we fix the step size $\delta$ and the total number of steps $T$, and we assume that both $\alpha_t$ and $\beta_t$ follow zero-mean Gaussian distributions with standard deviations $\sigma_\alpha$ and $\sigma_\beta$ respectively. This is an often used example that admits strongly non-Gaussian posteriors apte2007sampling ; stordal2011bridging .
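A sketch of simulating the discrete-time Bernoulli model (4.5) via the analytical solution (4.4); the step size, horizon and noise levels below are illustrative placeholders rather than the paper's settings.

```python
import numpy as np

def bernoulli_map(x, delta):
    """Analytical solution (4.4) advanced by one step of size delta."""
    return x / np.sqrt(x**2 + (1.0 - x**2) * np.exp(-2.0 * delta))

def simulate(x0=0.1, delta=0.1, T=100, sig_alpha=0.1, sig_beta=0.1):
    """Discrete-time model (4.5): x_t = psi(x_{t-1}) + alpha_t, y_t = x_t + beta_t."""
    xs, ys = [x0], []
    for _ in range(T):
        xs.append(bernoulli_map(xs[-1], delta) + sig_alpha * np.random.randn())
        ys.append(xs[-1] + sig_beta * np.random.randn())
    return np.array(xs), np.array(ys)
```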

In this example we also use a simulated true state and generate noisy data from it, both shown in Fig. 4. We first estimate the states using the PF method with 50,000 particles, and we use these results to represent the true posteriors. We then perform the DMPF, the standard PF and the EnKF methods to obtain the posterior statistics, with the same (much smaller) number of particles for each method. We compare the posterior means and variances computed by all the methods in Fig. 5. One can see from the plots that, while both the PF and the DMPF methods yield results in good agreement with the truth, those of the EnKF depart significantly as time proceeds. The poor performance of the EnKF method in this example can be understood by examining the posterior distributions: in Fig. 6, we plot the posterior distributions computed by all the methods at an earlier and a later time step (the distributions of the PF and DMPF methods are obtained by performing a kernel density estimation with the particles). As one can see, while at the earlier step the EnKF approximation remains rather close to the true posterior distribution, it deviates significantly from the true posterior at the later step, because the non-Gaussianity accumulates as time increases. It is important to note that our DMPF method nevertheless produces results agreeing with the true posteriors, even though the EnKF method fails.

Figure 4: The true state (dashed lines) and the simulated observations (dots) of the Bernoulli model.

Figure 5: Left: the posterior means computed by the different methods. Right: the posterior variances computed by the different methods.

Figure 6: Left: the posterior distributions at the earlier time step. Right: the posterior distributions at the later time step. In both plots, the solid lines are the true posteriors (results of PF with 50,000 particles) and the dashed ones are the EnKF approximations.

4.3 Localization of a car-like robot

Finally we consider a real-world problem, in which the position of a remotely controlled car-like robot is inferred from on-board GPS data. The kinematic model of the car-like robot is described by the following nonlinear system klaas2005toward :

(4.6)  $\dot x = v\cos\theta, \qquad \dot y = v\sin\theta, \qquad \dot\theta = \frac{v\tan\phi}{L}, \qquad \dot\phi = \omega,$

where $(x, y)$ are the position coordinates of the vehicle, $L$ is its length, $\theta$ is the steering orientation angle, $\phi$ is the front wheel orientation angle, and $v$ and $\omega$ are the linear and angular velocities respectively. A schematic illustration of the model is shown in Fig. 7. In this problem, we assume the linear and the angular velocities $v$ and $\omega$ are controlled as follows:

(4.7)

The discrete-time version of the model is described by:

(4.8)  $(x_t, y_t, \theta_t, \phi_t) = M(x_{t-1}, y_{t-1}, \theta_{t-1}, \phi_{t-1}) + (\alpha_{x,t}, \alpha_{y,t}, \alpha_{\theta,t}, \alpha_{\phi,t}),$

where $M$ stands for the standard fourth-order Runge-Kutta solution of Eq. (4.6) over one step of size $\delta$. In Eq. (4.8), $\alpha_{x,t}$, $\alpha_{y,t}$, $\alpha_{\theta,t}$ and $\alpha_{\phi,t}$ are the errors in the state process. In particular, all these errors are taken to be zero-mean Gaussian with standard deviation $\sigma_\alpha$.
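A sketch of the propagation step (4.8), with a fourth-order Runge-Kutta discretization of the kinematics (4.6); the vehicle length, the controls $v$ and $\omega$, the step size and the noise level are illustrative stand-ins for the settings of Eq. (4.7).

```python
import numpy as np

def kinematics(state, v, omega, L=1.0):
    """Right-hand side of Eq. (4.6); L is an illustrative vehicle length."""
    x, y, theta, phi = state
    return np.array([v * np.cos(theta),
                     v * np.sin(theta),
                     v * np.tan(phi) / L,
                     omega])

def rk4_step(state, v, omega, delta):
    """Fourth-order Runge-Kutta step, the map M in Eq. (4.8)."""
    k1 = kinematics(state, v, omega)
    k2 = kinematics(state + 0.5 * delta * k1, v, omega)
    k3 = kinematics(state + 0.5 * delta * k2, v, omega)
    k4 = kinematics(state + delta * k3, v, omega)
    return state + delta * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def propagate(state, v, omega, delta, sig_alpha):
    """One step of the noisy discrete-time model, Eq. (4.8)."""
    return rk4_step(state, v, omega, delta) + sig_alpha * np.random.randn(4)
```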

On the other hand, the GPS makes measurements of the pose of the vehicle; specifically, these measurements are taken to be

$d_{x,t} = x_t + \beta_{x,t}, \qquad d_{y,t} = y_t + \beta_{y,t}, \qquad d_{\theta,t} = \theta_t + \beta_{\theta,t},$

where $\beta_{x,t}$, $\beta_{y,t}$ and $\beta_{\theta,t}$ are the observation noise, taken to be zero-mean Gaussian with standard deviation $\sigma_\beta$. We shall estimate $x_t$, $y_t$, $\theta_t$ and $\phi_t$ from these measurements over a time period that is discretised into $T$ steps. The true states of the system are randomly generated, and the measurement data are simulated from the generated true states using the prescribed model; both the true states and the associated measurements are plotted in Fig. 8. We emphasise that no observations are made of the front-wheel angle $\phi_t$, and so only its true states are plotted in Fig. 8. Once again we apply the three methods to estimate the four state variables in this problem, generating the same number of particles with each method. We then compare the results of the three methods with the true posterior statistics, which are obtained by using the standard PF with a much larger number of particles. We show the comparison of the results in Fig. 9 (posterior mean) and Fig. 10 (posterior variance). From Fig. 9 we can see that the posterior means computed by all the methods agree very well with the true posterior mean for the variables $x_t$, $y_t$ and $\theta_t$; for the variable $\phi_t$, on which we have no direct observation, the results of the EnKF method deviate significantly from those of the others after around 70 steps. For the posterior variances shown in Fig. 10, we observe that for all four variables the results of the standard PF are subject to much stronger fluctuations than those of the DMPF and the EnKF methods; moreover, similar to the posterior mean, the posterior variance estimated by the EnKF method also deviates from the true result after about 70 steps. Thus for this problem, the proposed DMPF method has the best overall performance among the three methods.

Figure 7: The schematic illustration of the car-like robot model.

Figure 8: The true state (dashed lines) and the simulated observations (dots) of the car-like robot model.

Figure 9: The posterior means computed by the different methods for the car-like robot model.

Figure 10: The posterior variances computed by the different methods for the car-like robot model.

5 Conclusions

In summary, we have presented a marginal particle filtering method that samples the posterior distribution in the marginal state space. In particular, we propose a defensive scheme to construct an IS distribution in the marginal space by combining the PF and the EnKF methods, which ensures that the algorithm performs well even when the posterior is strongly non-Gaussian. With numerical examples, we demonstrate that the proposed method has competitive performance against the PF and the EnKF methods. We believe that the method can be useful in a wide range of practical data assimilation problems where the other popular existing methods may not perform well.

The proposed method can be improved in several aspects. First, in this work we have mainly considered problems where the marginal state space is of rather low dimension. For high-dimensional problems, it becomes very challenging to accurately estimate the IS weights at each step. This issue needs to be addressed so that MPF-type methods can be applied to high-dimensional problems. Second, as discussed in klaas2005toward , computing the IS weights at each time step is of complexity $O(N^2)$, where $N$ is the number of particles, and as a result the method becomes prohibitively expensive for problems requiring a large number of particles. It has been suggested in klaas2005toward that approximation techniques such as the fast multipole method greengard1987fast can be used to reduce the computational cost, but further improvement of the efficiency is still needed to make the method useful in large-scale problems. Finally, in the present form of our method, the EnKF approximation is taken to be a Gaussian distribution, but this restriction can be relaxed by using, for example, mixtures to represent the marginal posteriors bengtsson2003toward ; stordal2011bridging ; chen2000mixture . We plan to study these issues and improve the MPF method in future work.

References

  • (1) Apte, A., Hairer, M., Stuart, A., Voss, J.: Sampling the posterior: An approach to non-gaussian data assimilation. Physica D: Nonlinear Phenomena 230(1-2), 50–64 (2007)
  • (2) Doucet, A., de Freitas, N., Gordon, N. (eds.): Sequential Monte Carlo Methods in Practice. Springer Science & Business Media (2001)
  • (3) Arulampalam, M.S., Maskell, S., Gordon, N., Clapp, T.: A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. IEEE Transactions on signal processing 50(2), 174–188 (2002)
  • (4) Bengtsson, T., Snyder, C., Nychka, D.: Toward a nonlinear ensemble filter for high-dimensional systems. Journal of Geophysical Research: Atmospheres 108(D24) (2003)
  • (5) Bishop, G., Welch, G., et al.: An introduction to the kalman filter. Proc of SIGGRAPH, Course 8(27599-3175), 59 (2001)
  • (6) Cappé, O., Godsill, S.J., Moulines, E.: An overview of existing methods and recent advances in sequential monte carlo. Proceedings of the IEEE 95(5), 899–924 (2007)
  • (7) Chen, R., Liu, J.S.: Mixture kalman filters. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 62(3), 493–508 (2000)
  • (8) Doucet, A., Godsill, S., Andrieu, C.: On sequential monte carlo sampling methods for bayesian filtering. Statistics and computing 10(3), 197–208 (2000)
  • (9) Doucet, A., Johansen, A.M.: A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering 12(656-704), 3 (2009)
  • (10) Evensen, G.: Data assimilation: the ensemble Kalman filter. Springer Science & Business Media (2009)
  • (11) Ghil, M., Malanotte-Rizzoli, P.: Data assimilation in meteorology and oceanography. In: Advances in geophysics, vol. 33, pp. 141–266. Elsevier (1991)
  • (12) Greengard, L., Rokhlin, V.: A fast algorithm for particle simulations. Journal of computational physics 73(2), 325–348 (1987)
  • (13) Hesterberg, T.: Weighted average importance sampling and defensive mixture distributions. Technometrics 37(2), 185–194 (1995)
  • (14) Jazwinski, A.H.: Stochastic processes and filtering theory. Courier Corporation (2007)
  • (15) Klaas, M., de Freitas, N., Doucet, A.: Toward practical N² Monte Carlo: the marginal particle filter. In: Uncertainty in Artificial Intelligence (UAI). AUAI Press (2005)

  • (16) Kuznetsov, L., Ide, K., Jones, C.: A method for assimilation of lagrangian data. Monthly Weather Review 131(10), 2247–2260 (2003)
  • (17) Lorenz, E.N.: Deterministic nonperiodic flow. Journal of the atmospheric sciences 20(2), 130–141 (1963)
  • (18) Martinez-Cantin, R., de Freitas, N., Castellanos, J.A.: Analysis of particle methods for simultaneous robot localization and mapping and a new algorithm: Marginal-slam. In: ICRA, pp. 2415–2420 (2007)
  • (19) Musoff, H., Zarchan, P.: Fundamentals of Kalman filtering: a practical approach. American Institute of Aeronautics and Astronautics (2009)
  • (20) Poyiadjis, G., Doucet, A., Singh, S.S.: Particle methods for optimal filter derivative: Application to parameter estimation. In: Acoustics, Speech, and Signal Processing, 2005. Proceedings.(ICASSP’05). IEEE International Conference on, vol. 5, pp. v–925. IEEE (2005)
  • (21) Salman, H., Kuznetsov, L., Jones, C., Ide, K.: A method for assimilating lagrangian data into a shallow-water-equation ocean model. Monthly Weather Review 134(4), 1081–1101 (2006)
  • (22) Stordal, A.S., Karlsen, H.A., Nævdal, G., Skaug, H.J., Vallès, B.: Bridging the ensemble kalman filter and particle filters: the adaptive gaussian mixture filter. Computational Geosciences 15(2), 293–305 (2011)
  • (23) Thrun, S., Burgard, W., Fox, D.: Probabilistic robotics. MIT press (2005)