
Sequential Monte Carlo Methods for System Identification
One of the key challenges in identifying nonlinear and possibly non-Gaus...

Bayesian optimisation for fast approximate inference in state-space models with intractable likelihoods
We consider the problem of approximate Bayesian parameter inference in n...

Online Bayesian parameter estimation in general nonlinear state-space models: A tutorial and new results
Online estimation plays an important role in process control and monito...

Learning Nonlinear State Space Models with Hamiltonian Sequential Monte Carlo Sampler
State space models (SSM) have been widely applied for the analysis and v...

Online Approximate Bayesian learning
We introduce in this work a new method for online approximate Bayesian l...

Correlated pseudo-marginal schemes for time-discretised stochastic kinetic models
Performing fully Bayesian inference for the reaction rate constants gove...

Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs
We introduce a dynamic mechanism for the solution of analytically-tracta...

Learning of state-space models with highly informative observations: a tempered Sequential Monte Carlo solution
Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem, or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC^2 method. Another natural link is made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
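To make the tempering idea concrete, here is a minimal sketch, not the authors' implementation: a bootstrap particle filter whose Gaussian measurement-noise scale `r_std` is partly artificial, evaluated over a fixed decreasing schedule of `r_std` values (the paper adapts this schedule; the geometric schedule, the toy model, and all names here are illustrative assumptions).

```python
import numpy as np

def particle_log_likelihood(y, f, g, q_std, r_std, n_particles=500, rng=None):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta) for
    x_t = f(x_{t-1}) + q_std * v_t,  y_t = g(x_t) + r_std * e_t,
    with standard-normal v_t, e_t. r_std is the (possibly artificial)
    measurement-noise standard deviation used for tempering."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n_particles)          # initial particle cloud
    log_lik = 0.0
    for t in range(len(y)):
        x = f(x) + q_std * rng.standard_normal(n_particles)       # propagate
        log_w = -0.5 * ((y[t] - g(x)) / r_std) ** 2 - np.log(r_std)  # Gaussian log-weights (up to a constant)
        m = log_w.max()
        w = np.exp(log_w - m)                      # stabilised weights
        log_lik += m + np.log(w.mean())            # likelihood increment
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return log_lik

# Toy model with highly informative observations: y_t is x_t plus tiny noise.
f = lambda x: 0.8 * x
g = lambda x: x
rng = np.random.default_rng(0)
x_true = np.zeros(50)
for t in range(1, 50):
    x_true[t] = f(x_true[t - 1]) + rng.standard_normal()
y = g(x_true) + 1e-3 * rng.standard_normal(50)

# Tempering: start with large artificial measurement noise, shrink it
# toward the (nearly noise-free) original problem.
for r in [1.0, 0.3, 0.1, 0.03, 0.01]:
    ll = particle_log_likelihood(y, f, g, q_std=1.0, r_std=r, rng=1)
    print(f"r_std={r:.2f}: log-likelihood estimate {ll:.1f}")
```

With the large initial `r_std` the weights are well balanced and the estimate is low-variance; as `r_std` shrinks, weight degeneracy grows, which is exactly the regime the paper's SMC sampler is designed to bridge.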