Accelerate iterated filtering

02/23/2018
by Dao Nguyen, et al.

In simulation-based inference for partially observed Markov process (POMP) models, a by-product of Monte Carlo filtering is an approximation of the log likelihood function. Iterated filtering [14, 13] was recently introduced, and it has been shown that the gradient of the log likelihood can also be approximated. Consequently, different stochastic optimization algorithms can be applied to estimate the parameters of the underlying model. Since accelerated gradient methods are efficient in the optimization literature, we show that iterated filtering can be accelerated in the same manner, inheriting the high convergence rate while relaxing the restrictive condition of an unbiased gradient approximation. We show that this novel algorithm can be applied to both convex and non-convex log likelihood functions. Moreover, this approach substantially outperforms most previous approaches on a toy example and on a challenging scientific problem of modeling infectious diseases.
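
To illustrate the core idea, here is a minimal Python sketch of Nesterov-style accelerated gradient ascent on a log likelihood whose gradient is available only as a noisy Monte Carlo estimate, which is the setting iterated filtering produces. This is not the paper's algorithm: noisy_grad_loglik is a hypothetical stand-in that replaces the particle-filter gradient estimate with the gradient of a concave quadratic plus Gaussian noise, and all names and step sizes are illustrative.

    import numpy as np

    def noisy_grad_loglik(theta, rng, noise_sd=0.1):
        # Stand-in for an iterated-filtering gradient estimate: the "true"
        # log likelihood is a concave quadratic peaked at (1, -2), and
        # Monte Carlo error is modeled as additive Gaussian noise.
        return -(theta - np.array([1.0, -2.0])) + rng.normal(scale=noise_sd, size=2)

    def accelerated_ascent(theta0, n_iter=300, step=0.1, seed=0):
        rng = np.random.default_rng(seed)
        prev = theta0.copy()
        theta = theta0.copy()
        for k in range(1, n_iter + 1):
            # Nesterov look-ahead: extrapolate along the previous direction
            # with the classical momentum weight (k - 1) / (k + 2).
            lookahead = theta + (k - 1) / (k + 2) * (theta - prev)
            grad = noisy_grad_loglik(lookahead, rng)
            # Gradient *ascent* step, since we maximize the log likelihood.
            prev, theta = theta, lookahead + step * grad
        return theta

    print(accelerated_ascent(np.zeros(2)))  # converges near the optimum [1, -2]

Because the gradient estimate is noisy, the iterates fluctuate around the optimum rather than converging exactly; the paper's point is that the accelerated scheme tolerates such (possibly biased) approximations while keeping the fast convergence rate.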

