Computational issues in parameter estimation for hidden Markov models with Template Model Builder

by Timothée Bacri et al.

A popular way to estimate the parameters of a hidden Markov model (HMM) is direct numerical maximization (DNM) of the (log-)likelihood function. The advantages of employing the TMB (Kristensen et al., 2016) framework in R for this purpose were illustrated recently by Bacri et al. (2022). In this paper, we present extensions of these results in two directions. First, we present a practical way to obtain uncertainty estimates in the form of confidence intervals (CIs) for the so-called smoothing probabilities at moderate computational and programming effort via TMB. Our approach thus makes it possible to avoid computer-intensive bootstrap methods. By means of several examples, we illustrate patterns present in the derived CIs. Second, we investigate the performance of popular optimizers available in R when estimating HMMs via DNM, focusing on the potential benefits of employing TMB. The criteria investigated, via a number of simulation studies, are convergence speed, accuracy, and the impact of (poor) initial values. Our findings suggest that all optimizers considered benefit in terms of speed from using the gradient supplied by TMB. When both the gradient and the Hessian from TMB are supplied, the number of iterations decreases, suggesting a more efficient convergence to the maximum of the log-likelihood. Lastly, we briefly point out potential advantages of a hybrid approach.

