A 1000-fold Acceleration of Hidden Markov Model Fitting using Graphical Processing Units, with application to Nonvolcanic Tremor Classification

03/07/2020 · by Marnus Stoltz, et al.

Hidden Markov models (HMMs) are general-purpose models for time-series data, widely used across the sciences for their flexibility and elegance. However, fitting HMMs can be computationally demanding and time consuming, particularly when the number of hidden states is large or the Markov chain itself is long. Here we introduce a new Graphical Processing Unit (GPU) based algorithm designed to fit long-chain HMMs, applying our approach to an HMM for nonvolcanic tremor events developed by Wang et al. (2018). Even on a modest GPU, our implementation achieved a 1000-fold speedup over the standard single-processor algorithm, enabling full Bayesian inference of the uncertainty in the model parameters. Similar improvements would be expected for HMMs with large numbers of observations and moderate state spaces (<80 states with current hardware). We discuss the model, the general GPU architecture and algorithms, and report the method's performance on a tremor dataset from the Shikoku region, Japan.
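To make the computational bottleneck concrete, the following is a minimal sketch (not the authors' code, and on CPU rather than GPU) of the scaled forward algorithm for HMM likelihood evaluation. Each step of the chain is a K x K matrix-vector product followed by an elementwise emission weighting; with T observations the cost is O(T K^2), and it is exactly these dense linear-algebra steps that a GPU evaluates in parallel. All names and the numpy formulation here are illustrative assumptions.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM.

    pi  : (K,)   initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(next state j | state i)
    B   : (K, M) emission matrix,   B[i, o] = P(observation o | state i)
    obs : (T,)   integer observation sequence

    Illustrative sketch only; the paper's GPU method parallelizes the
    per-step matrix products below across many threads.
    """
    alpha = pi * B[:, obs[0]]   # unnormalized forward probabilities
    c = alpha.sum()
    alpha /= c                  # rescale to avoid numerical underflow
    loglik = np.log(c)
    for o in obs[1:]:
        # One chain step: K x K mat-vec plus an elementwise product.
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha /= c
        loglik += np.log(c)
    return loglik
```

The rescaling constants accumulate the log-likelihood, so the recursion stays stable even for the very long chains the paper targets.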
