Stochastic Gradient MCMC Methods for Hidden Markov Models

06/14/2017 · by Yi-an Ma, et al.

Stochastic gradient MCMC (SG-MCMC) algorithms have proven useful in scaling Bayesian inference to large datasets under an assumption of i.i.d. data. We instead develop an SG-MCMC algorithm to learn the parameters of hidden Markov models (HMMs) for time-dependent data. There are two challenges to applying SG-MCMC in this setting: the latent discrete states, and the need to break dependencies when forming minibatches. We consider a marginal likelihood representation of the HMM and propose an algorithm that harnesses the inherent memory decay of the process. We demonstrate the effectiveness of our algorithm on synthetic experiments and an ion channel recording dataset, with runtimes significantly outperforming batch MCMC.


1 Introduction

Stochastic gradient based algorithms have proven crucial in numerous areas for scaling inference algorithms to large datasets. The key idea is to employ noisy estimates of the gradient based on minibatches of data, avoiding a costly gradient computation using the full dataset (Robbins & Monro, 1951). Assuming the data are i.i.d. and the minibatches are properly scaled, the stochastic gradient is an unbiased estimate of the true gradient. In the context of Bayesian inference, such approaches have proven useful in scaling variational inference (Hoffman et al., 2013; Bryant & Sudderth, 2012; Broderick et al., 2013; Foti et al., 2014) and Markov chain Monte Carlo (MCMC) (Welling & Teh, 2011; Patterson & Teh, 2013; Chen et al., 2014; Ding et al., 2014; Shang et al., 2015). For the latter, a primary focus has been on the influence of the stochastic gradient noise on the MCMC iterates; in contrast to many optimization-based procedures, it is non-trivial to show that the underlying (stochastic) dynamics maintain the correct stationary distribution in the presence of such noise. Significant headway has been made in developing such correct SG-MCMC procedures. These algorithms have shown great practical benefits and have gained significant traction.

A separate challenge, however, is the important and often overlooked question of whether such stochastic gradient techniques can be applied to massive amounts of sequential or otherwise non-i.i.d. data. In such cases, crucial dependencies must be broken to form the necessary minibatches. This question received some attention in the stochastic variational inference (SVI) algorithm of Foti et al. (2014) for hidden Markov models (HMMs). In this work, we also focus on HMMs as a simple example of a sequential data model, but turn our attention to SG-MCMC algorithms.

There are many existing algorithms for inferring the model parameters of an HMM, including Monte Carlo methods (Scott, 2002), expectation-maximization (EM) (Bishop, 2006), and variational algorithms (Beal, 2003). All of these approaches operate by iterating between a local update of the latent states and a global update of the model parameters. The local update is usually performed using the forward-backward algorithm, which allows computation of any marginal, or pairwise marginal, in time linear in the length of the sequence.

In the variational context, recent work has focused on scaling these local-global inference schemes to settings with a large number of replicates of short sequences (Johnson & Willsky, 2014; Hughes et al., 2015). These methods utilize the fact that independent replicates of the observation sequence can be used to compute unbiased gradient estimates (Johnson & Willsky, 2014) and to incrementally update sufficient statistics (Hughes et al., 2015). In contrast, the SVI-HMM algorithm of Foti et al. (2014) examines how to deal with extremely long observation sequences. The algorithm heuristically breaks the dependence between observations and performs local updates on short subsequences of observations using a limited forward-backward algorithm. These existing methods suffer from a number of drawbacks. The variational approaches must use an approximate posterior distribution for both the state and model parameters, which may not be representative of the true distributions. The methods are also limited to conjugate prior distributions over the parameters, which can severely limit the expressiveness of the model. Finally, all of the methods discussed thus far are susceptible to the widely known problem of underestimating posterior correlations, which biases fully Bayesian analyses.

Unfortunately, attempting to naively use subchains as in Foti et al. (2014) within SG-MCMC approaches is fraught with difficulty: the local-global structure of SVI-HMM does not lend itself to deriving provably correct SG-MCMC algorithms, stemming from two main challenges.

The first challenge is that SG-MCMC methods sample continuous-valued parameter representations, whereas the HMM learning objective is typically specified in terms of the discrete-valued state sequence (local variables). To address this challenge, we consider an alternative approach to performing parameter inference for HMMs: we work directly with the marginal likelihood of the observations. We form stochastic gradients by only evaluating terms of the full gradient that depend on a small subsequence.

The second challenge is handling the temporal dependencies, specifically: i) each subsequence-specific term in the stochastic gradient still requires a forward-backward pass on the rest of the sequence, and ii) proximal subsequences are mutually correlated. We address both of these issues by capitalizing on the well-known memory decay of the Markovian structure underlying the data generating process. Specifically, we approximate the full forward-backward passes with message passing only on short buffers around the considered subsequences of observations. We further restrict subsequences to be sufficiently far from one another to ensure that computations with them are uncorrelated. We provide a theoretically justified approach to estimating this buffer length and between-subsequence gap, allowing us to prove the validity of the resulting SG-MCMC algorithm. In particular, the same theoretical guarantees are provided as in the i.i.d. setting.

Buffering to perform limited message passing in HMMs was also applied in SVI-HMM (Foti et al., 2014); however, the buffering there was part of a latent state update. In particular, SVI-HMM iterates between buffered message passing for local updates and stochastic gradients for global updates. We, in contrast, consider stochastic gradients of a marginal likelihood representation and utilize buffering directly within this stochastic gradient calculation.

We evaluate the efficacy of our buffered SG-MCMC method for HMMs on two synthetic examples with very different dynamics. We compare against an unbuffered SG-MCMC approach as well as against treating the data as i.i.d. Finally, we show the computational gains of our SG-MCMC algorithm over batch MCMC by segmenting an ion channel dataset where a 1,000X speedup was observed. Collectively, our contributions make a sizable step towards general purpose SG-MCMC algorithms for sequential data.

2 Background

2.1 Hidden Markov Models

Hidden Markov models (HMMs) are a class of discrete-time doubly stochastic processes consisting of (i) a latent discrete-valued state sequence x_{0:T} generated by a Markov chain and (ii) corresponding observations y_{1:T} generated from distributions determined by the latent states. Specifically, the joint distribution of x_{0:T} and y_{1:T} factorizes as

p(x_{0:T}, y_{1:T}) = π₀(x₀) ∏_{t=1}^{T} p(x_t | x_{t−1}, A) p(y_t | x_t, φ),    (1)

where A is the Markov transition matrix such that A_{ij} = p(x_t = i | x_{t−1} = j), φ are the emission parameters, and π₀ is the initial state distribution. We denote the parameters of interest as θ = (A, φ) and do not focus on performing inference on π₀.

Traditionally, EM, variational inference, or MCMC are used to perform inference over θ (Scott, 2002; Beal, 2003). These algorithms rely on the well-known forward-backward algorithm to compute the marginal, p(x_t | y_{1:T}, θ), and pairwise-marginal, p(x_t, x_{t+1} | y_{1:T}, θ), distributions. The algorithm works by recursively computing a sequence of forward messages and backward messages which can then be used to compute the necessary marginals (Beal, 2003). These marginals are then used to update or sample from the distribution of the model parameters.

These past algorithms have found widespread use in statistics and machine learning. However, as discussed in Sec. 1, an alternative formulation of the HMM can provide greater utility in developing an SG-MCMC approach. Marginalizing over x_{0:T}, we obtain the marginal likelihood:

p(y_{1:T} | θ) = 1^⊤ [ ∏_{t=T}^{1} P(y_t) A ] π₀,    (2)

where P(y) is a diagonal matrix with P(y)_{kk} = p(y | x = k, φ); 1^⊤ is a row vector of K ones; θ = (A, φ); and the product is ordered with later time indices to the left. The resulting posterior distribution of θ given y_{1:T} is then:

p(θ | y_{1:T}) ∝ p(y_{1:T} | θ) p(θ).    (3)

Working with the marginal likelihood and posterior alleviates the need to compute the marginals and pairwise marginals of the latent states. As such, only the forward pass of the forward-backward algorithm is performed. Indeed, performing the matrix multiplications in Eq. (2) from right to left corresponds to computing the normalizing constants of the forward messages. Performing the matrix multiplies from left to right corresponds to unnormalized messages in belief propagation (cf. Fox, 2009). Perhaps most importantly for the development of our SG-MCMC algorithm, the marginal likelihood does not involve alternately updating the local state variables, x_{0:T}, and the global model parameters θ. Instead, we need only explore a continuous space, which allows us to leverage gradient information to develop a computationally and statistically efficient algorithm. The major impediment to directly using Eq. (2) for SG-MCMC is that it is unclear how to form a stochastic gradient based on a subsequence to avoid the computational burden of gradient computations in large settings.
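To make this concrete, the following is a minimal Python sketch (our own illustration; the function names are hypothetical, not from any library) of evaluating the log of Eq. (2) via a normalized forward pass. The per-step normalizers accumulated in the loop are exactly the forward-message normalizing constants described above.

import numpy as np

def hmm_log_marginal_likelihood(y, A, emission_logpdf, pi0):
    """Log of Eq. (2): p(y_{1:T} | theta) = 1^T P(y_T) A ... P(y_1) A pi0.
    A[i, j] = p(x_t = i | x_{t-1} = j) (column-stochastic), pi0 is the
    distribution of x_0, and emission_logpdf(y_t) returns the length-K
    vector of log p(y_t | x_t = k, phi)."""
    log_like, alpha = 0.0, pi0
    for t in range(len(y)):
        w = np.exp(emission_logpdf(y[t])) * (A @ alpha)  # P(y_t) A alpha
        Z = w.sum()                      # normalizer: p(y_t | y_{1:t-1})
        log_like += np.log(Z)
        alpha = w / Z                    # renormalize for stability
    return log_like

# Example: a sticky 2-state chain with unit-variance Gaussian emissions.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
mu = np.array([-1.0, 1.0])
ll = hmm_log_marginal_likelihood(
    y=np.array([-1.2, -0.8, 1.1]), A=A,
    emission_logpdf=lambda yt: -0.5 * (yt - mu) ** 2 - 0.5 * np.log(2 * np.pi),
    pi0=np.array([0.5, 0.5]))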

2.2 Stochastic Gradient MCMC for i.i.d. Data

One approach for devising MCMC algorithms is to utilize continuous dynamics to explore a potential function U(θ) for target distribution π(θ) ∝ exp(−U(θ)); for Bayesian inference, we take U(θ) = −log p(θ | y), i.e., the negative log posterior. Then, samples of a continuous-valued parameter, θ, can be drawn as (Ma et al., 2015, 2016)

θ_{t+1} ← θ_t − ε_t [ (D(θ_t) + Q(θ_t)) ∇U(θ_t) + Γ(θ_t) ] + N(0, 2ε_t D(θ_t)),    (4)

where ε_t is the step size, D(θ) is a positive-definite matrix, Q(θ) a skew-symmetric matrix, and Γ_i(θ) = Σ_j ∂/∂θ_j (D_{ij}(θ) + Q_{ij}(θ)).

Ma et al. (2015) proved that in the limit ε_t → 0, with ergodicity, the iterates θ_t will be drawn from π(θ).

For i.i.d. data, U(θ) = −Σ_{i=1}^{N} log p(y_i | θ) − log p(θ). For independently sampled data subsets, S̃ ⊂ {1, …, N}, a noisy unbiased estimate of the potential function is given by:

Ũ(θ) = −(N / |S̃|) Σ_{i∈S̃} log p(y_i | θ) − log p(θ).    (5)

As such, a gradient computed based on Ũ(θ)—called a stochastic gradient—is a noisy, but unbiased estimator of the full-data gradient. The key question is whether the noise injected by the stochastic gradient adversely affects the stationary distribution of the modified dynamics (using ∇Ũ(θ) in place of ∇U(θ)). One way to analyze the impact of the stochastic gradient is to make use of the central limit theorem and assume

∇Ũ(θ) = ∇U(θ) + N(0, V(θ)).    (6)

Simply using ∇Ũ(θ) in place of ∇U(θ) in Eq. (4) results in an additional noise term (D(θ) + Q(θ)) N(0, V(θ)). Assuming we have an estimate B̂_t of the variance of this additional noise satisfying 2ε_t D(θ_t) − ε_t² B̂_t ⪰ 0 (i.e., positive semidefinite), then we can attempt to account for the stochastic gradient noise by simulating

θ_{t+1} ← θ_t − ε_t [ (D(θ_t) + Q(θ_t)) ∇Ũ(θ_t) + Γ(θ_t) ] + N(0, 2ε_t D(θ_t) − ε_t² B̂_t).    (7)

This is the SG-MCMC algorithm for i.i.d. data proposed by Ma et al. (2015, 2016). See Alg. 1.

For this SG-MCMC, there are sources of error introduced via (i) discretizing the continuous stochastic dynamics and (ii) estimation of the stochastic gradient noise covariance. Although the algorithm is provably correct as ε_t → 0, in practice one uses a small, finite stepsize for greater efficiency. In such cases, bias is introduced. This bias-variance tradeoff was recently studied in (Vollmer et al., 2016). Higher-order numerical schemes (Chen et al., 2015; Leimkuhler & Shang, 2016) and a moving-window estimation of B̂_t can further reduce this bias (Shang et al., 2015).

  initialize θ_0
  for t = 0, 1, 2, … do
     for each data subset in the minibatch S̃ do
        accumulate the stochastic gradient ∇Ũ(θ_t) via Eq. (5)
     end for
     sample θ_{t+1} according to Eq. (7)
     estimate the noise variance B̂_{t+1}
  end for
Algorithm 1 SG-MCMC
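As an illustration of the update rule, the following Python sketch implements Eq. (7) in its simplest instance (D(θ) = I, Q(θ) = 0, B̂_t = 0), which recovers stochastic gradient Langevin dynamics (Welling & Teh, 2011); the function name and signature are our own.

import numpy as np

def sgld_step(theta, grad_U_noisy, eps, rng):
    """One iterate of Eq. (7) with D = I, Q = 0, B_hat = 0 (i.e., SGLD).
    grad_U_noisy(theta) returns a minibatch estimate of the gradient of
    the potential U from Eq. (5)."""
    noise = rng.normal(scale=np.sqrt(2.0 * eps), size=theta.shape)
    return theta - eps * grad_U_noisy(theta) + noise

# Example: sample from a standard Gaussian, U(theta) = theta^2 / 2.
rng = np.random.default_rng(0)
theta = np.zeros(1)
for _ in range(1000):
    theta = sgld_step(theta, lambda th: th, eps=1e-2, rng=rng)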

3 Stochastic Gradient MCMC for HMMs

In order to apply SG-MCMC methods to HMMs we must be able to efficiently estimate the gradient of the potential, ∇U(θ). The approach we take consists of three steps (see Fig. 1). First, we marginalize out the discrete state sequence and use the marginal likelihood of the data. Next, we derive an expression for the gradient of the marginal likelihood that factorizes over disjoint subsequences. Finally, we compute an unbiased noisy estimate of the gradient by randomly sampling subsequences and show that using this estimate results in an SG-MCMC algorithm that admits the desired stationary distribution under the same conditions as in the i.i.d. case (see Sec. 2.2).

One could have imagined an alternative approach—as in SVI-HMM—of first sampling subsequences; we could then compute an approximation of the marginal likelihood on this subsequence and treat its gradient as our stochastic gradient. However, without the marginal likelihood information in the first place, it is not obvious how subsequences correlate with each other and consequently how to control the error resulting from subsampling.

3.1 Gradient of Marginal Likelihood Representation

Recall that the posterior under an HMM is given by Eq. (3) and that the potential function is U(θ) = −log p(θ | y_{1:T}). As will prove useful in our SG-MCMC algorithm, we rewrite the posterior in terms of a subsequence y_{τ,L} ≜ y_{τ−L:τ+L} with half-width L centered at time τ. The overall subsequence length is 2L + 1. Defining

P(y_{τ,L}) ≜ ∏_{t=τ−L}^{τ+L} P(y_t) A,    (8)

with the product again ordered with later time indices to the left, we can rewrite Eq. (3) as

p(θ | y_{1:T}) ∝ q(y_{τ+L+1:T} | x_{τ+L})^⊤ P(y_{τ,L}) q(x_{τ−L−1} | y_{1:τ−L−1}) p(θ).    (9)

Here, q(y_{τ+L+1:T} | x_{τ+L}) is the likelihood of the observations after time τ + L given the value of the latent state at τ + L, and q(x_{τ−L−1} | y_{1:τ−L−1}) is the predictive distribution of the latent state at τ − L − 1 given the observations before τ − L. Note, we do not actually need to instantiate the latent state variables x_{τ−L−1} and x_{τ+L}, as these two terms can be computed (in theory) via the forward-backward algorithm (Rabiner, 1989; Scott, 2002).

Let S = {y_{τ,L} : τ ∈ 𝒯} be a set of non-overlapping subsequences that cover y_{1:T}, as in Eq. (23) of the Supplement. The gradient of the resulting potential can be written as

∇U(θ) = −∇ log p(θ) − Σ_{τ∈𝒯} g_τ(θ),
g_τ(θ) ≜ [ q(y_{τ+L+1:T} | x_{τ+L})^⊤ ∇P(y_{τ,L}) q(x_{τ−L−1} | y_{1:τ−L−1}) ] / [ q(y_{τ+L+1:T} | x_{τ+L})^⊤ P(y_{τ,L}) q(x_{τ−L−1} | y_{1:τ−L−1}) ],    (10)

where the equality follows from the product rule (see the Supplement for the complete derivation). Importantly, note that the gradient involves a sum over terms corresponding to all non-overlapping subsequences of length 2L + 1.

We could imagine using ∇U(θ) from Eq. (10) in the update rule of Eq. (4) to generate sample values of θ. However, Eq. (10) is extremely computationally intensive for two reasons. First, calculating q(y_{τ+L+1:T} | x_{τ+L}) and q(x_{τ−L−1} | y_{1:τ−L−1}) involves the whole sequence of length T. Second, one must compute these terms and P(y_{τ,L}) for each τ in the sum; this involves |𝒯| ≈ T / (2L + 1) terms, requiring computation that scales with the full sequence length for every term.

Figure 1: Diagram of subsequences, buffers, and subsequence sampling from the full observation sequence. Left: The SVI method of Foti et al. (2014) approximates stochastic gradients using subchains of length 2L + 1 and the forward-backward algorithm performed on both the subchains and the associated buffer chains of length B. Right: Our proposed SG-MCMC method uses a similar subsampling approach; however, i) the latent chain is never instantiated and ii) a minimum gap between consecutive subchains is used to ensure nearly uncorrelated subsequences. The thick black lines through the observables represent all pairwise correlations between observations due to marginalization of the latent chain. Correlation decays with distance, enabling the segmentation of the chain into subsequences.

3.2 Stochastic Gradient Calculation

In place of ∇U(θ) in Eq. (10), we can define a stochastic gradient based on a single subsequence, y_{τ,L}:

∇Ũ(θ) = −∇ log p(θ) − (1 / h(τ)) g_τ(θ).    (11)

To control the variance of this estimator, we sample a collection of subsequences—referred to as a minibatch—S̃ = {y_{τ_i,L}}_{i=1}^{n}, where n denotes the number of subchains in the minibatch. The τ_i's are drawn randomly from {1, …, T}; details of the full sampling scheme for S̃ are provided in the Supplement. We then use the following estimator of the full gradient:

∇Ũ(θ) = −∇ log p(θ) − (1/n) Σ_{i=1}^{n} (1 / h(τ_i)) g_{τ_i}(θ).    (12)

If we sample τ from the set of all possible length-(2L + 1) subsequences with probability h(τ), then E[∇Ũ(θ)] = ∇U(θ) (Gopalan et al., 2012).

Unfortunately, even this stochastic estimate is prohibitively expensive to compute since the q(y_{τ+L+1:T} | x_{τ+L}) and q(x_{τ−L−1} | y_{1:τ−L−1}) terms require touching nearly all of the observations. We instead consider approximating these quantities.

Approximating messages via buffering

Inspired by recent work on stochastic variational inference for HMMs (Foti et al., 2014), we introduce a buffer of length B on either end of each subsequence: a left buffer y_{τ−L−B:τ−L−1} and a right buffer y_{τ+L+1:τ+L+B}. See Fig. 1. For an irreducible and aperiodic Markov chain, a sufficiently long buffer will render the observations within the subsequence and those outside the buffers approximately independent. This lets us approximate the boundary terms in Eq. (11) as

q(x_{τ−L−1} | y_{1:τ−L−1}) ≈ q̃(x_{τ−L−1} | y_{τ−L−B:τ−L−1}),
q(y_{τ+L+1:T} | x_{τ+L}) ≈ q̃(y_{τ+L+1:τ+L+B} | x_{τ+L}).    (13)

Notice that we plug in generic initial conditions for the buffers in Eq. (13) (e.g., the initial distribution π₀ on the left and the all-ones vector on the right). Though this introduces errors into the computations of the boundary terms, these errors will be nearly forgotten for observations in the subchain of interest due to the mixing of the underlying Markov chain. We rewrite the terms in Eq. (10) as

g̃_τ(θ) ≜ [ q̃(y_{τ+L+1:τ+L+B} | x_{τ+L})^⊤ ∇P(y_{τ,L}) q̃(x_{τ−L−1} | y_{τ−L−B:τ−L−1}) ] / [ q̃(y_{τ+L+1:τ+L+B} | x_{τ+L})^⊤ P(y_{τ,L}) q̃(x_{τ−L−1} | y_{τ−L−B:τ−L−1}) ].    (14)

We note that Eq. (12) is then computed in O(n(L + B)) time using the buffers. When n(L + B) ≪ T this results in significant computational speedups over batch inference algorithms.
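A Python sketch of the buffered boundary computations of Eq. (13) is given below; the helper name and the uniform and all-ones initializations are our own illustrative choices.

import numpy as np

def buffered_boundaries(y, tau, L, B, A, emission_logpdf, K):
    """Approximate the boundary terms of Eq. (13) using length-B buffers.
    Left: a normalized forward pass over y[tau-L-B : tau-L], started from
    a uniform vector, approximates q(x_{tau-L-1} | y_{1:tau-L-1}).
    Right: backward messages over y[tau+L+1 : tau+L+B+1], started from
    the all-ones vector, approximate q(y_{tau+L+1:T} | x_{tau+L}) up to
    a constant scale (irrelevant inside the ratio of Eq. (14))."""
    q_left = np.full(K, 1.0 / K)
    for t in range(max(tau - L - B, 0), tau - L):
        w = np.exp(emission_logpdf(y[t])) * (A @ q_left)
        q_left = w / w.sum()

    q_right = np.ones(K)
    for t in range(min(tau + L + B, len(y) - 1), tau + L, -1):
        q_right = A.T @ (np.exp(emission_logpdf(y[t])) * q_right)
        q_right /= q_right.sum()         # rescale to avoid underflow
    return q_left, q_right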

A critical question that needs to be answered is: how long should the buffers be? Though previous theory exists to quantify the buffer length, the resulting lengths are often longer than the entire sequence (LeGland & Mevel, 1997). A heuristic solution was suggested by Foti et al. (2014), but theoretical justification was lacking. We propose estimating the buffer length using the Lyapunov exponent λ of the random dynamical system specified by the operators P(y_t) A. The Lyapunov exponent measures the evolution of the distance between probability vectors under repeated application of these operators (Arnold, 1998). By generalizing the Perron–Frobenius theorem, all of the eigenvalues of the operators are less than 1 in modulus, which implies that λ < 0 (Seneta, 2006). The greater the absolute value of λ, the faster the errors at the boundaries of the buffers decay, and the shorter the buffers need to be. Given an estimate of λ, we set the buffer length as B = ⌈log(ε / ε₀) / λ⌉, where ε and ε₀ are error tolerances (see Appendix B). The method of calculating λ is described in the Supplement. Forthcoming work in the applied probability literature formalizes the validity of this approach (Ye et al.).

Approximately independent subsequences

We estimate ∇U(θ) with minibatches composed of subsequences. When the subsequences in a minibatch overlap or are very close to one another, the statistical efficiency of ∇Ũ(θ) is diminished, requiring more subsequences to obtain accurate estimates. If we assume that the Markov chain of the latent state sequence is in equilibrium — a realistic assumption if T is huge — then we can leverage the memory decay of the Markov chain to encourage independent subsequences for use in the gradient estimator.

The mixing time of a Markov chain, denoted τ_mix, is the number of steps needed until the chain is “close” to its stationary distribution (Seneta, 2006). This implies that latent states more than τ_mix steps apart are approximately independent. Consequently, if the buffered subsequences are separated by at least τ_mix time indices, i.e., |τ_i − τ_j| ≥ 2(L + B) + τ_mix for i ≠ j, then each y_{τ_i,L}, together with its buffers, is approximately independent of the others. Therefore, we can increase the statistical efficiency of ∇Ũ(θ) by sequentially sampling the τ_i's such that they respect this minimum gap (see the Supplement for details). We estimate the mixing time as τ_mix ≈ 1 / (1 − |λ₂|), where λ₂ is the second largest eigenvalue of the current transition parameter iterate.
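A Python sketch of both ingredients follows; the spectral mixing-time proxy and the simple rejection sampler are our own stand-ins for the exact scheme of Appendix C.

import numpy as np

def mixing_time(A):
    """Spectral proxy for the mixing time, tau_mix ~ 1 / (1 - |lambda_2|),
    with lambda_2 the second largest eigenvalue modulus of the current
    transition matrix iterate."""
    mags = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return int(np.ceil(1.0 / max(1.0 - mags[1], 1e-6)))

def sample_subchain_centers(T, n, L, B, tau_mix, rng):
    """Sequentially draw n subchain centers so that consecutive buffered
    subchains are at least 2*(L+B) + tau_mix indices apart. A simple
    rejection scheme; assumes n * (2*(L+B) + tau_mix) << T so that it
    terminates quickly."""
    gap = 2 * (L + B) + tau_mix
    centers = []
    while len(centers) < n:
        tau = int(rng.integers(L + B, T - L - B))
        if all(abs(tau - c) >= gap for c in centers):
            centers.append(tau)
    return centers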

When sampling subsequences adhering to the mixing-time-dependent gap, each term in Eq. (12) is rendered approximately independent. Following standard practice for SG-MCMC algorithms, we appeal to the central limit theorem, obtaining the following expression for the asymptotic distribution of ∇Ũ(θ):

∇Ũ(θ) = ∇U(θ) + N(0, V(θ)),    (15)

where V(θ) is the stochastic gradient noise variance. This will prove crucial for our analysis in Sec. 3.4.
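In practice V(θ) must itself be estimated. A minimal Python sketch, assuming the n per-subsequence gradients of the current minibatch are stacked as rows of an array:

import numpy as np

def estimate_gradient_noise_cov(per_subseq_grads):
    """Empirical estimate of V(theta) in Eq. (15) from the n
    per-subsequence stochastic gradients of a minibatch (one per row);
    the result can serve as the B_hat correction in Eq. (7)."""
    G = np.asarray(per_subseq_grads)
    return np.cov(G, rowvar=False) / G.shape[0]  # covariance of the mean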

3.3 Incorporating Geometric Information

Eq. (4) serves as a general purpose algorithm that theoretically attains the correct stationary distribution for any D(θ) and Q(θ) matrices when the step size approaches zero. In practice, however, we must take numerical stability into account when simulating the dynamics. For example, when sampling from the probability simplex, previous work has shown that taking the curvature of the parameter space into account is important (Welling & Teh, 2011; Ma et al., 2015). Since our transition parameters live on the simplex, we likewise incorporate the geometry of the parameter space by constructing a stochastic-gradient Riemannian MCMC (SG-RMCMC) algorithm.

SG-RLD for transition parameters

In order to sample the transition matrix A, we note that the columns of A are constrained to lie on the probability simplex. To address these constraints, we use the expanded mean parametrization A_{ij} = ϕ_{ij} / Σ_k ϕ_{kj}, with ϕ_{ij} ≥ 0, similar to what Patterson & Teh (2013) used for topic modeling. Evaluating the gradient of Eq. (10) for θ = ϕ, using Eq. (8), yields:

(16)

Here, q̃(x_{τ−L−1} | y_{τ−L−B:τ−L−1}) and q̃(y_{τ+L+1:τ+L+B} | x_{τ+L}) are computed on the left and right buffers, respectively, according to Eq. (13). The terms inside the sum in Eq. (16) are analogous to the pairwise marginals of the latent state in traditional HMM inference algorithms. A detailed derivation of this gradient can be found in the Supplement.

By leveraging the flexible SG-MCMC update rule of Eq. (7), we remove the dependency on the normalizer Σ_k ϕ_{kj} from the denominator of Eq. (16) by selecting D(ϕ) = diag(ϕ) and Q(ϕ) = 0. This yields the following update:

(17)

where θ_{−ϕ} denotes all other model parameters. We note that this pre-conditioned gradient takes advantage of the local geometry of the parameter space by pre-multiplying by a metric tensor that arises from Eq. (7).
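A Python sketch of such an expanded-mean step follows. It reproduces only the generic preconditioned update with D(ϕ) = diag(ϕ), its correction term Γ(ϕ) = 1, and mirroring at zero to preserve positivity (cf. Patterson & Teh, 2013); the exact drift of Eq. (17) is not reproduced here.

import numpy as np

def expanded_mean_step(phi, grad_log_post, eps, rng):
    """Preconditioned update for the expanded-mean weights phi, with
    A[i, j] = phi[i, j] / sum_k phi[k, j]. Choosing D(phi) = diag(phi)
    gives drift eps * (phi * grad_log_post + 1) (the +1 is Gamma(phi))
    and Gaussian noise of variance 2 * eps * phi; the absolute value
    mirrors the iterate back into the positive orthant."""
    noise = rng.normal(size=phi.shape) * np.sqrt(2.0 * eps * phi)
    phi = np.abs(phi + eps * (phi * grad_log_post + 1.0) + noise)
    A = phi / phi.sum(axis=0, keepdims=True)   # column-stochastic map
    return phi, A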

SG-RLD for emission parameters

Similarly to the transition parameters, we sample the emission parameters φ by evaluating the gradient of Eq. (10) for θ = φ, using Eq. (8). This results in the gradient:

(18)

Again, q̃(x_{τ−L−1} | y_{τ−L−B:τ−L−1}) and q̃(y_{τ+L+1:τ+L+B} | x_{τ+L}) are computed on the left and right buffers, respectively, according to Eq. (13). Similarly to the transition parameters, we account for the geometry of the parameter space by specifying an appropriate D(θ) and Q(θ) in Eq. (7), which in general depend on the form of the emission distribution. For exponential family emission distributions we recommend taking D(θ) to be the inverse of the Fisher information matrix (Amari, 1998).

As a concrete example, we consider a Gaussian emission distribution with state-specific means and covariances. The resulting gradients are:

(19)
(20)

We plug these values into the SG-MCMC update of Eq. (7), using the inverse Fisher information for D(θ) to account for the geometry of the parameter space and Q(θ) = 0. This leads to the update equations:

(21)
(22)

It is possible when using Eq. (22) to obtain a covariance iterate that is not positive definite. In this case we reject the update and retain the previous value.

3.4 Analysis of SG-MCMC for HMMs

Our proposed SG-MCMC scheme for HMMs introduces error in three ways. The first two carry over from the standard i.i.d. setting: (i) discretizing the continuous stochastic dynamics and (ii) estimating the stochastic gradient noise covariance, as discussed in Sec. 2.2.

The third is via our buffered approximations of the boundary terms, q̃(x_{τ−L−1} | y_{τ−L−B:τ−L−1}) and q̃(y_{τ+L+1:τ+L+B} | x_{τ+L}). This error is already incorporated in Eq. (15), and vanishes as ε_t → 0 in Eq. (7). Thus, applying the results from Ma et al. (2015, 2016), we can show that the proposed SG-MCMC for HMMs asymptotically has the right stationary distribution under the same conditions as in the i.i.d. case. However, in practice we use a fixed ε_t, and as we show in Sec. 4.1, performing sufficient buffering via our Lyapunov exponent approach is critical.

In summary, our SG-MCMC algorithm enables MCMC-based inference in HMMs for massive sequences of data. In particular, we only require computations on collections of small subsequences and attain the desired stationary distribution by mitigating the errors incurred by these approximations. Finally, we have shown how to incorporate geometric information about the parameter space in order to increase the numerical robustness of the algorithm.

  initialize ϕ^(0) and φ^(0)
  for s = 0, 1, 2, … do
     Periodically estimate the buffer length B and the minimum subchain gap according to Sec. 3.
     Sample n subchains of length 2L + 1 from y_{1:T}.
     for each sampled subchain do
        Update ϕ according to Eqs. (16) and (17)
     end for
     Calculate A^(s+1) from ϕ^(s+1).
     Set θ = (A^(s+1), φ^(s))
     for each sampled subchain do
        update φ according to Eqs. (18)–(22)
     end for
     Set θ = (A^(s+1), φ^(s+1)).
  end for
Algorithm 2 SG-MCMC for HMM
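To connect the pieces, the following Python sketch evaluates the buffered subsequence term whose gradient drives the inner loops of Algorithm 2. It reuses the hypothetical buffered_boundaries helper sketched in Sec. 3.2 and, in line with Eq. (14), gradients of this quantity would be taken holding the boundary vectors fixed.

import numpy as np

def buffered_subchain_loglike(y, tau, L, B, A, emission_logpdf, K):
    """log [ q_right^T P(y_{tau,L}) q_left ], the buffered subsequence
    term of Eqs. (11) and (14), evaluated with a normalized forward pass
    for numerical stability. Per Eq. (14), differentiation with respect
    to the model parameters treats q_left and q_right as constants."""
    q_left, q_right = buffered_boundaries(y, tau, L, B, A,
                                          emission_logpdf, K)
    log_val, q = 0.0, q_left
    for t in range(tau - L, tau + L + 1):
        w = np.exp(emission_logpdf(y[t])) * (A @ q)
        Z = w.sum()
        log_val += np.log(Z)
        q = w / Z
    return log_val + np.log(q_right @ q)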

4 Experiments

We evaluate the performance of our proposed SG-RLD algorithm for HMMs on both synthetic and real data. First, we demonstrate the effectiveness of incorporating the buffers on two synthetic data sets that exhibit very different dynamics. Next we apply our SG-RLD for HMMs to a non-conjugate model of synthetic data. Finally, we apply SG-RLD to a large ion channel recording data set and compare to batch MCMC.

4.1 Evaluating Buffer Effectiveness

We first design two synthetic experiments in order to illustrate the effectiveness of our adaptive buffer scheme. We compare SG-RLD with buffering, without buffering, and treating the data as i.i.d. as a baseline. We can view the no-buffer algorithm as one that treats the subsequences as short, independent realizations, similar to Johnson & Willsky (2014). Following Foti et al. (2014), we create two synthetic datasets, both with on the order of a million observations and a handful of latent states (see Fig. 2 (top)).

The first dataset, diagonally dominant (DD), consists of a Markov chain that heavily self-transitions and has identifiable emissions. The second dataset—reversed cycles (RC)—strongly transitions around two cycles of three states each, traversed in opposite directions. Further details on these datasets, and on how we set the subsequence and buffer lengths, are in the Supplement.

The 10-step-ahead predictive log probability is depicted in Fig. 2 for the DD and RC datasets. See the Supplement for similar results comparing errors in transition matrix estimation. In both cases, we see that both SG-RLD HMM methods greatly outperform the i.i.d. algorithm. The reason i.i.d. SG-RLD performs so badly on the DD data stems from all states being equally probable marginally, so that ignoring the dynamics forces the model to have little a priori confidence in the next observations. For the RC dataset, the i.i.d. algorithm fails to capture the structured transitions between states, again reducing predictive performance. Importantly, our adaptive buffer scheme attains both better predictive performance and convergence to the true transition matrix in less time. In fact, there is a bias in the learned transition matrix for the non-buffered algorithm due to inaccurate subchain approximations of the boundary terms. This experiment demonstrates that accounting for dynamics yields massive gains in predictive performance, and that using our adaptive buffer scheme provides further gains on top of that.

Figure 2: Synthetic experiments for DD (left) and RC (right) data. Top: Sample datasets; arrows indicate Markov transitions. Bottom: 10-step-ahead log predictive likelihood versus learning time for DD (left) and RC (right) dynamics. Comparisons are made for SG-RLD algorithms with an adaptive buffer, with no buffer, and treating the data as i.i.d.
Figure 3: Synthetic experiment with log-normal emissions. We use the non-conjugate emission model on the synthetic data (top left) with two hidden states and log-normal emissions and compare it against the conjugate model. We show the difference in convergence speed (top right) and held-out log probability on test data (bottom).
Figure 4: Inference of ion channel data. Top: SG-RLD segmentations at runtimes 44.05, 138.51, and 466.82 sec. Bottom: Batch-RLD segmentations at runtimes 716.19, 2124.43, and 7245.14 sec. Right: Held-out probability of unobserved data points (top) and error decay of transition matrix estimates (bottom) for the SG-RLD and batch-RLD methods on log-log scales, with the estimated transition parameters after convergence as reference. SG-RLD obtains plausible segmentations and accurate estimates of the transition matrix in a fraction of the time of the batch algorithm.

4.2 Non-conjugate Emission Distributions

We next demonstrate the benefit of our SG-RMCMC algorithm in being able to handle non-conjugate emissions, an essential feature for performing flexible Bayesian analyses. We simulate observations from a two-state HMM with log-normal emissions. Details of the parameter settings used to generate the data are in the Supplement. We evaluate the ability of two different HMM models in terms of parameter estimation and model selection accuracy.

The first HMM we consider uses log-normal emissions with non-conjugate normal priors. The second model uses Gaussian emissions with a conjugate normal-inverse-Wishart prior. In Fig. 3 we show that the non-conjugate model obtains accurate estimates of the transition matrix in substantially fewer iterations than the conjugate model. Next, we demonstrate that efficiently handling non-conjugate models leads to improved model selection. Specifically, we use SG-RLD to fit both the conjugate and non-conjugate HMMs described above with varying numbers of states and compute the marginal likelihood of the observations under each model. In the table of Fig. 3 we see that the non-conjugate model selects the right number of states (two), whereas the conjugate model selects a model with more states. The ability to use non-conjugate HMMs for truly massive data sets has been infeasible until this point, and this experiment demonstrates its utility.

4.3 Ion Channel Recordings

We investigate the behavior of the SG-RLD sampler on ion channel recording data. In particular, we consider a 1 MHz recording from Rosenstein et al. (2013) of a single alamethicin channel. This data was previously investigated by Palla et al. (2014) and Tripuraneni et al. (2015) using a complicated Bayesian nonparametric HMM; in that work, the authors heavily downsample the data and use only a small fraction of the observations due to the challenge of scaling computations to the full sequence. We subsample the time series to reduce the strong autocorrelations present in the observations that are not captured well by a vanilla HMM; our algorithm, however, would have no difficulty handling the full dataset. We further log-transform and normalize the observations in order to use Gaussian emissions.

We use a non-informative flat prior to analyze the ion channel data. In Fig. 4 we see that before the batch-RLD algorithm finishes a single iteration, the SG-RLD algorithm has already converged and generated a reasonable segmentation. With the converged estimate of the transition parameters as reference, we calculated the speed of convergence of the SG-RLD and batch-RLD algorithms and found that SG-RLD is approximately 1,000 times faster.

5 Discussion

We have developed an SG-MCMC algorithm to perform inference in HMMs for massive observation sequences. The algorithm can be used with non-conjugate emission distributions and is thus applicable to modeling a variety of data. Also, the algorithm asymptotically samples from the true posterior as opposed to variational approaches.

Developing the algorithm relied on three ingredients. First, we derived an efficient approach to estimating the gradient of the marginal likelihood of the HMM from only small subchains. Second, we developed a principled approach using buffers to mitigate the errors introduced when breaking the dependencies at the boundaries of the subchains. Unlike previous heuristic buffering schemes, our approach is theoretically justified using random dynamical systems. Last, we utilized a sampling scheme based on the mixing time of the HMM to ensure subchains are approximately independent.

In future work we will extend these ideas to other models of dependent data, such as Markov random fields. Also, the ideas presented here are not limited to MCMC and could be used to develop more principled variational inference algorithms for dependent data.

Acknowledgements

This work was supported by ONR Grant N00014-15-1-2380, NSF CAREER Award IIS-1350133, and by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. NF was supported by a Washington Research Foundation Innovation Postdoctoral Fellowship in Neuroengineering and Data Science.

References

  • Amari (1998) Amari, S. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
  • Arnold (1998) Arnold, L. Random Dynamical Systems. Springer, 1998.
  • Beal (2003) Beal, M. J. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, University College London, 2003.
  • Bishop (2006) Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2006.
  • Broderick et al. (2013) Broderick, T., Boyd, N., Wibisono, A., Wilson, A. C., and Jordan, M. I. Streaming variational Bayes. In Advances in Neural Information Processing Systems, 2013.
  • Bryant & Sudderth (2012) Bryant, M. and Sudderth, E. B. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, 2012.
  • Chen et al. (2015) Chen, C., Ding, N., and Carin, L. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In Advances in Neural Information Processing Systems 28, pp. 2278–2286. 2015.
  • Chen et al. (2014) Chen, T., Fox, E. B., and Guestrin, C. Stochastic gradient Hamiltonian Monte Carlo. In Proceeding of 31st International Conference on Machine Learning (ICML’14), 2014.
  • Ding et al. (2014) Ding, N., Fang, Y., Babbush, R., Chen, C., Skeel, R. D., and Neven, H. Bayesian sampling using stochastic gradient thermostats. In Advances in Neural Information Processing Systems 27 (NIPS’14). 2014.
  • Foti et al. (2014) Foti, N. J., Xu, J., Laird, D., and Fox, E. B. Stochastic variational inference for hidden Markov models. In Advances in Neural Information Processing Systems, 2014.
  • Fox (2009) Fox, E. B. Bayesian Nonparametric Learning of Complex Dynamical Phenomena. Ph.D. thesis, MIT, Cambridge, MA, 2009.
  • Gopalan et al. (2012) Gopalan, P. K., Gerrish, S., Freedman, M., Blei, D. M., and Mimno, D. M. Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems, pp. 2249–2257. 2012.
  • Hoffman et al. (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, May 2013.
  • Hughes et al. (2015) Hughes, M. C., Stephenson, W., and Sudderth, E. B. Scalable adaptation of state complexity for nonparametric hidden Markov models. In Advances in Neural Information Processing Systems, 2015.
  • Johnson & Willsky (2014) Johnson, M. J. and Willsky, A. S. Stochastic variational inference for Bayesian time series models. In International Conference on Machine Learning, 2014.
  • LeGland & Mevel (1997) LeGland, F. and Mevel, L. Exponential forgetting and geometric ergodicity in hidden Markov models. In IEEE Conference on Decision and Control, 1997.
  • Leimkuhler & Shang (2016) Leimkuhler, B. and Shang, X. Adaptive thermostats for noisy gradient systems. SIAM Journal on Scientific Computing, 38(2):A712–A736, 2016.
  • Ma et al. (2015) Ma, Y.-A, Chen, T., and Fox, E. B. A complete recipe for stochastic gradient MCMC. In Advances in Neural Information Processing Systems 28, pp. 2899–2907. 2015.
  • Ma et al. (2016) Ma, Y.-A., Fox, E. B., Chen, T., and Wu, L. A unifying framework for devising efficient and irreversible MCMC samplers. arXiv:1608.05973, 2016.
  • Palla et al. (2014) Palla, K., Knowles, D. A., and Ghahramani, Z. A reversible infinite HMM using normalised random measures. In International Conference on Machine Learning, 2014.
  • Patterson & Teh (2013) Patterson, S. and Teh, Y. W. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems 26 (NIPS’13). 2013.
  • Rabiner (1989) Rabiner, L. R. A tutorial on hidden Markov models and selected applications in speech recognition. In Proceedings of the IEEE, volume 77, pp. 257–286, 1989.
  • Robbins & Monro (1951) Robbins, H. and Monro, S. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
  • Rosenstein et al. (2013) Rosenstein, J. K., Ramakrishnan, S., Roseman, J., and Shepard, K. L. Single ion channel recordings with CMOS-anchored lipid membranes. Nano Letters, 13(6):2682–2686, 2013.
  • Scott (2002) Scott, S. L. Bayesian methods for hidden Markov models: Recursive computing in the 21st century. Journal of the American Statistical Association, 97(457):337–351, 2002.
  • Seneta (2006) Seneta, E. Non-negative matrices and Markov chains. Springer Science & Business Media, 2006.
  • Shang et al. (2015) Shang, X., Zhu, Z., Leimkuhler, B., and Storkey, A. Covariance-controlled adaptive Langevin thermostat for large-scale Bayesian sampling. In Advances in Neural Information Processing Systems 28 (NIPS’15). 2015.
  • Tripuraneni et al. (2015) Tripuraneni, N., Gu, S., Ge, H., and Ghahramani, Z. Particle Gibbs for infinite hidden Markov Models. In Advances in Neural Information Processing Systems, pp. 2386–2394, 2015.
  • Vollmer et al. (2016) Vollmer, S. J., Zygalakis, K. C., and Teh, Y. W. Exploration of the (non-)asymptotic bias and variance of stochastic gradient Langevin dynamics. Journal of Machine Learning Research, 17(159):1–48, 2016.
  • Welling & Teh (2011) Welling, M. and Teh, Y. W. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML’11), pp. 681–688, June 2011.
  • Ye et al. (in preparation) Ye, F. X.-F., Ma, Y.-A., and Qian, H. A numerical algorithm for calculating the rate of exponential forgetting in HMM. Manuscript in preparation.

Appendix A Gradient of the Posterior

For the hidden Markov model (HMM), the posterior distribution of all parameters θ = (A, φ) can be calculated by Bayes' rule as p(θ | y_{1:T}) ∝ p(y_{1:T} | θ) p(θ), where p(y_{1:T} | θ) = Σ_{x_{0:T}} p(y_{1:T}, x_{0:T} | θ).

Since

p(y_{1:T}, x_{0:T} | θ) = π₀(x₀) ∏_{t=1}^{T} p(x_t | x_{t−1}, A) p(y_t | x_t, φ),

where y_t denotes the data as a real-valued vector and x_t a discrete-valued variable with x_t ∈ {1, …, K}, we can directly marginalize out the hidden variables, x_{0:T}, with matrix multiplication as

p(y_{1:T} | θ) = 1^⊤ [ ∏_{t=T}^{1} P(y_t) A ] π₀,

where P(y) is a diagonal matrix with P(y)_{kk} = p(y | x = k, φ); 1^⊤ is a row vector of K ones (⊤ denotes transpose); and θ = (A, φ). Hence, the same as Eqs. (2), (3) and (8) of the main paper, the posterior distribution is:

p(θ | y_{1:T}) ∝ 1^⊤ [ ∏_{t=T}^{1} P(y_t) A ] π₀ p(θ).

When we divide the whole sequence into non-overlapping subsequences y_{τ,L} of length 2L + 1, the posterior can be rewritten as:

p(θ | y_{1:T}) ∝ 1^⊤ [ ∏_{τ∈𝒯} P(y_{τ,L}) ] π₀ p(θ),    (23)

where 𝒯 is the minimum set of centers τ such that the subsequences y_{τ,L} cover y_{1:T}, and the product is ordered with later subsequences to the left.

We can then use gradient information of the posterior distribution to construct MCMC algorithms. By the product rule, the gradient of the log-posterior distribution is:

∇_θ log p(θ | y_{1:T}) = ∇_θ log p(θ) + Σ_{τ∈𝒯} [ 1^⊤ ( ∏_{τ′>τ} P(y_{τ′,L}) ) ∇_θ P(y_{τ,L}) ( ∏_{τ′<τ} P(y_{τ′,L}) ) π₀ ] / [ 1^⊤ ( ∏_{τ′∈𝒯} P(y_{τ′,L}) ) π₀ ].

Denote by q(x_{τ−L−1} | y_{1:τ−L−1}) ∝ ( ∏_{τ′<τ} P(y_{τ′,L}) ) π₀ the predictive distribution of the latent state entering subsequence τ, and by q(y_{τ+L+1:T} | x_{τ+L})^⊤ ∝ 1^⊤ ( ∏_{τ′>τ} P(y_{τ′,L}) ) the likelihood of the observations after it. Then

∇_θ log p(θ | y_{1:T}) = ∇_θ log p(θ) + Σ_{τ∈𝒯} [ q(y_{τ+L+1:T} | x_{τ+L})^⊤ ∇_θ P(y_{τ,L}) q(x_{τ−L−1} | y_{1:τ−L−1}) ] / [ q(y_{τ+L+1:T} | x_{τ+L})^⊤ P(y_{τ,L}) q(x_{τ−L−1} | y_{1:τ−L−1}) ],    (24)

matching the per-subsequence terms g_τ(θ) in Eqs. (10) and (11) of the main paper.

Appendix B Lyapunov Exponent

The question of buffer length is equivalent to the following: for two random probability vectors q and q′, what is the expected number of applications of the operators P(y_t) A after which q and q′ synchronize? This is a question about random dynamical systems and can be answered by defining a Lyapunov exponent.

We first transform the probability simplex through stereographic projection into K − 1 dimensions and denote the projected vector by z. The operator P(y) A is then carried over to the new space, and the equivalent dynamics over z become z_{t+1} = F(z_t, y_t). We define the Lyapunov exponent of the projected random dynamics as

λ = ∫ ∫ log ‖ ∂F(z, y) / ∂z ‖ ν(dy) μ(dz),    (25)

where ‖·‖ denotes the norm of the Jacobian of the projected map. The measure ν corresponds to the distribution of the data y, and μ is the invariant measure of z under the dynamics of F, which will be estimated through sampling.

Once the Lyapunov exponent is calculated, we can set the buffer length:

B = ⌈ log(ε / ε₀) / λ ⌉,    (26)

where ε is the error tolerance and ε₀ is the maximum initial error for probability vectors.

Appendix C Subsequence Sampling Procedure

We use the following sampling procedure to obtain the subsequences used to compute stochastic gradient estimates. In order to enforce the non-overlapping, mixing-time-dependent gap between adjacent subsequences, we sample them sequentially. This results in the following form for the probability of the minibatch S̃ = {y_{τ_i,L}}_{i=1}^{n}: h(S̃) = ∏_{i=1}^{n} h(τ_i | τ_1, …, τ_{i−1}), where each conditional is uniform over the centers that remain admissible given the previously drawn subsequences and their buffers. The normalizing constant of each conditional is calculated by counting the admissible centers:

If every gap between previously sampled buffered subsequences is at least the minimum number of observations required to fit an entire subsequence while respecting the minimum gap, the count follows directly from the total number of remaining indices. Otherwise, the count excludes those gaps that are too small to accommodate a subsequence.

Since each center is sampled with known probability, h(S̃) provides the correct probability of the minibatch S̃ for the importance weights in Eq. (12).

Appendix D Detailed Descriptions of Experiments

D.1 Evaluating Buffer Effectiveness

Figure 5: Synthetic experiments with hard-to-capture dynamics: diagonally dominant (DD) (left) and reversed cycles (RC) (right). First row: The emission distributions corresponding to the different states; arrows in the RC case indicate the Markov transition structure, with transitions between bridge states as dashed arrows. Second row: Decrease of error in transition matrix estimation versus runtime. Comparisons are made for SG-RLD algorithms with an estimated buffer, without a buffer, and treating the data as i.i.d. All of the experiments use a constant computation budget by varying the number of subchains per minibatch together with the length of the subchains.

The first dataset, diagonally dominant (DD), consists of a Markov chain that heavily self-transitions. Most subchains in a minibatch thus contain redundant information, with observations generated from the same latent state. Although transitions are rarely observed, the emission means are set to be distinct so that this example is likelihood-dominated and highly identifiable. See Fig. 5 (top left). For this data we choose many short subsequences per minibatch in order to incorporate observations from distant parts of the observation sequence; this corresponds to an extreme setting where each gradient is based on only a small number of observations. The transition matrix used for this experiment has strongly dominant diagonal entries, and the emission variance is shared across all states.

The second dataset we consider contains two reversed cycles (RC): the Markov chain strongly transitions around two cycles of three states each, with a small probability of transitioning between the cycles via bridge states. See Fig. 5 (top right). The emission means for the two cycles are very similar but occur in reverse order with respect to the transitions. The emission variance is larger, making the corresponding states indiscernible from their emissions alone; transition information from observing long enough dynamics is thus crucial to distinguish them. Therefore, we use longer subchains, with fewer of them per minibatch, so that the same amount of data is used in the calculation of the gradient. The emission variance is again shared across all states.

We use a non-conjugate flat prior to demonstrate the flexibility of our algorithm. We initialize with a short run of k-means clustering to ensure that different states have different emission parameters.

D.2 Non-conjugate Emission Distribution

For the non-conjugate experiment, we used a fixed transition matrix. For the emission probability, we use a log-normal distribution:

p(y | x = k, μ_k, σ_k) = (1 / (y σ_k √(2π))) exp( −(log y − μ_k)² / (2σ_k²) ),

with state-specific parameters μ_k and σ_k. In the non-conjugate model, we use normal priors on the emission parameters.