On Large Lag Smoothing for Hidden Markov Models

04/19/2018 ∙ by Jeremie Houssineau, et al. ∙ University of Cambridge

In this article we consider the smoothing problem for hidden Markov models (HMM). Given a hidden Markov chain {X_n}_{n≥0} and observations {Y_n}_{n≥0}, our objective is to compute E[φ(X_0,...,X_k)|y_0,...,y_n] for some real-valued, integrable functional φ and k fixed, k ≪ n, and for some realisation (y_0,...,y_n) of (Y_0,...,Y_n). We introduce a novel application of the multilevel Monte Carlo (MLMC) method with a coupling based on the Knothe–Rosenblatt rearrangement. We prove that this method can approximate the aforementioned quantity with a mean square error (MSE) of O(ϵ^2), for arbitrary ϵ > 0, with a cost of O(ϵ^-2). This is in contrast to a direct Monte Carlo method, which requires a cost of O(nϵ^-2) for the same MSE. The approach we suggest is, in general, not possible to implement, so the optimal transport methodology of [17] is used, which directly approximates our strategy. We show that our theoretical improvements are achieved, even under approximation, in several numerical examples.


1 Introduction

Given a hidden Markov chain {X_n}_{n≥0}, X_n ∈ 𝖷, and observations {Y_n}_{n≥0}, Y_n ∈ 𝖸, we consider a probabilistic model such that for Borel A ⊆ 𝖷, x_{0:n−1} ∈ 𝖷^n, for every n ≥ 1,

ℙ(X_n ∈ A | x_{0:n−1}) = ∫_A f(x_{n−1}, x) dx,   (1)

with dx Lebesgue measure, and for Borel B ⊆ 𝖸 and all n ≥ 0,

ℙ(Y_n ∈ B | x_{0:n}, y_{0:n−1}) = ∫_B g(x_n, y) dy,   (2)

where we have used the compact notation x_{i:j} = (x_i, ..., x_j) for any i ≤ j and any sequence (x_n)_{n≥0}, with the convention that the resulting vector of objects is null if i > j. The model defined by (1) and (2) is termed a hidden Markov model. In this article, given a realisation (y_0, ..., y_n) of (Y_0, ..., Y_n), our objective is to compute E[φ(X_0, ..., X_k) | y_0, ..., y_n] for some real-valued, integrable functional φ and k fixed, k ≪ n, which we refer to as large-lag smoothing. Hidden Markov models and the smoothing problem are found in many real applications, such as finance, genetics and engineering; see e.g. [2] and the references therein.

The smoothing problem is notoriously challenging. Firstly, E[φ(X_0, ..., X_k) | y_0, ..., y_n] is seldom available analytically, and hence numerical methods are required. Secondly, if one wants to compute this expectation for several values of n, i.e. potentially recursively, then several of the well-known methods for its approximation can fail. For instance, the particle filter (e.g. [5] and the references therein) suffers from the well-known path degeneracy problem (see e.g. [12]). Despite this, several methods are available for the approximation of such expectations, such as particle Markov chain Monte Carlo [1] or the PaRIS algorithm [14], which might be considered the current state-of-the-art. The latter algorithm relies on approximating E[φ(X_0, ..., X_k) | y_0, ..., y_p] for some p ≤ n and is then justified on the basis of the forgetting properties of the smoother (see e.g. [2, 4]). We will extend this notion, as will be explained below.

The main approach followed in this paper is to utilize the multilevel Monte Carlo (MLMC) method (e.g. [7, 8]). Traditional applications of this method are associated to discretizations of continuum problems, but we adopt the framework in a slightly non-standard way. To describe the basic idea, suppose one is interested in π(φ) for π a probability and φ real-valued and bounded, but one can only hope to approximate a probability π_n (assumed on the same space as π) which, in some loose sense, approaches π as n grows. Now, given a sequence π_0, π_1, ..., π_n of increasingly more 'precise' probability distributions on the same space, one trivially has the telescoping identity

π_n(φ) = π_0(φ) + Σ_{p=1}^{n} [π_p(φ) − π_{p−1}(φ)].

The approach is now to sample dependent couplings of (π_{p−1}, π_p) independently for p ∈ {1, ..., n} and to approximate each difference π_p(φ) − π_{p−1}(φ) using Monte Carlo. The term π_0(φ) is also approximated using Monte Carlo with i.i.d. sampling from π_0. Then, given a 'good enough' coupling and a characterization of the bias, for many practical problems the cost to achieve a pre-specified MSE is significantly reduced relative to i.i.d. sampling from π with ordinary Monte Carlo.
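To make the construction concrete, the following sketch (in Python; not part of the original article) shows the generic multilevel estimator. The routines `sample_initial` and `sample_coupled_pair` are hypothetical placeholders for a sampler of π_0 and a sampler of a coupling of (π_{p−1}, π_p), and φ is assumed vectorized.

```python
import numpy as np

def mlmc_estimate(phi, sample_initial, sample_coupled_pair, n, N):
    """Estimate pi_n(phi) via the telescoping sum
    pi_n(phi) = pi_0(phi) + sum_{p=1}^{n} [pi_p(phi) - pi_{p-1}(phi)].

    sample_initial(N0): N0 i.i.d. samples from pi_0 (hypothetical).
    sample_coupled_pair(p, Np): Np i.i.d. pairs (X_p, X_{p-1}) whose
        marginals are pi_p and pi_{p-1} (hypothetical).
    N: list of sample sizes N_0, ..., N_n.
    """
    # Level 0: plain Monte Carlo under pi_0.
    estimate = np.mean(phi(sample_initial(N[0])))
    # Levels 1..n: Monte Carlo on the coupled differences.
    for p in range(1, n + 1):
        xp, xpm1 = sample_coupled_pair(p, N[p])
        estimate += np.mean(phi(xp) - phi(xpm1))
    return estimate
```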

We leverage the idea of MLMC where the 'level' corresponds to the time parameter, truncated at some chosen n* ≤ n so as to achieve a given level of bias. The main issue is then how to sample from couplings which are good enough. We show that when d = 1 (the dimension of the hidden state), using the optimal coupling in terms of squared Wasserstein distance can yield significant improvements over the case where one directly approximates the smoothing expectation with Monte Carlo and i.i.d. sampling from the smoother. That is, for ϵ > 0 given, to achieve a mean square error of O(ϵ^2), the cost is O(ϵ^-2), whereas for the ordinary Monte Carlo method the cost is O(nϵ^-2). The same conclusion for d > 1 can be achieved using the Knothe–Rosenblatt rearrangement. The main issue with our approach is that it cannot be implemented exactly for most problems of practical interest. However, using the methodology in [17], it can be approximated. We show that our predicted theory is verified in numerical examples, even under this approximation. We also compare our method directly with PaRIS, showing substantial improvement in terms of cost for a given level of MSE.

This article is structured as follows. In Section 2 we detail our approach and theoretical results. In Section 3 we demonstrate how our approach can be implemented in practice. In Section 4 we give our numerical examples. Section 5 summarizes the article. The appendix includes the assumptions, technical results and proofs of our main results.

1.1 Notations

Let (E, ℰ) be a measurable space. We write B_b(E) and Lip(E) for the collections of bounded measurable and Lipschitz real-valued functions on E, respectively. For φ ∈ B_b(E), we write the supremum norm ‖φ‖ = sup_{x∈E} |φ(x)|, and for φ ∈ Lip(E) we write ‖φ‖_L for the Lipschitz constant. 𝒫(E) denotes the collection of probability measures on (E, ℰ). For a measure μ on (E, ℰ) and a φ ∈ B_b(E), the notation μ(φ) = ∫_E φ(x) μ(dx) is used. Let K : E × ℰ → [0, 1] be a Markov kernel and μ be a measure; then we use the notations μK(dy) = ∫_E μ(dx) K(x, dy) and, for φ ∈ B_b(E), K(φ)(x) = ∫_E φ(y) K(x, dy). For a sequence of Markov kernels K_1, ..., K_n we write the composition

K_{1:n}(x, dy) = ∫ K_1(x, dx_1) K_2(x_1, dx_2) ⋯ K_n(x_{n−1}, dy).

For μ, ν ∈ 𝒫(E), the total variation distance is written ‖μ − ν‖_tv. For A ∈ ℰ, the indicator is written 𝕀_A. 𝒰_A denotes the uniform distribution on the set A. 𝒩(μ, σ²) is the one-dimensional Gaussian distribution of mean μ and variance σ².

2 Model and Approach

We are given an HMM and we seek to compute

π_n(φ) := E[φ(X_0) | y_0, ..., y_n],

where φ ∈ B_b(𝖷) ∩ Lip(𝖷) and, for simplicity, we suppose that k = 0 and that 𝖷 is a compact subspace of the real line. Here π_n is the probability density (we also use the same symbol for the probability measure) of the smoother, given the observations, at the co-ordinate at time 0. That is, for Borel A ⊆ 𝖷,

π_n(A) = ℙ(X_0 ∈ A | y_0, ..., y_n).

Let n* ≥ 1 be fixed; then we propose to consider the telescoping representation

π_{n*}(φ) = π_0(φ) + Σ_{p=1}^{n*} [π_p(φ) − π_{p−1}(φ)].

2.1 Case d = 1

Let us denote the CDF of π_p as F_p. An approximation of π_p(φ) − π_{p−1}(φ) is

(1/N_p) Σ_{i=1}^{N_p} [φ(F_p^{−1}(U_p^i)) − φ(F_{p−1}^{−1}(U_p^i))],

where U_p^1, ..., U_p^{N_p} are i.i.d. 𝒰_{[0,1]} and F_p^{−1} is the (generalized) inverse CDF of π_p. If we do this independently for each p ∈ {1, ..., n*} and use an independent estimator

(1/N_0) Σ_{i=1}^{N_0} φ(F_0^{−1}(U_0^i))

for π_0(φ), one can estimate π_{n*}(φ). The utility of the coupling (F_p^{−1}(U), F_{p−1}^{−1}(U)), U ~ 𝒰_{[0,1]}, is that it is optimal in terms of the 2-Wasserstein distance. We have the following result, where the assumption and proof are in the appendix.
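For illustration, here is a minimal sketch (not from the article) of the level-p difference estimator under this common-uniform coupling; the two inverse CDFs below are Gaussian stand-ins for F_p^{−1} and F_{p−1}^{−1}, which in practice come from the smoother.

```python
import numpy as np
from scipy.stats import norm

def coupled_difference(phi, F_inv_p, F_inv_pm1, N_p, rng):
    """Monte Carlo estimate of pi_p(phi) - pi_{p-1}(phi) under the
    common-uniform (inverse-CDF) coupling, which is optimal in the
    2-Wasserstein sense on the real line."""
    u = rng.uniform(size=N_p)  # the same U^i feeds both inverse CDFs
    return np.mean(phi(F_inv_p(u)) - phi(F_inv_pm1(u)))

# Toy usage with two nearby Gaussians standing in for pi_p and pi_{p-1}:
rng = np.random.default_rng(0)
est = coupled_difference(np.tanh,
                         lambda u: norm.ppf(u, loc=0.0, scale=1.0),
                         lambda u: norm.ppf(u, loc=0.1, scale=1.0),
                         N_p=10_000, rng=rng)
```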

Theorem 2.1.

Assume (A1). Then there exist C < +∞ and ρ ∈ (0, 1) such that for any p ≥ 1, N_p ≥ 1, φ ∈ B_b(𝖷) ∩ Lip(𝖷), we have

E[ ( (1/N_p) Σ_{i=1}^{N_p} [φ(F_p^{−1}(U_p^i)) − φ(F_{p−1}^{−1}(U_p^i))] − {π_p(φ) − π_{p−1}(φ)} )² ] ≤ C ρ^p / N_p.

The main implication of the result is the following. In the approach considered later in this paper, the cost of computing (an approximation of) F_p^{−1} is O(1) per time step, so the cost of this method is O(Σ_{p=0}^{n*} N_p). Thus the MSE and cost associated to this algorithm are (at most, in the first case)

MSE = O( Σ_{p=0}^{n*} ρ^p / N_p + ρ^{2n*} )

and

COST = O( Σ_{p=0}^{n*} N_p ).   (3)

Let ϵ > 0 be given. To achieve an MSE of O(ϵ^2) we can choose n* = O(log ϵ^{−1}) (here we of course mean ⌈·⌉, but this is omitted for simplicity) and N_p = O(ϵ^{−2} ρ^{p/2}), which for any ρ ∈ (0, 1) yields that the associated cost is O(ϵ^{−2}). If one just approximates π_n(φ) using

(1/N) Σ_{i=1}^{N} φ(X^i), X^i i.i.d. from π_n,

then, to achieve an MSE of O(ϵ^2), the cost would be O(nϵ^{−2}), which is considerably larger if n is large. That is, the cost of the ML approach is essentially O(1) w.r.t. n. If one stops at n* and uses the single-level estimate

(1/N) Σ_{i=1}^{N} φ(F_{n*}^{−1}(U^i)),

to achieve an MSE of O(ϵ^2) the cost is O(ϵ^{−2} log(ϵ^{−1})). A similar approach can show that these results remain true when smoothing for φ(x_{0:k}) with k fixed (and hence k ≪ n). The strategy of choosing n* and N_p detailed above is the one used throughout the paper. Note that in practice we do not know ρ, so we choose a value close to one, which should lead to an n* which is large enough. This is also the reason for setting N_p = O(ϵ^{−2} ρ^{p/2}) and not O(ϵ^{−2} ρ^p), say.
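The following sketch gives one plausible instantiation of this schedule, assuming the level-p variance decays like ρ^p as in Theorem 2.1; the rounding conventions are illustrative.

```python
import math

def mlmc_schedule(eps, rho):
    """Choose n* = O(log(1/eps)) to control the bias and allocate
    N_p proportional to eps**-2 * rho**(p/2); the geometric series in
    rho**(p/2) then keeps the total cost at O(eps**-2)."""
    n_star = math.ceil(math.log(1.0 / eps) / math.log(1.0 / rho))
    N = [max(1, math.ceil(eps ** -2 * rho ** (p / 2.0)))
         for p in range(n_star + 1)]
    return n_star, N, sum(N)  # sum(N) is the cost, O(eps^-2)

n_star, N, cost = mlmc_schedule(eps=0.01, rho=0.9)
```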

It is remarked that the compactness of 𝖷 could be removed by using Kellerer's extension of the Kantorovich–Rubinstein theorem (see [6] for a summary) and then, given that the latter theory is applicable, showing that there exists a C < +∞ such that for any p ≥ 1

sup_φ |π_p(φ) − π_{p−1}(φ)| ≤ C ρ^p,

where the supremum is over the collection of functions φ such that |φ(x) − φ(y)| ≤ |x − y| for every (x, y) ∈ 𝖷². This can be achieved using the techniques in [10]. Such an extension is mainly of a technical nature and is not required in the continuing exposition. We now establish that the construction here can be extended to the case d > 1.

2.2 Case d > 1

We consider the Knothe–Rosenblatt rearrangement, which is assumed to exist (see e.g. [17]). For simplicity of notation, we set 𝖷 = S^d for some compact S ⊂ ℝ. Denote by F_{p,j}(· | x_{1:j−1}) the conditional CDF of the j-th co-ordinate given the first j − 1 co-ordinates, with F_{p,1} the CDF of the first co-ordinate; note that here we are dealing with the d-dimensional co-ordinate at time zero and we are conditioning on the first j − 1 of these dimensions. Then, to approximate π_p(φ) − π_{p−1}(φ), sample U_p^i = (U_{p,1}^i, ..., U_{p,d}^i) ~ 𝒰_{[0,1]^d} i.i.d. for i ∈ {1, ..., N_p}, and set X_{p,1}^i = F_{p,1}^{−1}(U_{p,1}^i) and X_{p,j}^i = F_{p,j}^{−1}(U_{p,j}^i | X_{p,1:j−1}^i) for j ∈ {2, ..., d}, and similarly at level p − 1 with the same uniforms. Then we have the estimate for π_p(φ) − π_{p−1}(φ)

(1/N_p) Σ_{i=1}^{N_p} [φ(X_p^i) − φ(X_{p−1}^i)],

where, for ease of notation, we have set X_p^i = (X_{p,1}^i, ..., X_{p,d}^i) (resp. X_{p−1}^i) for the sample at level p (resp. p − 1) built from the common uniform U_p^i. We have the following result, whose proof and assumptions are in the appendix.

Theorem 2.2.

Assume (A1-A2). Then there exist C < +∞ and ρ ∈ (0, 1) such that for any p ≥ 1, N_p ≥ 1, φ ∈ B_b(𝖷) ∩ Lip(𝖷), we have

E[ ( (1/N_p) Σ_{i=1}^{N_p} [φ(X_p^i) − φ(X_{p−1}^i)] − {π_p(φ) − π_{p−1}(φ)} )² ] ≤ C ρ^p / N_p.
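As an illustration of the sampling mechanism behind Theorem 2.2, here is a minimal sketch (not from the article) for d = 2; `F1_inv` and `F2_inv_cond` are hypothetical callables for the inverse marginal CDF of the first co-ordinate and the inverse conditional CDF of the second given the first.

```python
import numpy as np

def kr_sample(F1_inv, F2_inv_cond, u):
    """Knothe-Rosenblatt map in d = 2: push u_1 through the inverse
    marginal CDF and u_2 through the inverse conditional CDF."""
    x1 = F1_inv(u[:, 0])            # x_1 = F_1^{-1}(u_1)
    x2 = F2_inv_cond(u[:, 1], x1)   # x_2 = F_{2|1}^{-1}(u_2 | x_1)
    return np.stack([x1, x2], axis=-1)

def coupled_difference_kr(phi, kr_p, kr_pm1, N_p, rng):
    """Level-p difference estimator: the same uniforms are pushed
    through the Knothe-Rosenblatt maps of pi_p and pi_{p-1}."""
    u = rng.uniform(size=(N_p, 2))
    return np.mean(phi(kr_p(u)) - phi(kr_pm1(u)))
```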

We end this section with some remarks. Firstly, the MLMC strategy could be debiased w.r.t. the time parameter using the trick in [16], which is a straightforward extension; one minor issue with this methodology is that the variance can blow up in some scenarios. Secondly, the idea of using the approach in [16] when approximating smoothing expectations has been adopted in [9]. The authors use a conditional version of the coupled particle filter (e.g. [3, 11]) to couple smoothers, versus the optimal Wasserstein coupling. The goal in [9] is unbiased estimation, which is complementary to the ideas in this article, where we focus upon reducing the cost of large-lag smoothing.

3 Transport methodology

3.1 Standard Approach

The basic principle of the transport methodology introduced in [17] is to determine a mapping T relating a base distribution η, e.g. the normal distribution, to a potentially sophisticated target distribution π related to the problem of interest. The distribution η should be easy to sample from so that, given the map T, we can obtain samples from π by simply mapping samples from η via T. More precisely, the considered mapping is characterised by

T_# η = π,

that is, the push-forward distribution of η by T is π. Such a mapping can be approximated using deterministic or stochastic optimisation methods. However, the underlying optimisation problem is only amenable when the space on which π is defined is of low dimension. This is not the case in general for the smoothing distributions introduced in the previous sections, especially as the number of observations increases. This is addressed in [17] by identifying the dependence structure between the random variables of interest. In particular, for a hidden Markov model, it is possible to decompose the problem into transport maps of a fixed, low dimension, which does not depend on the number of observations that define the smoother. The problem at time k can be solved by introducing a lower-triangular mapping of the form

T_k(x_{k−1}, x_k) = (T_k^1(x_{k−1}), T_k^2(x_{k−1}, x_k)),

which will transform the two-dimensional base distribution into a target distribution π̄_k related to the considered hidden Markov model, as detailed below. This target distribution can be expressed as

π̄_k(x_{k−1}, x_k) ∝ p(x_{k−1} | y_{0:k−1}) f(x_{k−1}, x_k) g(x_k, y_k)

for any k ≥ 1, which can be seen to be the 1-lag smoother. When k = 0, we simply define π̄_0(x_0) ∝ μ(x_0) g(x_0, y_0), with μ the initial distribution. The base distribution η̄_k (resp. η̄_0) is the standard normal distribution of dimension 2 (resp. 1). The mapping T_k can be embedded into the (n + 1)-dimensional identity mapping as

T̃_k(x_0, ..., x_n) = (x_0, ..., x_{k−2}, T_k(x_{k−1}, x_k), x_{k+1}, ..., x_n)^⊤,

with ⊤ denoting the matrix transposition. It follows that the composition T of the embedded mappings T̃_0, ..., T̃_n is the map such that the pushforward T_# η is equal to the probability density function of the smoother at time n. Obtaining samples from the smoothing distribution is then straightforward: it suffices to sample from η and to map the obtained sample via T.
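A minimal illustration of the push-forward principle follows, with an affine map transporting a standard normal base to a Gaussian target; for the smoother the map must instead be approximated numerically, as discussed next.

```python
import numpy as np

# If eta = N(0,1) and the target is pi = N(mu, sigma^2), the monotone
# transport map satisfying T_# eta = pi is affine. The values of mu and
# sigma below are purely illustrative.
def T(u, mu=1.0, sigma=0.5):
    return mu + sigma * u

rng = np.random.default_rng(1)
base_samples = rng.standard_normal(10_000)  # samples from eta
target_samples = T(base_samples)            # samples from pi = T_# eta
```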

Even in low dimension, the optimisation problem underlying the computation of the transport maps of interest is not trivial. One first has to consider an appropriate parametrisation of these maps, e.g. via polynomial representations. The parameters of the considered representation then have to be determined via the following optimisation problem:

min_T E_η[ −log π̄(T(U)) − log det ∇T(U) ],   (4)

where the minimum is taken over the set of monotone increasing lower-triangular maps. This minimisation problem can be solved numerically by considering a parametrised family of maps and deterministic or stochastic optimisation methods. Let T be any acceptable map in the minimisation (4) and denote by T^j the j-th component of T, which only depends on the first j variables; then the considered parametrisation can be expressed as

T^j(x_1, ..., x_j) = a_j(x_{1:j−1}) + ∫_0^{x_j} b_j(x_{1:j−1}, t)² dt,

for some real-valued functions a_j and b_j on ℝ^{j−1} and ℝ^j respectively. It is assumed that the functions a_j and b_j are built from Hermite Probabilists' functions extended with constant and linear components, except in the last variable of b_j, where a Hermite Probabilists' function extended only with a constant component is used. In particular, these functions take the form

a_j(x) = Σ_{i=0}^{o} α_{j,i} Φ_i(x),   b_j(x, t) = Σ_{i=0}^{o} β_{j,i} Ψ_i(x, t),

with o the map order, with {α_{j,i}} and {β_{j,i}} some collections of real coefficients, and with Φ_i and Ψ_i basis functions based on the above-mentioned Hermite Probabilists' functions. The expectation in (4) is then approximated using a Gauss quadrature of a fixed order in each dimension, and the minimisation is solved via the Newton algorithm, using the conjugate-gradient method for each step.
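Below is a hedged one-dimensional sketch of this variational problem. It assumes a slightly different positive-derivative parametrisation than the integrated-squared form above, T′(u) = exp(b + cu), so that monotonicity holds by construction and the integral is closed-form; it also uses a Gaussian stand-in target, Gauss–Hermite quadrature for the expectation, and BFGS in place of the Newton/conjugate-gradient scheme described in the text.

```python
import numpy as np
from scipy.optimize import minimize

# Probabilists' Gauss-Hermite nodes/weights for expectations over N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)  # normalise so weights sum to 1

def log_target(x, mu=1.0, sigma=0.5):  # illustrative Gaussian target
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def objective(theta):
    """KL-type objective E_eta[-log pi(T(U)) - log T'(U)], with the
    monotone parametrisation T'(u) = exp(b + c*u), i.e.
    T(u) = a + exp(b) * (exp(c*u) - 1) / c."""
    a, b, c = theta
    c = c if abs(c) > 1e-8 else 1e-8  # guard the c -> 0 limit
    T = a + np.exp(b) * (np.exp(c * nodes) - 1.0) / c
    log_dT = b + c * nodes            # log T'(u)
    return np.sum(weights * (-log_target(T) - log_dT))

theta_hat = minimize(objective, x0=np.zeros(3), method="BFGS").x
# For this target the optimum approaches the exact affine transport
# T(u) = 1 + 0.5*u, i.e. a ~ 1, exp(b) ~ 0.5, c ~ 0.
```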

The desired function T_k can be recovered through the relation

T_k = P^{−1} ∘ S_k ∘ P,

where S_k is the map computed in the minimisation (4) and P is the linear map corresponding to a permutation matrix of the co-ordinates, which verifies P^{−1} = P^⊤.

3.2 Fixed-Point Smoothing with Transport Maps

The approach described in Section 3.1 allows for obtaining samples from the distribution of X_0 given y_{0:n} by simply retaining the first component of samples from η after mapping them through T. However, the computational cost associated with the mapping of samples by T increases with n, making the complexity of the method quadratic in the time horizon.

This can however be addressed by considering x_0 as a parameter and by only propagating the transport map corresponding to the posterior distribution of x_0; this approach has been suggested in [17, Section 7.4]. We assume in the remainder of this section that observations start at time step 1 instead of 0. When considering x_0 as a parameter, the elementary transport maps at time k act on x_0 together with (x_{k−1}, x_k), and the corresponding target distributions become the 1-lag smoother augmented by the posterior distribution of x_0 given y_{1:k}, for any k ≥ 1. The transport map associated with the posterior distribution of x_0 is then obtained by composing the relevant components of the elementary maps. By recursively approximating this composition by a single map, the computation of samples from the posterior distribution of x_0 becomes linear in time. The pseudo-code for this approach is given in Algorithm 1.

input: ϵ, ρ, φ
output: estimate of π_{n*}(φ)
n* ← ⌈log ϵ / log ρ⌉
for p = 0, ..., n* do
     if p = 0 then
         form the initial target distribution for x_0
     else
         form the target distribution at time p, in which x_0 enters through the second component of the previous map
     end if
     compute the transport map T_p from the base distribution to the law of (x_0, x_p), based on this target
     compute the number of samples N_p ← ⌈ϵ^{−2} ρ^{p/2}⌉
     for i = 1, ..., N_p do
         sample u_p^i from the base distribution
         if p = 0 then
             map the first component of u_p^i through T_0
         else
             map u_p^i through both T_p and T_{p−1}, retaining the first components
         end if
     end for
     update the estimate with the level-p Monte Carlo average of the coupled differences
end for
Algorithm 1 Multilevel transport
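A Python skeleton of this multilevel transport loop follows, under stated assumptions: `fit_map` and `compose` are hypothetical stand-ins for the transport-map fitting and map-composition machinery of [17], the base distribution is taken one-dimensional for readability, and each map is treated as a callable returning the time-0 component.

```python
import numpy as np

def multilevel_transport(eps, rho, phi, y, fit_map, compose, rng):
    """Sketch of the multilevel transport estimator of Algorithm 1.

    fit_map(p, M, y): hypothetical routine returning a transport map for
        the level-p target given the running composition M.
    compose(M, T): hypothetical routine approximating the composition of
        two maps by a single map.
    """
    n_star = int(np.ceil(np.log(1.0 / eps) / np.log(1.0 / rho)))
    prev, estimate = None, 0.0
    for p in range(n_star + 1):
        T = fit_map(p, prev, y)                     # map for the level-p target
        M = T if prev is None else compose(prev, T)
        N_p = max(1, int(np.ceil(eps ** -2 * rho ** (p / 2.0))))
        u = rng.standard_normal(N_p)                # common base samples
        if prev is None:
            estimate += np.mean(phi(M(u)))          # plain term at level 0
        else:
            # coupled difference: same base samples through both maps
            estimate += np.mean(phi(M(u)) - phi(prev(u)))
        prev = M
    return estimate
```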

4 Case Studies

4.1 Linear Gaussian

4.1.1 Theoretical Result

The results in Section 2 do not apply to the linear Gaussian case, since the state space is not compact. We extend our results to this scenario. We assume that the dynamical and observation models are one-dimensional as well as linear and Gaussian, such that the state and observation random variables at time n can be defined as

X_n = a X_{n−1} + σ V_n  and  Y_n = X_n + W_n,

for some a ∈ (−1, 1) and some σ > 0, with V_n and W_n i.i.d. standard normal random variables. We have the following result, whose proof is in the appendix.

Theorem 4.1.

Assuming that the variance of the filter is bounded away from zero for all p large enough, it holds that there exist C < +∞ and ρ ∈ (0, 1) such that

E[ ( (1/N_p) Σ_{i=1}^{N_p} [φ(F_p^{−1}(U_p^i)) − φ(F_{p−1}^{−1}(U_p^i))] − {π_p(φ) − π_{p−1}(φ)} )² ] ≤ C ρ^p / N_p.

Theorem 4.1 shows that, under assumptions on the parameters of the model, the variance of the approximated multilevel term at level p tends to 0 exponentially fast in p, and with an order of 1/N_p in the number of samples. This theorem also indicates that the behaviour depends on all the parameters in the model, although implicitly through ρ; for specific parameter values, ρ can be made explicit in the above expression. The assumption about the variance of the filter can be justified in terms of reachability and observability of the system [13].

This rate can be extremely beneficial for the proposed approach when σ is large and a is small; however, it can also make it of little use in the opposite case. This does not come as a surprise, since a large σ means that the initial condition is quickly forgotten, so that obtaining a high number of samples from the smoother for p large would be inefficient, whereas small values of σ incur a much higher dependency between the initial state and the observations at different time steps.
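Because everything here is Gaussian, this decay can be checked directly: the time-0 smoother given y_0, ..., y_p is 𝒩(m_p, s_p) by Gaussian conditioning, and under the inverse-CDF coupling the second moment of the level-p difference is (m_p − m_{p−1})² + (√s_p − √s_{p−1})². The sketch below uses illustrative parameter values, not those of the experiments.

```python
import numpy as np

# AR(1) state X_n = a*X_{n-1} + sigma_x*V_n observed as Y_n = X_n +
# sigma_y*W_n, started at stationarity; parameter values are illustrative.
a, sigma_x, sigma_y, P = 0.8, 1.0, 1.0, 30
s0 = sigma_x ** 2 / (1 - a ** 2)  # stationary variance of the state
rng = np.random.default_rng(2)

x = np.zeros(P + 1)
x[0] = rng.normal(0.0, np.sqrt(s0))
for n in range(1, P + 1):
    x[n] = a * x[n - 1] + sigma_x * rng.standard_normal()
y = x + sigma_y * rng.standard_normal(P + 1)

def smoother_at_zero(p):
    """Mean and variance of X_0 given y_0,...,y_p by Gaussian
    conditioning, using Cov(X_j, X_k) = s0 * a^{|j-k|} at stationarity."""
    k = np.arange(p + 1)
    C_yy = s0 * a ** np.abs(k[:, None] - k[None, :]) + sigma_y ** 2 * np.eye(p + 1)
    c_0y = s0 * a ** k                  # Cov(X_0, Y_k)
    w = np.linalg.solve(C_yy, c_0y)
    return w @ y[:p + 1], s0 - w @ c_0y

for p in range(1, P + 1):
    m_prev, s_prev = smoother_at_zero(p - 1)
    m_cur, s_cur = smoother_at_zero(p)
    # second moment of F_p^{-1}(U) - F_{p-1}^{-1}(U); decays with p
    print(p, (m_cur - m_prev) ** 2 + (np.sqrt(s_cur) - np.sqrt(s_prev)) ** 2)
```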

4.1.2 Numerical Results

The performance of the proposed method is first assessed in the linear-Gaussian case, where an analytical solution of the fixed-point smoothing problem is available; this solution is known as the Rauch–Tung–Striebel smoother [15]. More specifically, we consider a model of the form introduced in Section 4.1.1 with fixed values of the parameters a and σ. The transport maps of interest are approximated to a fixed order, the expectation in (4) is approximated by Gauss quadrature of a fixed order, and the minimisation is performed with a fixed tolerance. The number of samples at each time step, as well as the time horizon n*, is computed according to the method proposed in Section 2.1 with different values for the parameter ϵ. The performance of the proposed method is compared against the PaRIS algorithm introduced in [14], using the observations y_{0:n} with a varying number of samples N and a fixed number of terms for the propagation of the smoothed estimate. In the simulations, it always holds that n ≥ n* to ensure the fairness of the comparison. The criterion for performance assessment is the MSE at the final time step, defined as

MSE = (1/M) Σ_{j=1}^{M} ( φ̂_j − φ* )²,

where M is the number of Monte Carlo simulations, φ̂_j is the estimate of π_{n*}(φ) in the j-th simulation (with n* = n for the PaRIS algorithm) and φ* is the corresponding estimate given by the Rauch–Tung–Striebel smoother.

The values of the MSE at the final time obtained in simulations are shown in Figure 1, where the proposed approach displays smaller errors than the PaRIS algorithm for different values of ϵ and N. The advantage of representing the probability distributions of interest with transport maps is that the computational effort required to obtain a sample is extremely limited once the maps have been determined. For instance, the highest and lowest considered values of ϵ in Figure 1 correspond to very different numbers of samples, yet induce a comparatively small increase in computational time.

Figure 1: Performance of the proposed method against the PaRIS algorithm with the linear-Gaussian model, averaged over 100 Monte Carlo simulations. The reference for the computation of the MSE is the Rauch-Tung-Striebel smoother. The displayed cost for the multilevel approach includes the computation of the transport maps.

4.2 Stochastic Volatility Model

In order to further demonstrate the performance of the proposed approach, the assessment conducted in the previous section is applied to a non-linear case: a stochastic volatility model in which the latent log-volatility follows a linear-Gaussian autoregression and the observation noise is scaled by the exponential of the latent state, with fixed values of the model parameters. In the absence of an analytical solution, the reference is determined by the PaRIS algorithm with a large number of samples. Since the observation process of this model is generally less informative than that of the linear-Gaussian model, the PaRIS algorithm is given the observations up to a larger final time step and, similarly, it is ensured that n ≥ n* holds with some margin for the proposed approach. The other parameters are the same as in the linear-Gaussian case.
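For concreteness, here is a sketch of simulating a standard stochastic volatility model of this type; the parameter values are illustrative and not those used in the experiments.

```python
import numpy as np

# Latent log-volatility X_n = alpha*X_{n-1} + sigma*V_n and observations
# Y_n = beta*exp(X_n/2)*W_n, with V_n, W_n i.i.d. N(0,1).
alpha, sigma, beta, n = 0.9, 0.4, 0.6, 200
rng = np.random.default_rng(3)

x = np.zeros(n + 1)
x[0] = rng.normal(0.0, sigma / np.sqrt(1 - alpha ** 2))  # stationary start
for k in range(1, n + 1):
    x[k] = alpha * x[k - 1] + sigma * rng.standard_normal()
y = beta * np.exp(x / 2.0) * rng.standard_normal(n + 1)
```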

The MSE at the final time obtained for the two considered methods is shown in Figure 2. Once again, the error for the proposed approach is lower than for the PaRIS algorithm, although the difference is less significant. In particular, the gain in accuracy between the lowest and the second-lowest value of ϵ seems to indicate that simply increasing the number of samples would not reduce the error much further. However, increasing the order of the transport maps or decreasing the tolerance in the optimisation could further reduce the error, although with a significantly higher computational cost.

Figure 2: Performance of the proposed method against the PaRIS algorithm with the stochastic volatility model, averaged over 100 Monte Carlo simulations. The reference for the computation of the MSE is the PaRIS algorithm with a large number of samples. The displayed cost for the multilevel approach includes the computation of the transport maps.

The computational costs obtained for the two models considered in simulations are shown in Figure 3 for different values of ϵ. These results confirm the O(ϵ^{−2}) order that was predicted in Section 2.

Figure 3: Computational cost as a function of ϵ, averaged over 100 Monte Carlo simulations. The fitted curves are based on a function of the form ϵ ↦ aϵ^{−2} + b, with a and b some parameters, which is justified by the form of the cost (3).

5 Summary

In this article we have considered large-lag smoothing for HMMs, using the MLMC method. We showed that, under an optimal coupling, when the hidden state is of dimension 1 or higher but lies on a compact space, the cost can essentially be decoupled from the time parameter of the smoother. As this optimal method is not possible in practice, we showed how it can be approximated and established numerically that our theory still holds in this approximated case. Several extensions of the work are possible. Firstly, to extend our theoretical results to the case of the approximated coupling. Secondly, to investigate whether the coupling used in [9] can also yield, theoretically, the same improvements that have been seen in the work in this article.

Acknowledgements

All authors were supported by Singapore Ministry of Education AcRF tier 1 grant R-155-000-182-114. AJ is affiliated with the Risk Management Institute, OR and analytics cluster and the Center for Quantitative Finance at NUS.

Appendix A Variance Proofs

We write the density (or probability measure) of the smoother at time n, on the co-ordinate at time zero, as π_n, and the associated CDF as F_n (with generalized inverse F_n^{−1}). Recall that throughout, 𝖷 is a compact subspace of ℝ. Throughout, the observations are fixed and often omitted from the notations. The appendix gives our main assumptions, followed by a technical lemma (Lemma A.1), which features some technical results used in the proofs. Then the proof of Theorem 2.1 is given. The appendix is concluded by a second technical lemma (Lemma A.2), followed by the proof of Theorem 2.2.

  • (A1) There exists C ≥ 1 such that C^{−1} ≤ f(x, x′) ≤ C for every (x, x′) ∈ 𝖷².

  • (A2) There exists C ≥ 1 such that C^{−1} ≤ g(x, y) ≤ C for every x ∈ 𝖷 and every fixed observation y.

Below