Mixed Membership Models for Time Series

In this article we discuss some of the consequences of the mixed membership perspective on time series analysis. In its most abstract form, a mixed membership model aims to associate an individual entity with some set of attributes based on a collection of observed data. Although much of the literature on mixed membership models considers the setting in which exchangeable collections of data are associated with each member of a set of entities, it is equally natural to consider problems in which an entire time series is viewed as an entity and the goal is to characterize the time series in terms of a set of underlying dynamic attributes or "dynamic regimes". Indeed, this perspective is already present in the classical hidden Markov model, where the dynamic regimes are referred to as "states", and the collection of states realized in a sample path of the underlying process can be viewed as a mixed membership characterization of the observed time series. Our goal here is to review some of the richer modeling possibilities for time series that are provided by recent developments in the mixed membership framework.




1 Introduction

In this article we discuss some of the consequences of the mixed membership perspective on time series analysis. In its most abstract form, a mixed membership model aims to associate an individual entity with some set of attributes based on a collection of observed data. For example, a person (entity) can be associated with various defining characteristics (attributes) based on observed pairwise interactions with other people (data). Likewise, one can describe a document (entity) as comprised of a set of topics (attributes) based on the observed words in the document (data). Although much of the literature on mixed membership models considers the setting in which exchangeable collections of data are associated with each member of a set of entities, it is equally natural to consider problems in which an entire time series is viewed as an entity and the goal is to characterize the time series in terms of a set of underlying dynamic attributes or dynamic regimes. Indeed, this perspective is already present in the classical hidden Markov model (Rabiner, 1989) and switching state-space model (Kim, 1994), where the dynamic regimes are referred to as “states,” and the collection of states realized in a sample path of the underlying process can be viewed as a mixed membership characterization of the observed time series. Our goal here is to review some of the richer modeling possibilities for time series that are provided by recent developments in the mixed membership framework.

Much of our discussion centers around the fact that while in classical time series analysis it is commonplace to focus on a single time series, in mixed membership modeling it is rare to focus on a single entity (e.g., a single document); rather, the goal is to model the way in which multiple entities are related according to the overlap in their pattern of mixed membership. Thus we take a nontraditional perspective on time series in which the focus is on collections of time series. Each individual time series may be characterized as proceeding through a sequence of states, and the focus is on relationships in the choice of states among the different time series.

As an example that we review later in this article, consider a multivariate time series that arises when position and velocity sensors are placed on the limbs and joints of a person who is going through an exercise routine. In the specific dataset that we discuss, the time series can be segmented into types of exercise (e.g., jumping jacks, touch-the-toes, and twists). Each person may select a subset from a library of possible exercise types for their individual routine. The goal is to discover these exercise types (i.e., the “behaviors” or “dynamic regimes”) and to identify which person engages in which behavior, and when. Discovering and characterizing “jumping jacks” in one person’s routine should be useful in identifying that behavior in another person’s routine. In essence, we would like to implement a combinatorial form of shrinkage involving subsets of behaviors selected from an overall library of behaviors.

Another example arises in genetics, where mixed membership models are referred to as “admixture models” (Pritchard et al., 2000). Here the goal is to model each individual genome as a mosaic of marker frequencies associated with different ancestral genomes. If we wish to capture the dependence of nearby markers along the genome, then the overall problem is that of capturing relationships among the selection of ancestral states along a collection of one-dimensional spatial series.

One approach to problems of this kind involves a relatively straightforward adaptation of hidden Markov models or other switching state-space models into a Bayesian hierarchical model: transition and emission (or state-space) parameters are chosen from a global prior distribution and each individual time series either uses these global parameters directly or perturbs them further. This approach in essence involves using a single global library of states, with individual time series differing according to their particular random sequence of states. This approach is akin to the traditional Dirichlet-multinomial framework that is used in many mixed-membership models. An alternative is to make use of a beta-Bernoulli framework in which each individual time series is modeled by first selecting a subset of states from a global library and then drawing state sequences from a model defined on that particular subset of states. We will overview both of these approaches in the remainder of the article.

While much of our discussion is agnostic to the distinction between parametric and nonparametric models, our overall focus is on the nonparametric case. This is because the model choice issues that arise in the multiple time series setting can be daunting, and the nonparametric framework provides at least some initial control over these issues. In particular, in a classical state-space setting we would need to select the number of states for each individual time series, and do so in a manner that captures partial overlap in the selected subsets of states among the time series. The nonparametric approach deals with these issues as part of the model specification rather than as a separate model choice procedure.

The remainder of the article is organized as follows. In Section 2.1, we review a set of time series models that form the building blocks for our mixed membership models. The mixed membership analogy for time series models is aided by relating to a canonical mixed membership model: latent Dirichlet allocation (LDA), reviewed in Section 2.2. Bayesian nonparametric variants of LDA are outlined in Section 2.3. Building on this background, in Section 3 we turn our focus to mixed membership in time series. We first present Bayesian parametric and nonparametric models for single time series in Sections 3.1 and 3.2, and then for collections of time series in Section 3.3. Section 4 contains a brief survey of related Bayesian and Bayesian nonparametric time series models.

2 Background

In this section we provide a brief introduction to some basic terminology from time series analysis. We also overview some of the relevant background from mixed membership modeling, both parametric and nonparametric.

2.1 State-Space Models

The autoregressive (AR) process is a classical model for time series analysis that we will use as a building block. An AR model assumes that each observation is a function of some fixed number of previous observations plus an uncorrelated innovation. Specifically, a linear, time-invariant AR model of order $r$ has the following form:

$$y_t = \sum_{i=1}^{r} a_i\, y_{t-i} + e_t, \qquad (1)$$

where $\{y_t\}$ represents a sequence of equally spaced observations, $\{e_t\}$ the uncorrelated innovations, and $\{a_i\}$ the time-invariant autoregressive parameters. Often one assumes normally distributed innovations $e_t \sim \mathcal{N}(0, \sigma^2)$, further implying that the innovations are independent.
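The AR recursion above can be simulated directly. The following sketch (the order, coefficients, and noise level are illustrative choices, not values from the text) generates a draw from an AR(2) process with Gaussian innovations:

```python
import numpy as np

def simulate_ar(a, sigma, T, rng):
    """Simulate y_t = sum_i a[i] * y_{t-1-i} + e_t with e_t ~ N(0, sigma^2)."""
    r = len(a)
    y = np.zeros(T + r)  # the first r entries serve as zero initial conditions
    for t in range(r, T + r):
        y[t] = sum(a[i] * y[t - 1 - i] for i in range(r)) + rng.normal(0.0, sigma)
    return y[r:]

rng = np.random.default_rng(0)
y = simulate_ar(a=[0.5, -0.3], sigma=1.0, T=500, rng=rng)
```

Here the coefficients (0.5, -0.3) keep the process stable, so the sample path remains bounded.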

A more general formulation is that of linear state space models, sometimes referred to as dynamic linear models. This formulation, which is closely related to autoregressive moving average processes, assumes that there exists an underlying state vector $x_t$ such that the past and future of the dynamical process are conditionally independent. A linear time-invariant state space model is given by

$$x_t = A x_{t-1} + e_t, \qquad y_t = C x_t + w_t, \qquad (2)$$

where $e_t$ and $w_t$ are independent, zero-mean Gaussian noise processes with covariances $\Sigma$ and $R$, respectively. Here, we assume a vector-valued process. One could likewise consider a vector-valued AR process, as we do in Section 3.1.
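A linear-Gaussian state space model of this form can be simulated forward in time as follows; the specific matrices and noise covariances below are arbitrary illustrative choices:

```python
import numpy as np

def simulate_lds(A, C, Q, R, T, rng):
    """Simulate x_t = A x_{t-1} + e_t,  y_t = C x_t + w_t,
    with e_t ~ N(0, Q) and w_t ~ N(0, R)."""
    n, m = A.shape[0], C.shape[0]
    x = np.zeros((T, n))
    y = np.zeros((T, m))
    for t in range(T):
        prev = x[t - 1] if t > 0 else np.zeros(n)
        x[t] = A @ prev + rng.multivariate_normal(np.zeros(n), Q)
        y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(m), R)
    return x, y

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # stable state transition matrix
C = np.array([[1.0, 0.0]])              # observe only the first state component
x, y = simulate_lds(A, C, Q=0.1 * np.eye(2), R=0.5 * np.eye(1), T=200, rng=rng)
```

Only the first state component is observed in this sketch; the second influences the observations indirectly through the state dynamics.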

There are several ways to move beyond linear state space models. One approach is to consider smooth nonlinear functions in place of the matrix multiplication in linear models. Another approach, which is our focus here, is to consider regime-switching models based on a latent sequence of discrete states $\{z_t\}$. In particular, we consider Markov switching processes, where the state sequence $z_t$ is modeled as Markovian. If the entire state of the system is a discrete random variable $z_t$, and the observations $\{y_t\}$ are modeled as being conditionally independent given the discrete state, then we are in the realm of hidden Markov models (HMMs) (Rabiner, 1989). Details of the HMM formulation are expounded upon in Section 3.1.

It is also useful to consider hybrid models in which the state contains both discrete and continuous components. We will discuss an important example of this formulation—the autoregressive HMM—in Section 3.1. Such models can be viewed as a collection of AR models, one for each discrete state. We will find it useful to refer to the discrete states as “dynamic regimes” or “behaviors” in the setting of such models. Conditional on the value of a discrete state, the model does not merely produce independent observations, but exhibits autoregressive behavior.

2.2 Latent Dirichlet Allocation

In this section, we briefly overview the latent Dirichlet allocation (LDA) model (Blei et al., 2003) as a canonical example of a mixed membership model. We use the language of “documents,” “topics,” and “words.” In contrast to hard-assignment predecessors that assumed each document was associated with a single topic category, LDA aims to model each document as a mixture of topics. Throughout this article, when describing a mixed membership model, we seek to define some observed quantity as an entity that is allowed to be associated with, or have membership characterized by, multiple attributes. For LDA, the entity is a document and the attributes are a set of possible topics. Typically, in a mixed membership model each entity represents a set of observations and a key question is what structure is imposed on these observations. For LDA, each document is a collection of observed words and the model makes a simplifying exchangeability assumption in which the ordering of words is ignored.

Specifically, LDA associates each document $d$ with a latent distribution over the possible topics, $\pi^{(d)}$, and each topic $k$ is associated with a distribution over words in the vocabulary, $\theta_k$. Each word $w_i^{(d)}$ is then generated by first selecting a topic from the document-specific topic distribution, and then selecting a word from the topic-specific word distribution.

Formally, the standard LDA model with $K$ topics, $D$ documents, and $N_d$ words per document $d$ is given as

$$\pi^{(d)} \sim \mathrm{Dir}(\alpha, \ldots, \alpha), \quad \theta_k \sim \mathrm{Dir}(\lambda, \ldots, \lambda), \quad z_i^{(d)} \sim \pi^{(d)}, \quad w_i^{(d)} \sim \theta_{z_i^{(d)}}. \qquad (3)$$

Here $z_i^{(d)}$ is a topic indicator variable associated with observed word $w_i^{(d)}$, indicating which topic $k$ generated this $i$th word in document $d$. In expectation, for each document $d$ we have $E[\pi_k^{(d)}] = 1/K$. That is, the expected topic proportions for each document are identical a priori.
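The LDA generative process can be sketched directly from this specification; the corpus sizes and hyperparameter values below are arbitrary illustrative choices:

```python
import numpy as np

def generate_lda_corpus(K, V, D, N, alpha, lam, rng):
    """Sample a toy corpus from the LDA generative process:
    theta_k ~ Dir(lam), pi_d ~ Dir(alpha), z ~ pi_d, w ~ theta_z."""
    topics = rng.dirichlet(lam * np.ones(V), size=K)   # topic-specific word dists
    docs, topic_props = [], []
    for _ in range(D):
        pi = rng.dirichlet(alpha * np.ones(K))         # document-topic distribution
        z = rng.choice(K, size=N, p=pi)                # per-word topic indicators
        words = np.array([rng.choice(V, p=topics[k]) for k in z])
        docs.append(words)
        topic_props.append(pi)
    return docs, topic_props

rng = np.random.default_rng(2)
docs, props = generate_lda_corpus(K=5, V=50, D=10, N=100, alpha=0.5, lam=0.1, rng=rng)
```

A small Dirichlet hyperparameter (here 0.5 for documents, 0.1 for topics) concentrates each distribution on a few components, producing the sparse mixtures typical of fitted LDA models.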

2.3 Bayesian Nonparametric Mixed Membership Models

The LDA model of Equation (3) assumes a finite number of topics $K$. Bayesian nonparametric methods allow for extensions to models with an unbounded number of topics. That is, in the mixed membership analogy, each entity can be associated with a potentially countably infinite number of attributes. We review two such approaches: one based on the hierarchical Dirichlet process (Teh et al., 2006) and the other based on the beta process (Hjort, 1990; Thibaux and Jordan, 2007). In the latter case, the association of entities with attributes is directly modeled as sparse.

Hierarchical Dirichlet Process Topic Models

To allow for a countably infinite collection of topics, in place of the finite-dimensional topic distributions $\pi^{(d)}$ as specified in Equation (3), one wants to define distributions whose support lies on a countably infinite set of topic parameters.

The Dirichlet process (DP), denoted by $\mathrm{DP}(\gamma, H)$, provides a distribution over countably infinite discrete probability measures

$$G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k}, \qquad \theta_k \sim H, \qquad (4)$$

defined on a parameter space $\Theta$ with base measure $H$. The mixture weights are sampled via a stick-breaking construction (Sethuraman, 1994):

$$\beta_k = \nu_k \prod_{\ell=1}^{k-1} (1 - \nu_\ell), \qquad \nu_k \sim \mathrm{Beta}(1, \gamma). \qquad (5)$$

This can be viewed as dividing a unit-length stick into lengths given by the weights $\beta_k$: the $k$th weight is a random proportion $\nu_k$ of the remaining stick after the first $(k-1)$ weights have been chosen. We denote this distribution by $\beta \sim \mathrm{GEM}(\gamma)$. See Figure 1 for a pictorial representation of this process.
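In practice one works with a truncated version of the stick-breaking construction; a minimal sketch (the truncation level and concentration parameter are illustrative choices):

```python
import numpy as np

def stick_breaking(gamma, K_trunc, rng):
    """Truncated GEM(gamma) draw: beta_k = nu_k * prod_{l<k} (1 - nu_l),
    with nu_k ~ Beta(1, gamma)."""
    nu = rng.beta(1.0, gamma, size=K_trunc)
    # remaining[k] = length of the stick left before breaking off piece k
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - nu)[:-1]))
    return nu * remaining

rng = np.random.default_rng(3)
beta = stick_breaking(gamma=2.0, K_trunc=1000, rng=rng)
print(beta.sum())  # close to 1 for a large truncation level
```

Larger values of the concentration parameter spread mass over more sticks; smaller values concentrate it on the first few.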

Figure 1: Pictorial representation of the stick-breaking construction of the Dirichlet process.

Drawing indicators $z_i \sim \beta$, one can integrate out the underlying random stick-breaking measure and examine the predictive distribution of $z_i$ conditioned on a set of indicators $z_1, \ldots, z_{i-1}$ and the DP concentration parameter $\gamma$. The resulting sequence of partitions is described via the Chinese restaurant process (CRP) (Pitman, 2002), which provides insight into the clustering properties induced by the DP.

For the LDA model, recall that each $\theta_k$ is a draw from a Dirichlet distribution (here denoted generically by $H$) and defines a distribution over the vocabulary for topic $k$. To define a model for multiple documents, one might consider independently sampling $G_d \sim \mathrm{DP}(\gamma, H)$ for each document $d$, where each of these random measures is of the form of Equation (4). Unfortunately, the topic-specific word distribution for document $d$, $\theta_k^{(d)}$, is necessarily different from that of document $d'$, $\theta_k^{(d')}$, since each are independent draws from the base measure $H$. This is clearly not a desirable model—in a mixed membership model we want the parameter that describes each attribute (topic) to be shared between entities (documents).

Figure 2: Graphical model of the (a) HDP-based and (b) beta-process-based topic model. The HDP-LDA model specifies a global topic distribution $\beta \sim \mathrm{GEM}(\gamma)$ and draws document-specific topic distributions as $\pi^{(d)} \sim \mathrm{DP}(\alpha, \beta)$. Each word in document $d$ is generated by first drawing a topic-indicator $z_i^{(d)} \sim \pi^{(d)}$ and then drawing from the topic-specific word distribution: $w_i^{(d)} \sim \theta_{z_i^{(d)}}$. The standard LDA model arises as a special case when $\beta$ is fixed to a finite measure. The beta process model specifies a collection of sparse topic distributions. Here, the beta process measure $B$ is represented by its masses $\omega_k$ and locations $\theta_k$, as in Equation (8). The features are then conditionally independent draws $f_{dk} \sim \mathrm{Bernoulli}(\omega_k)$, and are used to define document-specific topic distributions $\pi^{(d)}$. Given the topic distributions, the generative process for the topic-indicators and words is just as in the HDP-LDA model.

One method of sharing parameters between documents while allowing for document-specific topic weights is to employ the hierarchical Dirichlet process (HDP) (Teh et al., 2006). The HDP defines a shared set of parameters by drawing $\theta_k$ independently from $H$. The weights are then specified as

$$\beta \mid \gamma \sim \mathrm{GEM}(\gamma), \qquad \pi^{(d)} \mid \alpha, \beta \sim \mathrm{DP}(\alpha, \beta). \qquad (6)$$

Coupling this prior to the likelihood used in the LDA model, we obtain a model that we refer to as HDP-LDA. See Figure 2(a) for a graphical model representation, and Figure 3 for an illustration of the coupling of document-specific topic distributions via the global stick-breaking distribution $\beta$. Letting $G_0 = \sum_k \beta_k \delta_{\theta_k}$ and $G_d = \sum_k \pi_k^{(d)} \delta_{\theta_k}$, one can show that the specification of Equation (6) is equivalent to defining a hierarchy of Dirichlet processes (Teh et al., 2006):

$$G_0 \mid \gamma, H \sim \mathrm{DP}(\gamma, H), \qquad G_d \mid \alpha, G_0 \sim \mathrm{DP}(\alpha, G_0). \qquad (7)$$

Thus the name hierarchical Dirichlet process. Note that there are many possible alternative formulations one could have considered to generate different countably infinite weights $\pi^{(d)}$ with shared atoms $\theta_k$. The HDP is a particularly simple instantiation of such a model that has appealing theoretical and computational properties due to its interpretation as a hierarchy of Dirichlet processes.

Figure 3: Illustration of the coupling of the document-specific topic distributions $\pi^{(d)}$ via the global stick-breaking distribution $\beta$. Each topic distribution has countably infinite support and, in expectation, $E[\pi^{(d)} \mid \beta] = \beta$.

Via the construction of Equation (6), we have that $E[\pi^{(d)} \mid \beta] = \beta$. That is, all of the document-specific topic distributions are centered around the same stick-breaking weights $\beta$.
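A truncated sketch of this hierarchy makes the sharing concrete: a global stick-breaking draw defines the centering weights, and each document-specific distribution is a DP draw around it, which at a finite truncation level reduces to a Dirichlet draw (this truncated-Dirichlet step is an approximation assumed by this sketch; parameter values are illustrative):

```python
import numpy as np

def hdp_topic_weights(gamma, alpha, D, K_trunc, rng):
    """Truncated HDP: global beta ~ GEM(gamma); each document's
    pi_d ~ DP(alpha, beta), approximated as pi_d ~ Dir(alpha * beta)."""
    nu = rng.beta(1.0, gamma, size=K_trunc)
    beta = nu * np.concatenate(([1.0], np.cumprod(1.0 - nu)[:-1]))
    beta /= beta.sum()  # renormalize the truncated global weights
    pis = rng.dirichlet(alpha * beta, size=D)  # document-specific topic weights
    return beta, pis

rng = np.random.default_rng(4)
beta, pis = hdp_topic_weights(gamma=5.0, alpha=10.0, D=20, K_trunc=100, rng=rng)
```

All rows of `pis` share the same support and are centered on `beta`, with the concentration parameter controlling how tightly each document's weights track the global weights.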

Beta-Bernoulli Process Topic Models

The HDP-LDA model defines countably infinite topic distributions $\pi^{(d)}$ in which every topic $k$ has positive mass (see Figure 3). This implies that each entity (document) is associated with infinitely many attributes (topics). In practice, however, for any finite-length document $d$ only a finite subset of the topics will be present. The HDP-LDA model implicitly provides such attribute counts through the assignment of words to topics via the indicator variables $z_i^{(d)}$.

As an alternative representation that more directly captures the inherent sparsity of association between documents and topics, one can consider feature-based Bayesian nonparametric variants of LDA via the beta-Bernoulli process, such as in the focused topic model of Williamson et al. (2010). (A precursor to this model was presented in the time series context by Fox et al. (2010), and is discussed in Section 3.3.) In such models, each document is endowed with an infinite-dimensional binary feature vector that indicates which topics are associated with the given document. In contrast to HDP-LDA, this formulation directly allows each document to be represented as a sparse mixture of topics. That is, there are only a few topics that have positive probability of appearing in any document.

Informally, one can think of the beta process (BP) (Hjort, 1990; Thibaux and Jordan, 2007) as defining an infinite set of coin-flipping probabilities and a Bernoulli process realization as corresponding to the outcome from an infinite coin-flipping sequence based on the beta-process-determined coin-tossing probabilities. The set of resulting heads indicate the set of selected features, and implicitly defines an infinite-dimensional feature vector. The properties of the beta process induce sparsity in the feature space by encouraging sharing of features among the Bernoulli process realizations.

More formally, let $f_d = [f_{d1}, f_{d2}, \ldots]$ be an infinite-dimensional binary feature vector associated with document $d$, where $f_{dk} = 1$ if and only if document $d$ is associated with topic $k$. The beta process, denoted $\mathrm{BP}(c, B_0)$, provides a distribution on measures

$$B = \sum_{k=1}^{\infty} \omega_k \delta_{\theta_k}, \qquad (8)$$

with $\omega_k \in (0, 1)$. We interpret $\omega_k$ as the feature-inclusion probability for feature $k$ (e.g., the $k$th topic in an LDA model). This $k$th feature is associated with parameter $\theta_k$.

The collection of points $\{(\omega_k, \theta_k)\}$ are a draw from a non-homogeneous Poisson process with rate measure $\nu(d\omega, d\theta) = c\,\omega^{-1}(1-\omega)^{c-1}\,d\omega\,B_0(d\theta)$ defined on the product space $[0,1] \times \Theta$. Here, $c > 0$ and $B_0$ is a base measure with total mass $B_0(\Theta) = \alpha$. Since the rate measure has infinite mass, the draw from the Poisson process yields an infinite collection of points, as in Equation (8). For an example realization and its associated cumulative distribution, see Figure 4. One can also interpret the beta process as the limit of a finite model with $K$ features:

$$B = \sum_{k=1}^{K} \omega_k \delta_{\theta_k}, \qquad \omega_k \sim \mathrm{Beta}\!\left(\frac{c\alpha}{K},\, c\Bigl(1 - \frac{\alpha}{K}\Bigr)\right), \qquad \theta_k \sim \frac{1}{\alpha} B_0. \qquad (9)$$

In the limit as $K \to \infty$, $B \sim \mathrm{BP}(c, B_0)$, and one can define stick-breaking constructions analogous to those in the Dirichlet process (Paisley et al., 2010; Paisley et al., 2011).

For each feature $k$, we independently sample

$$f_{dk} \sim \mathrm{Bernoulli}(\omega_k). \qquad (10)$$

That is, with probability $\omega_k$, topic $k$ is associated with document $d$. One can visualize this process as walking along the atoms of the discrete beta process measure $B$ and, at each atom $\theta_k$, flipping a coin with probability of heads given by $\omega_k$. More formally, setting $X_d = \sum_k f_{dk} \delta_{\theta_k}$, this process is equivalent to sampling $X_d$ from a Bernoulli process with base measure $B$: $X_d \sim \mathrm{BeP}(B)$. Example realizations are shown in Figure 4(a).
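The finite-feature approximation to the beta process described above, followed by Bernoulli coin flips, gives a simple way to sample sparse binary feature matrices (the parameter values below are illustrative choices for this sketch):

```python
import numpy as np

def beta_bernoulli_features(c, alpha, D, K, rng):
    """Finite-K beta process approximation followed by Bernoulli draws:
    omega_k ~ Beta(c*alpha/K, c*(1 - alpha/K)),  f_dk ~ Bernoulli(omega_k)."""
    omega = rng.beta(c * alpha / K, c * (1.0 - alpha / K), size=K)
    F = (rng.random((D, K)) < omega).astype(int)  # D binary feature vectors
    return omega, F

rng = np.random.default_rng(5)
omega, F = beta_bernoulli_features(c=1.0, alpha=5.0, D=50, K=1000, rng=rng)
print(F.sum(axis=1).mean())  # average number of active features per "document"
```

Although there are 1000 candidate features, each row is active in only a handful of them, and rows tend to reuse the same high-probability features, which is the sparsity-with-sharing behavior the text describes.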

Figure 4: (a) Top: A draw $B$ from a beta process is shown in blue, with the corresponding cumulative distribution in red. Bottom: 50 draws $X_d$ from a Bernoulli process using the beta process realization. Each blue dot corresponds to a coin flip at that atom in $B$ that came up heads. (b) An image of a feature matrix associated with a realization from an Indian buffet process. Each row corresponds to a different customer, and each column to a different dish. White indicates a chosen feature.

The characteristics of this beta-Bernoulli process define desirable traits for a Bayesian nonparametric featural model: we have a countably infinite collection of coin-tossing probabilities (one for each of our infinite number of features) defined by the beta process, but only a sparse, finite subset are active in any Bernoulli process realization. In particular, one can show that $B$ has finite expected mass, implying that there are only a finite number of successes in the infinite coin-flipping sequence that defines $X_d$. Likewise, the sparse set of features active in $X_d$ are likely to be similar to those of $X_{d'}$ (an independent draw from $\mathrm{BeP}(B)$), though variability is clearly possible. Finally, the beta process is conjugate to the Bernoulli process (Kim, 1999), which implies that one can analytically marginalize the latent random beta process measure $B$ and examine the predictive distribution of $X_d$ given $X_1, \ldots, X_{d-1}$ and the concentration parameter $c$. As established by Thibaux and Jordan (2007), the marginal distribution on the $\{f_d\}$ obtained from the beta-Bernoulli process is the Indian buffet process (IBP) of Griffiths and Ghahramani (2005), just as the marginalization of the Dirichlet-multinomial process yields the Chinese restaurant process. The IBP can be useful in developing posterior inference algorithms and a significant portion of the literature is written in terms of the IBP representation.

Figure 5: Illustration of generating the sparse document-specific topic distributions $\pi^{(d)}$ via the beta process specification. Each document’s binary feature vector $f_d$ limits the support of the topic distribution to the sparse set of selected topics. The non-zero components are Dirichlet distributed with hyperparameters given by the corresponding subset of $\gamma$. See Equation (11).

Returning to the LDA model, one can obtain the focused topic model of Williamson et al. (2010) within the beta-Bernoulli process framework as follows:

$$B \sim \mathrm{BP}(c, B_0), \qquad X_d \sim \mathrm{BeP}(B), \qquad \pi^{(d)} \sim \mathrm{Dir}(\gamma \cdot f_d), \qquad (11)$$

where Williamson et al. (2010) additionally treat the hyperparameters $\gamma$ as random by endowing them with a prior. Here, $f_d$ is the feature vector associated with the Bernoulli process draw $X_d$, and $\mathrm{Dir}(\gamma \cdot f_d)$ represents a Dirichlet distribution defined solely over the components indicated by $f_d$, with hyperparameters the corresponding subset of $\gamma$. This implies that $\pi^{(d)}$ is a distribution with positive mass only on the sparse set of selected topics. See Figure 5. Given $\pi^{(d)}$, the topic indicators $z_i^{(d)}$ and words $w_i^{(d)}$ are generated just as in Equation (3). As before, we take $\theta_k \sim \mathrm{Dir}(\lambda, \ldots, \lambda)$. The graphical model is depicted in Figure 2(b).

3 Mixed Membership in Time Series

Building on the background provided in Section 2, we can now explore how ideas of mixed membership models can be used in the time series setting. Our particular focus is on time series that can be well described using regime-switching models. For example, stock returns might be modeled as switches between regimes of volatility or an EEG recording between spiking patterns dependent on seizure type. For the exercise routines scenario, people switch between a set of actions such as jumping jacks, side twists, and so on. In this section, we present a set of regime-switching models for describing such datasets, and show how one can interpret the models as providing a form of mixed membership for time series.

To form the mixed membership interpretation, we build off of the canonical example of LDA from Section 2.2. Recall that for LDA, the entity of interest is a document and the set of attributes are the possible topics. Each document is then modeled as having membership in multiple topics (i.e., mixed membership). For time series analysis, the equivalent analogy is that the entity is the time series $\{y_1, \ldots, y_T\}$, which we denote compactly by $y_{1:T}$. Just as a document is a collection of observed words, a time series is a sequence of observed data points of various forms depending upon the application domain. We take the attributes of a time series to be the collection of dynamic regimes (e.g., jumping jacks, arm circles, etc.). Our mixed membership time series model associates a single time series with a collection of dynamic regimes. However, unlike in text analysis, it is unreasonable to assume a bag-of-words representation for time series since the ordering of the data points is fundamental to the description of each dynamic regime.

The central defining characteristics of a mixed membership time series model are (i) the model used to describe each dynamic regime, and (ii) the model used to describe the switches between regimes. In Section 3.1 and in Section 3.2 we choose one switching model and explore multiple choices for the dynamic regime model. Another interesting question explored in Section 3.3 is how to jointly model multiple time series. This question is in direct analogy to the ideas behind the analysis of a corpus of documents in LDA.

3.1 Markov Switching Processes as a Mixed Membership Model

A flexible yet simple regime-switching model for describing a single time series with such patterned behaviors is the class of Markov switching processes. These processes assume that the time series can be described via Markov transitions between a set of latent dynamic regimes which are individually modeled via temporally independent or linear dynamical systems. Examples include the hidden Markov model (HMM), switching vector autoregressive (VAR) process, and switching linear dynamical system (SLDS); such processes are sometimes referred to as Markov jump-linear systems (MJLS) within the control theory community. These models have proven useful in such diverse fields as speech recognition, econometrics, neuroscience, remote target tracking, and human motion capture.

Hidden Markov Models

The hidden Markov model, or HMM, is a class of doubly stochastic processes based on an underlying, discrete-valued state sequence that is modeled as Markovian (Rabiner, 1989). Conditioned on this state sequence, the model assumes that the observations, which may be discrete or continuous valued, are independent. Specifically, let $z_t$ denote the state, or dynamic regime, of the Markov chain at time $t$, and let $\pi_j$ denote the state-specific transition distribution for state $j$. Then, the Markovian structure on the state sequence dictates that

$$z_t \mid z_{t-1} \sim \pi_{z_{t-1}}. \qquad (12)$$

Given the state $z_t$, the observation $y_t$ is a conditionally independent emission

$$y_t \mid z_t \sim F(\theta_{z_t}) \qquad (13)$$

for an indexed family of distributions $F(\cdot)$. Here, $\theta_j$ are the emission parameters for state $j$.

A Bayesian specification of the HMM might further assume

$$\pi_j \sim \mathrm{Dir}(\alpha, \ldots, \alpha), \qquad \theta_j \sim H, \qquad (14)$$

independently for each HMM state $j$.

The HMM represents a simple example of a mixed membership model for time series: a given time series (entity) is modeled as having been generated from a collection of dynamic regimes (attributes), each with different mixture weights. The key component of the HMM, which differs from standard mixture models such as in LDA, is the fact that there is a Markovian structure to the assignment of data points to mixture components (i.e., dynamic regimes). In particular, the probability that observation $y_t$ is generated from the dynamic regime associated with state $j$ (via an assignment $z_t = j$) is dependent upon the previous state $z_{t-1}$. As such, the mixing proportions for the time series are defined by the transition matrix with rows $\pi_j$. This is in contrast to the LDA model in which the mixing proportions for a given document are simply captured by a single vector of weights $\pi^{(d)}$.
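The HMM generative process just described can be sketched in a few lines; the transition matrix, Gaussian emission means, and noise level below are illustrative choices, not values from the text:

```python
import numpy as np

def simulate_hmm(P, means, sigma, T, rng):
    """Simulate z_t ~ P[z_{t-1}, :] with Gaussian emissions y_t ~ N(means[z_t], sigma^2)."""
    K = P.shape[0]
    z = np.zeros(T, dtype=int)
    y = np.zeros(T)
    z[0] = rng.choice(K)  # uniform initial state for simplicity
    y[0] = rng.normal(means[z[0]], sigma)
    for t in range(1, T):
        z[t] = rng.choice(K, p=P[z[t - 1]])   # Markov transition
        y[t] = rng.normal(means[z[t]], sigma)  # conditionally independent emission
    return z, y

rng = np.random.default_rng(6)
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])  # rows are the state-specific transition distributions
z, y = simulate_hmm(P, means=np.array([-2.0, 2.0]), sigma=0.5, T=300, rng=rng)
```

The heavy diagonal of the transition matrix produces long runs within each regime, so the sample path alternates between stretches centered near -2 and near +2.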

Switching VAR Processes

The modeling assumption of the HMM that observations are conditionally independent given the latent state sequence is often insufficient in capturing the temporal dependencies present in many datasets. Instead, one can assume that the observations have conditionally linear dynamics. The latent HMM state then models switches between a set of such linear models in order to capture more complex dynamical phenomena. We restrict our attention in this article to switching vector autoregressive (VAR) processes, or autoregressive HMMs (AR-HMMs), which are broadly applicable in many domains while maintaining a number of simplifying properties that make them a practical choice computationally.

We define an AR-HMM, with switches between order-$r$ vector autoregressive processes (we denote an order-$r$ VAR process by VAR($r$)), as

$$y_t = \sum_{i=1}^{r} A_{i, z_t}\, y_{t-i} + e_t(z_t), \qquad (15)$$

where $z_t$ represents the HMM latent state at time $t$, and is defined as in Equation (12). The state-specific additive noise term is distributed as $e_t(z_t) \sim \mathcal{N}(0, \Sigma_{z_t})$. We refer to $A_k = \{A_{1,k}, \ldots, A_{r,k}\}$ as the set of lag matrices for state $k$. Note that the standard HMM with Gaussian emissions arises as a special case of this model when $A_k = 0$ for all $k$.
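A scalar AR(1) special case of this switching process can be simulated as follows; in the general vector case the scalar coefficients would be replaced by lag matrices. All parameter values here are illustrative:

```python
import numpy as np

def simulate_ar_hmm(P, a, sigmas, T, rng):
    """Switching AR(1): z_t ~ P[z_{t-1}, :],  y_t = a[z_t] * y_{t-1} + e_t(z_t),
    with state-specific noise e_t(z_t) ~ N(0, sigmas[z_t]^2)."""
    K = P.shape[0]
    z = np.zeros(T, dtype=int)
    y = np.zeros(T)
    for t in range(1, T):
        z[t] = rng.choice(K, p=P[z[t - 1]])
        y[t] = a[z[t]] * y[t - 1] + rng.normal(0.0, sigmas[z[t]])
    return z, y

rng = np.random.default_rng(7)
P = np.array([[0.98, 0.02],
              [0.02, 0.98]])  # persistent regimes
z, y = simulate_ar_hmm(P, a=np.array([0.9, -0.5]), sigmas=np.array([0.2, 1.0]), T=400, rng=rng)
```

Within each regime the observations exhibit autoregressive dependence rather than being conditionally independent, which is exactly how the AR-HMM differs from the standard HMM.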

3.2 Hierarchical Dirichlet Process HMMs

In the HMM formulation described so far, we have assumed that there are $K$ possible different dynamical regimes. This raises the question: what if $K$ is not known, and what if we would like to allow for new dynamic regimes to be added as more data are observed? In such scenarios, an attractive approach is to appeal to Bayesian nonparametrics. Just as the hierarchical Dirichlet process (HDP) of Section 2.3 allowed for a collection of countably infinite topic distributions to be defined over the same set of topic parameters, one can employ the HDP to define an HMM with a set of countably infinite transition distributions defined over the same set of HMM emission parameters.

In particular, the HDP-HMM of Teh et al. (2006) defines

$$\beta \sim \mathrm{GEM}(\gamma), \qquad \pi_j \sim \mathrm{DP}(\alpha, \beta), \qquad \theta_j \sim H. \qquad (16)$$

The evolution of the latent state $z_t$ and observations $y_t$ are just as in Equations (12) and (13). Informally, the Dirichlet process part of the HDP allows for this unbounded state space and encourages the use of only a sparse subset of these HMM states. The hierarchical layering of Dirichlet processes ties together the state-specific transition distributions (via $\beta$), and through this process, creates a shared sparse state space.

The induced predictive distribution for the HDP-HMM state $z_t$, marginalizing the transition distributions $\pi_j$, is known as the infinite HMM urn model (Beal et al., 2002). In particular, the HDP-HMM of Teh et al. (2006) provides an interpretation of this urn model in terms of an underlying collection of linked random probability measures. However, the HDP-HMM omits the self-transition bias of the infinite HMM and instead assumes that each transition distribution is identical in expectation ($E[\pi_j \mid \beta] = \beta$), implying that there is no differentiation between self-transitions and moves between different states. When modeling data with state persistence, as is common in most real-world datasets, the flexible nature of the HDP-HMM prior places significant mass on state sequences with unrealistically fast dynamics.

To better capture state persistence, the sticky HDP-HMM of Fox et al. (2008, 2011b) restores the self-transition parameter of the infinite HMM of Beal et al. (2002) and specifies

$$\pi_j \mid \alpha, \kappa, \beta \sim \mathrm{DP}\!\left(\alpha + \kappa,\; \frac{\alpha\beta + \kappa\delta_j}{\alpha + \kappa}\right),$$

where $\delta_j$ indicates that an amount $\kappa$ is added to the $j$th component of $\alpha\beta$. In expectation,

$$E[\pi_{jk} \mid \beta, \alpha, \kappa] = \frac{\alpha\beta_k + \kappa\,\delta(j,k)}{\alpha + \kappa}. \qquad (18)$$

Here, $\delta(j,k)$ is the discrete Kronecker delta. From Equation (18), we see that the expected transition distribution has weights which are a convex combination of the global weights defined by $\beta$ and a state-specific weight defined by the sticky parameter $\kappa$. When $\kappa = 0$, the original HDP-HMM of Teh et al. (2006) is recovered. The graphical model for the sticky HDP-HMM is displayed in Figure 6(a).
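To make this construction concrete, the following is a minimal sketch (not code from the cited papers) of sampling sticky transition distributions under a finite weak-limit approximation, in which a truncation level L replaces the infinite state space; all hyperparameter values here are illustrative assumptions:

```python
import numpy as np

def sticky_transitions(L, gamma, alpha, kappa, rng):
    """Weak-limit approximation to the sticky HDP-HMM transition prior.

    Draws global weights beta ~ Dir(gamma/L, ..., gamma/L) (a finite
    approximation to GEM(gamma)), then for each state j a transition row
    pi_j ~ Dir(alpha * beta + kappa * e_j), so that in expectation
    E[pi_jk] = (alpha * beta_k + kappa * 1[j = k]) / (alpha + kappa).
    """
    beta = rng.dirichlet(np.full(L, gamma / L))
    pi = np.empty((L, L))
    for j in range(L):
        pi[j] = rng.dirichlet(alpha * beta + kappa * np.eye(L)[j])
    return beta, pi

rng = np.random.default_rng(0)
beta, pi = sticky_transitions(L=10, gamma=5.0, alpha=2.0, kappa=50.0, rng=rng)
```

With kappa large relative to alpha, most of each row's mass sits on the self-transition; setting kappa = 0 recovers the (weak-limit) HDP-HMM prior.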

Figure 6: Graphical model of (a) the sticky HDP-HMM and (b) an HDP-based AR-HMM. In both cases, the state evolves as $z_{t+1} \mid z_t \sim \pi_{z_t}$, where $\pi_j \mid \alpha, \kappa, \beta \sim \mathrm{DP}(\alpha + \kappa, (\alpha\beta + \kappa\delta_j)/(\alpha + \kappa))$ and $\beta \mid \gamma \sim \mathrm{GEM}(\gamma)$. For the sticky HDP-HMM, the observations are generated as $y_t \mid z_t \sim F(\theta_{z_t})$, whereas the HDP-AR-HMM assumes conditionally VAR dynamics as in Equation (15), specifically in this case with a fixed order $r$.

One can also consider sticky HDP-HMMs with Dirichlet process mixtures of Gaussian emissions (Fox et al., 2011b). Recently, HMMs with Dirichlet process emissions were also considered in Yau et al. (2011), along with efficient sampling algorithms. Building on the sticky HDP-HMM framework, one can similarly consider HDP-based variants of the switching VAR process and switching linear dynamical system, such as represented in Figure 6(b); see Fox et al. (2011a) for further details. For the HDP-AR-HMM, Fox et al. (2011a) consider methods that allow for switching between VAR processes of unknown and potentially variable order.

3.3 A Collection of Time Series

In the mixed membership time series models considered thus far, we have assumed that we are interested in the dynamics of a single (potentially multivariate) time series. However, as in LDA where one assumes a corpus of documents, in a growing number of fields the focus is on making inferences based on a collection of related time series. One might monitor multiple financial indices, or collect EEG data from a given patient at multiple non-contiguous epochs. Recalling the exercise routines example, one might have a dataset consisting of multiple time series obtained from multiple individuals, each of whom performs some subset of exercise types. In this scenario, we would like to take advantage of the overlap between individuals, such that if a “jumping jack” behavior is discovered in the time series for one individual then it can be used in modeling the data for other individuals. More generally, one would like to discover and model the dynamic regimes that are shared among several related time series. The benefits of such joint modeling are twofold: we may more robustly estimate representative dynamic models in the presence of limited data, and we may also uncover interesting relationships among the time series.

Recall the basic finite HMM of Section 3.1 in which the transition matrix $\pi$ defined the dynamic regime mixing proportions for a given time series. To develop a mixed membership model for a collection of time series, we again build on the LDA example. For LDA, the document-specific mixing proportions over topics are specified by a Dirichlet-distributed vector. Analogously, for each time series $i$, we denote the time-series-specific transition matrix as $\pi^{(i)}$, with rows $\pi_j^{(i)}$. That is, for time series $i$, $\pi_j^{(i)}$ denotes the transition distribution from state $j$ to each of the $K$ possible next states. Just as LDA couples the document-specific topic distributions under a common Dirichlet prior, we can couple the rows of the transition matrix as

$$\pi_j^{(i)} \mid \alpha \sim \mathrm{Dir}(\alpha_1, \ldots, \alpha_K).$$
A similar idea holds for extending the HDP-HMM to collections of time series. In particular, we can specify

$$\beta \mid \gamma \sim \mathrm{GEM}(\gamma)$$
$$\pi_j^{(i)} \mid \alpha, \beta \sim \mathrm{DP}(\alpha, \beta). \qquad (20)$$
Analogously to LDA, both the finite and infinite HMM specifications above imply that the expected transition distributions are identical between time series ($E[\pi_j^{(i)}] = E[\pi_j^{(i')}]$ for all $i, i'$). Here, however, the expected transition distributions are also identical between rows of the transition matrix.

To allow for state-specific variability in the expected transition distribution, one could similarly couple sticky HDP-HMMs, or consider a finite variant of the model via the weak-limit approximation (see Fox et al. (2011b) for details on finite truncations). Alternatively, one could independently center each row of the time-series-specific transition matrix around a state-specific distribution. For the finite model,

$$\beta_j \mid \gamma \sim \mathrm{Dir}(\gamma_1, \ldots, \gamma_K)$$
$$\pi_j^{(i)} \mid \alpha, \beta_j \sim \mathrm{Dir}(\alpha\beta_{j1}, \ldots, \alpha\beta_{jK}).$$
For the infinite model, such a specification is more straightforwardly presented in terms of the Dirichlet random measures. Let $G_j^{(i)} = \sum_k \pi_{jk}^{(i)} \delta_{\theta_k}$, with $\pi_j^{(i)}$ the time-series-specific transition distribution and $\{\theta_k\}$ the set of HMM emission parameters. Over the collection of time series, we center $G_j^{(i)}$ around a common state-specific transition measure $G_j$. Then, each of the infinite collection of state-specific transition measures $G_j$ is centered around a global measure $G_0$. Specifically,

$$G_0 \mid \gamma, H \sim \mathrm{DP}(\gamma, H)$$
$$G_j \mid \eta, G_0 \sim \mathrm{DP}(\eta, G_0)$$
$$G_j^{(i)} \mid \alpha, G_j \sim \mathrm{DP}(\alpha, G_j).$$
Such a hierarchy allows for more variability between the transition distributions than the specification of Equation (20) by only directly coupling state-specific distributions between time series. The sharing of information between states occurs at a higher level in the latent hierarchy (i.e., one less directly coupled to observations).
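As an illustrative sketch of the finite variant of this hierarchy (not drawn from the references), one can draw a state-specific center for each transition row and then time-series-specific rows around it; the dimensions and hyperparameters below are assumptions:

```python
import numpy as np

def coupled_transition_matrices(n_series, K, gamma, alpha, rng):
    """Finite sketch of coupling HMM transition rows across time series.

    Each state j has a shared center beta_j ~ Dir(gamma/K, ..., gamma/K);
    each time series i then draws its own row pi_j^(i) ~ Dir(alpha * beta_j),
    so E[pi_j^(i)] = beta_j for every series, with series-specific variation
    controlled by alpha.
    """
    beta = rng.dirichlet(np.full(K, gamma / K), size=K)  # one center per state
    pis = np.empty((n_series, K, K))
    for i in range(n_series):
        for j in range(K):
            pis[i, j] = rng.dirichlet(alpha * beta[j])
    return beta, pis

beta, pis = coupled_transition_matrices(
    n_series=3, K=5, gamma=5.0, alpha=10.0, rng=np.random.default_rng(1))
```

Larger alpha concentrates each series' rows around the shared centers, while small alpha allows each time series its own idiosyncratic dynamics.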

Although straightforward extensions of existing models, the models presented in this section have not been discussed in the literature to the best of our knowledge. Instead, typical models for coupling multiple time series, each modeled via an HMM, rely on assuming exact sharing of the same transition matrix. (In the LDA framework, that would be equivalent to a model in which every document shared the same topic weights.) With such a formulation, each time series (entity) has the exact same mixed membership with the global collection of dynamic regimes (attributes).

Alternatively, models have been proposed in which each time series is hard-assigned to one of some number of distinct HMMs, where each HMM is comprised of a unique set of states and corresponding transition distributions and emission parameters. For example, Qi et al. (2007) and Lennox et al. (2010) examine a Dirichlet process mixture of HMMs, allowing the number of HMMs to be unbounded. Based on a fixed assignment of time series to some subset of the global collection of HMMs, this model reduces to an example of exact sharing of HMM parameters, with one shared parameterization per unique assigned HMM. That is, there are clusters of time series with the exact same mixed membership among a set of attributes (i.e., dynamic regimes) that are distinct between the clusters.

By defining a global collection of dynamic regimes and time-series-specific transition distributions, the formulations proposed above instead allow for commonalities between parameterizations while maintaining time-series-specific variations in the mixed membership. These ideas more closely mirror the LDA mixed membership story for a corpus of documents.

The Beta-Bernoulli Process HMM

Analogously to HDP-LDA, the HDP-based models for a collection of (or a single) time series assume that each time series has membership with an infinite collection of dynamic regimes. This is due to the fact that each transition distribution has positive mass on the countably infinite collection of dynamic regimes. In practice, just as a finite-length document is comprised of a finite set of instantiated topics, a finite-length time series is described by a limited set of dynamic regimes. This limited set might be related yet distinct from the set of dynamic regimes present in another time series. For example, in the case of the exercise routines, perhaps one observed individual performs jumping jacks, side twists, and arm circles, whereas another individual performs jumping jacks, arm circles, squats, and toe touches. In a similar fashion to the feature-based approach of the focused topic model described in Section 2.3, one can employ the beta-Bernoulli process to directly capture a sparse set of associations between time series and dynamic regimes.

The beta process framework provides a more abstract and flexible representation of Bayesian nonparametric mixed membership in a collection of time series. Globally, the collection of time series are still described by a shared library of infinitely many possible dynamic regimes. Individually, however, a given time series is modeled as exhibiting some sparse subset of these dynamic regimes.

More formally, Fox et al. (2010) propose the following specification: each time series $i$ is endowed with an infinite-dimensional feature vector $f_i = (f_{i1}, f_{i2}, \ldots)$, with $f_{ik} = 1$ indicating the inclusion of dynamic regime $k$ in the membership of time series $i$. The feature vectors for the collection of time series are coupled under a common beta process measure $B$. In this scenario, one can think of $B$ as defining coin-flipping probabilities $\omega_k$ for the global collection of dynamic regimes. Each feature vector $f_i$ is implicitly modeled by a Bernoulli process draw $X_i \mid B \sim \mathrm{BeP}(B)$. That is, the beta-process-determined coins are flipped for each dynamic regime, and the set of resulting heads indicates the set of selected features (i.e., those with $f_{ik} = 1$).

The beta process specification allows flexibility in the number of total and time-series-specific dynamic regimes, and encourages time series to share similar subsets of the infinite set of possible dynamic regimes. Intuitively, the shared sparsity in the feature space arises from the fact that the total sum of coin-tossing probabilities is finite and only certain dynamic regimes have large probabilities. Thus, certain dynamic regimes are more prevalent amongst the time series, though the resulting set of dynamic regimes clearly need not be identical. For example, the lower subfigure in Figure 4(a) illustrates a collection of feature vectors drawn from this process.
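A minimal simulation of such a sparse feature matrix, using the standard finite approximation in which coin probabilities omega_k ~ Beta(alpha/K, 1) approach a beta process as K grows (the truncation level and hyperparameter values are illustrative):

```python
import numpy as np

def beta_bernoulli_features(n_series, K, alpha, rng):
    """Finite approximation to beta process coin probabilities plus
    Bernoulli process feature draws.

    omega_k ~ Beta(alpha/K, 1) gives mostly-small coin probabilities whose
    sum stays finite as K grows; every time series flips the same coins,
    so popular regimes are shared while each row remains sparse.
    """
    omega = rng.beta(alpha / K, 1.0, size=K)              # global regime probabilities
    F = (rng.random((n_series, K)) < omega).astype(int)   # f_ik ~ Bernoulli(omega_k)
    return omega, F

omega, F = beta_bernoulli_features(
    n_series=5, K=50, alpha=3.0, rng=np.random.default_rng(4))
```

Rows of F play the role of the time-series feature vectors: regimes with large omega_k appear in many rows, while most columns are never activated.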

To limit each time series to solely switch between its set of selected dynamic regimes, the feature vectors are used to form feature-constrained transition distributions:

$$\pi_j^{(i)} \mid f_i, \gamma, \kappa \sim \mathrm{Dir}\big([\gamma, \ldots, \gamma, \gamma + \kappa, \gamma, \ldots] \otimes f_i\big).$$

Again, we use $\mathrm{Dir}(\cdot \otimes f_i)$ to denote a Dirichlet distribution defined over the finite set of dimensions specified by $f_i$, with hyperparameters given by the corresponding subset of $[\gamma, \ldots, \gamma, \gamma + \kappa, \gamma, \ldots]$. Here, the hyperparameter $\kappa$ places extra expected mass on the component of $\pi_j^{(i)}$ corresponding to a self-transition $\pi_{jj}^{(i)}$, analogously to the sticky hyperparameter of the sticky HDP-HMM (Fox et al., 2011b). This construction implies that $\pi_j^{(i)}$ has only a finite number of non-zero entries $\pi_{jk}^{(i)}$. As an example, if

$$f_i = [1 \;\; 1 \;\; 0 \;\; 1 \;\; 0 \;\; 1 \;\; 0 \;\; 0 \;\; \cdots],$$

then $\pi_j^{(i)}$ has non-zero entries only in components 1, 2, 4, and 6, with those entries distributed according to a 4-dimensional Dirichlet distribution. Pictorially, the generative process of the feature-constrained transition distributions is similar to that illustrated in Figure 5.
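The following sketch (illustrative, not the authors' implementation) draws one feature-constrained transition row for a feature vector with four active regimes, placing Dirichlet mass only on the active components and adding the sticky hyperparameter to the self-transition:

```python
import numpy as np

def feature_constrained_row(f, j, gamma, kappa, rng):
    """Draw a transition row pi_j^(i) constrained by the feature vector f.

    Active states (f_k = 1) receive Dirichlet hyperparameter gamma, with an
    extra kappa on the self-transition when state j is itself active;
    inactive states receive probability exactly zero.
    """
    active = np.flatnonzero(f)
    conc = np.full(active.size, float(gamma))
    if f[j]:
        conc[np.searchsorted(active, j)] += kappa  # sticky self-transition bias
    row = np.zeros(len(f))
    row[active] = rng.dirichlet(conc)
    return row

f = np.array([1, 1, 0, 1, 0, 1, 0, 0])  # four active regimes
row = feature_constrained_row(f, j=0, gamma=1.0, kappa=5.0,
                              rng=np.random.default_rng(2))
```

The returned row is a valid probability distribution whose support is exactly the set of regimes selected by f.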

Although the methodology described thus far applies equally well to HMMs and other Markov switching processes, Fox et al. (2010) focus on the AR-HMM of Equation (15). Specifically, let $y_t^{(i)}$ represent the observed value of the $i$th time series at time $t$, and let $z_t^{(i)}$ denote the latent dynamical regime. Assuming an order-$r$ AR-HMM, we have

$$z_t^{(i)} \sim \pi^{(i)}_{z_{t-1}^{(i)}}$$
$$y_t^{(i)} = \sum_{j=1}^{r} A_{j, z_t^{(i)}}\, y_{t-j}^{(i)} + e_t^{(i)}\big(z_t^{(i)}\big), \qquad (24)$$

where $e_t^{(i)}(k) \sim \mathcal{N}(0, \Sigma_k)$. Recall that each $\theta_k = \{A_k, \Sigma_k\}$ defines a different VAR($r$) dynamic regime, and the feature-constrained transition distributions restrict time series $i$ to transition amongst the dynamic regimes (indexed at time $t$ by $z_t^{(i)}$) for which it has membership, as indicated by its feature vector $f_i$.
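A generative sketch of one such switching VAR time series follows; the transition matrix, VAR coefficients, noise scales, and the simple zero initialization of the lags are all illustrative assumptions:

```python
import numpy as np

def simulate_ar_hmm(pi, A, Sigma_chol, T, rng):
    """Generate a single time series from a switching VAR(r) (AR-HMM) model.

    pi:         (K, K) Markov transition matrix over dynamic regimes.
    A:          (K, r, d, d) VAR coefficient matrices for each regime.
    Sigma_chol: (K, d, d) Cholesky factors of the regime noise covariances.
    """
    K, r, d, _ = A.shape
    y = np.zeros((T + r, d))                # r zero rows serve as initial lags
    z = np.empty(T, dtype=int)
    state = rng.integers(K)
    for t in range(T):
        state = rng.choice(K, p=pi[state])  # Markov switch between regimes
        z[t] = state
        mean = sum(A[state, lag] @ y[t + r - 1 - lag] for lag in range(r))
        y[t + r] = mean + Sigma_chol[state] @ rng.standard_normal(d)
    return z, y[r:]

rng = np.random.default_rng(3)
pi = np.array([[0.95, 0.05], [0.10, 0.90]])           # sticky transitions
A = np.zeros((2, 1, 2, 2))
A[0, 0], A[1, 0] = 0.5 * np.eye(2), -0.5 * np.eye(2)  # two VAR(1) regimes
Sigma_chol = np.stack([0.3 * np.eye(2)] * 2)
z, y = simulate_ar_hmm(pi, A, Sigma_chol, T=200, rng=rng)
```

Restricting `pi` to the rows and columns selected by a feature vector (as in the construction above) would confine the sampled regimes to that series' membership set.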

Figure 7: Graphical model of the BP-AR-HMM. The beta process distributed measure $B \mid B_0 \sim \mathrm{BP}(1, B_0)$ is represented by its masses $\omega_k$ and locations $\theta_k$, as in Equation (8). The features are then conditionally independent draws $f_{ik} \mid \omega_k \sim \mathrm{Bernoulli}(\omega_k)$, and are used to define feature-constrained transition distributions $\pi_j^{(i)}$. The switching VAR dynamics are as in Equation (24).

Conditioned on the set of feature vectors coupled via the beta-Bernoulli process hierarchy, the model reduces to a collection of switching VAR processes, each defined on the finite state space formed by the set of selected dynamic regimes for that time series. Importantly, the beta-process-based featural model couples the dynamic regimes exhibited by different time series. Since the library of possible dynamic parameters is shared by all time series, posterior inference of each parameter set $\theta_k$ relies on pooling data amongst the time series that have $f_{ik} = 1$. It is through this pooling of data that one may achieve more robust parameter estimates than from considering each time series individually.

The resulting model is termed the BP-AR-HMM, with a graphical model representation presented in Figure 7. The overall model specification is summarized as:

$$B \mid B_0 \sim \mathrm{BP}(1, B_0)$$
$$X_i \mid B \sim \mathrm{BeP}(B)$$
$$\pi_j^{(i)} \mid f_i, \gamma, \kappa \sim \mathrm{Dir}\big([\gamma, \ldots, \gamma, \gamma + \kappa, \gamma, \ldots] \otimes f_i\big)$$
$$z_t^{(i)} \sim \pi^{(i)}_{z_{t-1}^{(i)}}$$
$$y_t^{(i)} = \sum_{j=1}^{r} A_{j, z_t^{(i)}}\, y_{t-j}^{(i)} + e_t^{(i)}\big(z_t^{(i)}\big).$$

(One could consider alternative specifications of the transition distributions, such as in the focused topic model of Equation (11), where each element is an independent random variable. Note that Fox et al. (2010) treat the hyperparameters as random.)
Fox et al. (2010) apply the BP-AR-HMM to the analysis of multiple motion capture (MoCap) recordings of people performing various exercise routines, with the goal of jointly segmenting and identifying common dynamic behaviors amongst the recordings. In particular, the analysis examined six recordings taken from the CMU database (CMU, 2009), three from Subject 13 and three from Subject 14. Each of these routines used some combination of the following motion categories: running in place, jumping jacks, arm circles, side twists, knee raises, squats, punching, up and down, two variants of toe touches, arch over, and a reach out stretch.

The resulting segmentation from the joint analysis is displayed in Figure 8. Each skeleton plot depicts the trajectory of a learned contiguous segment of more than two seconds, and boxes group segments categorized under the same behavior label in the posterior. The color of the box indicates the true behavior label. From this plot we can infer that although some true behaviors are split into two or more categories (“knee raises” [green] and “running in place” [yellow]; these splits can be attributed to the two subjects performing the same motion in a distinct manner), the BP-AR-HMM is able to find common motions (e.g., six examples of “jumping jacks” [magenta]) while still allowing for various motion behaviors that appeared in only one movie (bottom left four skeleton plots).

Figure 8: Each skeleton plot displays the trajectory of a learned contiguous segment of more than two seconds, bridging segments separated by fewer than 300 msec. The boxes group segments categorized under the same behavior label, with the color indicating the true behavior label (allowing for analysis of split behaviors). Skeleton rendering done by modifications to Neil Lawrence’s Matlab MoCap toolbox (Lawrence, 2009).

The key characteristic of the BP-AR-HMM that enables the clear identification of shared versus unique dynamic behaviors is the fact that the model takes a feature-based approach. The true feature matrix and the BP-AR-HMM estimated matrix, averaged over a large collection of MCMC samples, are shown in Figure 9. Recall that each row represents an individual recording’s feature vector $f_i$, drawn from a Bernoulli process and coupled under a common beta process prior. The columns indicate the possible dynamic behaviors (truncated to a finite number once no further assignments were made).

Figure 9: Feature matrices associated with the true MoCap sequences (left) and BP-AR-HMM estimated sequences over iterations 15,000 to 20,000 of an MCMC sampler (right). Each row is an individual recording and each column a possible dynamic behavior. The white squares indicate the set of selected dynamic behaviors.

4 Related Bayesian and Bayesian Nonparametric Time Series Models

In addition to the regime-switching models described in this article, there is a large and growing literature on Bayesian parametric and nonparametric time series models, many of which also have interpretations as mixed membership models. We overview some of this literature in this section, aiming not to cover the entirety of the related literature but simply to highlight three main themes: (i) non-homogeneous mixed membership models and, relatedly, time-dependent processes; (ii) other HMM-based models; and (iii) time-independent mixtures of autoregressions.

4.1 Non-Homogeneous Mixed Membership Models

Time-Varying Topic Models

The documents in a given corpus sometimes represent a collection spanning a wide range of time. It is likely that the prevalence and popularity of various topics, and words within a topic, change over this time period. For example, when analyzing scientific articles, the set of scientific questions being addressed naturally evolves. Likewise, within a given subfield, the terminology similarly develops—perhaps new words are created to describe newly discovered phenomena or other words go out of vogue.

To capture such changes, Blei and Lafferty (2006) proposed a dynamic topic model. This model takes the general framework of LDA, but specifies a Gaussian random walk on a set of topic-specific word parameters,

$$\beta_{k,t} \mid \beta_{k,t-1} \sim \mathcal{N}(\beta_{k,t-1}, \sigma^2 I),$$

and document-specific topic parameters,

$$\alpha_t \mid \alpha_{t-1} \sim \mathcal{N}(\alpha_{t-1}, \delta^2 I).$$

The topic-specific word distribution arises via the logistic transformation $\pi(\beta_{k,t}) \propto \exp(\beta_{k,t})$. For the topic distribution, Blei and Lafferty (2006) specify $\eta_d \sim \mathcal{N}(\alpha_t, a^2 I)$ and transform to $\pi(\eta_d) \propto \exp(\eta_d)$. This formulation provides a non-homogeneous mixed membership model since the membership weights (i.e., topic weights) vary with time.
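A minimal sketch of the word-distribution random walk follows (this is a forward simulation, not the authors' inference procedure; the vocabulary size, horizon, and step size are illustrative):

```python
import numpy as np

def dynamic_topic_word_dists(V, T, sigma, rng):
    """Evolve one topic's word distribution as in a dynamic topic model.

    Natural parameters follow a Gaussian random walk
    beta_t ~ N(beta_{t-1}, sigma^2 I); the word distribution at each time
    step is the softmax pi(beta_t) proportional to exp(beta_t).
    """
    beta = np.zeros((T, V))
    beta[0] = rng.standard_normal(V)
    for t in range(1, T):
        beta[t] = beta[t - 1] + sigma * rng.standard_normal(V)
    expb = np.exp(beta - beta.max(axis=1, keepdims=True))  # stable softmax
    return expb / expb.sum(axis=1, keepdims=True)

probs = dynamic_topic_word_dists(V=50, T=10, sigma=0.1, rng=np.random.default_rng(5))
```

With small sigma, the distributions at adjacent time steps differ only slightly, capturing gradual drift in a topic's vocabulary.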

The formulation of Blei and Lafferty (2006) assumes corpora of documents observed at discrete, evenly spaced time points. Often, however, documents are observed at uneven and potentially finely sampled time points. Wang et al. (2008) explore a continuous time extension by modeling the evolution of $\beta_{k,t}$ as Brownian motion. As a simplifying assumption, the authors do not consider evolution of the global topic proportions $\alpha_t$.

Time-Dependent Bayesian Nonparametric Processes

For Bayesian nonparametric time-varying topic modeling, Srebro and Roweis (2005) propose a time-dependent Dirichlet process. The Dirichlet process allows for an infinite set of possible topics, in a similar vein to the motivation in HDP-LDA. Importantly, however, this model does not assume a mixed membership formulation and instead takes each document to be hard-assigned to a single topic. The proposed time-dependent Dirichlet process models the changing popularity of various topics, but assumes that the topic-specific word distributions are static. That is, the Dirichlet process probability measures have time-varying weights, but static atoms.

More generally, there is a growing interest in time-dependent Bayesian nonparametric processes. The dependent Dirichlet process was originally proposed by MacEachern (1998). A substantial focus has been on evolving the weights of the random discrete probability measures. Recently, Griffin and Steel (2011) examine a general class of autoregressive stick-breaking processes, and Mena et al. (2011) study stick-breaking processes for continuous-time modeling. Taddy (2010) considers an alternative autoregressive specification for Dirichlet process stick-breaking weights, with application to modeling the changing rate function in a dynamic spatial Poisson process.
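For reference, the (truncated) stick-breaking construction underlying these weight processes can be sketched as follows; the truncation level is an illustrative assumption, and the time-dependent variants cited above evolve the beta-distributed sticks over time:

```python
import numpy as np

def gem_weights(gamma, K, rng):
    """Truncated stick-breaking (GEM) draw of Dirichlet process weights.

    v_k ~ Beta(1, gamma) and w_k = v_k * prod_{l<k} (1 - v_l); setting the
    final stick to 1 makes the truncated weights sum exactly to one.
    """
    v = rng.beta(1.0, gamma, size=K)
    v[-1] = 1.0
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return w

w = gem_weights(gamma=2.0, K=25, rng=np.random.default_rng(7))
```

Smaller gamma breaks the stick in larger pieces, concentrating mass on a few atoms; larger gamma spreads mass across many atoms.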

4.2 Hidden-Markov-Based Bayesian Nonparametric Models

A number of other Bayesian nonparametric models have been proposed in the literature that take as their point of departure a latent Markov switching mechanism. Both the infinite factorial HMM (Van Gael et al., 2008) and the infinite hierarchical HMM (Heller et al., 2009) provide Bayesian nonparametric priors for infinite collections of latent Markov chains. The infinite factorial HMM provides a distribution on binary Markov chains via a Markov Indian buffet process. The implicitly defined time-varying infinite-dimensional binary feature vectors are employed in performing blind source separation (e.g., separating an audio recording into a time-varying set of overlapping speakers). The infinite hierarchical HMM also employs an infinite collection of Markov chains, but the evolution of each depends upon the chain above. Instead of modeling binary Markov chains, the infinite hierarchical HMM examines finite multi-class state spaces.

Another method that is based on a finite state space is that of Taddy and Kottas (2009). The proposed model assumes that each HMM state defines an independent Dirichlet process regression. Extensions to non-homogeneous Markov processes are considered based on external covariates that inform the latent state.

In Saeedi and Bouchard-Côté (2012), the authors propose a hierarchical gamma-exponential process for modeling recurrent continuous time processes. This framework provides a continuous-time analog to the discrete-time sticky HDP-HMM.

Instead of Markov-based regime-switching models that capture repeated returns to some (possibly infinite) set of dynamic regimes, one can consider changepoint methods in which each transition is to a new dynamic regime. Such methods often allow for very efficient computations. For example, Xuan and Murphy (2007) base such a model on the product partition model framework (a product partition model assumes the data are independent across some set of unknown partitions (Hartigan, 1990; Barry and Hartigan, 1992); the Dirichlet process is a special case of a product partition model), exploring changepoints in the dependency structure of multivariate time series and harnessing the efficient dynamic programming techniques of Fearnhead (2006). More recently, Zantedeschi et al. (2011) explore a class of dynamic product partition models and online computations for predicting movements in the term structure of interest rates.

4.3 Bayesian Mixtures of Autoregressions

In this article, we explored two forms of switching autoregressive models: the HDP-AR-HMM and the BP-AR-HMM. Both models assume that the switches between autoregressive parameters follow a discrete-time Markov process. There is also substantial literature on nonlinear autoregressive modeling via mixtures of autoregressive processes, where the mixture components are independently selected over time.

Lau and So (2008) consider a Dirichlet process mixture of autoregressions. That is, at each time step the observation is modeled as having been generated from one of an unbounded collection of autoregressive processes, with the mixing distribution given by a Dirichlet process. A variational approach to Dirichlet process mixtures of autoregressions with unknown orders has recently been explored in Morton et al. (2011). Wood et al. (2011) aim to capture the idea of structural breaks by segmenting a time series into contiguous blocks of observations and assigning each segment to one of a finite mixture of autoregressive processes; implicitly, all observations within a segment are associated with a given mixture component. Key to the formulation is the inclusion of time-varying mixture weights, leading to a nonstationary process as in Section 4.1.

As an alternative formulation that captures Markovianity, but not directly in the latent mixture component, Müller et al. (1997) consider a model in which the probability of choosing a given autoregressive component is modeled via a kernel based on the previous set of observations (and potential covariates). The maximal number of mixture components is fixed, with the associated autoregressive parameters taken to be draws from a Dirichlet process, implying that only a smaller number will take distinct values.

5 Discussion

In this article, we have discussed a variety of time series models that have interpretations in the mixed membership framework. Mixed membership models are comprised of three key components: entities, attributes, and data. What differs between mixed membership models is the type of data associated with each entity, and how the entities are assigned membership with the set of possible attributes. Abstractly, in our case each time series is an entity that has membership with a collection of dynamic regimes, or attributes. The partial memberships are determined based on the temporally structured observations, or data, for the given time series. This structured data is in contrast to the typical focus of mixed membership models on exchangeable collections of data per entity (e.g., a bag-of-words representation of a document’s text).

Throughout the article, we have focused our attention on the class of Markov switching processes, and further restricted our exposition to Bayesian parametric and nonparametric treatments of such models. The latter allows for an unbounded set of attributes by modeling processes with Markov transitions between an infinite set of dynamic regimes. For the class of Markov switching processes, the mixed membership of a given time series is captured by the time-series-specific set of Markov transition distributions. Examples include the classical hidden Markov model (HMM), autoregressive HMM, and switching state-space model.

In mixed membership modeling, one typically has a group of entities (e.g., a corpus of documents) and the goal is to allow each entity to have a unique set of partial memberships amongst a shared collection of attributes (e.g., topics). Through such modeling techniques, one can efficiently and flexibly share information between the data sources associated with the entities. Motivated by such goals, in this article we explored a nontraditional treatment of time series analysis by examining models for collections of time series. We proposed a Bayesian nonparametric model for multiple time series based on ideas analogous to Dirichlet-multinomial modeling of documents. We also reviewed a Bayesian nonparametric model based on a beta-Bernoulli framework that directly allows for sparse association of time series with dynamic regimes. Such a model enables decoupling the presence of a dynamic regime from its prevalence.

Time series analysis from the mixed membership perspective discussed herein has previously been neglected, and it leads to interesting ideas for further development of time series models.


  • Barry and Hartigan (1992) Barry, D. and J. A. Hartigan (1992). Product partition models for change point problems. The Annals of Statistics 20(1), 260–279.
  • Beal et al. (2002) Beal, M., Z. Ghahramani, and C. Rasmussen (2002). The infinite hidden Markov model. In Advances in Neural Information Processing Systems, Volume 14, pp. 577–584.
  • Blei and Lafferty (2006) Blei, D. M. and J. Lafferty (2006). Dynamic topic models. In Proc. International Conference on Machine Learning.
  • Blei et al. (2003) Blei, D. M., A. Y. Ng, and M. I. Jordan (2003). Latent Dirichlet allocation. The Journal of Machine Learning Research 3, 993–1022.
  • CMU (2009) CMU (2009). Carnegie Mellon University graphics lab motion capture database. http://mocap.cs.cmu.edu/.
  • Fearnhead (2006) Fearnhead, P. (2006). Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing 16(2), 203–213.
  • Fox et al. (2008) Fox, E. B., E. B. Sudderth, M. I. Jordan, and A. S. Willsky (2008, July). An HDP-HMM for systems with state persistence. In Proc. International Conference on Machine Learning.
  • Fox et al. (2010) Fox, E. B., E. B. Sudderth, M. I. Jordan, and A. S. Willsky (2010). Sharing features among dynamical systems with beta processes. In Advances in Neural Information Processing Systems, Volume 22.
  • Fox et al. (2011a) Fox, E. B., E. B. Sudderth, M. I. Jordan, and A. S. Willsky (2011a). Bayesian nonparametric inference of switching dynamic linear models. IEEE Transactions on Signal Processing 59(4), 1569–1585.
  • Fox et al. (2011b) Fox, E. B., E. B. Sudderth, M. I. Jordan, and A. S. Willsky (2011b). A sticky HDP-HMM with application to speaker diarization. Annals of Applied Statistics 5(2A), 1020–1056.
  • Griffin and Steel (2011) Griffin, J. E. and M. F. J. Steel (2011). Stick-breaking autoregressive processes. Journal of Econometrics 162, 383–396.
  • Griffiths and Ghahramani (2005) Griffiths, T. L. and Z. Ghahramani (2005). Infinite latent feature models and the Indian buffet process. Gatsby Computational Neuroscience Unit, Technical Report #2005-001.
  • Hartigan (1990) Hartigan, J. A. (1990). Partition models. Communications in Statistics–Theory and Methods 19(8), 2745–2756.
  • Heller et al. (2009) Heller, K. A., Y. W. Teh, and D. Gorur (2009). The infinite hierarchical hidden Markov model. In Proc. International Conference on Artificial Intelligence and Statistics.
  • Hjort (1990) Hjort, N. L. (1990). Nonparametric Bayes estimators based on beta processes in models for life history data. Annals of Statistics 18, 1259–1294.
  • Kim (1994) Kim, C.-J. (1994). Dynamic linear models with Markov-switching. Journal of Econometrics 60, 1–22.
  • Kim (1999) Kim, Y. (1999). Nonparametric Bayesian estimators for counting processes. The Annals of Statistics 27, 562–588.
  • Lau and So (2008) Lau, J. W. and M. K. P. So (2008). Bayesian mixture of autoregressive models. Computational Statistics & Data Analysis 53(1), 38–60.
  • Lawrence (2009) Lawrence, N. (2009). MATLAB motion capture toolbox. http://www.cs.man.ac.uk/~neill/mocap/.
  • Lennox et al. (2010) Lennox, K. P., D. B. Dahl, M. Vannucci, R. Day, and J. W. Tsai (2010). A Dirichlet process mixture of hidden Markov models for protein structure prediction. The Annals of Applied Statistics 4(2), 916–942.
  • MacEachern (1998) MacEachern, S. N. (1998). Dependent nonparametric processes. In Proc. Bayesian Statistical Science Section, pp. 50–55.
  • Mena et al. (2011) Mena, R. H., M. Ruggiero, and S. G. Walker (2011). Geometric stick-breaking processes for continuous-time Bayesian nonparametric modeling. Journal of Statistical Planning and Inference 141(9), 3217–3230.
  • Morton et al. (2011) Morton, K. D., P. A. Torrione, and L. M. Collins (2011, June). Variational Bayesian learning for mixture autoregressive models with uncertain order. IEEE Transactions on Signal Processing 59(6), 2614–2627.
  • Müller et al. (1997) Müller, P., M. West, and S. N. MacEachern (1997). Bayesian models for non-linear autoregressions. Journal of Time Series Analysis 18(6), 593–614.
  • Paisley et al. (2011) Paisley, J., D. M. Blei, and M. I. Jordan (2011). The stick-breaking construction of the beta process as a Poisson process. Technical Report 1109.0343, arXiv.
  • Paisley et al. (2010) Paisley, J., A. Zaas, C. W. Woods, G. S. Ginsburg, and L. Carin (2010). A stick-breaking construction of the beta process. In Proc. International Conference on Machine Learning.
  • Pitman (2002) Pitman, J. (2002). Combinatorial stochastic processes. Technical Report 621, U.C. Berkeley Department of Statistics.
  • Pritchard et al. (2000) Pritchard, J. K., M. Stephens, N. A. Rosenberg, and P. Donnelly (2000). Association mapping in structured populations. The American Journal of Human Genetics 67(1), 170–181.
  • Qi et al. (2007) Qi, Y., J. Paisley, and L. Carin (2007, November). Music analysis using hidden Markov mixture models. IEEE Transactions on Signal Processing 55(11), 5209–5224.
  • Rabiner (1989) Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257–286.
  • Saeedi and Bouchard-Côté (2012) Saeedi, A. and A. Bouchard-Côté (2012). Priors over recurrent continuous time processes. In Advances in Neural Information Processing Systems, Number 24.
  • Sethuraman (1994) Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica 4, 639–650.
  • Srebro and Roweis (2005) Srebro, N. and S. Roweis (2005, March). Time-varying topic models using dependent Dirichlet processes. UTML, TR #2005-003.
  • Taddy and Kottas (2009) Taddy, M. and A. Kottas (2009). Markov switching Dirichlet process mixture regression. Bayesian Analysis 4(4), 793–816.
  • Taddy (2010) Taddy, M. A. (2010). Autoregressive mixture models for dynamic spatial Poisson processes: Application to tracking intensity of violent crimes. Journal of the American Statistical Association 105(492), 1403–1417.
  • Teh et al. (2006) Teh, Y. W., M. I. Jordan, M. J. Beal, and D. M. Blei (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association 101(476), 1566–1581.
  • Thibaux and Jordan (2007) Thibaux, R. and M. I. Jordan (2007). Hierarchical beta processes and the Indian buffet process. In Proc. International Conference on Artificial Intelligence and Statistics, Volume 11.
  • Van Gael et al. (2008) Van Gael, J., Y. Saatci, Y. W. Teh, and Z. Ghahramani (2008, July). Beam sampling for the infinite hidden Markov model. In Proc. International Conference on Machine Learning.
  • Wang et al. (2008) Wang, C., D. M. Blei, and D. Heckerman (2008). Continuous time dynamic topic models. In Proc. Uncertainty in Artificial Intelligence.
  • Williamson et al. (2010) Williamson, S., C. Wang, K. A. Heller, and D. M. Blei (2010). The IBP-compound Dirichlet process and its application to focused topic modeling. In Proc. International Conference on Machine Learning.
  • Wood et al. (2011) Wood, S., O. Rosen, and R. Kohn (2011). Bayesian mixtures of autoregressive models. Journal of Computational and Graphical Statistics 20(1), 174–195.
  • Xuan and Murphy (2007) Xuan, X. and K. Murphy (2007, June). Modeling changing dependency structure in multivariate time series. In Proc. International Conference on Machine Learning.
  • Yau et al. (2011) Yau, C., O. Papaspiliopoulos, G. O. Roberts, and C. Holmes (2011). Bayesian non-parametric hidden Markov models with applications in genomics. Journal of the Royal Statistical Society, Series B 73(1), 37–57.
  • Zantedeschi et al. (2011) Zantedeschi, D., P. L. Damien, and N. G. Polson (2011). Predictive macro-finance with dynamic partition models. Journal of the American Statistical Association 106(494), 427–439.