Prediction and Generation of Binary Markov Processes: Can a Finite-State Fox Catch a Markov Mouse?

Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.




I Introduction

Imagine a mouse being chased by a fox. Survival suggests that the mouse should generate a path that is difficult for the fox to predict. We might imagine that the mouse brain is designed or trained to maximize the fox’s difficulty and, similarly, that the fox somehow has optimized the task of predicting the mouse’s path. Are these two tasks actually distinct? If so, do there exist escape paths that are easier to generate than predict? Every animal has limited computational resources and we might reasonably suppose that the mouse has fewer than the fox. Given that mice clearly continue to survive, we can ask whether this disparity in resources exists in tension with the disparity in task-complexity—path-generation versus path-prediction.

In lieu of mouse paths, we consider the space of discrete stationary stochastic processes—objects consisting of temporal sequences that span the range from perfectly ordered to completely random. We then frame resource questions quantitatively via hidden Markov model (HMM) representations of these processes. We focus on two particular HMM representations of any given process: the minimal predictive HMM—its ε-machine, the central object of computational mechanics [1]—and its minimal generative HMM. We then find two primary measures of memory resource: the statistical complexity C_μ—defined as the ε-machine's state entropy—quantifies the cost of prediction, while the generative complexity C_g—the state entropy of the generative machine—quantifies the cost of generation. Introduced over two and a half decades ago, the predictive representation is well studied and can be constructed for arbitrary processes [2]. The generative machine offers more challenges, as it involves a nonconvex constrained minimization over high-dimensional spaces. While there are several known bounds on C_g and restrictions on the construction of generative HMMs [3, 4, 5, 6], they have received significantly less attention than the predictive case and, as a consequence, are markedly less well understood.

The following presents the first construction of the minimal generators for an arbitrary stationary binary Markov process. This allows for the analytic calculation of the generative complexity C_g, as well as of other properties of generative models. These models elucidate the differences between the tasks of generation and prediction. The techniques introduced here should also lead to minimal generators for other process classes.

II Models

We represent stochastic processes using edge-emitting (Mealy) hidden Markov models (HMMs). Such a representation is specified by a set of states, a set of output symbols, a set of labeled transition matrices, and a stationary distribution over states. We consider stationary processes, so, assuming the state transition structure is mixing, the invariant state distribution is unique and is therefore already determined by the labeled transition matrices.
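As a concrete illustration, the way the stationary distribution is determined by the labeled transition matrices can be sketched in a few lines of Python. The matrices below are illustrative, not taken from the paper:

```python
import numpy as np

# A Mealy HMM is specified by labeled transition matrices T^(x), where
# T^(x)[i, j] = Pr(emit symbol x, go to state j | current state i).
# These particular numbers are made up for illustration.
T0 = np.array([[0.6, 0.0],
               [0.3, 0.0]])
T1 = np.array([[0.0, 0.4],
               [0.0, 0.7]])

# Summing over symbols gives the ordinary state-to-state transition matrix.
T = T0 + T1
assert np.allclose(T.sum(axis=1), 1.0)  # each row is a distribution

# For a mixing transition structure, the stationary distribution is the
# unique left eigenvector of T with eigenvalue 1, normalized to sum to 1 --
# so it is indeed determined by the labeled matrices alone.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
assert np.allclose(pi @ T, pi)
```

For this example the stationary distribution works out to (3/7, 4/7), fixed entirely by T0 and T1.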

Clearly, not every HMM corresponds to any given process. If a model is to correspond to a particular process, its states must yield conditional independence between the past and future. That is, the past and future random variable chains yielded by a model must be rendered independent by the model's current state S; information-theoretically, we have I[Past; Future | S] = 0. The (unconditioned) mutual information between past and future is called the excess entropy E = I[Past; Future]. Among other uses, it is the amount of uncertainty about the future one may reduce through knowledge of the past. Intuitively then, the state of a correct model must “capture” E bits of information; see Fig. 1. (For brevity, the following suppresses infinite variable indexes.)

Figure 1: I-diagram [7] of the Markov chain Past → S → Future [8]. The state of a generating model shields the past and future, rendering them conditionally independent. This is reflected by the overlap between the past and future being entirely captured by (contained within) the system state entropy H[S]. The state entropy further segments into the crypticity χ, gauge information φ, and oracular information ζ; quantities whose interpretation is explored further in Ref. [9].

There are an infinity of models for a given stochastic process. Depending on context, certain models will have merits above those of others. The ability to predict is one such context.

III Predictive Models

What is prediction? Loosely speaking, prediction has to do with a relation between two variables, one which we think of as input and the other as output. In our context of stochastic processes, the input is the past X_{:0} and the output is the future X_{0:}. By prediction we mean: given some instance of the past x_{:0}, the task is to yield the exact conditional probability distribution

Pr(X_{0:L} | X_{:0} = x_{:0})

for any length L.
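For an order-1 Markov chain, this conditional distribution over length-L futures can be computed exactly by brute force, since it depends on the past only through the last symbol. A minimal sketch, with illustrative transition probabilities p and q (my labeling of the parameters, not necessarily the paper's):

```python
import numpy as np
from itertools import product

# Illustrative binary Markov chain parameters (an assumed labeling):
p = 0.8  # Pr(X_t = 0 | X_{t-1} = 0)
q = 0.6  # Pr(X_t = 1 | X_{t-1} = 1)
T = np.array([[p, 1 - p],
              [1 - q, q]])  # T[a, b] = Pr(X_t = b | X_{t-1} = a)

def future_dist(last_symbol, L):
    """Exact Pr(X_{0:L} = w | past ending in last_symbol) for all words w."""
    dist = {}
    for w in product((0, 1), repeat=L):
        prob, prev = 1.0, last_symbol
        for b in w:
            prob *= T[prev, b]  # chain rule, using the Markov property
            prev = b
        dist[w] = prob
    return dist

d = future_dist(last_symbol=0, L=3)
assert abs(sum(d.values()) - 1.0) < 1e-12  # a proper distribution
```

The enumeration is exponential in L; it is meant only to make the definition of prediction concrete.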

III.1 Construction

The minimal predictive model of a process is known as its ε-machine and its construction is straightforward. The theory of computational mechanics provides a framework for the detailed characterization of the ε-machine in topological and information-theoretic terms.


The kernel underlying this construction is the causal equivalence relation ~ε. This is a relation over the set of semi-infinite pasts such that two pasts, x_{:0} and x'_{:0}, belong to the same equivalence class if their conditional futures agree:

Pr(X_{0:} | X_{:0} = x_{:0}) = Pr(X_{0:} | X_{:0} = x'_{:0}).

Each equivalence class is a state of the system, encapsulating in minimal form the degree to which the past influences the future. Thus, we refer to the classes as causal states and denote by S_t the causal state at time t. The memory required by the ε-machine to implement the act of prediction is C_μ = H[S]—the statistical complexity. (This notion of memory applies in the ensemble setting; single-shot or single-instance memory is also of interest and is studied elsewhere.)

Then, transitions between these states follow directly from the equivalence relation: the labeled transition matrices are given by Pr(S_{t+1} = σ', X_t = x | S_t = σ).

As previously stated, the excess entropy E is the amount of information shared between past and future. The causal equivalence relation induces a particular random variable S that “captures” E. Importantly, E is not itself the entropy of a random variable. Thus, the causal-state random variable cannot generally be of size E bits. We might then think of the difference χ = C_μ − E, also known as the crypticity, as the predictive overhead [10]. It is an interesting fact that a nonzero predictive overhead is generic in the space of all processes.

III.2 Binary Markov Processes

Let us now narrow our focus and construct the predictive models for the particular class of binary Markov processes. More specifically, we consider all stationary stochastic processes over the symbol set {0, 1} with the Markov property:

Pr(X_t | …, X_{t−2}, X_{t−1}) = Pr(X_t | X_{t−1}).

Such a process is specified by two transition probabilities, which we write p = Pr(X_t = 0 | X_{t−1} = 0) and q = Pr(X_t = 1 | X_{t−1} = 1).

Applying the causal equivalence relation, we find that the causal state is completely determined by the previous single symbol, a simple consequence of the process's Markovity. This leads directly to the ε-machine in Fig. 2.

Figure 2: ε-machine for all binary Markov processes. Cases with p = 1 or q = 1 or p + q = 1 are single-state machines that are minimal in all respects: predictive or generative, entropic or dimensional.

Its stationary state distribution is:

π = (Pr(A), Pr(B)) = ((1 − q)/(2 − p − q), (1 − p)/(2 − p − q)),

where state A (B) corresponds to having last seen a 0 (1).

The informational properties of this class of processes—entropy rate, excess entropy, and statistical complexity—can be stated in closed form:

h_μ = Pr(A) H(p) + Pr(B) H(q),
C_μ = H(Pr(A)),
E = C_μ − h_μ,

where H(·) denotes Shannon's binary entropy function [8]. The simple relation among these measures follows from the fact that any (nontrivial) binary Markov process is also equivalent to a spin chain—a restricted class of Markov chains [10].
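Under the labeling p = Pr(0|0) and q = Pr(1|1) assumed here, these closed forms are easy to evaluate numerically; for an order-1 Markov chain the excess entropy reduces to E = C_μ − h_μ, which the sketch below exploits:

```python
import numpy as np

def h(x):
    """Shannon binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

# Illustrative parameters, with the (assumed) labeling
# p = Pr(X_t = 0 | X_{t-1} = 0) and q = Pr(X_t = 1 | X_{t-1} = 1):
p, q = 0.8, 0.6

pi0 = (1 - q) / ((1 - p) + (1 - q))   # stationary probability of symbol 0
C_mu = h(pi0)                          # statistical complexity: state entropy
h_mu = pi0 * h(p) + (1 - pi0) * h(q)   # entropy rate
E = C_mu - h_mu                        # excess entropy of an order-1 chain

# E = I[past; future] is nonnegative and never exceeds the state entropy.
assert 0.0 <= E <= C_mu
```

The final assertion is exactly the statement that E lower-bounds the memory cost of any presentation.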

This class of binary Markov processes spans a variety of structured processes, summarized in Fig. 3. At the extremes of either p = 1 or q = 1, we have a period-1 (constant) process. If either p = 0 or q = 0, we have Golden Mean Processes, where 0s or 1s, respectively, occur in isolation. If p + q = 1, the process loses its dependence on the prior symbol and it becomes a biased coin.

Figure 3: Process space spanned by binary Markov processes. When either p = 1 or q = 1, the process is constant, repeating 0s or 1s, respectively. In the limit p = q = 1 the process is nonergodic, realizing only one or the other of the two constant processes. When either p = 0 or q = 0, the expressed processes are known as Golden Mean Processes, characterized by isolated 0s or 1s, respectively. Along the line p + q = 1 the process is a biased coin. Along the line p = q, the process is known as a perturbed coin, where states A and B each represent an oppositely biased coin and the process switches between the two biases based on the symbols just emitted.

IV Generative Models

Let’s now return to our original topic and describe the second type of process representation—generative models. The only requirement of a generative model is that it be able to correctly sample from the distribution over futures. More specifically, we require that, given any instance of the past, the generative model yields a next symbol with the same probability distribution as specified by the process.

Note that, on the one hand, it may seem obvious that prediction subsumes generation. On the other, it is not so obvious how these two tasks might prefer different mechanisms.

Like the causal state, a generative state must also render past and future conditionally independent. Importantly, as a consequence of the causal equivalence relation, ε-machines are unifilar, which, when paired with their minimality, implies that the causal states are functions of the prior observables. Generative models, however, need not have this restriction. Consequently, a given sequence of past symbols (finite or semi-infinite) may induce more than one generative state.

Generative models are much less well understood than their predictive cousins. This is due in large part to the lack of constructive methods for working with and otherwise building them. This is why our results here, though addressing only a relatively simple class of processes, mark a substantial step forward.

Figure 4: Löhr model: A three-state HMM that generates the same process as that in Fig. 2 when p = q, the perturbed coin. Its principal interest arises since it has a smaller state entropy than the ε-machine for a range of parameter values.

V Löhr Example

Let us now focus on a subclass of binary Markov processes, those for which p = q; refer to the orange line in Fig. 3. Reference [4] offers up a three-state HMM generator for this class, which we refer to as the Löhr model; see Fig. 4. We see from the HMM that when the probability p is near 1/2, the process is nearly independent and identically distributed (IID). An IID process has only a single causal state and therefore zero statistical complexity, C_μ = 0. However, for any deviation from p = 1/2, the statistical complexity is a full bit, C_μ = 1. Why is it that a generator of a nearly IID process—that is, a nearly memoryless process—still needs a full bit of memory?

The motivation for constructing this three-state model is that it might concentrate the IID behavior into a single state and use the other states only for those infrequent deviations that “make up the difference”. And so, the state entropy may be reduced even though there are three states instead of two. A priori, it is not obvious that this construction yields the correct process. It is, however, straightforward to check that the Löhr model produces the correct conditional statistics. It is a generator of the process. Note that, in general, it suffices to check these probabilities for all words up to a finite length that depends on the number of states [11, Corollary 4.3.9].
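Such a check can be automated: two edge-emitting HMMs generate the same process exactly when their word distributions agree up to a suitable finite length. The sketch below compares word probabilities by brute force; the cutoff max_len is chosen by hand rather than derived from the cited corollary, and the example machines are illustrative:

```python
import numpy as np
from itertools import product

def stationary(T):
    """Stationary distribution of a mixing state-transition matrix."""
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

def word_prob(Ts, w):
    """Pr(w) under an edge-emitting HMM with labeled matrices Ts[x]."""
    v = stationary(sum(Ts.values()))
    for x in w:
        v = v @ Ts[x]
    return v.sum()

def same_process(Ts_a, Ts_b, max_len):
    """Do two HMMs assign equal probability to every word up to max_len?"""
    return all(
        abs(word_prob(Ts_a, w) - word_prob(Ts_b, w)) < 1e-10
        for L in range(1, max_len + 1)
        for w in product((0, 1), repeat=L)
    )

# Sanity check: a one-state fair coin vs. a redundant two-state presentation.
fair_1state = {0: np.array([[0.5]]), 1: np.array([[0.5]])}
fair_2state = {0: np.full((2, 2), 0.25), 1: np.full((2, 2), 0.25)}
assert same_process(fair_1state, fair_2state, max_len=4)
```

The same routine, fed the Löhr matrices and the two-state ε-machine, would verify that the two presentations generate identical statistics.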

We find that the Löhr model has the stationary state distribution:

As noted, the statistical complexity is C_μ = 1 bit over p's entire range (except at p = 1/2). The state entropy of the Löhr model is smaller than C_μ for a range of p values. Importantly, this is sufficient to show that prediction and generation are generally different tasks—they have different optimal solutions. This was previously shown in Ref. [4]. However, the question remained whether or not the Löhr model is minimal. Surprisingly, though subsequent works on generative complexity have appeared, to the best of our knowledge this example is the only published HMM that is entropically smaller than the (finite-state) ε-machine.

We will now construct the provably minimal generator for these processes. Further, we extend our analysis beyond the perturbed-coin line to the entire process space of Fig. 3.

Figure 5: Parametrized HMM for the complete set of two-state machines that generate the space of binary Markov chain processes, under one choice of state labeling. A second isomorphic class follows from the opposite labeling.

V.1 Bounds

Recall that, for some parameter values, the Löhr model is entropically smaller than the ε-machine, and it achieves this while having three states instead of two. The important point is that minimization of entropy in the generative context does not limit the number of states in the same way as in the predictive one. (Recall that among predictive models, the ε-machine is minimal in both entropy and state number [2].)

Figure 6: Two-parameter process space of binary Markov processes: Consider three points within this space. For each, there is a two-parameter model space. Within each model space we examine the model’s state entropy and identify the global minima. We exhibit the corresponding HMMs. Topological changes in these minimal HMMs induce a three-region partition on process space.

A recent result shows that the maximum number of states in an entropically minimal channel is |X| · |Y|, where X and Y are the channel input and output alphabets [12]. Since a generative model is a form of communication channel from the past to the future, we find that the number of states of the minimal generative model is bounded by the product of the sizes of the sets of pasts and futures. Of course, this result is useless on its own: these sets are generically infinite.

This bound can be made practical by combining the data processing inequality for exact common information [12] with the existence of the following two Markov chains [13]:

X_{:0} → S^+ → X_{0:}  and  X_{0:} → S^− → X_{:0}.

We denote forward- and reverse-time causal states S^+ and S^−, respectively. Combined, these tell us that the exact common information of the past and future equals that of the forward and reverse causal states. Therefore, the bound can be tightened to |S^+| · |S^−|. This is a particularly helpful application of causal states.

V.2 Binary Markov Chains

In the particular case of processes represented by binary Markov chains, the reverse process is also represented by a binary Markov chain. And so, both |S^+| = 2 and |S^−| = 2. From the above bounds, we find that the minimal generator has at most four states. Closely following the proof in Ref. [12], one can then show that no three- or four-state representation is minimal. And, since a single-state model can only represent IID processes, this leaves only models with two states as the possible minimal representations.

We begin with the assumption that an observation X_{t−1} maps stochastically to a state S_t, which then stochastically maps to a symbol X_t. Constraining this pair of channels to produce observations consistent with the binary Markov chain yields the parametrized hidden Markov model found in Fig. 5. (Appendix III gives the background calculations.)

For each point (p, q) in the binary Markov process space, we now have a two-parameter model space of HMMs. The constraint that conditional probabilities lie between zero and one restricts the model-space parameters to a rectangle. One can now compute the state entropy within this constrained model space and identify the minima.

Since the entropy is concave in the two model parameters and the allowable regions in model space are convex (rectangles), it is sufficient to search for local minima along the boundary.
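A numerical version of this boundary search is easy to sketch. The parametrization g(a, b) below is a made-up stand-in for the paper's actual model-space parametrization; the point is only that a concave objective over a convex rectangle attains its minimum on the boundary (in fact, at a vertex), so sampling the edges suffices:

```python
import numpy as np

def h(x):
    """Shannon binary entropy in bits, safe at the boundary."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

# Hypothetical model space: suppose the stationary state distribution is
# (g(a, b), 1 - g(a, b)) for parameters (a, b) in the unit square.
# g is affine, so the state entropy h(g(a, b)) is concave in (a, b).
def g(a, b):
    return 0.25 + (a + b) / 4.0

def state_entropy(a, b):
    return h(g(a, b))

# Sample the four edges of the rectangle and take the smallest entropy.
ts = np.linspace(0.0, 1.0, 1001)
edges = ([(t, 0.0) for t in ts] + [(t, 1.0) for t in ts] +
         [(0.0, t) for t in ts] + [(1.0, t) for t in ts])
best = min(edges, key=lambda ab: state_entropy(*ab))

# A concave function on a convex compact set is minimized at an extreme
# point, so the minimum indeed sits at a corner of the rectangle.
assert best in [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
```

In the paper's setting the objective is the state entropy of the parametrized HMM of Fig. 5, evaluated at each point (p, q) of process space.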

Figure 6 illustrates this for three different points in process space. We find that at the first two points there is a single global minimum. At the third point, we find that there are two minima equivalent in value, but corresponding to nonisomorphic HMMs. Both representations are biased toward producing a periodic sequence, with fluctuations interjected at different phases of the period.

In this way, one can discover the minimal generator for any binary Markov chain. Examining these minimal topologies at each point, we find that process space is divided into three triangular regions with topologically distinct generators. This is a somewhat surprising contrast with the fact that this model class requires only one predictive topology.

Let us briefly return to the restricted process previously considered—the perturbed coin. We may now quantitatively compare the three state entropies of interest. In Fig. 7 we see that the statistical complexity is C_μ = 1 bit everywhere, except at p = 1/2, where it vanishes, C_μ = 0. The Löhr model's state entropy falls below C_μ, but only for a subset of p values. However, the generative complexity C_g (a smooth function) is everywhere less than both C_μ and the Löhr model's state entropy. (The corresponding generative models were shown above.) This shows that the proposed Löhr model is not the minimal generative model for any value of p.

As implied by the conditional independence requirement, the excess entropy E remains a lower bound on each of these state entropies. Löhr [4] constructed a tighter lower bound on the state entropy of any model of the perturbed coin. We see that C_g is slightly larger than this bound. It may be useful to generalize this lower bound to other processes.

Figure 7: State entropy of various models of the perturbed coin: The excess entropy E is the amount of information any model of a process must possess. A stronger lower bound claimed by Löhr is also plotted. Entropies of the three models: C_μ for the ε-machine, the state entropy H[S] for Löhr's model, and C_g for the generative model. While Löhr's H[S] is less than C_μ for some values of p, C_g is less than both everywhere.

The minimal generators are defined over all of process space. We can compare the cost of prediction C_μ with the cost of generation C_g and with the information necessarily captured by any model—the excess entropy E. This comparison is seen in Fig. 8.

Focusing on the upper two panels of Fig. 8, we see that both C_μ and C_g display symmetry about the p = q line. Furthermore, C_g has a discontinuous derivative along this line of symmetry, but only in the southwest (SW).

For C_μ, the line p + q = 1 is special in that it marks a causal-state collapse—two causal states merge into one under the equivalence relation. For C_g, however, this line marks a qualitative change in behavior (SW versus NE). Since the generative complexity is lower semi-continuous [3], we know that a predictive gap must exist around this line.

The lower two panels of Fig. 8 suggest that the costs of generation and of prediction may have different causes. The parameters for which the generative overhead is high are disjoint from those where the predictive overhead is high. The generative overhead is high when successive symbols are correlated (near the p = q symmetry line), but only for p + q > 1. In the other half of parameter space, it is high when successive symbols are anti-correlated and away from the causal collapse. In contrast, the predictive overhead is high exclusively near the line of causal collapse.

Figure 8: State complexity of the two canonical models: ε-machine and generative machine. The predictive overhead, C_μ − C_g, quantifies the information required to enable prediction above and beyond generation. The generative overhead, C_g − E, quantifies the amount of information a model of a process requires beyond that minimally required by the observable correlations.

VI Conclusion

We presented the minimal generators of binary Markov stochastic processes. Curiously, the literature appears to contain no other examples of minimal generative models for processes with finite-state ε-machines. And so, our contribution here is a substantial step forward. It allows us to begin to understand the difference between prediction and generation through direct calculation. It also opens these new models to analysis by a host of previously developed techniques, including the information diagrams presented here.

To put the results in a larger setting, we note that HMMs have found application in many diverse settings, ranging from speech recognition to bioinformatics. And so, there are many reasons to care about the states and information-theoretic properties of these models, some obvious and some not. It is common to imbue a state with greater explanatory power than, say, a random variable that merely exhibits the correct correlations for the observables at hand. For instance, we may seek independent means of determining the state. Whether or not this is appropriate, the fact remains that the different tasks of prediction and generation are associated with different kinds of state, each with different kinds of explanatory usefulness. This distinction seems to us to be rarely if ever made in HMM applications.

The concept of model state is central, for example, in model selection. A simple and common method for selecting one model over another is the application of a penalty related to the number of states (or the entropy thereof) [14]. Since the predictive model will never have a lower state entropy than the corresponding generative one, an entropic penalty will never select the predictive model; a state-number penalty, however, might. Similarly, in model parameter inference, if one distinguishes between the predictive and generative classes, the maximum-likelihood estimated parameters will differ between the two classes.

Finally, we close by drawing out the consequences for fundamental physics. Understanding states bears directly on thermodynamics. Landauer’s Principle states that erasing memory comes at a minimum, unavoidable cost—a heat dissipation proportional to the size of the memory erased [15]. One can consider HMMs as abstract representations of processes with memory (the state) that must be modified or erased as time progresses. Applying Landauer’s Principle assigns thermodynamic consequences to the HMM time evolution. Which HMM (and corresponding states) is appropriate, though? We now see that prediction and generation, two very natural tasks for a thermodynamic system to perform, actually deliver two different answers. It is important to understand how physical circumstances relate to this choice of task—it will be expressed in terms of heat.


We thank the Santa Fe Institute for its hospitality during visits, where JPC is an External Faculty member. This material is based upon work supported by, or in part by, John Templeton Foundation grant 52095, Foundational Questions Institute grant FQXi-RFP-1609, and the U. S. Army Research Laboratory and the U. S. Army Research Office under contracts W911NF-13-1-0390 and W911NF-13-1-0340. JR was funded by the 2016 NSF Research Experience for Undergraduates program.


  • [1] J. P. Crutchfield. Between order and chaos. Nature Physics, 8(1):17–24, 2012.
  • [2] C. R. Shalizi and J. P. Crutchfield. Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys., 104:817–879, 2001.
  • [3] W. Löhr. Predictive models and generative complexity. J. Systems Sci. Complexity, 25(1):30–45, 2012.
  • [4] W. Löhr and N. Ay. Non-sufficient memories that are sufficient for prediction. In International Conference on Complex Sciences, pages 265–276. Springer, 2009.
  • [5] W. Löhr and N. Ay. On the generative nature of prediction. Adv. Complex Sys., 12(02):169–194, 2009.
  • [6] A. Heller. On stochastic processes derived from Markov chains. Ann. Math. Stat., 36:1286–1291, 1965.
  • [7] R. W. Yeung. A new outlook on Shannon’s information measures. IEEE Trans. Info. Th., 37(3):466–474, 1991.
  • [8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, second edition, 2006.
  • [9] J. P. Crutchfield, C. J. Ellison, J. R. Mahoney, and R. G. James. Synchronization and control in intrinsic and designed computation: An information-theoretic analysis of competing models of stochastic computation. CHAOS, 20(3):037105, 2010.
  • [10] J. R. Mahoney, C. J. Ellison, R. G. James, and J. P. Crutchfield. How hidden are hidden processes? A primer on crypticity and entropy convergence. CHAOS, 21(3):037112, 2011.
  • [11] D. R. Upper. Theory and Algorithms for Hidden Markov Models and Generalized Hidden Markov Models. PhD thesis, University of California, Berkeley, 1997. Published by University Microfilms Intl, Ann Arbor, Michigan.
  • [12] G. R. Kumar, C. T. Li, and A. El Gamal. Exact common information. In Information Theory (ISIT), 2014 IEEE International Symposium on, pages 161–165. IEEE, 2014.
  • [13] R. G. James, J. R. Mahoney, and J. P. Crutchfield. Information trimming: Sufficient statistics, mutual information, and predictability from effective channel states. Phys. Rev. E, 95(6):060102, 2017.
  • [14] H. Akaike. Ann. Inst. Statist. Math., 29A:9, 1977.
  • [15] R. Landauer. Irreversibility and heat generation in the computing process. IBM J. Res. Develop., 5(3):183–191, 1961.
  • [16] S. Kamath and V. Anantharam. A new dual to the Gács-Körner common information defined via the Gray-Wyner system. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 1340–1346. IEEE, 2010.

I Causal States and Löhr States

Since the process considered in the main article is Markov order 1 and its causal states are in one-to-one correspondence with the symbol last seen, we can compactly represent the relation between pasts, causal states, and Löhr states. The mapping from causal states (equivalently, from the last observed symbol) to the states of the Löhr presentation is given by:


For example, if the last symbol was a 0, this induces the corresponding causal state; the Löhr states compatible with it then occur with their respective conditional probabilities.

II Information Diagram Analysis

The information diagram [7] is a tool that has become increasingly useful for analyzing and characterizing the information content of process presentations [13]. It gives a visual representation of multivariate information-theoretic dependencies. It arises from a duality between information measures and set theory: set-intersection corresponds to mutual information and set-union corresponds to joining distributions. Here, we analyze several information measures introduced in Ref. [9] for the processes considered in the main article; the associated I-diagram is depicted in Fig. 9.

A generic model state R has nonzero intersection (mutual information) with both the process's past X_{:0} and future X_{0:}: I[R; X_{:0}] > 0 and I[R; X_{0:}] > 0. Importantly, the state information of a model that correctly generates a process—a process presentation—must completely contain the past-future intersection, which itself lies within the past-future union. We might say that the state of the model “captures” this atom—the excess entropy E. Any model that does not capture at least E bits of the shared past-future information cannot correctly generate the process. (Recall that if a model is predictive, then it is also generative.)

Figure 9: I-diagram depicting how different kinds of information are shared among a process's past X_{:0}, future X_{0:}, (forward) causal states S^+, and generative states S_g. Several information atoms are labeled: the process's past-future mutual information or excess entropy E, the crypticity χ, the oracular information ζ, and the gauge information φ.

The ε-machine is simple not only in the sense that it is constructible; it is informationally simple as well, as its appearance in the information diagram shows. This simplicity follows from the fact that causal states are functions of the past. As a result, the state information has no intersection with the future beyond that of the past's—it contains no, what we call, oracular information ζ. It also contains no information outside the past-future union—it contains no gauge information φ. Beyond the excess entropy atom, it has only one potential region—the process's crypticity χ = C_μ − E, where C_μ is the process's statistical complexity.

The situation is richer for general representations, including generative ones; denote their states R. The more general definition of crypticity is χ[R] = I[R; X_{:0} | X_{0:}]. A general representation may also have oracular information ζ[R] = I[R; X_{0:} | X_{:0}] and gauge information φ[R] = H[R | X_{:0}, X_{0:}]. And, it may have a crypticity greater than, equal to, or less than the ε-machine's crypticity.

Generative models are restricted in two ways. First, their crypticity is never greater than the ε-machine's crypticity: χ[R] ≤ χ. This follows straightforwardly: if it were greater, then the ε-machine would necessarily have a smaller state entropy than the minimal generator, leading to a contradiction. Further, the sum of the generative model's crypticity, gauge information, and oracular information must be no greater than the ε-machine's crypticity:

χ[R] + φ[R] + ζ[R] ≤ χ. (S2)

Figure 10: Decomposition of the generative state information H[S_g] into excess entropy E, crypticity χ, oracular information ζ, and gauge information φ over process space.
Figure 11: Decomposition of H[S_g] for several slices of process space, revealing more directly their functional dependencies.

Second, appealing to our newly introduced generative models for binary Markov chains, we can compare the generative-model crypticity to that of the ε-machine. A generative model distinct from the ε-machine must have nonzero oracular information [16]. Also, a generative model may only have nonzero gauge information if it has both nonzero crypticity and nonzero oracular information [16]:

φ[R] > 0 implies χ[R] > 0 and ζ[R] > 0,

while still satisfying Eq. (S2). Effectively, this means that gauge information can be “minimized” away, unless it supports the existence of both cryptic and oracular information.

For our parametrized generative models of binary Markov chains, we find that H[S_g] decomposes as shown in the two mosaics of Figs. 10 and 11. The gauge information is, generally, a rather small portion of H[S_g], though it is largest where the other overhead atoms are also appreciable. Both the crypticity and the oracular information are large along opposite edges of process space, respectively. This implies that the decomposition “leans left” in one half of process space and “leans right” in the other. This relationship flips if we use the alternative, equivalent model in the other half of the process space.

Note that while gauge information is not required of generative models, it is present in all of the binary Markov chain generators, except along boundary and causal collapse lines. Surveys of other processes suggest that the presence of gauge information may be somewhat rare.

III Parametrized Binary Markov Process HMM: Derivation

We derive the most generic parametrized, two-state HMM of binary Markov processes. The target binary Markov process is represented by the conditional probability matrix:

T = [ p      1 − p ]
    [ 1 − q  q     ],   (S4)

with T[x, x'] = Pr(X_t = x' | X_{t−1} = x).

As an intermediate step to deriving the full HMM, we find the conditional probabilities involving the machine states relative to the symbols emitted on the transitions to and from a state. We write down two template matrices, denoting the states A and B and the random variable over them S: one matrix for Pr(S_t = s | X_{t−1} = x) (S5) and one for Pr(X_t = x | S_t = s) (S6), with entries to be determined.

Note that there is ambiguity here in labeling the states: if we switch the rows of Eq. (S5) with each other and the columns of Eq. (S6) with each other, we obtain different matrices that describe the same model. Only the state labels have been swapped. To avoid double-counting such machines, we restrict to one of the two labelings.

Multiplying out this factorization and requiring it to be consistent with the target matrix (S4) provides two constraints:


(We keep the denominators in this form so that both the numerator and denominator are positive.) We ask that all of these be valid probabilities. Requiring this and substituting Eq. (S7) yields the constraints:

Figure 12: As an intermediate step, we construct a skeleton for the fully general HMM. The labeled variables are temporary ones to be solved for eventually. The other transitions are labeled using the fact that, for example, the probabilities of the paths leaving a given state and emitting a given symbol must sum to the appropriate conditional probability in order to agree with Eq. (S5).

Now, consider the incomplete HMM that defines the helper variables in Fig. 12. To determine the latter’s values, we evaluate the probabilities of a string (a) being generated by this incomplete HMM:


and (b) being generated by the Markov process described in Eq. (S4):


We used the string above as our first case. The remaining marginal probabilities are given by the stationary distributions over the symbols and states, respectively:

Setting Eqs. (S8) and (S9) equal to one another constrains the helper variables. To fully specify all four, though, we need three more independent equations. We obtain these by evaluating, in a similar fashion, the probabilities of three more strings. Respectively, these yield:

Solving this system yields:

Finally, we substitute these into Fig. 12 to recover the parametrized HMM the main article introduced in Fig. 5.
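The consistency requirement at the heart of this derivation—that the two channels compose to the target Markov matrix—can be checked numerically. The sketch below uses illustrative parameter values and a hypothetical choice of the first channel; the second is then forced by consistency and must come out as a valid stochastic matrix:

```python
import numpy as np

# Target binary Markov chain: T[x, x'] = Pr(X_t = x' | X_{t-1} = x).
# Parameter values are illustrative, not taken from the paper.
T = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Factor the one-step dynamics through a two-state latent variable S_t:
#   Pr(x' | x) = sum_s Pr(S_t = s | X_{t-1} = x) * Pr(X_t = x' | S_t = s),
# i.e. T = A @ B with A[x, s] = Pr(s | x) and B[s, x'] = Pr(x' | s).
# A below is one hypothetical choice; B is then forced by consistency.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.linalg.inv(A) @ T

# For (A, B) to define a valid HMM, B must itself be a stochastic matrix.
assert np.allclose(A @ B, T)
assert np.allclose(B.sum(axis=1), 1.0)
assert (B >= 0).all() and (B <= 1).all()
```

The constraint region derived in this appendix is exactly the set of channel choices for which the forced second channel stays within [0, 1], and the minimal generator is found by minimizing the state entropy over that region.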