Algebraic Reduction of Hidden Markov Models
The problem of reducing a Hidden Markov Model (HMM) to one of smaller dimension that exactly reproduces the same marginals is tackled using a system-theoretic approach, adapted to HMMs by leveraging a suitable algebraic representation of probability spaces. We propose two algorithms that return coarse-grained equivalent HMMs obtained via stochastic projection operators: the first returns models that reproduce the single-time distribution of a given output process, while the second preserves the full (multi-time) distribution. The reduction method exploits not only the structure of the observed output, but also its initial condition, whenever the latter is known or belongs to a given subclass. Optimal algorithms are derived for a class of HMMs, namely observable ones. In the general case, we propose algorithms that have produced minimal models for all the examples we analyzed, and we conjecture their optimality.
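To illustrate the flavor of coarse-graining by a stochastic projection, the sketch below shows a classical special case: an HMM whose hidden Markov chain is strongly lumpable with respect to a state partition, and whose emission distributions are constant on each block. This is not the paper's algorithm; the partition, the projection matrix `V`, and the reduced matrices `T_red`, `E_red` are illustrative constructions, and the model parameters are invented for the example.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical 3-state HMM with 2 output symbols.
# Hidden transition matrix T (row-stochastic) and emission matrix E.
T = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.40, 0.40],
              [0.2, 0.30, 0.50]])
E = np.array([[0.9, 0.1],
              [0.3, 0.7],
              [0.3, 0.7]])
pi = np.array([0.2, 0.5, 0.3])  # initial hidden-state distribution

# Partition {0}, {1, 2}: states 1 and 2 have identical emissions,
# and the chain is strongly lumpable over this partition
# (each row of T sums to the same block totals within a block).
V = np.array([[1, 0],
              [0, 1],
              [0, 1]])  # n x m partition-indicator (projection) matrix

# Reduced model: take block sums of a representative row per block.
reps = [0, 1]                 # one representative hidden state per block
T_red = (T @ V)[reps, :]      # 2x2 lumped transition matrix
E_red = E[reps, :]            # emissions are block-constant
pi_red = pi @ V               # lumped initial distribution

# Single-time output marginals of both models for t = 0..5.
for t in range(6):
    p_full = pi @ matrix_power(T, t) @ E
    p_small = pi_red @ matrix_power(T_red, t) @ E_red
    assert np.allclose(p_full, p_small)
```

Here the lumpability condition `T @ V == V @ T_red` (together with block-constant emissions) guarantees that the 2-state model reproduces the output marginals of the 3-state one at every time; the algorithms in the paper aim at finding such reductions systematically, and in more general settings than hard state partitions.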