
Reduction of Markov Chains using a Value-of-Information-Based Approach

by Isaac J. Sledge, et al.
University of Florida
U.S. Navy

In this paper, we propose an approach for obtaining reduced-order models of Markov chains. Our approach is composed of two information-theoretic processes. The first is a means of comparing pairs of stationary chains on different state spaces, which is done via the negative Kullback-Leibler divergence defined on a model joint space. Model reduction is achieved by solving a value-of-information criterion with respect to this divergence. Optimizing the criterion leads to a probabilistic partitioning of the states in the high-order Markov chain. A single free parameter that emerges through the optimization process dictates both the partition uncertainty and the number of state groups. We provide a data-driven means of choosing the 'optimal' value of this free parameter, which sidesteps the need to know the number of state groups in an arbitrary chain a priori.
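The aggregation described above can be sketched with a minimal, illustrative implementation. This is not the authors' exact algorithm; it assumes a Blahut-Arimoto-style alternating update in which each state's soft group membership is proportional to exp(-beta * KL), where the KL divergence compares a state's transition row to a group-aggregated row, and beta plays the role of the free parameter that controls partition uncertainty. The function names (`voi_partition`, `stationary_distribution`) and the annealing schedule are illustrative choices, not part of the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a transition matrix P (left Perron eigenvector)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def voi_partition(P, k, beta, iters=200, seed=0):
    """Soft partition of an n-state chain into k groups (illustrative sketch).

    Alternates two updates: (1) recompute each group's aggregated transition
    row as a stationary-weighted average of its members' rows; (2) reassign
    memberships q(j|i) ∝ r(j) * exp(-beta * KL(P[i] || centroid[j])).
    beta is the single free parameter trading partition uncertainty
    against fidelity to the high-order chain.
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    pi = stationary_distribution(P)
    q = rng.dirichlet(np.ones(k), size=n)          # q[i, j] = p(group j | state i)
    eps = 1e-12
    for _ in range(iters):
        w = pi[:, None] * q                        # joint p(state, group)
        r = w.sum(axis=0)                          # group priors p(group)
        centroids = (w.T @ P) / (r[:, None] + eps) # aggregated transition rows
        # KL(P[i] || centroid[j]) for every state/group pair
        logratio = np.log(P[:, None, :] + eps) - np.log(centroids[None, :, :] + eps)
        D = (P[:, None, :] * logratio).sum(axis=2)
        q = r[None, :] * np.exp(-beta * D)
        q /= q.sum(axis=1, keepdims=True)
    return q

# Two weakly coupled 2-state blocks: a 4-state chain that should reduce to 2 groups.
P = np.array([[0.45, 0.45, 0.05, 0.05],
              [0.45, 0.45, 0.05, 0.05],
              [0.05, 0.05, 0.45, 0.45],
              [0.05, 0.05, 0.45, 0.45]])
q = voi_partition(P, k=2, beta=50.0)
print(np.round(q, 2))
```

At small beta the memberships stay diffuse (few effective groups); at large beta the partition hardens, so sweeping beta recovers the uncertainty/resolution trade-off the abstract attributes to its single free parameter.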
