Markov Model

Markov models incorporate the Markov property, formulated by the Russian mathematician Andrey Markov in 1906: the prediction of a future outcome depends only on the current state, not on the sequence of events that preceded it. The four main forms of Markov models are the Markov chain, the Markov decision process, the hidden Markov model, and the partially observable Markov decision process. Which model applies depends on two factors: whether the system state is fully observable, and whether the system is controlled or fully autonomous.



                       State is fully observable   State is partially observable
System is autonomous   Markov chain                Hidden Markov model
System is controlled   Markov decision process     Partially observable Markov decision process
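The Markov property above can be illustrated with a short simulation. The sketch below uses a hypothetical two-state weather model (the states, transition probabilities, and function names are invented for illustration); note that sampling the next state reads only the current state's row of the transition table, never the earlier history.

```python
import random

# Hypothetical two-state weather model. The next state depends only on
# the current state (the Markov property), not on earlier history.
STATES = ["sunny", "rainy"]
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current, rng):
    """Sample the next state using only the current state's transition row."""
    r = rng.random()
    cumulative = 0.0
    for state, p in TRANSITIONS[current].items():
        cumulative += p
        if r < cumulative:
            return state
    return state  # guard against floating-point rounding

def simulate(start, steps, seed=0):
    """Run the chain for `steps` transitions and return the visited states."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(steps):
        chain.append(next_state(chain[-1], rng))
    return chain

print(simulate("sunny", 5))
```

Because the system is autonomous and fully observable, this is a plain Markov chain: no action choices, no rewards, and no hidden state.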

Markov Models and Machine Learning

A machine learning algorithm can apply Markov models to decision-making processes for predicting an outcome. If the process is entirely autonomous, meaning there is no feedback that may influence the outcome, a Markov chain can model it. Some machine learning algorithms, however, use reinforcement learning: the process of learning which actions maximize a cumulative reward. In situations where a reward signal is present, the Markov decision process is the most applicable model.
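To show how a reward signal turns a Markov chain into a decision problem, here is a minimal value-iteration sketch for a made-up two-state Markov decision process (the states, actions, transitions, and rewards are all invented for this example, not taken from the text).

```python
# Hypothetical MDP: an agent with "low" or "high" energy chooses to
# "wait" or "work". Value iteration finds the value of each state and
# the action that maximizes the expected discounted cumulative reward.
STATES = ["low", "high"]
ACTIONS = ["wait", "work"]

# P[(state, action)] -> list of (next_state, probability)
P = {
    ("low", "wait"): [("low", 1.0)],
    ("low", "work"): [("high", 0.7), ("low", 0.3)],
    ("high", "wait"): [("high", 0.9), ("low", 0.1)],
    ("high", "work"): [("high", 1.0)],
}
# R[(state, action)] -> immediate reward (illustrative values)
R = {("low", "wait"): 0.0, ("low", "work"): -1.0,
     ("high", "wait"): 2.0, ("high", "work"): 1.0}

def q_value(s, a, V, gamma):
    """Expected return of taking action a in state s, then following V."""
    return R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])

def value_iteration(gamma=0.9, tol=1e-6):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(q_value(s, a, V, gamma) for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
# Greedy policy: in each state, pick the action with the highest Q-value.
policy = {s: max(ACTIONS, key=lambda a: q_value(s, a, V, 0.9)) for s in STATES}
print(V, policy)
```

Removing the action choice and the rewards collapses this back to the Markov chain case; adding partial observability of the state would instead yield a partially observable Markov decision process.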
