Among the well-established “narrow” Artificial Intelligence (AI) approaches [RN03], arguably Reinforcement Learning (RL) [SB98] pursues most directly the same goal. RL considers the general agent-environment setup in which an agent interacts with an environment (acts and observes in cycles) and receives (occasional) rewards. The agent’s objective is to collect as much reward as possible. Most if not all AI problems can be formulated in this framework. Since the future is generally unknown and uncertain, the agent needs to learn a model of the environment based on past experience, which allows it to predict future rewards and to use this model to maximize expected long-term reward.
The simplest interesting environmental class consists of finite state fully observable Markov Decision Processes (MDPs) [Put94, SB98], which is reasonably well understood. Extensions to continuous states with (non)linear function approximation [SB98, Gor99], partial observability (POMDP) [KLC98, RPPCd08], structured MDPs (DBNs) [SDL07], and others have been considered, but the algorithms are much more brittle.
A way to tackle complex real-world problems is to reduce them to finite MDPs which we know how to deal with efficiently. This approach leaves a lot of work to the designer, namely to extract the right state representation (“features”) out of the bare observations in the initial (formal or informal) problem description. Even if potentially useful representations have been found, it is usually not clear which ones will turn out to be better, except in situations where we already know a perfect model. Think of a mobile robot equipped with a camera plunged into an unknown environment. While we can imagine which image features will potentially be useful, we cannot know in advance which ones will actually be useful.
Main contribution. The primary goal of this paper is to develop and investigate a method that automatically selects those features that are necessary and sufficient for reducing a complex real-world problem to a computationally tractable MDP.
Formally, we consider maps Φ from the past observation-reward-action history of the agent to an MDP state. Histories not worth distinguishing are mapped to the same state, i.e. Φ induces a partition on the set of histories. We call this model ΦMDP. A state may be simply an abstract label of the partition, but more often it is itself a structured object like a discrete vector. Each vector component describes one feature of the history [Hut09a, Hut09c]. For example, the state may be a 3-vector containing (shape, color, size) of the object a robot tracks. For this reason, we call the reduction Φ, and the overall approach, Feature RL, although in this Part I only the simpler unstructured case is considered.
Φ maps the agent’s experience over time into a sequence of MDP states. Rather than informally constructing Φ by hand, our goal is to develop a formal objective criterion for evaluating different reductions Φ. Obviously, if we want the criterion to be effective at any point in time, it can only depend on the agent’s past experience and possibly generic background knowledge. The “Cost” of Φ shall be small iff it leads to a “good” MDP representation. The establishment of such a criterion transforms the, in general, ill-defined RL problem into a formal optimization problem (minimizing Cost) for which efficient algorithms need to be developed. Another important question is which problems can profitably be reduced to MDPs [Hut09a, Hut09c].
The real world does not conform itself to nice models: Reality is a non-ergodic, partially observable, uncertain, unknown environment in which acquiring experience can be expensive. So we should exploit the data (past experience) at hand as well as possible, cannot generate virtual samples since the model is not given (it needs to be learned itself), and there is no reset option. No criterion for this general setup exists. Of course, there is previous work which is in one way or another related to ΦMDP.
ΦMDP in perspective. As partly detailed later, the suggested ΦMDP model has interesting connections to many important ideas and approaches in RL and beyond:
ΦMDP side-steps the open problem of learning POMDPs [KLC98],
ΦMDP extends the idea of state aggregation from planning (based on bi-simulation metrics [GDG03]) to RL (based on information),
ΦMDP generalizes U-Tree [McC96] to arbitrary features,
ΦMDP extends model selection criteria to general RL problems [Grü07],
ΦMDP is an alternative to PSRs [SLJ03], for which proper learning algorithms have yet to be developed,
Learning in agents via rewards is a much more demanding task than “classical” machine learning on independently and identically distributed (i.i.d.) data, largely due to the temporal credit assignment and exploration problem. Nevertheless, RL (and the closely related adaptive control theory in engineering) has been applied (often unrivaled) to a variety of real-world problems, occasionally with stunning success (Backgammon, Checkers [SB98, Chp.11], helicopter control [NCD04]). ΦMDP overcomes several of the limitations of the approaches in the items above and thus broadens the applicability of RL.
ΦMDP owes its general-purpose learning and planning ability to its information-theoretic and complexity-theoretic foundations. The implementation of ΦMDP is based on (specialized and general) search and optimization algorithms used for finding good reductions Φ. Given that ΦMDP aims at general AI problems, one may wonder about the role of other aspects traditionally considered in AI [RN03]: knowledge representation (KR) and logic may be useful for representing complex reductions Φ. Agent interface fields like robotics, computer vision, and natural language processing can speed up learning by pre- and post-processing the raw observations and actions into more structured formats. These representational and interface aspects will only barely be discussed in this paper. The following diagram illustrates ΦMDP in perspective.
Contents. Section 2 formalizes our ΦMDP setup, which consists of the agent model with a map Φ from observation-reward-action histories to MDP states. Section 3 develops our core Φ selection principle, which is illustrated in Section 4 on a tiny example. Section 5 discusses general search algorithms for finding (approximations of) the optimal Φ, concretized for context tree MDPs. In Section 6 I find the optimal action for ΦMDP, and present the overall algorithm. Section 7 improves the Φ selection criterion by “integrating” out the states. Section 8 contains a brief discussion of ΦMDP, including relations to prior work, incremental algorithms, and an outlook to more realistic structured MDPs (dynamic Bayesian networks, DBNs) treated in Part II.
Rather than leaving parts of ΦMDP vague and unspecified, I decided to give at the very least a simplistic concrete algorithm for each building block, which may be assembled into one sound system on which one can build.
Notation. Throughout this article, log denotes the binary logarithm, ϵ the empty string, and δ_{x,y} = 1 if x = y and 0 else is the Kronecker symbol. I generally omit separating commas if no confusion arises, in particular in indices. For any x of suitable type (string, vector, set), I define string x_{1:n} = x_1...x_n, sum x_+ = Σ_i x_i, union x_* = ∪_i x_i, and vector x_• = (x_1, ..., x_n), where i ranges over the full range {1, ..., n} and n = |x| is the length or dimension or size of x.
x̂ denotes an estimate of x. I do not distinguish between random variables and their realizations, and the abbreviation P(x) := P[X = x] never leads to confusion. More specifically, m denotes the number of states, i ∈ {1, ..., m} any state index, n the current time, and t ∈ {1, ..., n} any time in history. Further, in order not to get distracted, at several places I gloss over initial conditions or special cases where inessential. Also 0·undefined = 0·∞ := 0.
2 Feature Markov Decision Process (ΦMDP)
This section describes our formal setup. It consists of the agent-environment framework and maps Φ from observation-reward-action histories to MDP states. I call this arrangement “Feature MDP” or, for short, ΦMDP.
Agent-environment setup. I consider the standard agent-environment setup [RN03] in which an Agent interacts with an Environment. The agent can choose from actions a ∈ A (e.g. limb movements), and the environment provides (regular) observations o ∈ O (e.g. camera images) and real-valued rewards r ∈ R to the agent. The reward may be very scarce, e.g. just +1 (−1) for winning (losing) a chess game, and 0 at all other times [Hut05, Sec.6.3]. This happens in cycles t = 1, 2, 3, ...: At time t, after observing o_t and receiving reward r_t, the agent takes action a_t based on history h_t := o_1 r_1 a_1 ... o_t r_t. Then the next cycle starts. The agent’s objective is to maximize his long-term reward. Without much loss of generality, I assume that R is finite. Finiteness of R is lifted in [Hut09a, Hut09c]. I also assume that A is finite and small, which is restrictive. Part II deals with large state spaces, and large (structured) action spaces can be dealt with in a similar way. No assumptions are made on O; it may be huge or even infinite. Indeed, ΦMDP has been specifically designed to cope with huge observation spaces, e.g. camera images, which are mapped to a small space of relevant states.
The agent and environment may be viewed as a pair or triple of interlocking functions of the history H := (O × R × A)* × O × R:

Env : H × A ⇝ O × R,    Agent : H ⇝ A,

where ⇝ indicates that these mappings might be stochastic.
The goal of AI is to design agents that achieve high (expected) reward over the agent’s lifetime.
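The perception-action cycle above can be sketched as a simple loop. This is my own minimal illustration, not code from the paper; `env_step` and `agent_policy` are hypothetical stand-ins for Env() and Agent().

```python
def interaction_loop(env_step, agent_policy, n_cycles):
    """Run the agent-environment cycle: at each time t the agent first
    receives percept (o_t, r_t), then acts a_t based on the history h_t."""
    history = []                      # h = o1 r1 a1 o2 r2 a2 ...
    o, r = env_step(None, None)       # initial percept
    total_reward = 0.0
    for t in range(n_cycles):
        history.append((o, r))        # observe o_t, receive r_t
        a = agent_policy(history)     # act based on the full history h_t
        history.append(a)
        o, r = env_step(history, a)   # environment responds
        total_reward += r
    return total_reward
```

With a dummy constant-reward environment and a fixed-action agent, the loop accumulates one unit of reward per cycle; any concrete agent only differs in how `agent_policy` exploits the history.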
(Un)known environments. For known Env(), finding the reward-maximizing agent is a well-defined and formally solvable problem [Hut05, Chp.4], with computational efficiency being the “only” matter of concern. For most real-world AI problems Env() is at best partially known. For unknown Env(), even the meaning of expected reward maximization is conceptually a challenge [Hut05, Chp.5].
Narrow AI considers the case where function Env() is either known (like planning in blocks world), or essentially known (like in chess, where one can safely model the opponent as a perfect minimax player), or Env() belongs to a relatively small class of environments (e.g. elevator or traffic control).
The goal of AGI is to design agents that perform well in a large range of environments [LH07], i.e. achieve high reward over their lifetime with as few assumptions about Env() as possible. A minimal necessary assumption is that the environment possesses some structure or pattern [WM97].
From real-life experience (and from the examples below) we know that usually we do not need to know the complete history of events in order to determine (sufficiently well) what will happen next and to be able to perform well. Let Φ(h_t) be such a “useful” summary of history h_t.
Generality of ΦMDP. The following examples show that many problems can be reduced (approximately) to finite MDPs, thus showing that ΦMDP can deal with a large variety of problems: In full-information games (like chess) with a static opponent, it is sufficient to know the current state of the game (board configuration) to play well (the history plays no role), hence Φ(h_t) = o_t is a sufficient summary (Markov condition). Classical physics is essentially predictable from the position and velocity of objects at a single time, or equivalently from the locations at two consecutive times, hence Φ(h_t) = (o_{t−1}, o_t) is a sufficient summary (2nd-order Markov). For i.i.d. processes of unknown probability (e.g. clinical trials = Bandits), the frequency of observations is a sufficient statistic. In a POMDP planning problem, the so-called belief vector at time t can be written down explicitly as some function of the complete history h_t (by integrating out the hidden states). Φ(h_t) could be chosen as (a discretized version of) this belief vector, showing that ΦMDP generalizes POMDPs. Obviously, the identity Φ(h) = h is always sufficient but not very useful, since Env() as a function of h is hard to impossible to “learn”.
This suggests looking for Φ with small codomain, which allows learning/estimating/approximating Env by Ênv such that o_t ≈ Ênv(Φ(h_{t−1})) for t = 1, ..., n.
Example. Consider a robot equipped with a camera, i.e. o is a pixel image. Computer vision algorithms usually extract a set of features from o_t (or h_t), from low-level patterns to high-level objects with their spatial relations. Neither is it possible nor necessary to make a precise prediction of o_t from summary Φ(h_{t−1}). An approximate prediction must and will do. The difficulty is that the similarity measure “≈” needs to be context dependent. Minor image nuances are irrelevant when driving a car, but when buying a painting it makes a huge difference in price whether it’s an original or a copy. Essentially only a bijection Φ would be able to extract all potentially interesting features, but such a Φ defeats its original purpose.
From histories to states. It is of utmost importance to properly formalize the meaning of “≈” in a general, domain-independent way. Let s_t := Φ(h_t) summarize all relevant information in history h_t. I call s a state or feature (vector) of h. “Relevant” means that the future is predictable from s_t (and a_t) alone, and that the relevant future is coded in s_{t+1} s_{t+2} .... So we pass from the complete (and known) history o_1 r_1 a_1 ... o_n r_n to a “compressed” history s_1 r_1 a_1 ... s_n r_n and seek Φ such that s_{t+1} is (approximately a stochastic) function of s_t (and a_t). Since the goal of the agent is to maximize his rewards, the rewards are always relevant, so they (have to) stay untouched (this will become clearer below).
The ΦMDP. The structure derived above is a classical Markov Decision Process (MDP), but the primary question I ask is not the usual one of finding the value function or best action, or of comparing different models of a given state sequence. I ask how well the state-action-reward sequence generated by Φ can be modeled as an MDP compared to the sequences resulting from other choices of Φ. A good Φ leads to a good model for predicting future rewards, which can be used to find good actions that maximize the agent’s expected long-term reward.
3 ΦMDP Coding and Evaluation
I first review a few standard codes and model selection methods for i.i.d. sequences, subsequently adapt them to our situation, and show that they are suitable in our context. I then state my Cost function for Φ and the Φ selection principle, and compare it to the Minimum Description Length (MDL) philosophy.
I.i.d. processes. Consider i.i.d. x_1 ... x_n ∈ X^n for finite X = {1, ..., m}. For known θ_i = P[x_t = i] we have P(x_{1:n}) = θ_{x_1} · ... · θ_{x_n}. It is well known that there exists a code (e.g. arithmetic or Shannon-Fano) for x_{1:n} of length −log P(x_{1:n}), which is asymptotically optimal with probability one [Bar85, Thm.3.1]. This also easily follows from [CT06, Thm.5.10.1].
(θ unknown). We also need to code θ, or equivalently the counts n_i := |{t ≤ n : x_t = i}|, which naively needs log n bits for each i. In general, a sample of size n allows estimating parameters only to accuracy O(1/√n). This shows that it is sufficient to code each θ_i to accuracy O(1/√n), which requires only (1/2) log n + O(1) bits each. Hence, given n and ignoring O(1) terms, the overall code length (CL) of x_{1:n} for unknown frequencies is

CL(n) := n·H(n/n) + ((m−1)/2)·log n  for n > 0, and 0 else,    (1)

where n = (n_1, ..., n_m), n = n_+ = n_1 + ... + n_m, and H(p) := −Σ_i p_i log p_i is the entropy of probability distribution p. We have assumed that n is given, hence only m−1 of the n_i need to be coded, since the m-th one can be reconstructed from them and n. The above is an exact code of x_{1:n}, which is optimal (within +O(1)) for all i.i.d. sources. This code may further be optimized by coding only the θ_i of the non-empty categories, resulting in a code of length

CL(n) := n·H(n/n) + ((m′−1)/2)·log n + m,   m′ := |{i : n_i > 0}|,    (2)

where the m bits are needed to indicate which of the θ_i are coded. We refer to this improvement as the sparse code.
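As a minimal illustration (my own sketch, not code from the paper), the code lengths (1) and (2) can be computed directly from the counts:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector; 0*log(0) := 0."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def code_length(counts, sparse=False):
    """Code length in bits of an i.i.d. sequence summarized by its counts.

    sparse=False gives (1): n*H(n/n) + (m-1)/2*log n;
    sparse=True  gives (2): only non-empty categories are parameterized,
    at the price of m extra indicator bits."""
    n = sum(counts)
    if n == 0:
        return 0.0
    H = entropy([c / n for c in counts])
    if sparse:
        m_nonzero = sum(1 for c in counts if c > 0)
        return n * H + (m_nonzero - 1) / 2 * math.log2(n) + len(counts)
    return n * H + (len(counts) - 1) / 2 * math.log2(n)
```

For a fair binary sequence of length 16 with counts (8, 8), code (1) gives 16·1 + (1/2)·log 16 = 18 bits; a single-category sequence costs 0 bits beyond its length.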
Combinatorial code [LV08]: A second way to code the data is to code n exactly, and then, since there are n!/(n_1!·...·n_m!) sequences with counts n, to construct a code of length log(n!/(n_1!·...·n_m!)) given n by enumeration. Within ±O(log n), this code length also coincides with (1).
Incremental code [WST97]: A third way is to use a sequential estimate P̂(x_{t+1} = i | x_{1:t}) := (n_i^t + α)/(t + mα) based on the known past counts n_i^t, where α > 0 is some regularizer. Then

P̂(x_{1:n}) = Π_{t=0}^{n−1} P̂(x_{t+1} | x_{1:t}), which can also be written as (Γ(mα)/Γ(n + mα)) · Π_i (Γ(n_i + α)/Γ(α)),    (3)

where Γ is the Gamma function. The logarithm of this expression again essentially reduces to (1) (for any α > 0, typically α = 1/2 or 1).
Bayesian code [Sch78, Mac03]: A fourth (the Bayesian) way is to assume a Dirichlet(α) prior over θ. The marginal distribution (evidence) is identical to (3), and the Bayesian Information Criterion (BIC) approximation leads to code (1).
Conclusion: All four methods lead to essentially the same code length. The references above contain rigorous derivations. In the following I will ignore the O(1) terms and refer to (1) simply as the code length. Note that x_{1:n} is coded exactly (losslessly). Similarly (see ΦMDP below), sampling models more complex than i.i.d. may be considered, and the one that leads to the shortest code is selected as the best model [Grü07].
MDP definitions. Recall that a state-reward sequence s_1 r_1 s_2 r_2 ... (given actions a_1 a_2 ...) is said to be sampled from an MDP (S, A, T, R) iff the probability of s_t only depends on s_{t−1} and a_{t−1}, and that of r_t only on s_{t−1}, a_{t−1}, and s_t. That is,

P(s_t | h_{t−1} a_{t−1}) = P(s_t | s_{t−1}, a_{t−1}),    P(r_t | h_{t−1} a_{t−1} s_t) = P(r_t | s_{t−1}, a_{t−1}, s_t).

In our case, we can identify the state space S with the states s_1, ..., s_n “observed” so far. Hence S is finite and typically |S| ≪ n, since states repeat. Let s --a/r′--> s′ be shorthand for “action a in state s resulted in state s′ (reward r′)”. Let T^{ar′}_{ss′} := {t ≤ n : s_{t−1} = s, a_{t−1} = a, s_t = s′, r_t = r′} be the set of times at which this happens, and n^{ar′}_{ss′} their number; a + in place of an index denotes the sum over it (e.g. n^a_{ss′} sums over all rewards).
Coding MDP sequences. For some fixed s and a, consider the subsequence of states reached from s via a (s --a--> s′), of length n^a_{s+}. By definition of an MDP, this sequence is i.i.d., with s′ occurring n^a_{ss′} times. By (1) we can code this sequence in CL(n^a_{s•}) bits. The whole sequence s_{1:n} consists of one such i.i.d. subsequence for each pair (s, a). We can join their codes and get a total code length

CL(s_{1:n} | a_{1:n}) := Σ_{s,a} CL(n^a_{s•}).    (4)
Similarly to the states we code the rewards. There are different “standard” reward models. I consider only the simplest case of a small discrete reward set R like {0, 1} or {−1, 0, +1} here, and defer generalizations to R ⊆ ℝ and a discussion of variants to the DBN model [Hut09a]. By the MDP assumption, for each triple (s, a, s′), the rewards r_t at times t ∈ T^a_{ss′} are i.i.d. Hence they can be coded in

CL(r_{1:n} | s_{1:n}, a_{1:n}) := Σ_{s,a,s′} CL(n^{a•}_{ss′})    (5)

bits. In order to increase the statistics, it might be better to treat r_t as a function of s_t only. This is not restrictive, since dependence on s_{t−1} and a_{t−1} can be mimicked by coding these aspects into an enlarged state space.
Reward-state trade-off. Note that the code for r depends on s_{1:n}. Indeed, we may interpret the construction as follows: Ultimately we (or the agent) only care about the reward, so we want to measure how well we can predict the rewards, which we do with (5). But this code depends on s_{1:n}, so we need a code for s_{1:n} too, which is (4). To see that we need both parts, consider two extremes.
A simplistic state transition model (small |S|) results in a short code for s_{1:n}. For instance, for |S| = 1, nothing needs to be coded and (4) is identically zero. But this obscures potential structure in the reward sequence, leading to a long code for r_{1:n}.
On the other hand, the more detailed the state transition model (large |S|), the easier it is to predict and hence compress r_{1:n}. But a large model is hard to learn, i.e. the code for s_{1:n} will be large. For instance, for Φ(h) = h, no state repeats and the frequency-based coding breaks down.
Φ selection principle. Let us define the Cost of Φ : H → S on h_n as the length of the ΦMDP code for s_{1:n} and r_{1:n} given a_{1:n}, plus a complexity penalty CL(Φ) for Φ:

Cost(Φ | h_n) := CL(s_{1:n} | a_{1:n}) + CL(r_{1:n} | s_{1:n}, a_{1:n}) + CL(Φ),  where s_t = Φ(h_t).    (6)
The discussion above suggests that the minimum of the joint code length (4)+(5) is attained for a Φ that keeps all and only the information relevant for predicting rewards. Such a Φ may be regarded as best explaining the rewards. I added the additional complexity penalty CL(Φ) so that, from the set of Φ that minimize (4)+(5) (e.g. Φ’s identical on h_n but different on longer histories), the simplest one is selected. The penalty is usually some code length or log-index of Φ. This conforms with Ockham’s razor and the MDL philosophy. So we are looking for a Φ of minimal Cost:

Φ^best := arg min_Φ Cost(Φ | h_n).    (7)
If the minimization is restricted to some small class of reasonably simple Φ, the term CL(Φ) in (6) may be dropped. The state sequence generated by Φ^best (or approximations thereof) will usually only be approximately MDP. While (6) is an optimal code only for MDP sequences, it still yields good codes for approximately MDP sequences. Indeed, Φ^best balances closeness to MDP with simplicity. The primary purpose of the simplicity bias is not computational tractability, but generalization ability [Leg08, Hut05].
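To make the selection principle concrete, here is a small sketch (my own, under the definitions above) that evaluates the joint code length (4)+(5), with the CL(Φ) penalty omitted, for a candidate feature map on a given history:

```python
import math
from collections import Counter, defaultdict

def CL(counts):
    """Code length (1) in bits: n*H + (m-1)/2*log n; 0 for an empty list."""
    n = sum(counts)
    if n == 0:
        return 0.0
    H = -sum(c / n * math.log2(c / n) for c in counts if c > 0)
    return n * H + (len(counts) - 1) / 2 * math.log2(n)

def cost(phi, history):
    """Cost(Phi|h) without the CL(Phi) term.

    `history` is a list of (o, r, a) triples; `phi` maps a history prefix
    (a list of such triples) to a state."""
    n = len(history)
    states = [phi(history[:t + 1]) for t in range(n)]
    trans = Counter()  # counts of transitions s --a--> s'
    rew = Counter()    # counts of reward r' on transition s --a--> s'
    for t in range(1, n):
        s, a, s2, r2 = states[t - 1], history[t - 1][2], states[t], history[t][1]
        trans[(s, a, s2)] += 1
        rew[(s, a, s2, r2)] += 1
    by_sa, by_sas = defaultdict(list), defaultdict(list)
    for (s, a, s2), c in trans.items():
        by_sa[(s, a)].append(c)        # one i.i.d. subsequence per (s, a)
    for (s, a, s2, r2), c in rew.items():
        by_sas[(s, a, s2)].append(c)   # one i.i.d. subsequence per (s, a, s')
    return (sum(CL(cs) for cs in by_sa.values())
            + sum(CL(cs) for cs in by_sas.values()))
```

On a toy history where the reward simply repeats the last observation, a feature map remembering that observation achieves cost 0, while the vacuous one-state map pays for the unpredicted rewards.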
Relation to MDL et al.
In unsupervised learning (clustering and density estimation) and supervised learning (regression and classification), penalized maximum likelihood criteria [HTF01, Chp.7] like BIC [Sch78], MDL [Grü07], and MML [Wal05] have successfully been used for semi-parametric model selection. It is far from obvious how to apply them in RL. Indeed, our derived Cost function cannot be interpreted as a usual model+data code length. The problem is the following:
Ultimately we do not care about the observations but about the rewards. The rewards depend on the states, but the states are arbitrary in the sense that they are model-dependent functions of the bare data (observations). The existence of these unobserved states is what complicates matters, but their introduction is necessary in order to model the rewards. For instance, s_{1:n} is actually not needed for coding r_{1:n}, so from a strict coding/MDL perspective, the term CL(s_{1:n} | a_{1:n}) in (6) is redundant. Since s is some “arbitrary” construct of h, it is better to regard (6) as a code of r only. Since the agent chooses his actions, a_{1:n} need not be coded, and o_{1:n} is not coded because the observations are only of indirect importance.
The Cost(Φ) criterion is strongly motivated by the rigorous MDL principle, but is invoked outside the usual induction/modeling/prediction context.
4 A Tiny Example
The purpose of the tiny example in this section is to provide enough insight into how and why ΦMDP works to convince the reader that our Φ selection principle is reasonable.
Example setup. I assume a simplified MDP model in which the reward r_t depends on s_t only, i.e. (5) is replaced by

CL(r_{1:n} | s_{1:n}, a_{1:n}) := Σ_s CL(n^•_s),    (8)

where n^{r′}_s counts the times at which state s received reward r′. This allows us to illustrate ΦMDP on a tiny example. The same insight is gained using (5) if an analogous larger example is considered. Furthermore, I set CL(Φ) ≡ 0.
Consider a binary observation space O = {0, 1}, a quaternary reward space R = {0, 1, 2, 3}, and a single action A = {0}. Observations o_t are independent fair coin flips, i.e. Bernoulli(1/2), and the reward r_t = 2o_{t−1} + o_t is a deterministic function of the two most recent observations.
Considered features. As features I consider Φ_k with Φ_k(h_t) = o_{t−k+1} ... o_t for various k, which regard the last k observations as “relevant”. Intuitively Φ_2 is the best observation summary, which I confirm below. The state space is S = {0, 1}^k (for sufficiently large n). The MDPs for k = 0, 1, 2 are as follows.
The Φ_2 MDP, with all non-zero transition probabilities being 50%, is an exact representation of our data source. The missing arrows (directions) are due to the fact that a state o′o can only lead to a state whose first digit is o. Note that ΦMDP does not “know” this and has to learn the (non-)zero transition probabilities. Each state has two successor states with equal probability, hence generates (see previous paragraph) a Bernoulli(1/2) state subsequence and a constant reward sequence, since the reward can be computed from the state = the last two observations. Asymptotically, all four states occur equally often, hence the four subsequences have approximately the same length n/4.
For the Φ_2 MDP we thus get a total code length of about n bits plus logarithmic terms. The log-terms reflect the memory required to code the MDP structure and probabilities. Since each state has only 2 realized/possible successors, we need n bits to code the state sequence. The reward is a deterministic function of the state, hence needs no memory to code given s_{1:n}.
The Φ_0 MDP throws away all observations (left figure above), hence CL(s_{1:n} | a_{1:n}) = 0. While the reward sequence is not i.i.d. (e.g. r_{t+1} = 3 cannot follow r_t = 0), the Φ_0 MDP has no choice but to regard the rewards as i.i.d., resulting in a reward code of length about 2n bits plus logarithmic terms.
The Φ_1 MDP model is an interesting compromise (middle figure above). The state allows a partial prediction of the reward: state 0 allows rewards 0 and 2; state 1 allows rewards 1 and 3. Each of the two states creates a Bernoulli(1/2) state successor subsequence and a binary reward sequence, wrongly presumed to be Bernoulli(1/2). Hence both the state and the reward sequence cost about n bits each, i.e. about 2n in total.
Summary. The following table summarizes the results for general Φ_k and beyond:
The last column is the sum of the two preceding columns. The part linear in n is the code length for the state/reward sequence. The part logarithmic in n is the code length for the transition/reward probabilities of the MDP; each parameter needs about (1/2) log n bits. For large n, Φ_2 results in the shortest code, as anticipated. The “approximate” model Φ_1 is just not good enough to beat the vacuous model Φ_0, but in more realistic examples some approximate model usually has the shortest code. In [Hut09a] I show on a more complex example how Φ^best will store long-term information in a POMDP environment.
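The qualitative conclusion of the table can be checked empirically. The following self-contained simulation (my own sketch, not code from the paper) draws fair coin flips, sets r_t = 2o_{t−1} + o_t, and evaluates the cost of Φ_0, Φ_1, Φ_2 under the simplified reward model (8):

```python
import math
import random

def CL(counts):
    """Code length (1) in bits; 0 for an empty count list."""
    n = sum(counts)
    if n == 0:
        return 0.0
    H = -sum(c / n * math.log2(c / n) for c in counts if c > 0)
    return n * H + (len(counts) - 1) / 2 * math.log2(n)

def cost_phi_k(k, obs, rew):
    """Cost of Phi_k (state = last k observations), with rewards modeled
    as depending on the state only, as in (8)."""
    n = len(obs)
    states = [tuple(obs[max(0, t - k + 1): t + 1]) for t in range(n)]
    trans = {}  # state -> successor-state counts
    rmod = {}   # state -> reward counts
    for t in range(1, n):
        trans.setdefault(states[t - 1], {})
        trans[states[t - 1]][states[t]] = trans[states[t - 1]].get(states[t], 0) + 1
        rmod.setdefault(states[t], {})
        rmod[states[t]][rew[t]] = rmod[states[t]].get(rew[t], 0) + 1
    return (sum(CL(list(d.values())) for d in trans.values())
            + sum(CL(list(d.values())) for d in rmod.values()))

random.seed(0)
n = 10_000
obs = [random.randint(0, 1) for _ in range(n)]
rew = [0] + [2 * obs[t - 1] + obs[t] for t in range(1, n)]
costs = {k: cost_phi_k(k, obs, rew) for k in (0, 1, 2)}
```

As predicted, Φ_2 costs about n bits (the state sequence only; rewards are deterministic given the state), while Φ_0 and Φ_1 each cost about 2n bits.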
5 Cost(Φ) Minimization
So far I have reduced the reinforcement learning problem to a formal Φ-optimization problem. This section briefly explains what we have gained by this reduction, and provides some general information about problem representations, stochastic search, and neighborhoods. Finally, I present a simplistic but concrete algorithm for searching context tree MDPs.
Φ search. I now discuss how to find good summaries Φ. The introduced generic cost function Cost(Φ | h_n), based only on the known history h_n, makes this a well-defined task that is completely decoupled from the complex (ill-defined) reinforcement learning objective. This reduction should not be underestimated. We can employ a wide range of optimizers and do not even have to worry about overfitting. The most challenging task is to come up with creative algorithms proposing Φ’s.
There are many optimization methods, most of them search-based: random, blind, informed, adaptive, local, global, population-based, exhaustive, heuristic, and other search methods [AL97]. Most are, or can be adapted to, the structure of the objective function, here Cost(Φ | h_n). Some exploit the structure more directly (e.g. gradient methods for convex functions). Only in very simple cases can the minimum be found analytically (without search).
Most search algorithms require the specification of a neighborhood relation or distance between candidate Φ, which I define in the second-next paragraph.
Problem representation can be important: Since Φ is a discrete function, searching through (a large subset of) all computable functions is a non-restrictive approach. Variants of Levin search [Sch04, Hut05], genetic programming [Koz92, BNKF98], and recurrent neural networks [Pea89, RHHM08] are the major approaches in this direction.
A different representation is as follows: Φ effectively partitions the history space H and identifies each partition block with a state. Conversely, any partition of H can (up to a renaming of states) uniquely be characterized by a function Φ. Formally, Φ induces a (finite) partition {Φ^{-1}(s) : s ∈ S} of H, where s ranges over the codomain of Φ. Conversely, any partition {H_s} of H induces a function Φ(h) := s iff h ∈ H_s, which is unique apart from an irrelevant permutation of the codomain (renaming of states).
State aggregation methods have been suggested earlier for solving large-scale MDP planning problems by grouping (partitioning) similar states together, resulting in (much) smaller block MDPs [GDG03]. But the bi-simulation metrics used there require knowledge of the MDP transition probabilities, while our Cost criterion does not.
Decision trees/lists/grids/etc. are essentially space partitioners. The most powerful versions are rule-based, in which logical expressions recursively divide the domain H into “true/false” regions [DdRD01, SB09].
Φ neighborhood relation. A natural “minimal” change of a partition is to subdivide (split) a block or to merge (two) blocks. Moving elements from one block to another can be implemented as a split followed by a merge. In our case this corresponds to splitting and merging states (state refinement and coarsening). Let Φ′ split some state s^a of Φ into s^b, s^c ∉ S:

Φ′(h) := s^b or s^c if Φ(h) = s^a,  and  Φ′(h) := Φ(h) else,

where the histories mapped to state s^a are distributed among s^b and s^c according to some splitting rule (e.g. randomly). The new state space is S′ = S \ {s^a} ∪ {s^b, s^c}. Similarly, Φ′ merges states s^b, s^c ∈ S into s^a ∉ S if

Φ′(h) := s^a if Φ(h) ∈ {s^b, s^c},  and  Φ′(h) := Φ(h) else,

where S′ = S \ {s^b, s^c} ∪ {s^a}. We can regard Φ′ as being a neighbor of, or similar to, Φ.
Stochastic search. Stochastic search is the method of choice for high-dimensional unstructured problems. Monte Carlo methods can be highly effective, despite their simplicity [Liu02, Fis03]. The general idea is to randomly choose a neighbor Φ′ of Φ and replace Φ by Φ′ if it is better, i.e. has smaller Cost. Even if Cost(Φ′ | h_n) ≥ Cost(Φ | h_n), we may keep Φ′, but only with some probability that is exponentially small in the cost difference. Simulated annealing is a version which minimizes Cost(Φ | h_n). Apparently, Φ of small cost are (much) more likely to occur than high-cost Φ.
Context tree example. The Φ_k in Section 4 depended on the last k observations. Let us generalize this to a context-dependent variable length: Consider a finite complete suffix-free set of strings S (= prefix tree of reversed strings) as our state space (e.g. for binary O), and define Φ_S(h_n) := s iff the history ends in s ∈ S, i.e. s is the part of the history regarded as relevant. State splitting and merging work as follows: For binary O, if the history part s ∈ S of h_n is deemed too short, we replace s by 0s and 1s in S, i.e. S′ = S \ {s} ∪ {0s, 1s}. If histories 0s, 1s ∈ S are deemed too long, we replace them by s, i.e. S′ = S \ {0s, 1s} ∪ {s}. A large O might be coded in binary and then treated similarly. For small O we have the following simple Φ-optimizer:
ΦImprove(Φ_S, h_n):
Randomly choose a state s ∈ S;
Let p and q be uniform random numbers in [0, 1];
if p > 1/2 then split s, i.e. S′ = S \ {s} ∪ {os : o ∈ O}
else if {os̃ : o ∈ O} ⊆ S (where s̃ is s without its first symbol)
then merge them, i.e. S′ = S \ {os̃ : o ∈ O} ∪ {s̃};
if Cost(Φ_{S′} | h_n) < Cost(Φ_S | h_n) then S := S′;
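A greedy variant of this optimizer can be written generically against any cost function on suffix sets. This is my own sketch, not the paper's code, and it omits the probabilistic acceptance of worse candidates discussed under stochastic search:

```python
import random

def phi_improve(S, cost, obs_alphabet=(0, 1), rng=random):
    """One stochastic split/merge step on a suffix-set state space S.

    S is a set of strings (relevant history suffixes); `cost` maps a
    candidate state space to its Cost value. Proposes either a split
    (s -> {0s, 1s}) or a merge ({0s~, 1s~} -> s~) and keeps the proposal
    only if it strictly lowers the cost (greedy acceptance)."""
    s = rng.choice(sorted(S))
    S2 = set(S)
    if rng.random() > 0.5:                         # split s
        S2.remove(s)
        S2.update(str(o) + s for o in obs_alphabet)
    else:                                          # merge s with its siblings
        tail = s[1:]                               # s without its first symbol
        siblings = {str(o) + tail for o in obs_alphabet}
        if siblings <= S2:
            S2 -= siblings
            S2.add(tail)
    return S2 if cost(S2) < cost(S) else S
```

Iterating `phi_improve` with the Cost function of Section 3 performs a stochastic local search over context trees; below, a toy cost that prefers four states stands in for it.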
6 Exploration & Exploitation
Having obtained a good estimate Φ̂ of Φ^best in the previous section, we can and must now determine a good action for our agent. For a finite MDP with known transition probabilities, finding the optimal action is routine. For estimated probabilities we run into the infamous exploration-exploitation problem, for which promising approximate solutions have recently been suggested [SL08]. At the end of this section I present the overall algorithm for our ΦMDP agent.
Optimal actions for known MDPs. For a known finite MDP (S, A, T, R, γ), the maximal achievable (“optimal”) expected future discounted reward sum, called the (Q-)Value (of action a) in state s, satisfies the following (Bellman) equations [SB98]:

Q*^a_s = Σ_{s′} T^a_{ss′} [R^a_{ss′} + γ V*_{s′}],    V*_s = max_a Q*^a_s,    (9)

where 0 ≤ γ < 1 is a discount parameter, typically close to 1. See [Hut05, Sec.5.7] for proper choices. The equations can be solved by a simple (e.g. value or policy) iteration process, by various other methods, or in guaranteed polynomial time by dynamic programming [Put94]. The optimal next action is

a_n := arg max_a Q*^a_{s_n}.    (10)
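For completeness, here is a plain value-iteration solver for (9) and (10). It is my own sketch, using nested lists rather than any particular matrix library:

```python
def q_value_iteration(T, R, gamma=0.95, eps=1e-9):
    """Solve the Bellman equations (9) for a known finite MDP.

    T[a][s][s2] is the transition probability, R[a][s][s2] the expected
    reward of that transition. Returns Q with Q[a][s] ~ Q*; the optimal
    action (10) in state s is then the argmax over a of Q[a][s]."""
    nA, nS = len(T), len(T[0])
    Q = [[0.0] * nS for _ in range(nA)]
    while True:
        V = [max(Q[a][s] for a in range(nA)) for s in range(nS)]  # V*_s
        Qn = [[sum(T[a][s][s2] * (R[a][s][s2] + gamma * V[s2])
                   for s2 in range(nS))
               for s in range(nS)]
              for a in range(nA)]
        if max(abs(Qn[a][s] - Q[a][s])
               for a in range(nA) for s in range(nS)) < eps:
            return Qn
        Q = Qn
```

In a two-state example where action 1 moves from state 0 to the absorbing state 1 with reward 1, the solver recovers Q* = 1 for that action, and γ·V* = 0.5 for staying put with γ = 0.5.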
Estimating the MDP. We can estimate the transition probability T by

T̂^a_{ss′} := n^a_{ss′} / n^a_{s+}  if n^a_{s+} > 0, and 0 else.    (11)

It is easy to see that the Shannon-Fano code of s_{1:n} based on these estimates, plus the code of the (non-zero) transition probabilities T̂^a_{ss′} to relevant accuracy O(1/√(n^a_{s+})), has length (4), i.e. the frequency estimate (11) is consistent with the attributed code length. The expected reward can be estimated as

R̂^a_{ss′} := Σ_{r′} r′ · n^{ar′}_{ss′} / n^a_{ss′}.    (12)
Exploration. Simply replacing T and R in (9) and (10) by their estimates (11) and (12) can lead to very poor behavior, since parts of the state space may never be explored, causing the estimates to stay poor.
The estimate T̂ improves with increasing n^a_{s+}, which can (only) be ensured by trying all actions a in all states s sufficiently often. But the greedy policy above has no incentive to explore, which may cause the agent to perform very poorly: the agent sticks with what he believes to be optimal without trying to solidify his belief. For instance, if treatment A cured the first patient and treatment B killed the second, the greedy agent will stick to treatment A and never explore the possibility that B may just have failed due to bad luck. Trading off exploration versus exploitation optimally is computationally intractable [Hut05, PVHR06, RP08] in all but extremely simple cases (e.g. Bandits [BF85, KV86]). Recently, polynomially optimal algorithms (Rmax, E3, OIM) have been invented [KS98, BT02, SL08]: an agent is more explorative if he expects a high reward in the unexplored regions. We can “deceive” the agent into believing this by adding another “absorbing” high-reward state s^e to S, which is not in the range of Φ, i.e. never observed. Henceforth, S^e denotes the extended state space; for instance, n^a_{s+} in (11) now includes s^e. We set n^a_{s s^e} = 1 and attribute reward R^e_max to reaching s^e, for all s and a, where the exploration bonus R^e_max is polynomially (in (1−γ)^{−1} and |S × A|) larger than max R [SL08].
Now compute the agent’s action by (9)-(12), but for the extended state space S^e. The optimal policy tries to find a chain of actions and states that likely leads to the high-reward absorbing state s^e. The estimated transition probability into s^e is only “large” for small n^a_{s+}, hence the policy has a bias towards unexplored (state, action) regions. It can be shown that this algorithm makes only a polynomial number of sub-optimal actions.
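The frequency estimates (11) and (12), extended with such an optimistic absorbing state, can be sketched as follows. This is my own illustration; the exact bonus construction in [SL08] differs in details:

```python
def optimistic_mdp_estimate(transitions, nS, nA, r_bonus):
    """Frequency estimates of T and R over an extended state space that
    appends an absorbing high-reward state se := nS.

    `transitions` is a list of observed (s, a, s2, r) tuples. Every
    (s, a) pair gets one phantom visit to se with reward r_bonus, so the
    estimated probability of reaching se shrinks as real visits accumulate."""
    se, N = nS, nS + 1
    n = [[[0] * N for _ in range(N)] for _ in range(nA)]
    rsum = [[[0.0] * N for _ in range(N)] for _ in range(nA)]
    for s, a, s2, r in transitions:
        n[a][s][s2] += 1
        rsum[a][s][s2] += r
    T = [[[0.0] * N for _ in range(N)] for _ in range(nA)]
    R = [[[0.0] * N for _ in range(N)] for _ in range(nA)]
    for a in range(nA):
        for s in range(N):
            n[a][s][se] += 1              # phantom transition to se
            rsum[a][s][se] += r_bonus     # with the exploration bonus reward
            tot = sum(n[a][s])
            for s2 in range(N):
                T[a][s][s2] = n[a][s][s2] / tot
                if n[a][s][s2] > 0:
                    R[a][s][s2] = rsum[a][s][s2] / n[a][s][s2]
    return T, R
```

An unvisited (s, a) pair then leads to se with estimated probability 1, while a pair visited nine times leads there with probability only 1/10; feeding T, R into a Bellman solver yields the exploration-biased policy.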
The overall algorithm for our ΦMDP agent is as follows.
7 Improved Cost Function
As discussed, we ultimately care only about (modeling) the rewards, but this endeavor required introducing and coding states. The resulting Cost(Φ) function is a code length of not only the rewards but also the “spurious” states. This likely leads to a too strong penalty for models Φ with large state spaces S. The proper Bayesian formulation developed in this section allows the states to be “integrated” out. This leads to a code for the rewards only, which better trades off accuracy of the reward model against state space size.
For an MDP with transition and reward probabilities T^a_{ss′} and R^{ar′}_{ss′}, the probabilities of the state and reward sequences are

P(s_{1:n} | a_{1:n}) = Π_t T^{a_{t−1}}_{s_{t−1} s_t},    P(r_{1:n} | s_{1:n}, a_{1:n}) = Π_t R^{a_{t−1} r_t}_{s_{t−1} s_t}.

The probability of r_{1:n} given a_{1:n} can be obtained by taking the product and marginalizing the states:

P_U(r_{1:n} | a_{1:n}) = Σ_{s_{0:n}} Π_t U^{a_{t−1} r_t}_{s_{t−1} s_t},

where for each a and r′, the matrix U^{ar′} ∈ ℝ^{m×m} is defined as [U^{ar′}]_{ss′} := T^a_{ss′} R^{ar′}_{ss′}. The resulting n-fold matrix product can be evaluated in time O(m²n). This shows that r_{1:n} (given a_{1:n} and U) can be coded in −log P_U(r_{1:n} | a_{1:n}) bits. The unknown U needs to be estimated, e.g. by the relative frequencies Û^{ar′}_{ss′} := T̂^a_{ss′} R̂^{ar′}_{ss′}. Note that P_U completely ignores the observations and is essentially independent of Φ; the map Φ, and hence the observations o_{1:n}, enter (only and crucially) via the estimate Û. The (independent) elements of Û can be coded to sufficient accuracy in (1/2) log n bits each, and Φ will be coded in CL(Φ) bits. Together this leads to a code for r_{1:n} given a_{1:n} of length

ICost(Φ | h_n) := −log P_Û(r_{1:n} | a_{1:n}) + (number of non-zero parameters of Û)·(1/2)·log n + CL(Φ).
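The O(m²n) evaluation of this marginalization can be sketched as a chain of matrix-vector products. This is my own illustration; indices follow the conventions above, with the initial state assumed known:

```python
def reward_sequence_prob(T, R, s0, actions, rewards):
    """P(r_{1:n} | a_{1:n}) by marginalizing out the state sequence.

    T[a][s][s2]: transition probabilities; R[a][r][s][s2]: probability of
    reward r on the transition s -> s2 under action a. Each step multiplies
    the state distribution by the matrix U with entries
    T[a][s][s2] * R[a][r][s][s2], so the total work is O(m^2 * n)."""
    m = len(T[0])
    v = [1.0 if s == s0 else 0.0 for s in range(m)]  # distribution over s_0
    for a, r in zip(actions, rewards):
        v = [sum(v[s] * T[a][s][s2] * R[a][r][s][s2] for s in range(m))
             for s2 in range(m)]
    return sum(v)  # marginalize the final state
```

For a deterministic two-state cycle that always emits reward 0, the observed reward sequence (0, 0) has probability 1, and any sequence containing reward 1 has probability 0.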
In practice, the number of parameters of Û can and should be reduced, e.g. as in the original Cost function by using the restrictive model (8) for R, and/or by coding only the non-zero frequencies as in (2). Analogous to (7), we seek a Φ that minimizes ICost(Φ | h_n).
Since action evaluation is based on (discounted) reward sums, not individual rewards, one may think of marginalizing even further, or coding rewards only approximately. Unfortunately, the algorithms in Section 6 that learn, explore, and exploit MDPs require knowledge of the (exact) individual rewards, so this improvement is not feasible.
8 Discussion
This section summarizes ΦMDP, relates it to previous work, and hints at more efficient incremental implementations and at more realistic structured MDPs (dynamic Bayesian networks).
Summary. Learning from rewards in general environments is an immensely complex problem. In this paper I have developed a generic reinforcement learning algorithm based on sound principles. The key idea was to reduce general learning problems to finite-state MDPs, for which efficient learning, exploration, and exploitation algorithms exist. For this purpose I have developed a formal criterion for evaluating and selecting good “feature” maps Φ from histories to states. One crucial property of ΦMDP is that it neither requires nor learns a model of the complete observation space, but only of the reward-relevant observations as summarized in the states. The developed criterion has been inspired by MDL, which recommends selecting the (coding) model that minimizes the length of a suitable code for the data at hand plus the complexity of the model itself. The novel and tricky part in ΦMDP was dealing with the states, since they are not bare observations but model-dependent processed data. An improved Bayesian criterion, which integrates out the states, has also been derived. Finally, I presented a complete feature reinforcement learning algorithm ΦMDP-Agent(). The building blocks and computational flow are depicted in the following diagram: