We propose and study a new model for reinforcement learning with rich observations, generalizing contextual bandits to sequential decision making. These models require an agent to take actions based on observations (features) with the goal of achieving long-term performance competitive with a large set of policies. To avoid barriers to sample-efficient learning associated with large observation spaces and general POMDPs, we focus on problems that can be summarized by a small number of hidden states and have long-term rewards that are predictable by a reactive function class. In this setting, we design and analyze a new reinforcement learning algorithm, Least Squares Value Elimination by Exploration. We prove that the algorithm learns near optimal behavior after a number of episodes that is polynomial in all relevant parameters, logarithmic in the number of policies, and independent of the size of the observation space. Our result provides theoretical justification for reinforcement learning with function approximation.
The Atari Reinforcement Learning research program has highlighted a critical deficiency of practical reinforcement learning algorithms in settings with rich observation spaces: they cannot effectively solve problems that require sophisticated exploration. How can we construct reinforcement learning (RL) algorithms that effectively plan, and plan to explore?
In RL theory, this is a solved problem for Markov Decision Processes (MDPs) [13, 6, 26]. Why do these results not apply?
An easy response is, “because the hard games are not MDPs.” This may be true for some of the hard games, but it is misleading—popular algorithms like Q-learning with ε-greedy exploration do not even engage in minimal planning and global exploration (we use “global exploration” to distinguish the sophisticated exploration strategies required to solve an MDP efficiently from exponentially less efficient alternatives such as ε-greedy) as is required to solve MDPs efficiently. MDP-optimized global exploration has also been avoided because of a polynomial dependence on the number of unique observations, which is intractably large with observations from a visual sensor.
In contrast, supervised and contextual bandit learning algorithms have no dependence on the number of observations and at most a logarithmic dependence on the size of the underlying policy set. Approaches to RL with a weak dependence on these quantities exist, but suffer from an exponential dependence on the time horizon: with K actions and a horizon of H, they require Ω(K^H) samples. Examples show that this dependence is necessary, although they typically require a large number of states. Can we find an RL algorithm with no dependence on the number of unique observations and a polynomial dependence on the number of actions K, the number of necessary states M, the horizon H, and the policy complexity log N?
To begin answering this question we consider a simplified setting with episodes of bounded length and deterministic state transitions. We further assume that we have a function class that contains the optimal observation-action value function Q*. These simplifications make the problem significantly more tractable without trivializing the core goal of designing a sample-efficient exploration algorithm. To this end, our contributions are:
A new class of models for studying reinforcement learning with rich observations. These models generalize both contextual bandits and small-state MDPs, but do not exhibit the partial observability issues of more complex models like POMDPs. We justify the model by showing exponential sample complexity lower bounds in the absence of its assumptions.
A new reinforcement learning algorithm, Least Squares Value Elimination by Exploration (LSVEE), and a PAC guarantee that it finds a policy that is at most ε-suboptimal (under the above assumptions) using polynomially many samples, with no dependence on the number of unique observations. This is done by combining ideas from contextual bandits with a novel state equality test and a global exploration technique. Like initial contextual bandit approaches, the algorithm is computationally inefficient since it requires enumeration of the policy class, an aspect we hope to address in future work.
LSVEE uses a function class to approximate future rewards, and thus lends theoretical backing for reinforcement learning with function approximation, which is the empirical state-of-the-art.
Our model is a Contextual Decision Process, a term we use broadly to refer to any sequential decision-making task where an agent must make decisions on the basis of rich features (context) to optimize long-term reward. In this section, we introduce the model, starting with basic notation. Let H denote the episode length, X the observation space, A a finite set of actions, and S a finite set of latent states. Let K = |A|. We partition S into H disjoint groups S_1, …, S_H, each of size at most M. For a set P, Δ(P) denotes the set of distributions over P.
Our model is defined by the tuple (Γ_1, Γ, D), where Γ_1 ∈ Δ(S_1) denotes a starting state distribution, Γ : S × A → Δ(S) denotes the transition dynamics, and D associates a distribution D_s over observation-reward pairs with each state s ∈ S. We also use D_s to denote the marginal distribution over observations (usage will be clear from context) and use D_s(r | x) for the conditional distribution over reward given the observation x in state s. The marginal and conditional probabilities are referred to as D_s(x) and D_s(r | x).
We assume that the process is layered (also known as loop-free or acyclic), so that for any s ∈ S_h and action a, Γ(s, a) ∈ Δ(S_{h+1}). Thus, the environment transitions from state space S_1 up to S_H via a sequence of actions. The layered structure allows us to avoid indexing policies and Q-functions with time, which enables concise notation.
Each episode produces a full record of interaction (s_1, x_1, a_1, r_1, …, s_H, x_H, a_H, r_H), where s_1 ∼ Γ_1, s_{h+1} ∼ Γ(s_h, a_h), (x_h, r_h) ∼ D_{s_h}, and all actions are chosen by the learning agent. The record of interaction observed by the learner is (x_1, a_1, r_1(a_1), …, x_H, a_H, r_H(a_H)), and at time point h, the learner may use all observable information up to and including x_h to select a_h. Notice that the state information and the rewards for alternative actions are unobserved by the learning agent.
The learner’s reward for an episode is Σ_{h=1}^{H} r_h(a_h), and the goal is to maximize the expected cumulative reward, R = E[ Σ_{h=1}^{H} r_h(a_h) ], where the expectation accounts for all the randomness in the model and the learner. We assume that Σ_{h=1}^{H} r_h(a_h) ∈ [0, 1] almost surely for any action sequence.
In this model, the optimal expected reward achievable can be computed recursively as

V(s) = E_{x ∼ D_s} [ max_{a ∈ A} ( E[ r(a) | x, s ] + E_{s' ∼ Γ(s, a)} V(s') ) ].   (1)

As the base case, we assume that for states s ∈ S_H, all actions transition to a terminal state with value zero. For each pair (s, x) such that D_s(x) > 0, we also define a function Q*_s as

Q*_s(x, a) = E[ r(a) | x, s ] + E_{s' ∼ Γ(s, a)} V(s').   (2)
This function captures the optimal choice of action given this (state, observation) pair and therefore encodes optimal behavior in the model.
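To make the recursion concrete, here is a minimal sketch on a toy instance with deterministic transitions, where states are identified with action paths and reward arrives only at the final level; the instance and all names (`terminal_reward`, `q_star`) are illustrative, not the paper's construction.

```python
import itertools

import numpy as np

# Toy deterministic layered model: states are action paths, reward only at depth H.
H, K = 3, 2
rng = np.random.default_rng(0)
terminal_reward = {p: rng.uniform(0, 1)
                   for p in itertools.product(range(K), repeat=H)}

def q_star(path, action):
    """Optimal value of taking `action` at the state reached by `path`:
    immediate reward (zero before the last level) plus the best continuation."""
    nxt = path + (action,)
    if len(nxt) == H:
        return terminal_reward[nxt]
    return max(q_star(nxt, a) for a in range(K))

# optimal value of the model: best action at the root, then optimal behavior
v_star = max(q_star((), a) for a in range(K))
```

Because reward appears only at the last level, the optimal value here is simply the best terminal reward, which makes the recursion easy to sanity-check.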
With no further assumptions, the above model is a layered episodic Partially Observable Markov Decision Process (LE-POMDP). Both learning and planning are notoriously challenging in POMDPs, because the optimal policy depends on the entire trajectory and the complexity of learning such a policy grows exponentially with H (see e.g. Kearns et al., as well as Propositions 2.1 and 2.2 below). Our model avoids this statistical barrier with two assumptions: (a) we consider only reactive policies, and (b) we assume access to a class of functions that can realize the Q* function. Both assumptions are implicit in the empirical state-of-the-art RL results. They also eliminate issues related to partial observability, allowing us to focus on our core goal of systematic exploration. We describe both assumptions in detail before formally defining the model.
Reactive Policies: One approach taken by some prior theoretical work is to consider reactive (or memoryless) policies that use only the current observation to select an action [20, 4]. Memorylessness is slightly generalized in the recent empirical advances in RL, which typically employ policies that depend only on the few most recent observations.
A reactive policy π is a strategy for navigating the search space by taking action π(x) given observation x. The expected reward V(π) for a policy π is defined recursively through V(π) = E_{s_1 ∼ Γ_1} V(s_1, π) and

V(s, π) = E_{(x, r) ∼ D_s} [ r(π(x)) + E_{s' ∼ Γ(s, π(x))} V(s', π) ],

with value zero at the terminal state.
A natural learning goal is to identify a policy with maximal value from a given collection Π of reactive policies. Unfortunately, even when restricting to reactive policies, learning in POMDPs requires exponentially many samples, as we show in the next lower bound. Fix H and K ≥ 2 and a sufficiently small ε > 0. For any algorithm, there exists an LE-POMDP with horizon H, K actions, and 2H total states; a class Π of reactive policies with |Π| = K^H; and a constant c > 0 such that the probability that the algorithm outputs a policy π̂ with V(π̂) ≥ V(π*) − ε after collecting T trajectories is at most 2/3 for all T ≤ cK^H/ε². This lower bound precludes a sample complexity bound that is polynomial in all our parameters for learning reactive policies in general POMDPs, since log |Π| = H log K in the construction, but the number of samples required is exponential in H. The lower bound instance provides essentially no instantaneous feedback and therefore forces the agent to reason over all K^H paths independently.
Predictability of Q*: The assumption underlying the empirical successes in RL is that the Q* function can be well-approximated by some large set of functions F. To formalize this assumption, note that for some POMDPs we may be able to write Q* as a function of the observed history at time h. For example, this is always true in deterministic-transition POMDPs, since the sequence of previous actions encodes the state, and Q* as in Eq. (2) depends only on the state, the current observation, and the proposed action. In the realizable setting, we have access to a collection F of functions mapping the observed history to values, and we assume that Q* ∈ F.
Unfortunately, even with realizability, learning in POMDPs can require exponentially many samples. Fix H and K ≥ 2 and a sufficiently small ε > 0. For any algorithm, there exists an LE-POMDP with time horizon H, K actions, and 2H total states; a class F of predictors with |F| = K^H and Q* ∈ F; and a constant c > 0 such that the probability that the algorithm outputs a policy π̂ with V(π̂) ≥ V(π*) − ε after collecting T trajectories is at most 2/3 for all T ≤ cK^H/ε². As with Proposition 2.1, this lower bound precludes a sample complexity bound that is polynomial in all our parameters for learning POMDPs with realizability. The lower bound shows that even with realizability, the agent may have to reason over all K^H paths independently, since the functions can depend on the entire history. Proofs of both lower bounds are deferred to Appendix A.
Both lower bounds use POMDPs with deterministic transitions and an extremely small observation space. Consequently, even learning in deterministic-transition POMDPs requires further assumptions.
As we have seen, neither restricting to reactive policies nor imposing realizability enables tractable learning in POMDPs on its own. Combined, however, we will see that sample-efficient learning is possible, and the combination of these two assumptions is precisely how we characterize our model. Specifically, we study POMDPs for which Q* can be realized by a predictor that uses only the current observation and proposed action.
[Reactive Value Functions] We assume that for every observation x ∈ X and any two states s, s′ such that D_s(x) > 0 and D_{s′}(x) > 0, we have Q*_s(x, a) = Q*_{s′}(x, a) for all actions a.
The restriction on Q* implies that the optimal policy is reactive and also that the optimal predictor of long-term reward depends only on the current observation. In the following section, we describe how this condition relates to other RL models in the literature. We first present a natural example.
[Disjoint observations] The simplest example is one where each state s can be identified with a subset X_s ⊆ X such that D_s(x) > 0 only for x ∈ X_s, and where X_s ∩ X_{s′} = ∅ when s ≠ s′. A realized observation then uniquely identifies the underlying state, so that the reactive value function assumption trivially holds, but this mapping from observations to states is unknown to the agent. Thus, the problem cannot be easily reduced to a small-state MDP. This setting is quite natural in several robotics and navigation tasks, where the visual signals are rich enough to uniquely identify the agent’s position (and hence state). It also applies to video game playing, where the raw pixel intensities suffice to decode the game’s memory state, but learning this mapping is challenging.
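A minimal simulation of this disjoint-observations setting, under the assumption that each state emits tight noisy copies of a state-specific center; `emit`, `decode`, and the Gaussian emission model are hypothetical stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, obs_dim = 4, 16

# hidden emission centers, one per latent state (unknown to the agent)
centers = rng.normal(size=(n_states, obs_dim))

def emit(state):
    # observations from a state are tight noisy copies of its center, so the
    # observation sets of different states are (with overwhelming probability) disjoint
    return centers[state] + 0.01 * rng.normal(size=obs_dim)

def decode(x):
    # the "true" decoding map that the agent would have to learn from data
    return int(np.argmin(np.linalg.norm(centers - x, axis=1)))
```

Here every observation determines the latent state, yet nothing in the agent's interface reveals the decoding, which is exactly what makes the problem harder than a small-state MDP.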
Thinking of the observation as the state, the above example is an MDP with an infinite state space but with a structured transition operator. While our model is more general, we are primarily motivated by these infinite-state MDPs, for which the reactivity assumptions are completely non-restrictive. For infinite-state MDPs, our model describes a particular structure on the transition operator that, as we show, enables efficient learning. We emphasize that our focus is not on partial observability issues.
As we are interested in understanding function approximation, we make a realizability assumption. [Realizability] We are given access to a class F of predictors f : X × A → [0, 1] of size |F| = N, and we assume that Q* ∈ F. We identify each predictor f with a policy π_f(x) = argmax_a f(x, a). Observe that the optimal policy is π* = π_{Q*}, which achieves the optimal value.
[Deterministic Transitions] We assume that the transition model is deterministic: the starting distribution Γ_1 is a point-mass on some state s_1 ∈ S_1, and Γ(s, a) is a point-mass for every state-action pair (s, a).
Even with deterministic transitions, learning requires systematic global exploration that is unaddressed in previous work. Recall that the lower bound constructions for Propositions 2.1 and 2.2 actually use deterministic-transition POMDPs. Therefore, deterministic transitions combined with either the reactive or the realizability assumption by itself still preclude tractable learning. Nevertheless, we hope to relax this final assumption in future work.
More broadly, this model provides a framework to reason about reinforcement learning with function approximation. This is highly desirable as such approaches are the empirical state-of-the-art, but the limited supporting theory provides little advice on systematic global exploration.
The above model is closely related to several well-studied models in the literature, namely:
Contextual Bandits: If H = 1, then our model reduces to stochastic contextual bandits [16, 8], a well-studied simplification of the general reinforcement learning problem. The main difference is that the choice of action does not influence the future observations (there is only one state), and algorithms do not need to perform long-term planning to obtain low sample complexity.
Markov Decision Processes: If each state has a unique observation associated with it, i.e., the distribution D_s is concentrated on a single observation for each state s, then our model reduces to small-state MDPs, which can be efficiently solved by tabular approaches [13, 6, 26]. The key differences in our setting are that the observation space is extremely large or infinite and the underlying state is unobserved, so tabular methods are not viable and algorithms need to generalize across observations.
When the number of states is large, existing methods typically require exponentially many samples, as in the result of Kearns et al. Others depend poorly on the complexity of the policy set or scale linearly with the size of a covering over the state space [12, 10, 23]. Lastly, policy gradient methods avoid dependence on the size of the state space, but do not achieve global optimality [27, 11] in theory and in practice, unlike our algorithm, which is guaranteed to find the globally optimal policy.
POMDPs: By definition, our model is a POMDP where the Q* function is consistent across states. This restriction implies that the agent does not have to reason over belief states as is required in general POMDPs. There are some sample complexity guarantees for learning in arbitrarily complex POMDPs, but the bounds we are aware of are quite weak, as they either scale linearly with the size of the policy set [14, 19] or require discrete observations from a small set.
State Abstraction: State abstraction focuses on understanding what optimality properties are preserved in an MDP after the state space is compressed. While our model does have a small number of underlying states, they do not necessarily admit non-trivial state abstractions that are easy to discover (i.e., ones whose discovery does not amount to learning the optimal behavior), as the optimal behavior can depend on the observation in an arbitrary manner. Furthermore, most sample complexity results cannot search over large abstraction sets (see e.g. Jiang et al.), limiting their scope.
Function Approximation: Our approach uses function approximation to address the generalization problem implicit in our model. Function approximation is the empirical state of the art in reinforcement learning, but theoretical analysis has been quite limited. Several authors have studied linear or more general function approximation (see [28, 24, 5]), but none of these results give finite-sample bounds, as they do not address the exploration question. Li and Littman do give finite-sample bounds, but they assume access to a “Knows-What-It-Knows” (KWIK) oracle, which cannot exist even for simple problems. Other theoretical results either make stronger realizability assumptions or scale poorly with problem parameters (e.g., polynomially in the number of functions or in the size of the observation space).
We consider the task of Probably Approximately Correct (PAC) learning the models defined in Section 2. Given a class F satisfying the realizability assumption, we say that an algorithm PAC learns our model if for any ε, δ ∈ (0, 1), the algorithm outputs a policy π̂ satisfying V(π̂) ≥ V(π*) − ε with probability at least 1 − δ. The sample complexity is a function T(ε, δ) such that for any ε, δ, the algorithm returns an ε-suboptimal policy with probability at least 1 − δ using at most T(ε, δ) episodes. We refer to a sample complexity bound as efficient if it is polynomial in all relevant parameters. Notably, there should be no dependence on |X|, which may be infinite.
Before turning to the algorithm, it is worth clarifying some additional notation. Since we are focused on the deterministic-transition setting, it is natural to think about the environment as an exponentially large search tree with fan-out K and depth H. Each node in the search tree is labeled with an (unobserved) state s ∈ S, and each edge is labeled with an action a ∈ A, consistent with the transition model. A path p is a sequence of actions from the root of the search tree, and we also use p to denote the state reached after executing the path p from the root. Thus, D_p is the observation distribution of the state at the end of the path p. We use p ∘ a to denote the path formed by executing all actions in p and then executing action a, and we use |p| to denote the length of the path. Let ∅ denote the empty path, which corresponds to the root of the search tree.
The pseudocode for the algorithm, which we call Least Squares Value Elimination by Exploration (LSVEE), is displayed in Algorithm 1 (see also Appendix B). LSVEE has two main components: a depth-first-search routine with a learning step (step 6 in Algorithm 2) and an on-demand exploration technique (steps 5-8 in Algorithm 1). The high-level idea of the algorithm is to eliminate regression functions that do not meet Bellman-like consistency properties of the Q* function. We now describe both components and their properties in detail.
The DFS routine: When the DFS routine, displayed in Algorithm 2, is run at some path p, we first decide whether to recursively expand the descendants by performing a consensus test. Given a path p, this test, displayed in Algorithm 3, computes estimates of the value predictions

V̂_f(p) = Ê_{x ∼ D_p} [ max_{a ∈ A} f(x, a) ]   (3)

for all the surviving regressors f. These value predictions are easily estimated by collecting many observations after rolling in to p and using empirical averages (see line 2 in Algorithm 3). If all the functions agree on this value for p, the DFS need not visit this path.
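A sketch of how such a consensus test might look, assuming hypothetical primitives: `rollin(path)` samples an observation from the state reached by `path`, and each regressor maps an (observation, action) pair to a value estimate; the tolerance and sample size are illustrative.

```python
import numpy as np

def consensus(path, regressors, actions, rollin, n_obs=100, tol=0.05):
    """Return True if all surviving regressors' Monte-Carlo value estimates
    at `path` agree to within `tol` (so DFS need not expand this path)."""
    obs = [rollin(path) for _ in range(n_obs)]
    # empirical average of max_a f(x, a) over observations, one per regressor
    estimates = [np.mean([max(f(x, a) for a in actions) for x in obs])
                 for f in regressors]
    return max(estimates) - min(estimates) <= tol
```

The test only compares scalar value estimates, so its cost is independent of the size of the observation space.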
After the recursive calls, the DFS routine performs the elimination step (line 6). When this step is invoked at path p, the algorithm collects many tuples (x, a, r(a)), where x ∼ D_p, a is chosen uniformly at random, and r(a) is the observed reward, and eliminates regressors f that have high empirical risk

R̂(f) = Ê [ ( f(x, a) − r(a) − V̂_f(p ∘ a) )² ].   (4)
Intuition for DFS: This regression problem is motivated by the realizability assumption and the definition of Q* in Eq. (2), which imply that at path p and for all actions a,

E_{(x, r) ∼ D_p} [ Q*(x, a) − r(a) ] = E_{x' ∼ D_{p ∘ a}} [ max_{a'} Q*(x', a') ].

Thus Q* is consistent between its estimate at the current state and at the future state.
The regression problem (4) is essentially a finite-sample version of this identity. However, some care must be taken, as the target for the regression function f includes V̂_f(p ∘ a), which is f’s value prediction for the future. The fact that the target differs across functions can cause instability in the regression problem, as some targets may have substantially lower variance than others. To ensure correct behavior, we must obtain high-quality future value prediction estimates, and so we re-use the Monte-Carlo estimates in Eq. (3) from the consensus tests. Each time we perform elimination, the regression targets are close for all f considered in Eq. (4), owing to consensus being satisfied at the successor nodes in Step 2 of Algorithm 2.
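The elimination step can be sketched as follows, with `V_hat` playing the role of the shared Monte-Carlo future-value estimates from Eq. (3); the data format and threshold rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def eliminate(regressors, data, V_hat, threshold):
    """data: list of (observation x, action a, observed reward r, successor path p_next).
    V_hat[p_next] is the Monte-Carlo future-value estimate shared across all
    regressors (re-used from the consensus tests, so the targets are stable).
    Keep only regressors whose empirical squared Bellman residual is near the best."""
    risks = [np.mean([(f(x, a) - (r + V_hat[p_next])) ** 2
                      for (x, a, r, p_next) in data])
             for f in regressors]
    best = min(risks)
    return [f for f, risk in zip(regressors, risks) if risk <= best + threshold]
```

Under realizability, Q* has near-minimal empirical risk with high probability, so a suitably chosen threshold never eliminates it.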
Given consensus at all the descendants, each elimination step inductively propagates learning towards the start state by ensuring that the following desirable properties hold: (i) Q* is not eliminated, (ii) consensus is reached at p, and (iii) surviving policies choose good actions at p. Property (ii) controls the sample complexity, since consensus tests at a state return true once elimination has been invoked on that state, so DFS avoids exploring the entire search space. Property (iii) leads to the PAC bound: if we have run the elimination step on all states visited by a policy, that policy must be near-optimal.
To bound the sample complexity of the DFS routine: since there are at most M states per level and the consensus test returns true once elimination has been performed, the DFS does not visit a large fraction of the search tree. Specifically, this means DFS is invoked on at most O(MHK) nodes in total, so we run elimination at most O(MH) times and perform at most O(MHK) consensus tests. Each of these operations requires polynomially many samples.
The elimination step is inspired by the RegressorElimination algorithm of Agarwal et al. for contextual bandit learning in the realizable setting. In addition to forming a different regression problem, RegressorElimination carefully chooses actions to balance exploration and exploitation, which leads to an optimal regret bound. In contrast, we are pursuing a PAC guarantee here, for which it suffices to focus exclusively on exploration.
On-demand Exploration: While DFS is guaranteed to estimate the optimal value V(π*), it unfortunately does not identify the optimal policy. For example, if consensus is satisfied at a state without invoking the elimination step, then each function accurately predicts the value of that state, but the associated policies are not guaranteed to achieve this value. To overcome this issue, we use an on-demand exploration technique in the second phase of the algorithm (Algorithm 1, steps 5-8).
At each iteration of this phase, we select a surviving policy and estimate its value via Monte-Carlo sampling. If the policy has sub-optimal value, we invoke the DFS procedure on many of the paths visited. If the policy has near-optimal value, we have found a good policy, so we are done. This procedure requires an accurate estimate of the optimal value, which we already obtained by invoking the DFS routine at the root, since it guarantees that all surviving regressors agree with Q*'s value on the starting state distribution, and Q*'s value is precisely the optimal value.
Intuition for On-demand Exploration: Running the elimination step at some path p ensures that all surviving regressors take good actions at p, in the sense that taking one action according to any surviving policy and then behaving optimally thereafter achieves near-optimal reward for path p. This does not ensure that all surviving policies achieve near-optimal reward, because they may take highly sub-optimal actions after the first one. On the other hand, if a surviving policy visits only states for which the elimination step has been invoked, then it must have near-optimal reward. More precisely, letting L denote the set of states for which the elimination step has been invoked (the “learned” states), we prove that any surviving π_f satisfies

V(π_f) ≥ V(π*) − ε − P_{π_f} [ visiting a state outside of L ].
Thus, if π_f is highly sub-optimal, it must visit some unlearned state with substantial probability. By calling DFS-Learn on the paths visited by π_f, we ensure that the elimination step is run on at least one unlearned state. Since there are only MH distinct states and each non-terminal iteration ensures training on an unlearned state, the algorithm must terminate and output a near-optimal policy.
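Schematically, the on-demand exploration phase might look like the loop below, where `pick_policy`, `estimate_value`, `sample_paths`, and `dfs_learn` are assumed stand-ins for the corresponding steps of Algorithm 1.

```python
def lsvee_explore(surviving, v_star_hat, eps, pick_policy, estimate_value,
                  sample_paths, dfs_learn):
    """On-demand exploration: test a surviving policy; if it is near-optimal
    we are done, otherwise run DFS-Learn on paths it visits (which hits an
    unlearned state) and repeat."""
    while True:
        policy = pick_policy(surviving)
        v_hat = estimate_value(policy)       # Monte-Carlo value estimate
        if v_hat >= v_star_hat - eps:        # near-optimal: return it
            return policy
        for path in sample_paths(policy):    # otherwise explore on demand
            surviving = dfs_learn(path, surviving)
```

Termination follows because each unsuccessful iteration runs elimination on a previously unlearned state, of which there are at most MH.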
Computationally, the running time of the algorithm may be Ω(N), since eliminating regression functions according to Eq. (4) may require enumerating over the class, and the consensus function requires computing the maximum and minimum of N numbers, one for each function. This may be intractably slow for rich function classes, but our focus is on statistical efficiency, so we ignore computational issues here.
Our main result certifies that LSVEE PAC-learns our models with polynomial sample complexity. [PAC bound] For any ε, δ ∈ (0, 1), under the reactive value function, realizability, and deterministic transition assumptions, with probability at least 1 − δ, the policy returned by LSVEE is at most ε-suboptimal, and the number of episodes required is polynomial in M, K, H, 1/ε, and log(N/δ).
This result uses Õ(·) notation to suppress logarithmic dependence in all parameters except for N and δ. The precise dependence on all parameters can be recovered by examination of our proof and is shortened here simply for clarity. See Appendix C for the full proof of the result.
This theorem states that LSVEE produces a policy that is at most ε-suboptimal using a number of episodes that is polynomial in all relevant parameters. To our knowledge, this is the first polynomial sample complexity bound for reinforcement learning with infinite observation spaces without prohibitively strong assumptions (e.g., [2, 22, 23]). We also believe this is the first finite-sample guarantee for reinforcement learning with general function approximation without prohibitively strong assumptions.
Since our model generalizes both contextual bandits and MDPs, it is worth comparing the sample complexity bounds.
In contextual bandits, we have H = M = 1, and the sample complexity of LSVEE is worse than that of known results, which scale as O((K/ε²) log(N/δ)).
Both comparisons show that our sample complexity bound may be suboptimal in its dependence on ε and M. Looking into our proof, an additional 1/ε factor comes from collecting observations to estimate the value of future states, while an additional factor of M arises from trying to identify a previously unexplored state. In contextual bandits, these issues do not arise since there is only one state, while in tabular MDPs they can be trivially resolved as the states are observed. Thus, with minor modifications, LSVEE can avoid these dependencies for both special cases. In addition, our bound differs from the MDP results in its dependence on the policy complexity log N, which we believe is unavoidable when working with rich observation spaces.
Finally, our bound depends on the number of states M in the worst case, but the algorithm actually uses a more refined notion. Since the states are unobserved, the algorithm considers two states distinct only if they have reasonably different value functions, meaning learning on one does not lead to consensus on the other. Thus, a more distribution-dependent analysis defining states through the function class F is a promising avenue for future work.
This paper introduces a new model in which it is possible to design and analyze principled reinforcement learning algorithms engaging in global exploration. As a first step, we develop a new algorithm and show that it learns near-optimal behavior under a deterministic-transition assumption with polynomial sample complexity. This represents a significant advance in our understanding of reinforcement learning with rich observations. However, there are major open questions:
Do polynomial sample bounds for this model with stochastic transitions exist?
Can we design an algorithm for learning this model that is both computationally and statistically efficient? The sample complexity of our algorithm is logarithmic in the size of the function class but uses an intractably slow enumeration of these functions.
Good answers to both of these questions may yield new practical reinforcement learning algorithms.
We thank Akshay Balsubramani and Hal Daumé III for formative discussions, and we thank Tzu-Kuo Huang and Nan Jiang for carefully reading an early draft of this paper. This work was carried out while AK was at Microsoft Research.
[Lower bound for best arm identification in stochastic bandits] For any K ≥ 2, any sufficiently small ε > 0, and any best-arm identification algorithm, there exists a K-armed stochastic bandit problem for which the best arm is ε better than all others, but for which the algorithm's estimate of the best arm is wrong with constant probability unless the number of samples collected is at least cK/ε² for a universal constant c > 0.
The proof is essentially the same as the regret lower bound for stochastic multi-armed bandits from Auer et al. Since we want the lower bound for best-arm identification instead of regret, we include a full proof for completeness.
Following Auer et al., the lower bound instance is drawn uniformly from a family of multi-armed bandit problems with K arms each. There are K problems in the family, and each one is parametrized by the optimal arm j ∈ {1, …, K}. For the j-th problem, arm j produces rewards drawn from Ber(1/2 + ε) while all other arms produce rewards from Ber(1/2). Let Q_j denote the reward distribution for the j-th bandit problem, so that Q_j(r = 1 | a = j) = 1/2 + ε and Q_j(r = 1 | a ≠ j) = 1/2. Let Q_0 denote the reward distribution where all arms receive Ber(1/2) rewards.
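A small simulation of this family, with `make_bandit` as an illustrative name; problem j pays Bernoulli(1/2 + ε) on arm j and Bernoulli(1/2) elsewhere.

```python
import numpy as np

def make_bandit(j_star, eps, rng):
    """The j-th problem in the family: arm j_star pays Ber(1/2 + eps),
    every other arm pays Ber(1/2)."""
    def pull(arm):
        p = 0.5 + eps if arm == j_star else 0.5
        return rng.binomial(1, p)
    return pull

rng = np.random.default_rng(0)
pull = make_bandit(j_star=2, eps=0.1, rng=rng)
best_mean = np.mean([pull(2) for _ in range(20000)])
other_mean = np.mean([pull(0) for _ in range(20000)])
```

With ε small, the gap between the two empirical means is hard to detect without many pulls, which is the mechanism behind the lower bound.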
Since the environment is stochastic, any randomized algorithm is just a distribution over deterministic ones, and it therefore suffices to consider only deterministic algorithms. More precisely, a randomized algorithm uses some random bits, and for each choice of these bits the algorithm itself is deterministic. If we lower bound the failure probability for every choice of the random bits, then we also obtain a lower bound after taking expectation.
A deterministic algorithm collecting T samples can be specified as a sequence of mappings from observed reward histories to actions, together with a final mapping whose output ĵ is the estimate of the best arm. Note that the first arm chosen does not depend on any of the observations. The algorithm can be specified this way since the sequence of actions played can be inferred from the sequence of observed rewards. Let Q_j also denote the distribution over all T rewards when j is the optimal arm and actions are selected according to the algorithm. We are interested in bounding the error event {ĵ ≠ j}.
We first prove

Q_j(ĵ = j) ≤ Q_0(ĵ = j) + sqrt( (1/2) E_0[T_j] · KL( Ber(1/2) ‖ Ber(1/2 + ε) ) ),

where T_j is the number of times the algorithm plays action j over the course of T rounds. T_j is a random variable since it depends on the sequence of observations, and here the expectation E_0 is taken with respect to Q_0.
To prove this statement, notice that

Q_j(ĵ = j) − Q_0(ĵ = j) ≤ ‖Q_j − Q_0‖_TV ≤ sqrt( (1/2) KL( Q_0 ‖ Q_j ) ).

The first inequality is by the definition of the total variation distance, while the second is Pinsker's inequality. We are left to bound the KL divergence. To do so, we introduce notation for sequences: for any t, we use r_{1:t} to denote the binary reward sequence of length t. The KL divergence is

KL( Q_0 ‖ Q_j ) = E_{r_{1:T} ∼ Q_0} [ log ( Q_0(r_{1:T}) / Q_j(r_{1:T}) ) ]
               = Σ_{t=1}^{T} E_{r_{1:t} ∼ Q_0} [ log ( Q_0(r_t | r_{1:t−1}) / Q_j(r_t | r_{1:t−1}) ) ]
               = Σ_{t=1}^{T} E_{r_{1:t} ∼ Q_0} [ 1{a_t = j} log ( Q_0(r_t | r_{1:t−1}) / Q_j(r_t | r_{1:t−1}) ) ]
where a_t is the chosen action at time t. To arrive at the second line we use the chain rule for KL divergence. The third line is based on the fact that if a_t ≠ j, then the log ratio is zero, since the two conditional distributions are identical. Continuing with straightforward calculations, we have

KL( Q_0 ‖ Q_j ) = E_0[T_j] · KL( Ber(1/2) ‖ Ber(1/2 + ε) ).
This proves the sub-claim, which follows the same argument as Auer et al.
To prove the final result, we take expectation over the problem instance j, drawn uniformly from {1, …, K}:

(1/K) Σ_{j=1}^{K} Q_j(ĵ = j) ≤ (1/K) Σ_{j=1}^{K} Q_0(ĵ = j) + (1/K) Σ_{j=1}^{K} sqrt( (1/2) E_0[T_j] · KL( Ber(1/2) ‖ Ber(1/2 + ε) ) ) ≤ 1/K + sqrt( (T/2K) · KL( Ber(1/2) ‖ Ber(1/2 + ε) ) ),

where the last step uses Jensen's inequality together with Σ_j E_0[T_j] = T.
If ε ≤ 1/√8, then KL( Ber(1/2) ‖ Ber(1/2 + ε) ) ≤ 4ε². This follows by direct computation and the Taylor expansion of the logarithm:

KL( Ber(1/2) ‖ Ber(1/2 + ε) ) = (1/2) log( 1/(1 + 2ε) ) + (1/2) log( 1/(1 − 2ε) ) = −(1/2) log( 1 − 4ε² ) ≤ 4ε².

The inequality here uses the assumption that ε ≤ 1/√8, so that 4ε² ≤ 1/2 and the bound −log(1 − x) ≤ 2x applies. Thus, whenever T ≤ K/(72ε²) and ε ≤ 1/√8, the success probability above is at most 1/K + sqrt( (T/2K) · 4ε² ) ≤ 1/2 + 1/6 = 2/3, since we restrict to K ≥ 2. The failure probability is therefore at least 1/3, which proves the result. ∎
Here we design a family of POMDPs for both lower bounds. As with the multi-armed bandits above, the lower bound will be realized by sampling a POMDP from a uniform distribution over this family of problems. Fix H and K ≥ 2, and pick a single observation x_h for each level h, so that D_s is concentrated on x_h for every state s at level h. For each level there are two states, g_h and b_h, for “good” and “bad.” Since the observation marginal distribution is concentrated on x_h for each level h, the observations provide no information about the underlying state. Rewards for all levels except for the last are zero.
Each POMDPs in the family corresponds to a path . The transition function for the POMDP corresponding to the path is,
The reward is drawn from if the last state is and if the last action is . For all other outcomes the reward is drawn from . Observe that these models have deterministic transitions.
Clearly all of the models in this family are distinct, and there are such models. Moreover, since the observations provide no information and only the final reward is non-zero, no information is received until the full sequence of actions is selected. More formally, for any two policies , the KL divergence between the distributions of observations and rewards produced by the two policies is exactly the KL divergence between the final rewards produced by the two policies. Therefore, the problem is equivalent to a multi-armed bandit problem with arms, where the optimal arm gets a reward while all other arms get a reward. Thus, identifying a policy that is no-more than suboptimal in this POMDP is information-theoretically equivalent to identifying the best arm in the stochastic bandit problem in Theorem A with arms. Applying that lower bound gives a sample complexity bound of .
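The reduction above can be made concrete with a small simulation sketch (our code; names such as `episode_reward` are hypothetical, not from the paper). Since transitions are deterministic and observations carry no information, an agent is just a sequence of $H$ actions, and each episode amounts to pulling one of $K^H$ bandit arms:

```python
import itertools
import random

def episode_reward(path, actions, eps=0.1, rng=random):
    """One episode of the POMDP indexed by `path`: the final reward is
    Ber(1/2+eps) iff the agent's action sequence follows the path exactly,
    and Ber(1/2) otherwise (deterministic transitions, uninformative obs)."""
    on_path = tuple(actions) == tuple(path)
    mean = 0.5 + eps if on_path else 0.5
    return 1 if rng.random() < mean else 0

K, H = 3, 4
arms = list(itertools.product(range(K), repeat=H))  # one "arm" per action sequence
assert len(arms) == K**H                            # K^H distinct models/arms
```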
To verify both lower bounds in Propositions 2.1 and 2.2, we construct the policy and regressor sets. For Proposition 2.1, we need a set of reactive policies such that finding the optimal policy has a large sample complexity. To this end, we use the set of all mappings from observations to actions. Specifically, each policy $\pi_p$ is identified with a sequence of actions $p = (p_1, \ldots, p_H) \in [K]^H$ and has $\pi_p(x_h) = p_h$. These policies are reactive by definition, since they do not depend on any previous history or on the state of the world. Clearly there are $K^H$ such policies, and each policy is optimal for exactly one POMDP defined above; namely, $\pi_p$ is optimal for the POMDP corresponding to the path $p$. Furthermore, in the POMDP defined by $p$, we have $V(\pi_p) = 1/2 + \epsilon$, whereas $V(\pi) = 1/2$ for every other policy $\pi$. Consequently, finding the best policy in the class is equivalent to identifying the best arm in this family of problems. Taking a uniform mixture of problems in the family as before, we reason that this requires at least $\Omega(K^H/\epsilon^2)$ trajectories.
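The value structure of this policy class is simple enough to verify exactly; the sketch below (ours, with a hypothetical `value` helper) checks that in a fixed POMDP exactly one of the $K^H$ reactive policies attains the higher value:

```python
import itertools

def value(path, policy_seq, eps=0.1):
    """Exact value of the reactive policy given by `policy_seq` in the POMDP
    indexed by `path`: the mean of the final Bernoulli reward, which is
    1/2+eps iff the policy's action sequence matches the path."""
    return 0.5 + eps if tuple(policy_seq) == tuple(path) else 0.5

K, H, eps = 2, 3, 0.1
policies = list(itertools.product(range(K), repeat=H))
path = (0, 1, 1)
vals = [value(path, q, eps) for q in policies]
assert max(vals) == 0.5 + eps
assert sum(v == 0.5 + eps for v in vals) == 1   # exactly one optimal policy
```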
For Proposition 2.2, we use a similar construction. For each path $p \in [K]^H$, we associate a regressor $f_p$ with
$$f_p(\tau, a) = \begin{cases} \frac{1}{2} + \epsilon & \text{if } (\tau, a) \text{ is a prefix of } p, \\ \frac{1}{2} & \text{otherwise.} \end{cases}$$
Here we use $\tau$ to denote the history of the interaction, which can be condensed to a sequence of actions since the observations provide no information.
Clearly, for the POMDP parameterized by $p$, the regressor $f_p$ correctly maps each history to the expected future reward, meaning that the POMDP is realizable for this regressor class. Relatedly, $\pi_{f_p}$ is the optimal policy for the POMDP with optimal sequence $p$. Moreover, there are precisely $K^H$ regressors. As before, the learning objective requires identifying the optimal policy, and hence the optimal path, which requires $\Omega(K^H/\epsilon^2)$ trajectories.
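Operationally, $f_p$ is just a prefix check on the condensed history; a minimal sketch (our code, with a hypothetical `make_regressor` factory; $\epsilon = 1/4$ chosen only so the predicted values are exact floats):

```python
def make_regressor(path, eps=0.25):
    """Build the regressor f_p for a path p: predicts expected future reward
    1/2+eps iff the condensed history (a sequence of actions) extended by the
    candidate action is still a prefix of p, and 1/2 otherwise."""
    path = tuple(path)
    def f_p(history, action):
        candidate = tuple(history) + (action,)
        return 0.5 + eps if candidate == path[:len(candidate)] else 0.5
    return f_p

f = make_regressor([0, 2, 1])
assert f((), 0) == 0.75      # first action stays on the path
assert f((0,), 2) == 0.75    # still consistent with p
assert f((0,), 1) == 0.5     # deviates from p: predicted value drops to 1/2
```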
It is more natural to break the algorithm into more components for the analysis. This lets us focus on each component in isolation.
We first clarify some notation involving value functions. For a predictor $f$ and a policy $\pi$, we use
$$V(s, \pi) \triangleq \mathbb{E}\left[\sum_{h'=h}^{H} r_{h'} \;\middle|\; s_h = s,\ a_{h'} = \pi(x_{h'})\ \forall h' \ge h\right], \qquad V^f(s, \pi) \triangleq \mathbb{E}_{x \sim D_s}\big[f(x, \pi(x))\big],$$
where $s$ is a state at level $h$ and $D_s$ is the observation distribution at $s$. Recall that $V(s_{H+1}, \pi) = 0$ for all $\pi$, since $s_{H+1}$ is a terminating state.
We often use a path as the first argument, with the convention that the associated state is the last one on the path. This is enabled by the deterministic transitions. If a state is omitted from these functions, then it is assumed to be the start state, i.e., the root of the search tree. We also use $V^\star$ for the optimal value, where by the realizability assumption we have $V^\star = V(\pi_{f^\star})$. Finally, throughout the algorithm and analysis, we use Monte Carlo estimates of these quantities, which we denote as $\hat{V}$, etc.
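The Monte Carlo estimates $\hat{V}$ are plain sample means of episodic returns. A minimal sketch (our code, assuming only a generic episode-sampling callback; the function name `mc_value_estimate` is hypothetical):

```python
import random
import statistics

def mc_value_estimate(sample_return, n, rng):
    """Monte Carlo estimate V-hat: the mean of n sampled returns, each
    obtained by rolling out the policy from the state (or path) of interest."""
    return statistics.fmean(sample_return(rng) for _ in range(n))

# Toy check: returns are Ber(0.6), so the estimate concentrates near V = 0.6.
rng = random.Random(0)
v_hat = mc_value_estimate(lambda r: 1.0 if r.random() < 0.6 else 0.0, 4000, rng)
assert abs(v_hat - 0.6) < 0.05
```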
Pseudocode for the compartmentalized version of the algorithm is displayed in Algorithm 4, with subroutines displayed as Algorithms 5, 6, 7, and 8. The algorithm should be invoked as LSVEE$(\mathcal{F}, \epsilon, \delta)$, where $\mathcal{F}$ is the given class of regression functions, $\epsilon$ is the target accuracy, and $\delta$ is the target failure probability. The two main components of the algorithm are the DFS-Learn and Explore-on-Demand routines. DFS-Learn ensures proper invocation of the training step, TD-Elim, by verifying a number of preconditions, while Explore-on-Demand finds regions of the search tree for which training must be performed.
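At a high level, the interaction between the two routines can be sketched as follows. This is our schematic only, with hypothetical function names; it is not the paper's pseudocode (which appears in Algorithms 4–8), and the subroutines are passed in as opaque callbacks:

```python
def lsvee(regressors, eps, delta, dfs_learn, explore_on_demand):
    """Schematic top-level loop: DFS-Learn trains (via TD-Elim) from the root,
    then Explore-on-Demand repeatedly reports a path whose estimates are not
    yet trustworthy; each such path triggers another DFS-Learn pass.
    eps and delta would parameterize the subroutines in a full implementation."""
    surviving = set(regressors)                # regressors not yet eliminated
    surviving = dfs_learn(surviving, path=())  # learn from the root
    while True:
        path = explore_on_demand(surviving)    # region still needing training
        if path is None:                       # no demand left: done
            return surviving
        surviving = dfs_learn(surviving, path=path)

# Stub instantiation just to exercise the control flow.
result = lsvee({"f1", "f2"}, 0.05, 0.01,
               dfs_learn=lambda s, path: s,
               explore_on_demand=lambda s: None)
assert result == {"f1", "f2"}
```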
It is easily verified that this is an identical description of the algorithm.
The proof of the theorem hinges on the analysis of the subroutines. We turn first to the TD-Elim routine, for which we show the following guarantee. Recall the definition,
[Guarantee for TD-Elim] Consider running TD-Elim at path $p$ with regressors $F \subseteq \mathcal{F}$ and parameters $(\epsilon, \delta)$. Suppose that the following are true:
Estimation Precondition: We have access to estimates $\hat{V}(p \circ a, \pi_f)$ for all $f \in F$ and all actions $a$ such that $|\hat{V}(p \circ a, \pi_f) - V(p \circ a, \pi_f)| \le \epsilon$.
Bias Precondition: For all