Information-Theoretic Considerations in Batch Reinforcement Learning

05/01/2019
by   Jinglin Chen, et al.

Value-function approximation methods that operate in batch mode have foundational importance to reinforcement learning (RL). Finite sample guarantees for these methods often crucially rely on two types of assumptions: (1) mild distribution shift, and (2) representation conditions that are stronger than realizability. However, the necessity ("why do we need them?") and the naturalness ("when do they hold?") of such assumptions have largely eluded the literature. In this paper, we revisit these assumptions and provide theoretical results towards answering the above questions, and make steps towards a deeper understanding of value-function approximation.



1 Introduction and Related Work

We are concerned with value-function approximation in batch-mode reinforcement learning, which is related to and sometimes known as Approximate Dynamic Programming (ADP; Bertsekas & Tsitsiklis, 1996). Such methods take sample transition data as input (in this paper we restrict ourselves to the so-called one-path setting, and do not allow multiple samples from the same state as in Sutton & Barto (1998); Maillard et al. (2010); such access is only feasible in certain simulated environments and allows algorithms to succeed with realizability as the only representation condition) and approximate the optimal value-function from a restricted class that encodes one's prior knowledge and inductive biases. They provide an important foundation for RL's empirical success today, as many popular deep RL algorithms find their prototypes in this literature. For example, when DQN (Mnih et al., 2015) is run on off-policy data and the target network is updated slowly, it can be viewed as a stochastic approximation of its batch analog, Fitted Q-Iteration (Ernst et al., 2005), with a neural net as the function approximator (Riedmiller, 2005; Yang et al., 2019).

Given the importance of these methods, the question of when they work is central to our understanding of RL. Existing works that analyze error propagation and finite sample behavior of ADP methods (Munos, 2003; Szepesvári & Munos, 2005; Antos et al., 2008; Munos & Szepesvári, 2008; Tosatto et al., 2017) have provided us with a decent understanding: To guarantee sample-efficient learning of near-optimal policies, we often need assumptions from the following two categories.

Mild distribution shift  Many ADP methods can run completely off-policy, doing the best they can with whatever data is available. (Even when they are run on-policy or combined with a standard exploration module (e.g., ε-greedy), they most often fail in problems where exploration is difficult (e.g., combination lock; see Kakade, 2003) and rely on the benignness of the data to succeed.) It is therefore necessary that the data have sufficient coverage over the state (and action) space.

Representation condition  Since the ultimate goal is to find $Q^\star$, we would expect the function class we work with to contain it (or at least a close approximation). While such realizability-type assumptions are sufficient for supervised learning, reinforcement learning faces the additional difficulties of delayed consequences and the lack of labels, and existing analyses often make stronger assumptions on the function class, such as (approximate) closedness under the Bellman update (Szepesvári & Munos, 2005).

While the above assumptions make intuitive sense, and finite sample bounds have been proved when they hold, their necessity (“can we prove similar results without making these assumptions?”) and naturalness (“do they actually hold in interesting problems?”) have largely eluded the literature. In this paper, we revisit these assumptions and provide theoretical results towards answering the above questions. Below is a highlight of our results:

  1. To prepare for later discussions, we provide an analysis of representative ADP algorithms (FQI and its variant) under a simplified and minimal setup (Section 3). As a side-product, our results improve upon prior analyses in the dependence of error rate on sample size.

  2. We formally justify the necessity of mild distribution shift via an information-theoretic lower bound (Section 4.1). Our setup rules out the trivial and uninteresting failure mode due to an adversarial choice of data: even with the most favorable data distribution, polynomial sample complexity is not achievable if the MDP dynamics are not restricted.

  3. We conjecture an information-theoretic lower bound against realizability alone as the representation condition (Conjecture 8, Section 5.1). While we are not able to prove the conjecture, important steps are made, as two very general proof styles are shown to be destined to fail, one of which is due to Sutton & Barto (2018) and has been used to prove a closely related result.

  4. As another side-product, we prove that model-based RL can enjoy polynomial sample complexity with realizability alone (Corollary 6). If Conjecture 8 is true, we have a formal separation showing the gap between batch model-based vs value-based RL with function approximation (see the analog in the online exploration setting in Sun et al. (2019)).

Throughout the paper, we make novel connections to two subareas of RL: state abstractions (Whitt, 1978; Li et al., 2006) and PAC exploration under function approximation (Krishnamurthy et al., 2016; Jiang et al., 2017). In particular, we are able to utilize some of their results in our proofs (Sections 4.1 and 5.1), and find examples from these areas where the assumptions of interest hold (Sections 4.2 and 5.2). This suggests that the results in these other areas may be beneficial to the research in ADP, and we hope this work can inspire researchers from different subareas of RL to exchange ideas more often.

2 Preliminaries

2.1 Markov Decision Processes (MDPs)

Let $M = (\mathcal{S}, \mathcal{A}, P, R, \gamma, d_0)$ be an MDP, where $\mathcal{S}$ is the finite (but arbitrarily large) state space, $\mathcal{A}$ is the finite action space, $P: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition function ($\Delta(\cdot)$ denotes the probability simplex), $R: \mathcal{S} \times \mathcal{A} \to [0, R_{\max}]$ is the reward function, $\gamma \in [0, 1)$ is the discount factor, and $d_0$ is the initial distribution over states.

A (stochastic) policy $\pi: \mathcal{S} \to \Delta(\mathcal{A})$ prescribes a distribution over actions for each state. Fixing a start state $s_1$, the policy induces a random trajectory $s_1, a_1, r_1, s_2, a_2, r_2, \ldots$, where $a_t \sim \pi(s_t)$, $r_t = R(s_t, a_t)$, $s_{t+1} \sim P(s_t, a_t)$, etc. The goal is to find a policy $\pi$ that maximizes the expected discounted return $\mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\big|\, s_1 \sim d_0,\ \pi\big]$. It will also be useful to define the value function $V^\pi(s) := \mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\big|\, s_1 = s,\ \pi\big]$ and the Q-value function $Q^\pi(s, a) := \mathbb{E}\big[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\big|\, s_1 = s,\ a_1 = a,\ \pi\big]$; these functions take values in $[0, V_{\max}]$ with $V_{\max} := R_{\max}/(1-\gamma)$.

There exists a deterministic policy $\pi^\star$ (a deterministic policy puts all the probability mass on a single action in each state; with a slight abuse of notation, we sometimes treat such a policy as a mapping $\mathcal{S} \to \mathcal{A}$) that maximizes $V^\pi(s)$ for all $s$ simultaneously, and hence also maximizes the expected return. Let $Q^\star$ and $V^\star$ be shorthand for $Q^{\pi^\star}$ and $V^{\pi^\star}$, respectively. It is well known that $\pi^\star(s) = \operatorname{arg\,max}_a Q^\star(s, a)$, and that $Q^\star$ satisfies the Bellman optimality equation $Q^\star = \mathcal{T} Q^\star$, where $\mathcal{T}$ is the Bellman update operator: for any $f: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$,

(1)    $(\mathcal{T} f)(s, a) := R(s, a) + \gamma\, \mathbb{E}_{s' \sim P(s, a)}\big[V_f(s')\big],$

where $V_f(s) := \max_{a \in \mathcal{A}} f(s, a)$.
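The operator in Eq.(1) is easy to state concretely in the tabular case. Below is a minimal sketch, not from the paper: the 2-state, 2-action MDP and all numbers are made-up assumptions. It shows that repeated application of the operator (i.e., value iteration) converges, by contraction, to a fixed point satisfying the Bellman optimality equation:

```python
import numpy as np

# Tabular Bellman update operator, Eq.(1):
#   (T f)(s, a) = R(s, a) + gamma * E_{s' ~ P(s, a)}[ V_f(s') ],  V_f(s) = max_a f(s, a).
# The 2-state, 2-action MDP below is a made-up example.

gamma = 0.9
R = np.array([[1.0, 0.0],            # R[s, a]
              [0.0, 0.5]])
P = np.array([[[0.9, 0.1],           # P[s, a, s']
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.2, 0.8]]])

def bellman_update(f):
    """Apply T to a Q-table f of shape (|S|, |A|)."""
    v_f = f.max(axis=1)              # V_f(s') = max_{a'} f(s', a')
    return R + gamma * P @ v_f       # (T f)(s, a)

# Repeated application (value iteration) converges to Q* by gamma-contraction,
# and the limit satisfies the Bellman optimality equation Q* = T Q*.
q = np.zeros_like(R)
for _ in range(500):
    q = bellman_update(q)
assert np.allclose(q, bellman_update(q), atol=1e-8)
```

The same contraction argument underlies the convergence claims later in the paper: applying the operator shrinks sup-norm distances by a factor of $\gamma$.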

Additional notations  Let $d^{\pi}_t$ denote the marginal distribution of $s_t$ under policy $\pi$, that is, $d^{\pi}_t(s) := \Pr[s_t = s \mid s_1 \sim d_0,\ \pi]$. For $f: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ and a distribution $\nu$ over $\mathcal{S} \times \mathcal{A}$, define the shorthand $\|f\|_{2,\nu} := \sqrt{\mathbb{E}_{(s,a) \sim \nu}\big[f(s,a)^2\big]}$, which is a semi-norm. Furthermore, for any object that is a function of (or a distribution over) $\mathcal{S}$ or $\mathcal{S} \times \mathcal{A}$, we will treat it as a vector whenever convenient. We add a subscript to the value functions or Bellman update operators, e.g., $Q^\star_M$, when it is necessary to clarify the MDP in which the object is defined.

2.2 Batch Value-Function Approximation

This paper is concerned with batch-mode RL with value-function approximation. As a typical setup, the agent does not have direct access to the MDP and instead is given the following inputs:

  • A batch dataset $D = \{(s, a, r, s')\}$ consisting of $n$ tuples, where $r = R(s, a)$ and $s' \sim P(s, a)$. For simplicity, we assume that each $(s, a)$ is generated i.i.d. from a data distribution $\mu$. (The agent may or may not have knowledge of $\mu$; most existing algorithms are agnostic to such knowledge.)

  • A class of candidate value-functions, $\mathcal{F} \subseteq (\mathcal{S} \times \mathcal{A} \to [0, V_{\max}])$, which (approximately) captures $Q^\star$; such a property is often called realizability. We discuss additional assumptions on $\mathcal{F}$ later. As a further simplification, we focus on a finite but exponentially large $\mathcal{F}$, and discuss how to handle infinite classes when appropriate.

The learning goal is to compute a near-optimal policy from the data, often via finding $f \in \mathcal{F}$ that approximates $Q^\star$ and outputting $\pi_f$, the greedy policy w.r.t. $f$. A representative algorithm for this setting is Fitted Q-Iteration (FQI) (Ernst et al., 2005; Szepesvári, 2010). (Batch value-based algorithms can often be categorized into approximate value iteration (e.g., FQI) and approximate policy iteration (e.g., LSPI; Lagoudakis & Parr, 2003). We focus on the former due to its simplicity, and do not discuss the latter as its guarantees often rely on similar but more complicated assumptions (Lazaric et al., 2012); moreover, our lower bounds are information-theoretic and algorithm-independent.) The algorithm initializes $f_0 \in \mathcal{F}$ arbitrarily, and iteratively computes $f_k$ as follows: in iteration $k$, it converts the dataset $D$ into a regression dataset, with $(s, a)$ being the input and $r + \gamma \max_{a'} f_{k-1}(s', a')$ being the output. It then minimizes the squared-loss regression objective over $\mathcal{F}$, and the minimizer becomes $f_k$. More formally, $f_k := \operatorname{arg\,min}_{f \in \mathcal{F}} \mathcal{L}_D(f; f_{k-1})$, where

(2)    $\mathcal{L}_D(f; f') := \frac{1}{|D|} \sum_{(s,a,r,s') \in D} \Big( f(s, a) - r - \gamma \max_{a' \in \mathcal{A}} f'(s', a') \Big)^2.$

FQI may oscillate and a fixed-point solution may not exist in general (Gordon, 1995). Nevertheless, under conditions that we will specify later, finite sample guarantees for FQI can still be obtained even if the process does not converge.
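To make the procedure concrete, here is a minimal FQI sketch on a made-up tabular MDP (the MDP, the uniform data distribution, and the sample size are illustrative assumptions, not from the paper). Taking the class to be all Q-tables means the least-squares step in Eq.(2) has a closed form: averaging the regression targets within each (s, a) group of the dataset.

```python
import numpy as np

# A minimal FQI sketch on a made-up tabular MDP. With F = all Q-tables, the
# least-squares step in Eq.(2) reduces to averaging the targets
# r + gamma * max_{a'} f_{k-1}(s', a') within each (s, a) group of the dataset.

rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.8
R = rng.uniform(size=(nS, nA))
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[s, a] is a distribution over s'

# batch dataset D = {(s, a, r, s')}, with (s, a) i.i.d. from a uniform mu
n = 20000
S = rng.integers(nS, size=n)
A = rng.integers(nA, size=n)
Rew = R[S, A]
cdf = P[S, A].cumsum(axis=1)                       # sample s' by inverse CDF
Snext = np.minimum((rng.random(n)[:, None] > cdf).sum(axis=1), nS - 1)

def fqi(num_iters=100):
    f = np.zeros((nS, nA))
    for _ in range(num_iters):
        target = Rew + gamma * f[Snext].max(axis=1)
        f_new = np.zeros((nS, nA))
        for s in range(nS):
            for a in range(nA):
                f_new[s, a] = target[(S == s) & (A == a)].mean()
        f = f_new
    return f

# compare with Q* from exact value iteration on the true model
q_star = np.zeros((nS, nA))
for _ in range(500):
    q_star = R + gamma * P @ q_star.max(axis=1)

f_hat = fqi()
# with ample data per (s, a) pair, the FQI solution lands close to Q*
assert np.abs(f_hat - q_star).max() < 0.5
```

In this tabular special case FQI cannot oscillate; the oscillation phenomenon mentioned above arises for restricted classes that are not closed under the Bellman update.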

2.3 State Abstractions

A state abstraction $\phi$ maps $\mathcal{S}$ to a finite and potentially much smaller abstract state space. Naturally, $\phi$ is often a many-to-one mapping, inducing an equivalence notion over $\mathcal{S}$ that encodes one's prior knowledge of equivalent or similar states. A typical use of abstractions in the batch learning setting is to construct a tabular (or certainty-equivalent) model from the dataset $D$ after replacing each state $s$ by $\phi(s)$, and compute the optimal policy in the resulting abstract model. There is a long history of studying abstractions, mostly focusing on their approximation guarantees (Whitt, 1978).

We note, however, that there is a direct connection between FQI and certainty-equivalence with abstractions. In particular, value iteration in the model estimated with abstraction $\phi$ is exactly equivalent to FQI with $\mathcal{F}$ being the class of functions that are piece-wise constant under $\phi$. (This result is known anecdotally (see e.g., Pires & Szepesvári, 2016), and we include the details in Appendix E for completeness.) As such, the characterizations of approximation errors in the two bodies of literature are closely related. We will discuss further connections in the rest of this paper.
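The equivalence can be checked numerically. Below is a small sketch in which the dataset, abstraction, and all numbers are made up: one FQI step over piece-wise constant functions and one value-iteration step in the certainty-equivalent abstract model produce identical iterates.

```python
import numpy as np

# Numerical check of the FQI / certainty-equivalence connection (details in the
# paper's Appendix E): FQI with F = piece-wise constant functions under phi
# produces, iterate by iterate, the same Q-tables as value iteration in the
# tabular certainty-equivalent model over abstract states. All numbers are made up.

rng = np.random.default_rng(1)
nS, nA, nX, gamma = 4, 2, 2, 0.9
phi = np.array([0, 0, 1, 1])                     # abstraction: 4 states -> 2 abstract states

n = 5000
S = rng.integers(nS, size=n)
A = rng.integers(nA, size=n)
Rew = rng.uniform(size=n)                        # arbitrary rewards for the check
Snext = rng.integers(nS, size=n)                 # arbitrary transitions for the check

# (1) one FQI step with piece-wise constant F: least squares = group means over (phi(s), a)
def fqi_step(f):                                 # f has shape (nX, nA)
    target = Rew + gamma * f[phi[Snext]].max(axis=1)
    out = np.zeros((nX, nA))
    for x in range(nX):
        for a in range(nA):
            out[x, a] = target[(phi[S] == x) & (A == a)].mean()
    return out

# (2) one step of value iteration in the certainty-equivalent abstract model
R_hat = np.zeros((nX, nA))
P_hat = np.zeros((nX, nA, nX))
for x in range(nX):
    for a in range(nA):
        m = (phi[S] == x) & (A == a)
        R_hat[x, a] = Rew[m].mean()
        for x2 in range(nX):
            P_hat[x, a, x2] = (phi[Snext[m]] == x2).mean()

def vi_step(f):
    return R_hat + gamma * P_hat @ f.max(axis=1)

f = np.zeros((nX, nA))
for _ in range(50):
    assert np.allclose(fqi_step(f), vi_step(f))  # identical backups at every iterate
    f = vi_step(f)
```

The identity holds because regressing the targets onto piece-wise constant functions returns group means, which is exactly the empirical abstract model's Bellman backup.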

3 Bellman Error Minimization in Batch Reinforcement Learning

In this section, we give a complete analysis of FQI and a related algorithm, with the main results being two sample complexity bounds. Many of the insights and results in this section have either explicitly appeared in or been implicitly hinted by prior work (especially Szepesvári & Munos, 2005; Antos et al., 2008), and we include them because (1) the discussions in the rest of the paper are largely based on these results, and (2) our analyses simplify prior results without trivializing them, making the high-level insights more accessible. We also improve the results in some aspects.

3.1 Sample-Based Bellman Error Minimization

We start by deriving FQI from a slightly unusual perspective due to the aforementioned prior work, which motivates major assumptions in FQI analysis and introduces concepts that are important for later discussions.

Recall that the goal of value-based RL is to find $f \in \mathcal{F}$ such that $f \approx Q^\star$. Since $Q^\star$ is the unique solution of $Q = \mathcal{T} Q$, a natural approach is to minimize the Bellman error $\|f - \mathcal{T} f\|$ in some appropriate norm. For example, if $\nu$ is a distribution supported on the entire $\mathcal{S} \times \mathcal{A}$, then $\|f - \mathcal{T} f\|_{2,\nu} = 0$ would guarantee that $f = Q^\star$. While such an $f$ can in principle be found by minimizing $\|f - \mathcal{T} f\|_{2,\nu}$ over $\mathcal{F}$, calculating $\mathcal{T} f$ requires knowledge of the transition dynamics (recall Eq.(1)), which is unknown in the learning setting. Instead, we have access to the dataset $D$, and it may be tempting to minimize the following objective, which is purely a function of the data: $\mathcal{L}_D(f; f)$ (recall $\mathcal{L}_D$ in Eq.(2)).

Unfortunately, even with an infinite amount of data, the above objective is still different from the actual Bellman error that we wish to minimize. In particular, define $\mathcal{L}_\mu(f; f') := \mathbb{E}_D[\mathcal{L}_D(f; f')]$, where the expectation is w.r.t. the random draw of the dataset $D$. We have

(3)    $\mathcal{L}_\mu(f; f) = \|f - \mathcal{T} f\|_{2,\mu}^2 + \gamma^2\, \mathbb{E}_{(s,a) \sim \mu}\Big[\mathbb{V}_{s' \sim P(s,a)}\big[V_f(s')\big]\Big].$

In words, $\mathcal{L}_\mu(f; f)$ adds a conditional variance term to the desired objective, which incorrectly penalizes functions $f$ whose value $V_f(s')$ has large variance w.r.t. the random state transitions.
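The decomposition in Eq.(3) can be verified in closed form on a small tabular example. Everything below (the MDP, the data distribution, and the function f) is a made-up instance for illustration:

```python
import numpy as np

# Closed-form check of Eq.(3) on a made-up tabular MDP: the population squared
# TD error equals the squared Bellman error under mu plus gamma^2 times the
# expected conditional variance of V_f(s').

rng = np.random.default_rng(2)
nS, nA, gamma = 3, 2, 0.9
R = rng.uniform(size=(nS, nA))
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
mu = rng.dirichlet(np.ones(nS * nA)).reshape(nS, nA)   # data distribution over (s, a)

f = rng.uniform(size=(nS, nA))                          # an arbitrary Q-function
v_f = f.max(axis=1)                                     # V_f(s) = max_a f(s, a)
tf = R + gamma * P @ v_f                                # (T f)(s, a), Eq.(1)

# left-hand side: E_mu E_{s'} [ (f(s,a) - R(s,a) - gamma * V_f(s'))^2 ]
sq_td = ((f[:, :, None] - R[:, :, None] - gamma * v_f[None, None, :]) ** 2 * P).sum(-1)
lhs = (mu * sq_td).sum()

# right-hand side: ||f - T f||^2_{2,mu} + gamma^2 * E_mu[ Var_{s'}[V_f(s')] ]
var = (P * v_f[None, None, :] ** 2).sum(-1) - (P @ v_f) ** 2
rhs = (mu * (f - tf) ** 2).sum() + gamma ** 2 * (mu * var).sum()

assert np.isclose(lhs, rhs)
```

This is just the usual bias-variance split applied conditionally on (s, a), so the identity holds exactly, not merely in the sampling limit.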

The minimax algorithm (also known as modified Bellman Residual Minimization; Antos et al., 2008)  One way to fix the issue is to estimate the conditional variance term in Eq.(3) and subtract it from $\mathcal{L}_D(f; f)$. In fact, it is easy to verify that this term is exactly the Bayes optimal error of the regression problem

(4)    $\min_{h}\ \mathcal{L}_\mu(h; f)$,  i.e., regressing $r + \gamma V_f(s')$ onto $(s, a)$,

whose Bayes optimal regressor is $\mathcal{T} f$.

One can estimate it by empirical risk minimization over a rich function class, and the estimate is consistent as long as the function class realizes the Bayes optimal regressor and has bounded statistical complexity. Following this idea, we assume access to another function class $\mathcal{G}$ for solving the regression problem in Eq.(4). The estimated Bayes optimal error is

(5)    $\min_{g \in \mathcal{G}}\ \mathcal{L}_D(g; f).$

A good approximation to $\|f - \mathcal{T} f\|_{2,\mu}^2$ from data is then $\mathcal{L}_D(f; f) - \min_{g \in \mathcal{G}} \mathcal{L}_D(g; f)$. This suggests that we can simply solve the following optimization problem to find $\hat{f}$ that approximates $Q^\star$:

(6)    $\hat{f} := \operatorname{arg\,min}_{f \in \mathcal{F}} \Big( \mathcal{L}_D(f; f) - \min_{g \in \mathcal{G}} \mathcal{L}_D(g; f) \Big).$
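As a sanity check of the inner subtraction in Eq.(6), the sketch below takes the auxiliary class to be all Q-tables, so the inner minimization is attained by per-(s, a) target means; with enough data, the debiased objective tracks the true squared Bellman error, which the naive objective overshoots. The MDP, data distribution, and sample size are made-up assumptions:

```python
import numpy as np

# Sanity check of the debiased objective behind Eq.(6) on a made-up tabular MDP.
# With G = all Q-tables, min_g L_D(g; f) has a closed form (per-(s, a) means of
# the targets), and L_D(f; f) - min_g L_D(g; f) estimates ||f - T f||^2_{2,mu}.

rng = np.random.default_rng(3)
nS, nA, gamma = 3, 2, 0.9
R = rng.uniform(size=(nS, nA))
P = rng.dirichlet(np.ones(nS), size=(nS, nA))

n = 200000
S = rng.integers(nS, size=n)
A = rng.integers(nA, size=n)
Rew = R[S, A]
cdf = P[S, A].cumsum(axis=1)                     # sample s' by inverse CDF
Snext = np.minimum((rng.random(n)[:, None] > cdf).sum(axis=1), nS - 1)

def debiased_objective(f):
    y = Rew + gamma * f[Snext].max(axis=1)       # regression targets
    naive = ((f[S, A] - y) ** 2).mean()          # L_D(f; f)
    bayes = 0.0                                  # min_g L_D(g; f) over tabular g
    for s in range(nS):
        for a in range(nA):
            ys = y[(S == s) & (A == a)]
            bayes += ((ys - ys.mean()) ** 2).sum() / n
    return naive - bayes

f = rng.uniform(size=(nS, nA))
# true squared Bellman error under the (uniform) data distribution
true_bellman_sq = ((f - (R + gamma * P @ f.max(axis=1))) ** 2).mean()
assert abs(debiased_objective(f) - true_bellman_sq) < 0.05
```

Note that the debiased objective is always nonnegative here, since the tabular class contains f itself as a candidate regressor.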

Later in this section, we will provide a finite sample analysis of the above minimax algorithm, but before that, we will show that FQI can be viewed as its approximation.

FQI as an approximation to Eq.(6)  FQI has a close connection to the above program and can be viewed as its approximation when $\mathcal{G}$ is chosen to be $\mathcal{F}$. Formally,

Proposition 1.

Let $\hat{f}$ be a solution to Eq.(6) when $\mathcal{G} = \mathcal{F}$.

  • If the objective value of Eq.(6) at $\hat{f}$ is $0$, then $\hat{f}$ is a fixed point of FQI.

  • Conversely, if $f_k = f_{k-1}$ holds for some $k$ in FQI, then $f_k$ is a solution to Eq.(6) with objective value $0$.

  • If the objective value at $\hat{f}$ is strictly positive, FQI oscillates and no fixed point exists.

The proof is deferred to Appendix A. The proposition states that the minimax algorithm is more stable than FQI: when FQI reaches a fixed point, the solutions of the two algorithms coincide, while the minimax solution exists even when FQI's does not. In fact, Dai et al. (2018) derive a closely related algorithm using Fenchel duality and show that the algorithm is always convergent.

3.2 Analysis of FQI and Its Minimax Variant

We provide finite sample guarantees for the two algorithms introduced above; closely related analyses have appeared in prior work (see Section 1 for references), and our version provides a cleaner analysis under simplifying assumptions, improves the error rate as a function of sample size, and prepares us for later discussions.

To state the guarantees, we need to introduce the two assumptions that are core to this paper. The first assumption handles distribution shift, and we precede it with the definition of admissible distributions.

Definition 1 (Admissible distributions).

We say a distribution $\nu \in \Delta(\mathcal{S} \times \mathcal{A})$ is admissible in MDP $M$ if there exist $t \ge 1$ and a (potentially non-stationary and stochastic) policy $\pi$, such that $\nu(s, a) = \Pr[s_t = s, a_t = a \mid s_1 \sim d_0,\ \pi]$.

Intuitively, a distribution is admissible if it can be generated in the MDP by following some policy for a number of timesteps. The following assumption on concentratability asserts that all admissible distributions are not “far away” from the data distribution . The original definition is due to Munos (2003).

Assumption 1 (Concentratability coefficient).

We assume that there exists $C < \infty$ such that for any admissible $\nu$ and any $(s, a) \in \mathcal{S} \times \mathcal{A}$, $\frac{\nu(s, a)}{\mu(s, a)} \le C$.

The real (and implicit) assumption here is that $C$ is manageably large, as our sample complexity bounds scale linearly with $C$. Prior works have used more sophisticated definitions (Farahmand et al., 2010), but this often comes at the cost of bounds that are not a priori, i.e., that depend on the randomness of the data, initialization, and tie-breaking in optimization. The technicalities introduced are largely orthogonal to the discussions in this paper, so we choose to adopt a much simplified version. Despite the simplification, we will see natural examples that yield small $C$ under our definition in Section 4. We will also discuss how to relax the assumption using the structure of $\mathcal{F}$ at the end of the paper.
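Definition 1 and Assumption 1 can be made concrete by brute force on a tiny MDP: enumerate deterministic non-stationary policies up to a horizon cutoff, form the induced (s_t, a_t) marginals, and record the worst density ratio against the data distribution. The MDP, distribution, and cutoff below are made-up choices, and the finite cutoff means the computed value only lower-bounds the coefficient:

```python
import itertools
import numpy as np

# Brute-force illustration of Definition 1 and Assumption 1 on a tiny made-up MDP.
# Enumerating deterministic non-stationary policies up to horizon T gives a set of
# admissible (s_t, a_t) marginals; the worst density ratio against mu over this
# set lower-bounds the concentratability coefficient C (the cutoff is finite).

nS, nA, T = 2, 2, 3
d0 = np.array([1.0, 0.0])                        # initial state distribution
P = np.array([[[0.8, 0.2], [0.2, 0.8]],          # P[s, a, s']
              [[0.5, 0.5], [0.1, 0.9]]])
mu = np.full((nS, nA), 1.0 / (nS * nA))          # uniform data distribution

ratios = []
# a deterministic non-stationary policy is a table pi[t, s] -> a
for flat in itertools.product(range(nA), repeat=nS * T):
    pi = np.array(flat).reshape(T, nS)
    d = d0.copy()                                # state marginal at step t
    for t in range(T):
        nu = np.zeros((nS, nA))
        nu[np.arange(nS), pi[t]] = d             # nu(s, a) = d_t(s) * 1{a = pi_t(s)}
        ratios.append((nu / mu).max())
        d = sum(d[s] * P[s, pi[t, s]] for s in range(nS))

C_lower = max(ratios)
# worst case occurs at the first timestep, where all mass sits on one (s, a) pair
assert np.isclose(C_lower, 4.0)
```

Restricting to deterministic policies is enough for the maximum here, since the marginals of stochastic policies are convex combinations of deterministic ones and the density ratio is a maximum of linear functions.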

Next, we introduce the assumptions on the representation power of $\mathcal{F}$ and $\mathcal{G}$.

Assumption 2 (Realizability).

$Q^\star \in \mathcal{F}$. (When this holds approximately, we measure the violation by $\epsilon_{\mathcal{F}} := \min_{f \in \mathcal{F}} \|f - \mathcal{T} f\|^2_{2,\mu}$.)

Assumption 3 (Completeness).

$\forall f \in \mathcal{F}$, $\mathcal{T} f \in \mathcal{G}$. (When this holds approximately, we measure the violation by $\epsilon_{\mathcal{F},\mathcal{G}} := \max_{f \in \mathcal{F}} \min_{g \in \mathcal{G}} \|g - \mathcal{T} f\|^2_{2,\mu}$.)

These assumptions lead to finite sample guarantees for both the minimax algorithm and FQI. For FQI, since $\mathcal{G} = \mathcal{F}$, Assumption 3 essentially states that $\mathcal{F}$ is closed under the operator $\mathcal{T}$, hence "completeness". (In the literature, the violation of completeness when $\mathcal{G} = \mathcal{F}$, i.e., $\max_{f \in \mathcal{F}} \min_{f' \in \mathcal{F}} \|f' - \mathcal{T} f\|_{2,\mu}$, is called the inherent Bellman error.) For the minimax algorithm, the assumption is natural from the derivation in Section 3.1, as Eq.(5) is only a consistent estimate of the Bayes optimal error of Eq.(4) if $\mathcal{G}$ realizes the Bayes optimal regressor, which is $\mathcal{T} f$.

A few remarks are in order:

  1. When $\mathcal{F}$ is finite, completeness implies realizability. (This is because the sequence $f, \mathcal{T} f, \mathcal{T}^2 f, \ldots$ never repeats itself, as its distance to $Q^\star$ shrinks geometrically at rate $\gamma$ due to contraction; closedness of a finite $\mathcal{F}$ under $\mathcal{T}$ then forces $Q^\star \in \mathcal{F}$.) However, completeness is stronger and much less desirable than realizability: realizability is monotone in $\mathcal{F}$ (adding functions to $\mathcal{F}$ never hurts realizability), while completeness is not (adding functions to $\mathcal{F}$ may break completeness).

  2. While we focus on completeness, it is not the only condition that leads to guarantees for ADP algorithms. We discuss alternative assumptions in Section 6.

Now we are ready to state the sample complexity results. In Appendices C and D we provide more general error bounds (Theorems 11 and 17) that handle the approximate case, where $\epsilon_{\mathcal{F}}$ and $\epsilon_{\mathcal{F},\mathcal{G}}$ are nonzero and the number of iterations is finite. To keep the main text focused and accessible, we only present their sample complexity corollaries in the exact case.

Theorem 2 (Sample complexity of FQI).

Given a dataset $D$ with sample size $n$ and a class $\mathcal{F}$ that satisfies completeness (Assumption 3 with $\mathcal{G} = \mathcal{F}$), w.p. at least $1 - \delta$, the output policy of FQI after $k$ iterations, $\pi_{f_k}$, satisfies $v^{\pi_{f_k}} \ge v^\star - \epsilon$ when $k = \Omega\big(\frac{\ln(V_{\max}/\epsilon)}{1 - \gamma}\big)$ and $n = O\Big(\frac{V_{\max}^2\, C\, \ln(|\mathcal{F}| k / \delta)}{\epsilon^2 (1 - \gamma)^4}\Big)$. (Only absolute constants are suppressed in the Big-Oh notation.)

Theorem 3 (Sample complexity of the minimax variant).

Given a dataset $D$ with sample size $n$ and classes $\mathcal{F}$, $\mathcal{G}$ that satisfy realizability (Assumption 2) and completeness (Assumption 3) respectively, w.p. at least $1 - \delta$, the output policy of the minimax algorithm (Eq.(6)), $\pi_{\hat{f}}$, satisfies $v^{\pi_{\hat{f}}} \ge v^\star - \epsilon$ if $n = O\Big(\frac{V_{\max}^2\, C\, \ln(|\mathcal{F}||\mathcal{G}|/\delta)}{\epsilon^2 (1 - \gamma)^4}\Big)$.

Our results show that the suboptimality decreases at a rate of $n^{-1/2}$ when realizability and completeness hold exactly, and the more general error bounds (Theorems 11 and 17) degrade gracefully from the exact case as $\epsilon_{\mathcal{F}}$ and $\epsilon_{\mathcal{F},\mathcal{G}}$ increase. This is obtained via the use of Bernstein's inequality to achieve a fast rate in least-squares regression. While results similar to Theorems 2 and 11 exist (Farahmand 2011, Chapter 5; see also Lazaric et al. (2012); Pires & Szepesvári (2012); Farahmand et al. (2016)), to our knowledge a fast rate for the minimax algorithm has not been established before: for example, Antos et al. (2008) and Munos & Szepesvári (2008) obtain an error rate of $n^{-1/4}$ in closely related settings, and their rates do not improve to $n^{-1/2}$ in the absence of approximation error (note, however, that they handle infinite function classes). In fact, Munos & Szepesvári (2008, pg.831) have discussed the possibility of an $n^{-1/2}$ result, which we obtain here; see the beginning of Appendix C for further discussion. The major limitation of our result is the assumption of finite $\mathcal{F}$ and $\mathcal{G}$ due to our minimal setup, and we refer readers to Yang et al. (2019) for a recent analysis that specializes to ReLU networks. (Their analysis modifies the FQI algorithm and samples fresh data in each iteration, dodging some of the technical difficulties of reusing the same batch of data, which we handle here.)

We do not discuss the proofs in further detail, since the improvement in error rate is a side-product and this section is mainly meant to simplify prior analyses and provide a basis for subsequent discussions. Interested readers are invited to consult Appendices C and D, where we provide sketched outlines as well as detailed proofs.

4 On Concentratability

In this section, we establish the necessity of Assumption 1 and show natural examples where concentratability is low. While it is easy to construct a counterexample against removing Assumption 1 using missing data (that is, a $\mu$ that puts no probability mass on important states and actions), such a counterexample only reflects a trivial failure mode due to an adversarial choice of data. What we show is a deeper and nontrivial failure mode: even with the most favorable data distribution, polynomial sample complexity is precluded if we put no restrictions on the MDP dynamics. This result improves our understanding of concentratability, and shows that the assumption is not only about the data distribution, but also (and perhaps more) about the environment and the state distributions induced therein.

4.1 Lower Bound

To show that low concentratability is necessary, we prove a hardness result where both realizability and completeness hold, and the algorithm has the freedom to choose any data distribution that is favorable, yet no algorithm can achieve polynomial sample complexity. Crucially, the concentratability coefficient of any data distribution on the worst-case MDP is always exponential in the horizon, so the lower bound does not conflict with the upper bounds in Section 3: the exponential sample complexity would have been explained away by the dependence on $C$.

Theorem 4.

There exist a family of MDPs $\mathcal{M}$ (sharing the same $\mathcal{S}$, $\mathcal{A}$, and deterministic transition dynamics), a class $\mathcal{F}$ that realizes the $Q^\star$ of every MDP in the family, and a class $\mathcal{G}$ that realizes $\mathcal{T}_M f$ for any $f \in \mathcal{F}$ and any $M \in \mathcal{M}$, such that: for any data distribution $\mu$ and any batch algorithm taking $(D, \mathcal{F}, \mathcal{G})$ as input, an adversary can choose an MDP from the family such that the sample complexity for the algorithm to find a near-optimal policy cannot be polynomial in $\frac{1}{1-\gamma}$, $|\mathcal{A}|$, $\ln|\mathcal{F}|$, $\ln|\mathcal{G}|$, and $\frac{1}{\epsilon}$.

Proof.

We construct $\mathcal{M}$, a family of hard MDPs, and prove the theorem via the combination of two arguments:

  1. All algorithms are subject to an exponential lower bound (w.r.t. the horizon) even if (a) they are given compact $\mathcal{F}$ and $\mathcal{G}$ that satisfy realizability and completeness as inputs, and (b) they can perform exploration during data collection.

  2. Since the MDPs in the construction share the same deterministic transition dynamics, the combination of any data distribution and any batch RL algorithm is a special case of an exploration algorithm.

We first provide argument (1), which reuses the construction by Krishnamurthy et al. (2016). Let each instance of $\mathcal{M}$ be a complete tree with branching factor $2$ and depth $H$. Transitions are deterministic, and only leaf nodes have non-zero rewards. All leaves give Ber(1/2) rewards, except for one that gives Ber(1/2 + ε). Changing the position of this optimal leaf yields a family of MDPs, and in order to achieve a suboptimality that is a constant fraction of ε, the algorithm is required to identify this optimal leaf. (All leaf rewards are discounted by only a constant when $H = O(\frac{1}{1-\gamma})$, as $\gamma^{H}$ is then bounded below by a constant.) In fact, the problem is equivalent to the hard instances of best-arm identification with $2^H$ arms, so even if an algorithm can perform active exploration, the sample complexity is still $\Omega(2^H/\epsilon^2)$ (see Krishnamurthy et al. (2016) for details, who use standard techniques from Auer et al. (2002)).

Now we provide $\mathcal{F}$ and $\mathcal{G}$ that (1) satisfy Assumptions 2 and 3, (2) do not provide any information other than the fact that the problem is in $\mathcal{M}$, and (3) have "small" logarithmic sizes, so that $\ln|\mathcal{F}|$ and $\ln|\mathcal{G}|$ cannot explain away the exponential sample complexity. Let $\mathcal{F} = \{Q^\star_M : M \in \mathcal{M}\}$, where the subscript specifies the MDP with respect to which we compute $Q^\star$. Let $\mathcal{G} = \{\mathcal{T}_M f : f \in \mathcal{F}, M \in \mathcal{M}\}$. Such $\mathcal{F}$ and $\mathcal{G}$ satisfy realizability and completeness by definition, and have statistical complexities $\ln|\mathcal{F}| = O(H)$ and $\ln|\mathcal{G}| = O(H)$, respectively. With this, we conclude that any exploration algorithm cannot obtain polynomial sample complexity.

We complete the proof with the second argument. Note that all the MDPs in $\mathcal{M}$ differ only in the leaf rewards and share the same deterministic transition dynamics. Therefore, a learner with the ability to actively explore can mimic the combination of any data distribution $\mu$ and any batch RL algorithm, by (1) collecting data from $\mu$ (which is always doable thanks to the known and deterministic transitions), and (2) running the batch algorithm after the data is collected. This completes the proof. ∎

4.2 Natural Examples

We have shown that polynomial learning is precluded if no restriction is put on the MDP dynamics, even if the data is chosen in a favorable manner. The next question is: is low concentratability common, or at least found in interesting problems? In general, even if the data distribution is uniform over the state-action space, the worst-case $C$ might still scale with $|\mathcal{S} \times \mathcal{A}|$, which can be too large in challenging RL problems for the guarantees to be meaningful. To this end, Munos (2007) has provided several carefully constructed tabular examples, demonstrating that $C$ does not always scale badly. However, are there more general problem families that capture RL scenarios found in empirical work, yet always yield a bounded $C$?

Example in problems with rich observations  We find answers to the above question in the recent development of PAC exploration in rich-observation problems (Krishnamurthy et al., 2016; Jiang et al., 2017; Dann et al., 2018), where a general low-rank condition (a.k.a. low Bellman rank; Jiang et al., 2017) has been identified that enables sample-efficient exploration under function approximation. One of the prominent examples where such a condition holds is inspired by "visual gridworld" environments in empirical RL research (see e.g., Johnson et al., 2016): the dynamics are defined over a small number of hidden states (e.g., grids), and the agent receives high-dimensional observations generated i.i.d. from the hidden states (e.g., raw-pixel images as observations). Below we show that in these environments there always exists a data distribution that yields a small $C$ for batch learning, and that such a distribution can be naturally generated as a mixture of admissible distributions. We include an informal statement below, deferring the precise version and the proof to Appendix B.

Proposition 5 (Informal).

Let $M$ be a reactive POMDP as defined in Jiang et al. (2017), where the underlying hidden state space is finite but the (Markov) observation space can be arbitrarily large. There always exists a state-action distribution $\mu$ such that Assumption 1 holds with $C$ bounded in terms of the number of hidden states and actions. Furthermore, $\mu$ can be obtained by taking a probability mixture of several admissible distributions.

Similar results can be established for other structures studied by Jiang et al. (2017) (e.g., large MDPs with low-rank transitions), which we omit here. These results suggest that Bellman rank is the online-exploration counterpart of the concentratability coefficient. Further implications, and how to leverage this connection to improve the definition of concentratability, will be discussed in Section 6.

5 On Completeness

5.1 Towards an Information-Theoretic Lower Bound in the Absence of Completeness

We would also like to establish the necessity of completeness by showing that there exist hard MDPs that cannot be efficiently learned with value-function approximation, even under low concentratability and realizability (Assumptions 1 and 2). (Note that the existence of such a lower bound would not imply that completeness is indispensable; rather, it would simply state that realizability alone is insufficient and that we need stronger conditions on $\mathcal{F}$, of which completeness is one candidate.) In fact, algorithm-specific hardness results have been known for a long time (see e.g., Van Roy, 1994; Gordon, 1995; Tsitsiklis & Van Roy, 1997), where ADP algorithms are shown to diverge even in MDPs with a small number of states when the algorithm is forced to work with a restricted class of functions. (Interested readers can consult Agrawal (2018); see also Dann et al. (2018, Theorem 45) for a simpler example.) Unfortunately, such hardness results are insufficient to confirm the fundamental difficulty of the problem, and it is important to seek information-theoretic lower bounds.

While we are not able to obtain such a lower bound, we find that the counterexample (if it exists) must be highly nontrivial and will probably need ideas that are not present in the standard statistical learning theory (SLT) and RL literature. More concretely, we show that two general proof styles are destined to fail at this task, as polynomial sample complexity can be achieved information-theoretically in the settings they produce.

Exponential-sized model family will not work  Standard lower bounds in SLT often start with the construction of a family of problem instances of exponential size (Yu, 1997). (In fact, our Theorem 4 also follows this style, with a construction due to Krishnamurthy et al. (2016); Jiang et al. (2017).) We show that this style will simply never work here, which is a direct corollary of Theorem 3:

Corollary 6 (Batch model-based RL only needs realizability).

Let $D$ be a dataset with sample size $n$, $\mu$ as defined in Assumption 1, and $\mathcal{M}$ a model class that realizes the true MDP $M$, i.e., $M \in \mathcal{M}$. There exists an (information-theoretic) algorithm that takes $D$ and $\mathcal{M}$ as input and returns an $\epsilon$-optimal policy w.p. at least $1 - \delta$, if $n = O\Big(\frac{V_{\max}^2\, C\, \ln(|\mathcal{M}|/\delta)}{\epsilon^2 (1 - \gamma)^4}\Big)$.

Proof.

We use the same idea as in the proof of Theorem 4: let $\mathcal{F} = \{Q^\star_{M'} : M' \in \mathcal{M}\}$ and $\mathcal{G} = \{\mathcal{T}_{M'} f : f \in \mathcal{F}, M' \in \mathcal{M}\}$. Note that $|\mathcal{F}| \le |\mathcal{M}|$ and $|\mathcal{G}| \le |\mathcal{F}| \cdot |\mathcal{M}| \le |\mathcal{M}|^2$. These classes satisfy both realizability and completeness, so we can apply the minimax algorithm (Eq.(6)) and the guarantee of Theorem 3 immediately holds. ∎

Essentially, this result shows that batch model-based RL can succeed with realizability as the only representation condition on the model class, because we can reduce it to value-based learning and obtain completeness for free. This illustrates a significant barrier to an algorithm-independent lower bound: in an information-theoretic setting, the learner can always specialize to the family of hard instances and is free to choose its algorithmic style, and thus can be model-based. However, in the context of value-function approximation, we are implicitly assuming no prior knowledge of a model class, and hence cannot run any model-based algorithm. How can we encode such a constraint mathematically?

Tabular MDPs with a restricted value-function class will not work  Sutton & Barto (2018, Section 11.6) propose a clever way to prevent the learner from being model-based for linear function approximation, and a closely related definition is recently given by Sun et al. (2019) that applies to arbitrary function classes.

The idea is the following: instead of providing the dataset directly, we preprocess the data and mask the identity of $s$ (and $s'$). While $s$ is not directly observable, the learner can query the evaluation $f(s, a)$ for any $f \in \mathcal{F}$ and any $a \in \mathcal{A}$. That is, we represent each state by its value profile, $s \mapsto [f(s, a)]_{f \in \mathcal{F},\, a \in \mathcal{A}}$. This definition agrees with intuition and can be used to express a wide range of popular algorithms, including FQI.

Using this definition, Sutton & Barto (2018) prove a result closely related to what we aim for here: they show that the Bellman error is not learnable. In particular, there exist two MDPs (with finite and constant-sized state spaces) and a value function, such that (1) a value-based learner (who only has access to the value profiles of states) cannot distinguish between data coming from the two MDPs, and (2) the Bellman error of the value function differs between the two MDPs.

While encouraging and promising, their construction has a crucial caveat for our purpose: the value-function class is not realizable.19They force two states that have different optimal values to share the same features for linear function approximation. Upon further investigation, we find that this caveat is unfortunately fundamental: no information-theoretic lower bound can be shown if realizability holds in naïve tabular constructions with a constant-sized state-action space and uniform data, hence the value-profile restriction cannot be the only mechanism that induces hardness. In fact, we can prove a stronger result than we need here, for $\mathcal{S}$ and $\mathcal{A}$ that are not necessarily constant-sized:

Proposition 7.

Let $M$ be an MDP with a finite state space and $\mathcal{F}$ a realizable function class. Given a dataset where each $(s, a)$ pair receives $n$ samples, there exists an algorithm that operates on states only via their value profiles yet enjoys polynomial sample complexity.

Proof Sketch.

(See the full proof in Appendix F.) If every $s \in \mathcal{S}$ has a unique value profile, the state is perfectly decodable, and one can simply compute the optimal policy of the certainty-equivalent model. If a set of states share exactly the same value profile—w.l.o.g. consider two states, $s_1$ and $s_2$—realizability implies that $Q^*(s_1, a) = Q^*(s_2, a)$ for all $a \in \mathcal{A}$. Now consider the algorithm that treats all states with the same value profile as the same state, which essentially uses a state abstraction that is $Q^*$-irrelevant (Li et al., 2006). It is known that certainty-equivalence with $Q^*$-irrelevant abstractions is consistent and enjoys polynomial sample complexity when each state-action pair receives enough data (Li, 2009; Hutter, 2014; Jiang et al., 2015; Abel et al., 2016; Jiang, 2018). ∎
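The merging step in this sketch is a simple grouping operation. The helper below is our own illustrative code (it shows the abstraction construction, not the sample-complexity analysis):

```python
from collections import defaultdict

def abstract_states(profiles):
    """Map each state to an abstract-state id shared by all states with the
    same value profile; under realizability this abstraction is Q*-irrelevant."""
    groups = defaultdict(list)
    for s, p in profiles.items():
        groups[p].append(s)
    return {s: i for i, (_, members) in enumerate(sorted(groups.items()))
            for s in members}

# States 0 and 1 share a profile and are merged; state 2 stays separate.
phi = abstract_states({0: (0.5, 0.2), 1: (0.5, 0.2), 2: (1.0, 0.2)})
assert phi[0] == phi[1] and phi[0] != phi[2]
```

A certainty-equivalence step would then estimate the abstract model from counts aggregated through `phi`.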

Given that we have failed to obtain the lower bound, we state a conjecture below and hope to resolve it in future work.

Conjecture 8.

There exists a family of MDPs that share the same $\mathcal{S}$, $\mathcal{A}$, and $\gamma$, such that any algorithm that takes $D$ as input and can only access states via their value profiles cannot have polynomial sample complexity.

5.2 Connection to Bisimulation

As the last technical result of this paper, we show that when $\mathcal{F}$ is a space of piecewise constant functions under the partition induced by a state abstraction $\phi$, the notion of completeness (Assumption 3 with $\mathcal{G} = \mathcal{F}$) is exactly equivalent to a long-studied type of abstraction known as bisimulation (Whitt, 1978; Even-Dar & Mansour, 2003; Ravindran, 2004; Li et al., 2006).

Definition 2 (Bisimulation).

An abstraction $\phi$ is a bisimulation in an MDP $M$ if, for any $s_1, s_2 \in \mathcal{S}$ where $\phi(s_1) = \phi(s_2)$ (i.e., they are aggregated), and for all $a \in \mathcal{A}$ and $x \in \phi(\mathcal{S})$, we have $R(s_1, a) = R(s_2, a)$ and $\sum_{s' \in \phi^{-1}(x)} P(s' \mid s_1, a) = \sum_{s' \in \phi^{-1}(x)} P(s' \mid s_2, a)$.
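The two conditions in Definition 2 can be checked mechanically on a tabular MDP. The sketch below uses our own illustrative encoding (rewards and transitions as nested dicts) and tests agreement on rewards and on transition mass into every abstract block:

```python
import itertools

def is_bisimulation(phi, R, P, states, actions, tol=1e-9):
    """Check Definition 2: aggregated states must agree on rewards and on
    transition probabilities into every abstract state.
    phi: state -> block id; R[s][a]: reward; P[s][a][s']: transition prob."""
    blocks = set(phi.values())
    for s1, s2 in itertools.combinations(states, 2):
        if phi[s1] != phi[s2]:
            continue                       # only aggregated pairs are constrained
        for a in actions:
            if abs(R[s1][a] - R[s2][a]) > tol:
                return False               # reward condition violated
            for x in blocks:
                m1 = sum(P[s1][a][sp] for sp in states if phi[sp] == x)
                m2 = sum(P[s2][a][sp] for sp in states if phi[sp] == x)
                if abs(m1 - m2) > tol:
                    return False           # transition condition violated
    return True

# Toy MDP: states 0 and 1 are aggregated; both send mass 0.5 to each block.
states, actions = [0, 1, 2], [0]
phi = {0: 0, 1: 0, 2: 1}
R = {0: {0: 1.0}, 1: {0: 1.0}, 2: {0: 0.0}}
P = {0: {0: {0: 0.3, 1: 0.2, 2: 0.5}},
     1: {0: {0: 0.1, 1: 0.4, 2: 0.5}},
     2: {0: {0: 0.0, 1: 0.0, 2: 1.0}}}
assert is_bisimulation(phi, R, P, states, actions)
```

Note that individual next-state probabilities may differ (0.3 vs. 0.1 above); only the aggregated block masses must match.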

Definition 3 (Piece-wise constant function class).

Given an abstraction $\phi$, define $\mathcal{F}_\phi$ as the set of all functions that are piecewise constant under $\phi$. That is, for every $f \in \mathcal{F}_\phi$ and every $s_1, s_2 \in \mathcal{S}$ with $\phi(s_1) = \phi(s_2)$, we have $f(s_1, a) = f(s_2, a)$ for all $a \in \mathcal{A}$.

Proposition 9.

$\phi$ is a bisimulation $\Leftrightarrow$ $\mathcal{F}_\phi$ satisfies completeness (Assumption 3 with $\mathcal{G} = \mathcal{F} = \mathcal{F}_\phi$).

The “$\Rightarrow$” part is trivial, but the “$\Leftarrow$” part is less obvious. The proof shows that if $\phi$ is not a bisimulation, we can find an $f \in \mathcal{F}_\phi$ that witnesses either the reward error or the transition error, and in the latter case, the choice of $f$ achieves the maximum discrepancy in an integral probability metric (Müller, 1997) interpretation of the bisimulation condition on transition dynamics. Details are provided in Appendix E, where we prove a stronger result that relates the approximation error of bisimulation to the violation of completeness.

6 Discussions and Related Work

In this paper, we examine the common assumptions that enable finite sample guarantees for value-function approximation methods. Concretely, we provide an information-theoretic lower bound in Section 4.1, showing that not constraining the concentratability coefficient immediately precludes sample-efficient learning even with benign data. We also introduce a general family of problems of interest in empirical RL that yield low concentratability (Section 4.2).

In comparison, the necessity of completeness is still a mystery, and our investigation in Section 5.1 mostly shows the highly nontrivial nature of the lower bound (assuming it exists) as we eliminate two general proof styles. We hope these negative results can guide the search for novel constructions that reflect the fundamental difficulties of reinforcement learning in the function approximation setting.

We conclude the paper with some discussions.

Alternative assumptions to completeness  As we note in Section 5.1, even if Conjecture 8 is true, it would not imply that completeness is absolutely necessary, as other assumptions may also break the lower bound. Furthermore, additional assumptions need not be made on the value-function class itself (e.g., requiring the Bellman update to remain a contraction under approximation (Gordon, 1995; Szepesvári & Smart, 2004; Lizotte, 2011; Pires & Szepesvári, 2016)), and can instead take the form of requiring another function class to realize other objects of interest, such as state distributions (Chen et al., 2018; Liu et al., 2018). Regardless, all of these approaches face the same fundamental question about the necessity of the additional/stronger assumptions being made, to which our Conjecture 8 is an important piece if not the final answer. We hope to resolve this important open question in the future.

Related work that has not been covered  The conjectured insufficiency of realizability (Conjecture 8) is related to various undesirable phenomena in learning with bootstrapped targets, which have been of constant interest to RL researchers (Sutton, 2015; Van Hasselt et al., 2018; Lu et al., 2018). As far as we know, all existing efforts that investigate this issue are algorithm-specific (apart from Sutton & Barto (2018, Section 11.6) and the references therein, which we discussed in Section 5.1), and our information-theoretic perspective is novel.

Relaxation of Assumption 1 using the structure of $\mathcal{F}$  The concentratability coefficient is defined as a function of the MDP, even in its most sophisticated version (Farahmand et al., 2010). In Section 4.2 we discover a connection to Bellman rank (Jiang et al., 2017), which can be viewed as its counterpart for online exploration. Interestingly, Bellman rank depends both on the environmental dynamics and on the function class $\mathcal{F}$, and in some cases the latter dependence is crucial to obtaining low-rankness (e.g., for Linear Quadratic Regulators; see their Proposition 5). Similarly, we may improve the definition of concentratability and make it more widely applicable by incorporating $\mathcal{F}$ into the definition. In Appendix G, we discuss some preliminary ideas based on the theoretical results in this paper.

Acknowledgements

We gratefully acknowledge the constructive comments from Alekh Agarwal and Anonymous Reviewer #3.

References

  • Abel et al. (2016) Abel, D., Hershkowitz, D. E., and Littman, M. L. Near optimal behavior via approximate state abstraction. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2915–2923. JMLR.org, 2016.
  • Agrawal (2018) Agrawal, S. IEOR 8100: Reinforcement Learning. Lecture 4: Approximate Dynamic Programming. Columbia University, 2018. https://ieor8100.github.io/rl/docs/Lecture%204%20-%20approximate%20DP.pdf.
  • Antos et al. (2008) Antos, A., Szepesvári, C., and Munos, R. Learning near-optimal policies with bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
  • Auer et al. (2002) Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.
  • Bertsekas & Tsitsiklis (1996) Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
  • Chen et al. (2018) Chen, Y., Li, L., and Wang, M. Scalable bilinear learning using state and action features. arXiv preprint arXiv:1804.10328, 2018.
  • Dai et al. (2018) Dai, B., Shaw, A., Li, L., Xiao, L., He, N., Liu, Z., Chen, J., and Song, L. Sbeed: Convergent reinforcement learning with nonlinear function approximation. In International Conference on Machine Learning, pp. 1133–1142, 2018.
  • Dann et al. (2018) Dann, C., Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J., and Schapire, R. E. On Oracle-Efficient PAC RL with Rich Observations. In Advances in Neural Information Processing Systems, pp. 1429–1439, 2018.
  • Ernst et al. (2005) Ernst, D., Geurts, P., and Wehenkel, L. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503–556, 2005.
  • Even-Dar & Mansour (2003) Even-Dar, E. and Mansour, Y. Approximate equivalence of Markov decision processes. In Learning Theory and Kernel Machines, pp. 581–594. 2003.
  • Farahmand (2011) Farahmand, A.-m. Regularization in Reinforcement Learning. PhD thesis, University of Alberta, 2011.
  • Farahmand et al. (2010) Farahmand, A.-m., Szepesvári, C., and Munos, R. Error Propagation for Approximate Policy and Value Iteration. In Advances in Neural Information Processing Systems, pp. 568–576, 2010.
  • Farahmand et al. (2016) Farahmand, A.-m., Ghavamzadeh, M., Szepesvári, C., and Mannor, S. Regularized policy iteration with nonparametric function spaces. The Journal of Machine Learning Research, 17(1):4809–4874, 2016.
  • Gordon (1995) Gordon, G. J. Stable function approximation in dynamic programming. In Proceedings of the twelfth international conference on machine learning, pp. 261–268, 1995.
  • Hutter (2014) Hutter, M. Extreme state aggregation beyond mdps. In International Conference on Algorithmic Learning Theory, pp. 185–199. Springer, 2014.
  • Jiang (2018) Jiang, N. CS 598: Notes on State Abstractions. University of Illinois at Urbana-Champaign, 2018. http://nanjiang.cs.illinois.edu/files/cs598/note4.pdf.
  • Jiang et al. (2015) Jiang, N., Kulesza, A., and Singh, S. Abstraction Selection in Model-based Reinforcement Learning. In Proceedings of the 32nd International Conference on Machine Learning, pp. 179–188, 2015.
  • Jiang et al. (2017) Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J., and Schapire, R. E. Contextual Decision Processes with low Bellman rank are PAC-learnable. In International Conference on Machine Learning, 2017.
  • Johnson et al. (2016) Johnson, M., Hofmann, K., Hutton, T., and Bignell, D. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), pp. 4246, 2016.
  • Kakade & Langford (2002) Kakade, S. and Langford, J. Approximately Optimal Approximate Reinforcement Learning. In Proceedings of the 19th International Conference on Machine Learning, volume 2, pp. 267–274, 2002.
  • Kakade (2003) Kakade, S. M. On the sample complexity of reinforcement learning. PhD thesis, University College London, 2003.
  • Krishnamurthy et al. (2016) Krishnamurthy, A., Agarwal, A., and Langford, J. PAC reinforcement learning with rich observations. In Advances in Neural Information Processing Systems, pp. 1840–1848, 2016.
  • Lagoudakis & Parr (2003) Lagoudakis, M. G. and Parr, R. Least-squares policy iteration. The Journal of Machine Learning Research, 4:1107–1149, 2003.
  • Lazaric et al. (2012) Lazaric, A., Ghavamzadeh, M., and Munos, R. Finite-sample analysis of least-squares policy iteration. The Journal of Machine Learning Research, 13(1):3041–3074, 2012.
  • Li (2009) Li, L. A unifying framework for computational reinforcement learning theory. PhD thesis, Rutgers, The State University of New Jersey, 2009.
  • Li et al. (2006) Li, L., Walsh, T. J., and Littman, M. L. Towards a unified theory of state abstraction for MDPs. In Proceedings of the 9th International Symposium on Artificial Intelligence and Mathematics, pp. 531–539, 2006.
  • Liu et al. (2018) Liu, Q., Li, L., Tang, Z., and Zhou, D. Breaking the curse of horizon: Infinite-horizon off-policy estimation. In Advances in Neural Information Processing Systems, pp. 5361–5371, 2018.
  • Lizotte (2011) Lizotte, D. J. Convergent fitted value iteration with linear function approximation. In Advances in Neural Information Processing Systems, pp. 2537–2545, 2011.
  • Lu et al. (2018) Lu, T., Schuurmans, D., and Boutilier, C. Non-delusional q-learning and value-iteration. In Advances in Neural Information Processing Systems, pp. 9971–9981, 2018.
  • Maillard et al. (2010) Maillard, O.-A., Munos, R., Lazaric, A., and Ghavamzadeh, M. Finite-sample analysis of bellman residual minimization. In Proceedings of 2nd Asian Conference on Machine Learning, pp. 299–314, 2010.
  • Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
  • Müller (1997) Müller, A. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
  • Munos (2003) Munos, R. Error bounds for approximate policy iteration. In ICML, volume 3, pp. 560–567, 2003.
  • Munos (2007) Munos, R. Performance bounds in l_p-norm for approximate value iteration. SIAM journal on control and optimization, 46(2):541–561, 2007.
  • Munos & Szepesvári (2008) Munos, R. and Szepesvári, C. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9(May):815–857, 2008.
  • Pires & Szepesvári (2012) Pires, B. A. and Szepesvári, C. Statistical linear estimation with penalized estimators: an application to reinforcement learning. arXiv preprint arXiv:1206.6444, 2012.
  • Pires & Szepesvári (2016) Pires, B. Á. and Szepesvári, C. Policy error bounds for model-based reinforcement learning with factored linear models. In Conference on Learning Theory, pp. 121–151, 2016.
  • Ravindran (2004) Ravindran, B. An algebraic approach to abstraction in reinforcement learning. PhD thesis, University of Massachusetts Amherst, 2004.
  • Riedmiller (2005) Riedmiller, M. Neural fitted q iteration–first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Springer, 2005.
  • Singh & Yee (1994) Singh, S. and Yee, R. An upper bound on the loss from approximate optimal-value functions. Machine Learning, 16(3):227–233, 1994.
  • Sun et al. (2019) Sun, W., Jiang, N., Krishnamurthy, A., Agarwal, A., and Langford, J. Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches. In Conference on Learning Theory, 2019.
  • Sutton (2015) Sutton, R. Introduction to reinforcement learning with function approximation. In Tutorial at the Conference on Neural Information Processing Systems, 2015.
  • Sutton & Barto (1998) Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, March 1998. ISBN 0-262-19398-1.
  • Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT press, 2018.
  • Szepesvári (2010) Szepesvári, C. Algorithms for reinforcement learning. Synthesis lectures on artificial intelligence and machine learning, 4(1):1–103, 2010.
  • Szepesvári & Munos (2005) Szepesvári, C. and Munos, R. Finite time bounds for sampling based fitted value iteration. In Proceedings of the 22nd international conference on Machine learning, pp. 880–887. ACM, 2005.
  • Szepesvári & Smart (2004) Szepesvári, C. and Smart, W. D. Interpolation-based q-learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 100. ACM, 2004.
  • Tosatto et al. (2017) Tosatto, S., Pirotta, M., D’Eramo, C., and Restelli, M. Boosted fitted q-iteration. In Proceedings of the 34th International Conference on Machine Learning, pp. 3434–3443. JMLR.org, 2017.
  • Tsitsiklis & Van Roy (1997) Tsitsiklis, J. N. and Van Roy, B. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5), 1997.
  • Van Hasselt et al. (2018) Van Hasselt, H., Doron, Y., Strub, F., Hessel, M., Sonnerat, N., and Modayil, J. Deep reinforcement learning and the deadly triad. arXiv preprint arXiv:1812.02648, 2018.
  • Van Roy (1994) Van Roy, B. Feature-based methods for large scale dynamic programming. PhD thesis, Massachusetts Institute of Technology, 1994.
  • Whitt (1978) Whitt, W. Approximations of dynamic programs, I. Mathematics of Operations Research, 3(3):231–243, 1978.
  • Yang et al. (2019) Yang, Z., Xie, Y., and Wang, Z. A Theoretical Analysis of Deep Q-Learning. arXiv preprint arXiv:1901.00137, 2019.
  • Yu (1997) Yu, B. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pp. 423–435. Springer, 1997.

Appendix A Proof of Proposition 1

Claim 1:

Since $\mathcal{T} Q^* = Q^*$, $Q^*$ is a Bayes-optimal regressor for targets generated by itself, i.e., for any $f \in \mathcal{F}$, $L(f; Q^*) \ge L(Q^*; Q^*)$. Therefore, $Q^* \in \operatorname{argmin}_{f \in \mathcal{F}} L(f; Q^*)$, where the realizability condition gives us that $Q^* \in \mathcal{F}$. Hence the FQI update maps $Q^*$ to itself, which means $Q^*$ is a fixed point of FQI.

Claim 2:

Since $\mathcal{T}\mathcal{F} \subseteq \mathcal{F}$, for any $f \in \mathcal{F}$ we can always choose $g = \mathcal{T} f$ in the inner maximization. Therefore, for any $f \in \mathcal{F}$, $L(f; f) - L(\mathcal{T} f; f) \ge 0$, which further means $\max_{g \in \mathcal{F}} \big(L(f; f) - L(g; f)\big) \ge 0$, and the value of the optimization problem is non-negative. If we have $f_{t+1} = f_t = f$ for some $t$ in FQI, then we know that $f \in \operatorname{argmin}_{f' \in \mathcal{F}} L(f'; f)$, and $\max_{g \in \mathcal{F}} \big(L(f; f) - L(g; f)\big) = 0$. This tells us that $f$ achieves the optimal (minimax) value of $0$, so $f$ is a solution to Eq.(6).

Claim 3:

We prove this by contradiction. If FQI does not oscillate, it reaches some fixed point $f$; the previous claim gives us that $f$ is a solution to Eq.(6), with the minimax objective value being $0$. This contradicts the premise of the claim. ∎
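In the tabular special case, where the least-squares fit is exact, the FQI update reduces to one application of the Bellman optimality operator, and the fixed-point property of Claim 1 can be observed numerically. The sketch below uses our own array encoding of a random MDP, not the paper's general setting:

```python
import numpy as np

def bellman_update(Q, R, P, gamma):
    """One Bellman optimality backup: (TQ)[s,a] = R[s,a] + gamma * E_{s'}[max_a' Q[s',a']]."""
    return R + gamma * P @ Q.max(axis=1)   # P has shape (S, A, S), so P @ V -> (S, A)

# Random tabular MDP with 3 states and 2 actions.
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
R = rng.random((S, A))
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)          # normalize rows into distributions

# Iterating the backup converges to Q* (a gamma-contraction), which is then a fixed point.
Q = np.zeros((S, A))
for _ in range(2000):
    Q = bellman_update(Q, R, P, gamma)
assert np.allclose(bellman_update(Q, R, P, gamma), Q, atol=1e-8)
```

With a restricted function class, the fit is no longer exact and the iteration can oscillate, which is exactly the situation Claim 3 addresses.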

Appendix B Example of Low Concentratability in Rich-Observation Problems

Definition 4 (Reactive POMDPs (Jiang et al., 2017)).

A reactive POMDP is a decision process specified by a finite hidden state space $\mathcal{Z}$, an (arbitrarily large) observation space $\mathcal{X}$, an action space $\mathcal{A}$, hidden-state dynamics $P(z' \mid z, a)$, an initial hidden-state distribution $P_0$, an emission process $O(x \mid z)$, a reward function $R$, and a discount factor $\gamma$. A trajectory is generated as $z_1 \sim P_0$, $x_1 \sim O(\cdot \mid z_1)$, $a_1$, $r_1$, $z_2 \sim P(\cdot \mid z_1, a_1)$, $x_2 \sim O(\cdot \mid z_2)$, $\ldots$, where the hidden states $z_t$ are not observable to the agent. Moreover, the $Q^*$ function of this POMDP is assumed to depend only on the last observation $x_t$, hence “reactive” POMDPs. We make a further simplification by assuming that the observations are indeed Markov (which implies reactive $Q^*$).
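The generative process above can be sketched as follows (array shapes and names are our own choices; the agent-facing data would contain only the observations and actions, never the hidden states):

```python
import numpy as np

def sample_trajectory(T, P_z, O, pi, rng):
    """Roll out T steps of a reactive POMDP.
    P_z[z, a, z']: hidden dynamics; O[z, x]: emission; pi maps an observation to an action."""
    z = rng.integers(P_z.shape[0])          # initial hidden state (uniform here for simplicity)
    traj = []
    for _ in range(T):
        x = rng.choice(O.shape[1], p=O[z])  # observation emitted from the hidden state
        a = pi(x)                           # reactive policy: depends on the last observation only
        traj.append((x, a))
        z = rng.choice(P_z.shape[2], p=P_z[z, a])
    return traj

# Tiny instance: 2 hidden states, 3 observations, 2 actions.
P_z = np.zeros((2, 2, 2)); P_z[:, :, 0] = 1.0           # always return to hidden state 0
O = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])
traj = sample_trajectory(5, P_z, O, lambda x: x % 2, np.random.default_rng(0))
assert len(traj) == 5
```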

Proposition 10 (Formal version of Proposition 5).

Let the environment be a reactive POMDP as defined above, where the underlying hidden state space $\mathcal{Z}$ is finite. The (Markov) observation space $\mathcal{X}$ is finite but can be arbitrarily large. Assume that the number of admissible distributions is finite.20This assumption is only introduced to get around some technical subtleties, and the resulting upper bound on $C$ has no dependence on the number of admissible distributions. Then there exists a distribution $\mu$ that can be expressed as a mixture of admissible distributions (more accurately, their marginals over states), such that $C \le |\mathcal{Z}|\,|\mathcal{A}|$ when $\mu$ is used as the data distribution (recall the definition of $C$ in Assumption 1).

Proof.

The proof contains two parts: the first part shows that a certain matrix formed by the admissible distributions has low rank, and the second part exploits this low-rankness to construct the mixture distribution described in the proposition statement and shows that it yields a low concentratability coefficient $C$.

By definition, an admissible (state-action) distribution takes the form of $\eta_h^\pi$, the distribution over state-action pairs induced by rolling in to time step $h$ with policy $\pi$. Let $\nu_h^\pi$ denote the corresponding marginal distribution over states (i.e., observations), which we call an admissible state distribution. Note that $\eta_h^\pi(x, a) \le \nu_h^\pi(x)$.

Let there be a total of $m$ admissible state distributions (we assumed this number to be finite). Order them in an arbitrary manner and let the $i$-th admissible state distribution be $\nu_i$, for $i = 1, \ldots, m$. Stack these distributions as a matrix

$\Gamma \in \mathbb{R}^{m \times |\mathcal{X}|}, \qquad \Gamma_{i, x} = \nu_i(x),$

where each row is indexed by an admissible state distribution and each column is indexed by a state, and each row sums to $1$.

In reactive POMDPs, we can also define admissible distributions over hidden states $\mathcal{Z}$. For any $\pi$ and $h$, with abuse of notation, we use $\nu_h^\pi(z)$ and $\eta_h^\pi(z, a)$ to denote the distribution over hidden states (and actions) at step $h$ induced by $\pi$. For any $\pi$, $h$, and $x \in \mathcal{X}$, the distribution over observations can be decomposed as $\nu_h^\pi(x) = \sum_{z \in \mathcal{Z}} \nu_h^\pi(z)\, O(x \mid z)$, where $O$ is the emission process and is independent of the policy and the timestep. Therefore, every row of $\Gamma$ is a convex combination of the $|\mathcal{Z}|$ rows of the emission matrix $[O(x \mid z)]_{z, x}$.

From the above, we conclude that $\mathrm{rank}(\Gamma) \le |\mathcal{Z}|$.
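This rank bound admits a quick numerical sanity check (the sizes below are arbitrary choices of ours): however many observation distributions we stack, if each one factors through a fixed emission matrix, the rank never exceeds the number of hidden states.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_obs, n_dists = 3, 50, 20

# Emission matrix O[z, x] = P(x | z): each hidden state's observation distribution.
emission = rng.random((n_hidden, n_obs))
emission /= emission.sum(axis=1, keepdims=True)

# Arbitrary admissible hidden-state distributions (rows sum to 1).
hidden_dists = rng.random((n_dists, n_hidden))
hidden_dists /= hidden_dists.sum(axis=1, keepdims=True)

# Each admissible observation distribution factors through the emission matrix.
Gamma = hidden_dists @ emission
assert np.linalg.matrix_rank(Gamma) <= n_hidden
assert np.allclose(Gamma.sum(axis=1), 1.0)  # rows are valid distributions
```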

In the rest of the proof we describe how to construct the mixture distribution and show that it yields a low concentratability coefficient $C$. First, we factorize $\Gamma$ as the product of two matrices with full column rank and full row rank, respectively:

$\Gamma = U V, \qquad U \in \mathbb{R}^{m \times d}, \quad V \in \mathbb{R}^{d \times |\mathcal{X}|}.$

We know that $d = \mathrm{rank}(\Gamma) \le |\mathcal{Z}|$.

Now let us focus on $U$, where $u_i^\top$ is its $i$-th row. Let $B \in \mathbb{R}^{d \times d}$ consist of the $d$ rows of $U$ that maximize the absolute value of the determinant (i.e., the spanned volume). That is,

$B = \operatorname{argmax}_{\{i_1 < \cdots < i_d\}} \big|\det\big([u_{i_1}, \ldots, u_{i_d}]^\top\big)\big|.$

Since $U$ has full column rank, $|\det(B)| > 0$ and $B$ is a full-rank square matrix. As a result, any row of $U$ is a linear combination of the rows of $B$: for each $i$ there exist coefficients $c_{i,1}, \ldots, c_{i,d}$ such that $u_i^\top = \sum_{j=1}^{d} c_{i,j}\, b_j^\top$, where $b_j^\top$ is the $j$-th row of $B$. We claim that $|c_{i,j}| \le 1$ always holds.

This can be proved by contradiction. Assume that $|c_{i,j}| > 1$ for some $i$ and $j$, and consider the matrix $B'$ obtained from $B$ by replacing its $j$-th row $b_j^\top$ with $u_i^\top$. Since $B$ is volume-maximizing, the volume of $B'$ should not increase. Calculating the determinant by multilinearity, however, we get $|\det(B')| = |c_{i,j}| \cdot |\det(B)| > |\det(B)|$, which causes a contradiction.
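The volume-maximization argument is easy to verify numerically. The sketch below (brute force over all $d$-row subsets, with sizes of our own choosing) checks that every row's coefficients in the volume-maximizing basis are bounded by 1:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, d = 12, 3
U = rng.standard_normal((n, d))            # rows u_i; full column rank w.h.p.

# Brute-force the d-row subset with maximum |det| (the "spanned volume").
best = max(combinations(range(n), d),
           key=lambda idx: abs(np.linalg.det(U[list(idx)])))
B = U[list(best)]

# Express every row of U in the basis B: u_i = coeffs[i] @ B.
coeffs = U @ np.linalg.inv(B)
assert np.all(np.abs(coeffs) <= 1 + 1e-9)  # the claimed bound |c_{i,j}| <= 1
```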

Finally, we construct the data distribution $\mu$ as a mixture of admissible distributions. Let $\nu_{(1)}, \ldots, \nu_{(d)}$ be the admissible state distributions corresponding to the rows of $B$, and let $\mu(x) = \frac{1}{d} \sum_{j=1}^{d} \nu_{(j)}(x)$ and $\mu(x, a) = \mu(x) / |\mathcal{A}|$. It is easy to check that $\mu$ is a valid distribution. Now recall that for any admissible state distribution $\nu_i$, there exist coefficients with $|c_{i,j}| \le 1$ such that $u_i^\top = \sum_{j=1}^{d} c_{i,j}\, b_j^\top$. Since $\nu_i = u_i^\top V$ and $\nu_{(j)} = b_j^\top V$, comparing the $x$-th coordinate of both sides, we have

$\nu_i(x) = \sum_{j=1}^{d} c_{i,j}\, \nu_{(j)}(x) \le \sum_{j=1}^{d} \nu_{(j)}(x) = d\, \mu(x).$

The inequality follows from the non-negativity of probabilities. Hence, for any admissible state-action distribution $\eta_i$ with state marginal $\nu_i$,

$\eta_i(x, a) \le \nu_i(x) \le d\, \mu(x) = d\, |\mathcal{A}|\, \mu(x, a),$

so $C \le d\, |\mathcal{A}| \le |\mathcal{Z}|\, |\mathcal{A}|$. ∎
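Putting the two halves of the proof together, the covering property of the constructed mixture can also be checked numerically. The sketch below builds synthetic rank-$d$ distributions (our own construction; $d$ plays the role of $|\mathcal{Z}|$), selects the volume-maximizing rows, and verifies that their uniform mixture dominates every admissible distribution up to a factor of $d$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
d, n_obs, m = 3, 8, 30

# m admissible distributions, each a convex combination of d base distributions.
base = rng.random((d, n_obs)); base /= base.sum(axis=1, keepdims=True)
W = rng.random((m, d)); W /= W.sum(axis=1, keepdims=True)
Gamma = W @ base                            # rank(Gamma) <= d

# Factorize Gamma = U @ V with U of full column rank (via a thin SVD).
Uu, s, Vt = np.linalg.svd(Gamma, full_matrices=False)
U = Uu[:, :d] * s[:d]

# The uniform mixture of the d volume-maximizing rows covers every row with ratio <= d.
best = max(combinations(range(m), d),
           key=lambda idx: abs(np.linalg.det(U[list(idx)])))
mu = Gamma[list(best)].mean(axis=0)
assert np.all(Gamma <= d * mu + 1e-9)       # nu_i(x) <= d * mu(x) for all i, x
```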