Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction

06/03/2019 ∙ by Aviral Kumar, et al. ∙ UC Berkeley & Google

Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.


1 Introduction

One of the primary drivers of the success of machine learning methods in open-world perception settings, such as computer vision He et al. (2016) and NLP Devlin et al. (2018), has been the ability of high-capacity function approximators, such as deep neural networks, to learn generalizable models from large amounts of data. Reinforcement learning (RL) has proven comparatively difficult to scale to unstructured real-world settings because most RL algorithms require active data collection. As a result, RL algorithms can learn complex behaviors in simulation, where data collection is straightforward, but real-world performance is limited by the expense of active data collection. In some domains, such as autonomous driving Yu et al. (2018) and recommender systems (Bennett et al., 2007), previously collected datasets are plentiful. Algorithms that can utilize such datasets effectively would not only make real-world RL more practical, but would also enable substantially better generalization by incorporating diverse prior experience.

In principle, off-policy RL algorithms can leverage this data; however, in practice, off-policy algorithms are limited in their ability to learn entirely from off-policy data. Recent off-policy RL methods (e.g., (Haarnoja et al., 2018; Munos et al., 2016; Kalashnikov et al., 2018; Espeholt et al., 2018)) have demonstrated sample-efficient performance on complex tasks in robotics Kalashnikov et al. (2018) and simulated environments Todorov et al. (2012). However, these methods can still fail to learn when presented with arbitrary off-policy data without the opportunity to collect more experience from the environment. This issue persists even when the off-policy data comes from effective expert policies, which in principle should address any exploration challenge (de Bruin et al., 2015; Fujimoto et al., 2018a; Fu et al., 2019). This sensitivity to the training data distribution is a limitation of practical off-policy RL algorithms, and one would hope that an off-policy algorithm should be able to learn reasonable policies through training on static datasets before being deployed in the real world.

In this paper, we aim to develop off-policy, value-based RL methods that can learn from large, static datasets. As we show, a crucial challenge in applying value-based methods to off-policy scenarios arises in the bootstrapping process: when training from off-policy data, the Q-function is evaluated on out-of-distribution action inputs for computing the backup. This may introduce errors in the Q-function, and because the algorithm is unable to collect new data to remedy those errors, training becomes unstable and potentially divergent. Our primary contribution is an analysis of error accumulation in the bootstrapping process due to out-of-distribution inputs and a practical way of addressing this error. First, we formalize and analyze the reasons for instability and poor performance when learning from off-policy data. We show that, through careful action selection, error propagation through the Q-function can be mitigated. We then propose a principled algorithm called bootstrapping error accumulation reduction (BEAR) to control bootstrapping error in practice, which uses the notion of support-set matching to prevent error accumulation. Through systematic experiments, we show the effectiveness of our method on continuous-control MuJoCo tasks, with a variety of off-policy datasets generated by random, suboptimal, or optimal policies. BEAR is consistently robust to the training dataset, matching or exceeding the state-of-the-art in all cases, whereas existing algorithms only perform well for specific datasets.

2 Related Work

In this work, we study off-policy reinforcement learning with static datasets. Errors arising from inadequate sampling, distributional shift, and function approximation have been rigorously studied as “error propagation” in approximate dynamic programming (ADP) (Bertsekas and Tsitsiklis, 1996; Munos, 2003; Farahmand et al., 2010; Scherrer et al., 2015). These works often study how Bellman errors accumulate and propagate to nearby states via bootstrapping. In this work, we build upon tools from this analysis to show that performing Bellman backups on static datasets leads to error accumulation due to out-of-distribution values. Our approach is motivated by reducing the rate of error propagation between states.

Our approach constrains actor updates so that the actions remain in the support of the training dataset distribution. Several works have explored similar ideas in the context of off-policy learning in online settings. Kakade and Langford (2002) show that large policy updates can be destructive, and propose a conservative policy iteration scheme which constrains actor updates to be small for provably convergent learning. Grau-Moya et al. (2019) use a learned prior over actions in the maximum entropy RL framework (Levine, 2018) and justify it as a regularizer based on mutual information. However, none of these methods use static datasets. Importance-sampling-based distribution re-weighting Munos et al. (2016); Gelada and Bellemare (2019); Precup et al. (2001); Mahmood et al. (2015) has also been explored, primarily in the context of off-policy policy evaluation.

Most closely related to our work is batch-constrained Q-learning (BCQ) (Fujimoto et al., 2018a), which also discusses instability arising from previously unseen actions. Fujimoto et al. (2018a) show convergence properties of an action-constrained Bellman backup operator in tabular, error-free settings. We prove stronger results under approximation errors and provide a bound on the suboptimality of the solution. This is crucial as it drives the design choices for a practical algorithm. As a consequence, although we experimentally find that (Fujimoto et al., 2018a) outperforms standard Q-learning methods when the off-policy data is collected by an expert, BEAR outperforms Fujimoto et al. (2018a) when the off-policy data is collected by a suboptimal policy, as is common in real-life applications. Empirically, we find BEAR achieves stronger and more consistent results than BCQ across a wide variety of datasets and environments. As we explain below, the BCQ constraint is too aggressive; BCQ generally fails to substantially improve over the behavior policy, while our method actually improves when the data collection policy is suboptimal or random.

3 Background

We represent the environment as a Markov decision process (MDP) defined by a tuple $(\mathcal{S}, \mathcal{A}, P, R, \rho_0, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P(s'|s,a)$ is the transition distribution, $\rho_0(s)$ is the initial state distribution, $R(s,a)$ is the reward function, and $\gamma \in (0,1)$ is the discount factor. The goal in RL is to find a policy $\pi(a|s)$ that maximizes the expected cumulative discounted reward, which is also known as the return. The notation $\mu_\pi(s)$ denotes the discounted state marginal of a policy $\pi$, defined as the average state visited by the policy, $\sum_{t=1}^{\infty} \gamma^t p(s_t = s \mid \pi)$. $P^\pi$ is shorthand for the transition matrix from $s$ to $s'$ following a certain policy $\pi$, $P^\pi(s'|s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[P(s'|s,a)]$.

Q-learning learns the optimal state-action value function $Q^*(s,a)$, which represents the expected cumulative discounted reward starting in $s$, taking action $a$, and then acting optimally thereafter. The optimal policy can be recovered from $Q^*$ by choosing the maximizing action. Q-learning algorithms are based on iterating the Bellman optimality operator $\mathcal{T}$, defined as

$$(\mathcal{T}\hat{Q})(s,a) := R(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot|s,a)}\big[\max_{a'} \hat{Q}(s',a')\big].$$
When the state space is large, we represent $\hat{Q}$ as a hypothesis from the set of function approximators $\mathcal{F}$ (e.g., neural networks). In theory, the estimate of the Q-function is updated by projecting $\mathcal{T}\hat{Q}$ into $\mathcal{F}$ (i.e., minimizing the mean squared Bellman error $\mathbb{E}_{\nu}[(\hat{Q} - \mathcal{T}\hat{Q})^2]$, where $\nu$ is the state occupancy measure under the behaviour policy). This is also referred to as Q-iteration. In practice, an empirical estimate of $\mathcal{T}\hat{Q}$ is formed with samples, and treated as a supervised regression target to form the next approximate Q-function iterate. In large action spaces (e.g., continuous), the maximization $\max_{a'}\hat{Q}(s',a')$ is generally intractable. Actor-critic methods Sutton and Barto (2018); Fujimoto et al. (2018b); Haarnoja et al. (2018) address this by additionally learning a policy that maximizes the Q-function. In this work, we study off-policy learning from a static dataset of transitions $\mathcal{D} = \{(s, a, s', r)\}$, collected under an unknown behavior policy $\beta$. We denote the distribution over states and actions induced by $\beta$ as $\mu(s,a)$.
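To make the projection step concrete, the sketch below performs one empirical Q-iteration step on a sampled batch, treating the Bellman target as a supervised regression target. This is a minimal sketch, not the paper's implementation: `q_net`, `target_q_net`, and `policy` are assumed callable modules, and the actor is used to approximate the maximization over actions, as in actor-critic methods.

```python
import torch
import torch.nn as nn

def q_iteration_step(q_net, target_q_net, policy, batch, optimizer, gamma=0.99):
    # batch of transitions sampled from the static dataset D
    s, a, r, s_next, done = batch

    with torch.no_grad():
        # Approximate max_{a'} Q(s', a') with actions proposed by the policy.
        a_next = policy(s_next)
        target = r + gamma * (1.0 - done) * target_q_net(s_next, a_next).squeeze(-1)

    # Supervised regression of Q(s, a) onto the empirical Bellman target.
    pred = q_net(s, a).squeeze(-1)
    loss = nn.functional.mse_loss(pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```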

4 Out-of-Distribution Actions in Q-Learning

Figure 1: Performance of SAC on HalfCheetah-v2 (return (left) and Q-values (right)) with off-policy expert data, as a function of the number of training samples. Note the large discrepancy between returns (which are negative) and Q-values (which have large positive values), which is not solved with additional samples.

Q-learning methods often fail to learn on static, off-policy data, as shown in Figure 1. At first glance, this resembles overfitting, but increasing the size of the static dataset does not rectify the problem, suggesting the issue is more complex. We can understand the source of this instability by examining the form of the Bellman backup. Although minimizing the mean squared Bellman error corresponds to a supervised regression problem, the targets for this regression are themselves derived from the current Q-function estimate. The targets are calculated by maximizing the learned -values with respect to the action at the next state. However, the -function estimator is only reliable on inputs from the same distribution as its training set. As a result, naïvely maximizing the value may evaluate the estimator on actions that lie far outside of the training distribution, resulting in pathological values that incur large error. We refer to these actions as out-of-distribution (OOD) actions.

Formally, let $\zeta_k(s,a) = |Q_k(s,a) - Q^*(s,a)|$ denote the total error at iteration $k$ of Q-learning, and let $\delta_k(s,a) = |Q_k(s,a) - \mathcal{T}Q_{k-1}(s,a)|$ denote the current Bellman error. Then, we have $\zeta_k(s,a) \le \delta_k(s,a) + \gamma \max_{a'} \mathbb{E}_{s' \sim P(\cdot|s,a)}[\zeta_{k-1}(s',a')]$. In other words, errors from $(s', a')$ are discounted, then accumulated with new errors $\delta_k(s,a)$ from the current iteration. We expect $\delta_k(s,a)$ to be high on OOD states and actions, as errors at these state-action pairs are never directly minimized while training.
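As a hedged numerical illustration of this recursion (not taken from the paper), the snippet below injects a single large Bellman error into a small deterministic chain, standing in for an OOD bootstrapping error, and rolls the bound forward. The injected error is discounted at each step but re-enters every backup that bootstraps from it.

```python
import numpy as np

# Toy illustration: a 5-state deterministic chain, one action per state.
# delta[s] is the per-iteration Bellman error; a large error is placed at the
# last state (standing in for an OOD action) and zero elsewhere.
gamma = 0.99
num_states, iters = 5, 50
delta = np.zeros(num_states)
delta[-1] = 10.0                      # hypothetical OOD bootstrapping error

zeta = np.zeros(num_states)           # total error bound per state
for _ in range(iters):
    # state s bootstraps from s+1; the last state bootstraps from itself
    zeta_next = np.append(zeta[1:], zeta[-1])
    zeta = delta + gamma * zeta_next  # zeta_k <= delta_k + gamma * zeta_{k-1}(s')

print(np.round(zeta, 2))
# The injected error contaminates every upstream state, approaching
# delta / (1 - gamma) at its source rather than staying localized.
```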

To mitigate bootstrapping error, we can restrict the policy to ensure that it outputs actions that lie in the support of the training distribution. This is distinct from previous work (e.g., BCQ (Fujimoto et al., 2018a)), which implicitly constrains the distribution of the learned policy to be close to the behavior policy, similarly to behavioral cloning Schaal (1999). While this is sufficient to ensure that actions lie in the training set with high probability, it is overly restrictive. For example, if the behavior policy is close to uniform, the learned policy will behave randomly, resulting in poor performance, even when the data is sufficient to learn a strong policy (see Figure 2 for an illustration). Our analysis instead reveals a tradeoff between staying within the data distribution and finding a suboptimal solution when the constraint is too restrictive. Our analysis motivates us to restrict the support of the learned policy, but not the probabilities of the actions lying within the support. This avoids evaluating the Q-function estimator on OOD actions, but remains flexible enough to find a performant policy. Our proposed algorithm leverages this insight.

4.1 Distribution-Constrained Backups

In this section, we define and analyze a backup operator that restricts the set of policies used in the maximization of the Q-function, and we derive performance bounds which depend on the restricted set. This provides motivation for constraining policy support to the data distribution. We begin with the definition of a distribution-constrained operator:

Definition 4.1 (Distribution-constrained operators).

Given a set of policies $\Pi$, the distribution-constrained backup operator is defined as:

$$\mathcal{T}^{\Pi}Q(s,a) := R(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot|s,a)}\Big[\max_{\pi \in \Pi} \mathbb{E}_{a' \sim \pi(\cdot|s')}[Q(s',a')]\Big].$$
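The following tabular sketch (an illustration under assumed toy dynamics, not the paper's code) contrasts the standard Bellman backup with a distribution-constrained backup that maximizes only over actions a behavior-supported policy may select; the unsupported action in state 1 is ignored by the constrained backup, mirroring the suboptimality-bias/OOD tradeoff discussed below.

```python
import numpy as np

def standard_backup(Q, R, P, gamma):
    # T Q(s,a) = R(s,a) + gamma * E_{s'}[ max_{a'} Q(s', a') ]
    return R + gamma * np.einsum('sat,t->sa', P, Q.max(axis=1))

def distribution_constrained_backup(Q, R, P, gamma, support):
    # T^Pi Q(s,a) = R(s,a) + gamma * E_{s'}[ max_{a' in support(s')} Q(s', a') ]
    # `support` is a boolean (S, A) mask of actions with non-negligible
    # behavior-policy density; the maximization is restricted to that support.
    masked_Q = np.where(support, Q, -np.inf)
    return R + gamma * np.einsum('sat,t->sa', P, masked_Q.max(axis=1))

# Tiny example: 2 states, 2 actions, deterministic transitions.
S, A = 2, 2
P = np.zeros((S, A, S)); P[0, 0, 1] = P[0, 1, 0] = P[1, :, 1] = 1.0
R = np.array([[0.0, 1.0], [0.0, 2.0]])            # action 1 in state 1 looks attractive
support = np.array([[True, True], [True, False]]) # ...but it is out of support
Q_c, Q_u = np.zeros((S, A)), np.zeros((S, A))
for _ in range(100):
    Q_c = distribution_constrained_backup(Q_c, R, P, 0.9, support)
    Q_u = standard_backup(Q_u, R, P, 0.9)
print("constrained:\n", Q_c, "\nunconstrained:\n", Q_u)
# The constrained backup never propagates value through the unsupported action,
# trading some return for never querying Q on out-of-support inputs.
```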

This backup operator satisfies properties of the standard Bellman backup, such as convergence to a fixed point, as discussed in Appendix A. To analyze the (sub)optimality of performing this backup under approximation error, we first quantify two sources of error. The first is a suboptimality bias. The optimal policy may lie outside the policy constraint set, and thus a suboptimal solution will be found. The second arises from distribution shift between the training distribution and the policies used for backups. This formalizes the notion of OOD actions. To capture suboptimality in the final solution, we define a suboptimality constant, which measures how far $\mathcal{T}^{\Pi}Q^*$ is from $\mathcal{T}Q^*$.

Definition 4.2 (Suboptimality constant).

The suboptimality constant is defined as:

$$\alpha(\Pi) = \max_{s,a}\, \big|\mathcal{T}^{\Pi}Q^*(s,a) - \mathcal{T}Q^*(s,a)\big|.$$

Next, we define a concentrability coefficient (Munos, 2005), which quantifies how far the visitation distribution generated by policies from $\Pi$ is from the training data distribution. This constant captures the degree to which states and actions are out of distribution.

Assumption 4.1 (Concentrability).

Let $\rho_0$ denote the initial state distribution, and $\mu(s,a)$ denote the distribution of the training data over $\mathcal{S} \times \mathcal{A}$, with marginal $\mu(s)$ over $\mathcal{S}$. Suppose there exist coefficients $c(k)$ such that for any $\pi_1, \dots, \pi_k \in \Pi$ and $s \in \mathcal{S}$:

$$\rho_0 P^{\pi_1} P^{\pi_2} \cdots P^{\pi_k}(s) \le c(k)\,\mu(s),$$

where $P^{\pi_i}$ is the transition operator on states induced by $\pi_i$. Then, define the concentrability coefficient $C(\Pi)$ as

$$C(\Pi) := (1-\gamma)^2 \sum_{k=1}^{\infty} k\, \gamma^{k-1} c(k).$$
To provide some intuition for $C(\Pi)$, if $\mu$ was generated by a single policy $\pi$, and $\Pi = \{\pi\}$ was a singleton set, then we would have $C(\Pi) = 1$, which is the smallest possible value. However, if $\Pi$ contained policies far from $\pi$, the value could be large, potentially infinite if the support of $\Pi$ is not contained in the support of $\pi$. Now, we bound the performance of approximate distribution-constrained Q-iteration:

Theorem 4.1.

Suppose we run approximate distribution-constrained value iteration with a set-constrained backup $\mathcal{T}^{\Pi}$. Assume that $\delta(s,a) \ge \max_k |Q_k(s,a) - \mathcal{T}^{\Pi}Q_{k-1}(s,a)|$ bounds the Bellman error. Then,

$$\lim_{k \to \infty} \mathbb{E}_{\rho_0}\big[|V^{\pi_k}(s) - V^*(s)|\big] \le \frac{\gamma}{(1-\gamma)^2}\left[C(\Pi)\, \mathbb{E}_{\mu}\Big[\max_{\pi \in \Pi} \mathbb{E}_{\pi}[\delta(s,a)]\Big] + \frac{1-\gamma}{\gamma}\,\alpha(\Pi)\right].$$

Proof.

See Appendix B, Theorem B.1

This bound formalizes the tradeoff between keeping policies chosen during backups close to the data (captured by $C(\Pi)$) and keeping the set large enough to capture well-performing policies (captured by $\alpha(\Pi)$). When we expand the set of policies $\Pi$, we are increasing $C(\Pi)$ but decreasing $\alpha(\Pi)$. An example of this tradeoff, and how a careful choice of $\Pi$ can yield superior results, is given in a tabular gridworld example in Fig. 2, where we visualize errors accumulated during distribution-constrained Q-iteration for different choices of $\Pi$.

Finally, we motivate the use of support sets to construct $\Pi$. We are interested in the case where $\Pi_\varepsilon = \{\pi \mid \pi(a|s) = 0 \text{ whenever } \beta(a|s) < \varepsilon\}$, where $\beta$ is the behavior policy (i.e., $\Pi_\varepsilon$ is the set of policies that have support in the probable regions of the behavior policy). Defining $\Pi_\varepsilon$ in this way allows us to bound the concentrability coefficient:

Theorem 4.2.

Assume the data distribution $\mu$ is generated by a behavior policy $\beta$. Let $\mu(s)$ be the marginal state distribution under the data distribution. Define $\Pi_\varepsilon = \{\pi \mid \pi(a|s) = 0 \text{ whenever } \beta(a|s) < \varepsilon\}$ and let $\mu_{\Pi_\varepsilon}$ be the highest discounted marginal state distribution starting from the initial state distribution and following policies $\pi \in \Pi_\varepsilon$ at each time step thereafter. Then, there exists a concentrability coefficient $C(\Pi_\varepsilon)$ which is bounded:

$$C(\Pi_\varepsilon) \le C(\beta) \cdot \left(1 + \frac{\gamma}{(1-\gamma)\, f(\varepsilon)}(1 - \varepsilon)\right),$$

where $f(\varepsilon) := \min_{s \in \mathcal{S},\, \mu_{\Pi_\varepsilon}(s) > 0} [\mu(s)]$.

Proof.

See Appendix B, Theorem B.2

Qualitatively, $f(\varepsilon)$ is the minimum discounted visitation marginal of a state under the behaviour policy if only actions which are more than $\varepsilon$-likely are executed in the environment. Thus, using support sets gives us a single lever, $\varepsilon$, which simultaneously trades off the values of $C(\Pi_\varepsilon)$ and $\alpha(\Pi_\varepsilon)$. Not only can we provide theoretical guarantees, we will see in our experiments (Sec. 6) that constructing $\Pi$ in this way provides a simple and effective method for implementing distribution-constrained algorithms.

Intuitively, this means we can prevent an increase in overall error in the Q-estimate by selecting policies supported on the support of the training action distribution, which would ensure roughly bounded projection error while reducing the suboptimality bias, potentially by a large amount. Bounded error on the support set of the training distribution is a reasonable assumption when using highly expressive function approximators, such as deep networks, especially if we are willing to reweight the transition set Schaul et al. (2016); Fu et al. (2019). We further elaborate on this point in Appendix C.

Figure 2: Visualized error propagation in Q-learning for various choices of the constraint set: unconstrained (top row), distribution-constrained (middle), and constrained to the behaviour policy (policy evaluation, bottom). Triangles represent Q-values for actions that move in different directions. The task (left) is to reach the bottom-left corner (G) from the top-left (S), but the behaviour policy (visualized as arrows in the task image; supported state-action pairs are shown in black on the support-set image) travels to the bottom-right with a small amount of ε-greedy exploration. Dark values indicate high error, and light values indicate low error. Standard backups propagate large errors from the low-support regions into the high-support regions, leading to high error. Policy evaluation reduces error propagation from low-support regions, but introduces significant suboptimality bias, as the data policy is not optimal. A carefully chosen distribution-constrained backup strikes a balance between these two extremes, by confining error propagation to the low-support region while introducing minimal suboptimality bias.

5 Bootstrapping Error Accumulation Reduction (BEAR)

We now propose a practical actor-critic algorithm (built on the framework of TD3 Fujimoto et al. (2018b) or SAC Haarnoja et al. (2018)) that uses distribution-constrained backups to reduce accumulation of bootstrapping error. The key insight is that we can search for a policy with the same support as the training distribution, while preventing accidental error accumulation. Our algorithm has two main components. First, we use an ensemble of Q-functions to provide a conservative estimate of the Q-function for policy improvement. Second, we design a constraint which is used for searching over the set of policies $\Pi_\varepsilon$, which share the same support as the behaviour policy. Both of these components appear as modifications of the policy improvement step in actor-critic style algorithms.

We use an ensemble of $K$ Q-functions to compute a conservative estimate of the Q-values, $\hat{Q}(s,a) := \frac{1}{K}\sum_{j=1}^{K} Q_j(s,a) - \lambda\, \mathrm{Var}_{j}\big[Q_j(s,a)\big]$, where $\lambda$ is a hyperparameter weighting the ensemble variance. Then, the policy is updated to maximize this conservative estimate of the Q-values within $\Pi_\varepsilon$ (Equation 1 below).
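As a hedged sketch of this conservative ensemble estimate (assuming the mean-minus-weighted-variance form written above; `q_nets` is an illustrative stand-in for the Q-ensemble, and the default weight mirrors the 0.6 coefficient reported in Appendix D):

```python
import torch

def conservative_q(q_nets, states, actions, lam=0.6):
    # Stack the ensemble's predictions: shape (K, batch).
    qs = torch.stack([q(states, actions).squeeze(-1) for q in q_nets], dim=0)
    # Penalize disagreement: ensemble mean minus a weighted sample variance,
    # so actions the ensemble is uncertain about receive pessimistic values.
    return qs.mean(dim=0) - lam * qs.var(dim=0)
```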

In practice, the behaviour policy $\beta$ is unknown, so we need an approximate way to constrain $\pi$ to $\Pi_\varepsilon$. We define a differentiable constraint that approximately constrains $\pi$ to $\Pi_\varepsilon$, and then approximately solve the constrained optimization problem via dual gradient descent. We use the sampled version of maximum mean discrepancy (MMD) Gretton et al. (2012) between the unknown behaviour policy $\beta$ and the actor $\pi$ because it can be estimated based solely on samples from the distributions. Given samples $x_1, \dots, x_n \sim P$ and $y_1, \dots, y_m \sim Q$, the sampled MMD between $P$ and $Q$ is given by:

$$\mathrm{MMD}^2(\{x_i\}, \{y_j\}) = \frac{1}{n^2}\sum_{i,i'} k(x_i, x_{i'}) - \frac{2}{nm}\sum_{i,j} k(x_i, y_j) + \frac{1}{m^2}\sum_{j,j'} k(y_j, y_{j'}).$$

Here, $k(\cdot,\cdot)$ is any universal kernel. In our experiments, we find both Laplacian and Gaussian kernels work well. The expression for MMD does not involve the density of either distribution, so it can be optimized directly through samples. Empirically, we find that, in the low-to-intermediate sample regime, the sampled MMD between $\beta$ and $\pi$ is similar to the MMD between a uniform distribution over $\beta$'s support and $\pi$, which makes MMD roughly suited for constraining distributions to a given support set. (See Appendix C.3 for numerical simulations justifying this approach.)
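A self-contained sketch of the sampled (biased) MMD estimator above, with a Gaussian or Laplacian kernel; the bandwidth value is an illustrative default, not the paper's setting:

```python
import torch

def mmd_squared(x, y, kernel="laplacian", sigma=10.0):
    """Biased sample estimate of MMD^2 between batches x (n, d) and y (m, d)."""
    def k(a, b):
        # Pairwise kernel matrix between rows of a and rows of b.
        if kernel == "laplacian":
            d = torch.cdist(a, b, p=1)
            return torch.exp(-d / sigma)
        d2 = torch.cdist(a, b, p=2) ** 2
        return torch.exp(-d2 / (2.0 * sigma))

    # (1/n^2) sum k(x, x) + (1/m^2) sum k(y, y) - (2/nm) sum k(x, y)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```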

Putting everything together, the optimization problem in the policy improvement step is

$$\pi_\phi := \max_{\pi}\; \mathbb{E}_{s \sim \mathcal{D}}\,\mathbb{E}_{a \sim \pi(\cdot|s)}\big[\hat{Q}(s,a)\big] \quad \text{s.t.} \quad \mathbb{E}_{s \sim \mathcal{D}}\big[\mathrm{MMD}(\mathcal{D}(\cdot|s), \pi(\cdot|s))\big] \le \varepsilon, \qquad (1)$$

where $\varepsilon$ is an approximately chosen threshold, which we fix to a small constant in our experiments. The algorithm is summarized in Algorithm 1. Step 5 of the algorithm performs a stochastic version of the distribution-constrained backup: Dirac-delta policies are sampled, an expectation of the target Q-value under each of these Dirac-delta policies is computed, and then the maximum value across these policies is backed up, as defined by the backup operator. We provide more explanation in Appendix C.

0:  Require: Dataset $\mathcal{D}$, target network update rate $\tau$, mini-batch size $N$, number of sampled actions for MMD $n$, and conservative-estimate weight $\lambda$
1:  Initialize Q-ensemble $\{Q_{\theta_i}\}_{i=1}^{K}$, actor $\pi_\phi$, Lagrange multiplier $\alpha$, target networks $\{Q_{\theta'_i}\}_{i=1}^{K}$, and a target actor $\pi_{\phi'}$, with $\phi' \leftarrow \phi$, $\theta'_i \leftarrow \theta_i$
2:  for t in {1, …, N} do
3:     Sample mini-batch of transitions $(s, a, r, s') \sim \mathcal{D}$. Q-update:
4:     Sample $p$ action samples, $\{a_i \sim \pi_{\phi'}(\cdot|s')\}_{i=1}^{p}$
5:     Define the target $y(s,a) := \max_{a_i} \hat{Q}'(s', a_i)$ (the conservative ensemble estimate under the target networks), and minimize $\sum_j \big(Q_{\theta_j}(s,a) - (r + \gamma\, y(s,a))\big)^2$ with respect to each $\theta_j$
6:     Policy-update:
7:     Sample actions $\{\hat{a}_i \sim \pi_\phi(\cdot|s)\}$ and $\{a_j \sim \mathcal{D}(s)\}_{j=1}^{n}$, with $n$ preferably an intermediate integer (1-10)
8:     Update $\phi$ and $\alpha$ by minimizing Equation 1 using dual gradient descent with Lagrange multiplier $\alpha$
9:     Update target networks: $\theta'_i \leftarrow \tau\theta_i + (1-\tau)\theta'_i$; $\phi' \leftarrow \tau\phi + (1-\tau)\phi'$
10:  end for
Algorithm 1 BEAR Q-Learning (BEAR-QL)

In summary, the actor is updated towards maximizing the Q-function while still being constrained to remain in the valid search space defined by . The Q-function uses actions sampled from the actor to then perform distribution-constrained Q-learning, over a reduced set of policies. Implementation and other details are present in Appendix D.
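To make the constrained policy-improvement step concrete, here is a hedged sketch of a dual-gradient-descent update for Equation 1: the actor ascends the conservative Q-value minus the weighted MMD penalty, while the (log) Lagrange multiplier ascends the constraint violation. Names such as `actor.sample` (assumed to use the reparameterization trick), `behavior_actions`, and the `conservative_q`/`mmd_squared` helpers from the earlier sketches are illustrative assumptions, not the authors' code, and the threshold default is a placeholder.

```python
import torch

def bear_policy_update(actor, q_nets, states, behavior_actions,
                       actor_opt, log_alpha, alpha_opt,
                       eps=0.05, n_mmd=4, lam=0.6):
    B, s_dim = states.shape

    # Actions proposed by the current actor: (B, n_mmd, act_dim).
    pi_actions = actor.sample(states, num_samples=n_mmd)

    # Conservative Q-value of the actor's proposals (see conservative_q above).
    rep_states = states.unsqueeze(1).expand(-1, n_mmd, -1).reshape(B * n_mmd, s_dim)
    q_val = conservative_q(q_nets, rep_states,
                           pi_actions.reshape(B * n_mmd, -1), lam).mean()

    # Per-state sampled MMD between actor proposals and behavior samples
    # (looped over the batch for clarity).
    mmd = torch.stack([mmd_squared(pi_actions[i], behavior_actions[i])
                       for i in range(B)]).mean()

    alpha = log_alpha.exp()

    # Primal step: maximize Q while paying the (fixed-multiplier) MMD penalty.
    actor_loss = -q_val + alpha.detach() * mmd
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Dual step: grow alpha when MMD exceeds the threshold eps, shrink otherwise.
    alpha_loss = -log_alpha.exp() * (mmd.detach() - eps)
    alpha_opt.zero_grad(); alpha_loss.backward(); alpha_opt.step()
```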

6 Experiments

In our experiments, we study how BEAR-QL performs when learning from static off-policy data on a variety of continuous control benchmark tasks. We evaluate our algorithm in three settings: when the dataset is generated by (1) a completely random behaviour policy, (2) a partially trained, medium-scoring policy, and (3) an optimal policy. Condition (2) is of particular interest, as it captures many common use-cases in practice, such as learning from imperfect demonstration data (e.g., of the sort that is commonly available for autonomous driving Gao et al. (2018)), or reusing previously collected experience during off-policy RL. We compare our method to several prior approaches: a baseline actor-critic algorithm (TD3); the BCQ algorithm (Fujimoto et al., 2018a), which aims to address a similar problem, as discussed in Section 4; and a behaviour cloning (BC) baseline, which simply imitates the data distribution. The BC baseline serves to measure whether each method actually performs effective RL, or simply copies the data. We report the average evaluation return of the policy learned by each algorithm, in the form of a learning curve as a function of the number of gradient steps taken by the algorithm. These evaluation samples are only collected for evaluation, and are not used for training.

6.1 Performance on Medium-Quality Data

We first discuss the evaluation on condition (2), with “mediocre” data, as this condition resembles the settings where we expect training on offline data to be most useful. We collected 1e6 transitions from a partially trained policy, so as to simulate imperfect demonstration data or data from a mediocre prior policy. In this scenario, we found that BEAR-QL consistently outperforms both BCQ Fujimoto et al. (2018a) and a naïve off-policy RL baseline (TD3) by large margins, as shown in Figure 3. This scenario is the most relevant from an application point of view, as access to optimal data may not be feasible, and random data might have inadequate exploration to efficiently learn a good policy. We also evaluate the accuracy with which the learned Q-functions predict actual policy returns. These trends are provided in Appendix E. Note that the performance of BCQ often tracks the performance of the BC baseline, suggesting that BCQ primarily imitates the data.

Figure 3: Performance of BEAR-QL, BCQ, Naïve RL and BC on medium-quality data. BEAR-QL outperforms both BCQ and Naïve RL. Average return over the training data is indicated by the magenta line.

6.2 Performance on Random and Optimal Datasets

In Figure 4, we show the performance of each method when trained on data from a random policy (top) and a near-optimal policy (bottom). In both cases, our BEAR-QL method achieves good results, consistently exceeding the average dataset return on random data, and matching the optimal policy return on optimal data. Naïve RL also often does well on random data. For a random data policy, all actions are in-distribution, since they all have equal probability. This is consistent with our hypothesis that OOD actions are one of the main sources of error in off-policy learning on static datasets. The prior BCQ method Fujimoto et al. (2018a) performs well on optimal data but performs poorly on random data, where the constraint is too strict. These results show that BEAR-QL is robust to the dataset composition, and can learn consistently in a variety of settings.

Figure 4: Performance of BEAR-QL, BCQ, Naïve RL and BC on random data (top row) and optimal data (bottom row). BEAR-QL is the only algorithm capable of learning in both scenarios. Naïve RL cannot handle optimal data, since it does not illustrate mistakes, and BCQ favors a behavioral cloning strategy (performs quite close to behaviour cloning in most cases), causing it to fail on random data. Average return over the training dataset is indicated by the dashed magenta line.

6.3 Analysis of BEAR-QL

In this section, we aim to analyze different components of our method via an ablation study. Our first ablation studies the support constraint discussed in Section 5, which uses MMD to measure support. We replace it with a more standard KL-divergence distribution constraint, which measures similarity in density. Our hypothesis is that the MMD constraint should be less conservative, since matching distributions is not necessary for matching support. KL-divergence performs well in some cases, such as with optimal data, but as shown in Figure 5, it performs worse than MMD on medium-quality data. Even when the KL-divergence constraint is fully hand-tuned so as to prevent instability issues, it still performs worse than a less carefully tuned MMD constraint. We provide the results for this setting in the Appendix. We also vary the number of samples used to compute the MMD constraint. We find that a smaller n (n = 4 or 5) gives better performance.

Figure 5: Average return (averaged over Hopper-v2 and Walker2d-v2) as a function of training steps for the ablation studies from Section 6.3. (a) MMD-constrained optimization is more stable and leads to better returns, (b) 4-sample MMD is more performant than 10-sample MMD, and (c) the ensemble variance term has mixed benefit.

Next, we study whether using a conservative Q-value estimate by subtracting the variance in the ensemble helps with learning. As shown in Figure 5, the conservative estimate makes a comparatively smaller difference than the use of MMD, providing some benefit on one task, while somewhat hurting performance on another. The ensemble produces more conservative estimates, which can result in underestimation in practice, and prevent overestimation divergence.

7 Discussion and Future Work

The goal in our work was to study off-policy reinforcement learning with static datasets. We theoretically and empirically analyze how error propagates in off-policy RL due to the use of out-of-distribution actions for computing the target values in the Bellman backup. Our experiments suggest that this source of error is one of the primary issues afflicting off-policy RL: increasing the number of samples does not appear to mitigate the degradation issue (Figure 1), and training with naïve RL on data from a random policy, where there are no out-of-distribution actions, shows much less degradation than training on data from more focused policies (Figure 4). Armed with this insight, we develop a method for mitigating the effect of out-of-distribution actions, which we call BEAR-QL. BEAR-QL constrains the backup to use actions that have non-negligible support under the data distribution, but without being overly conservative in constraining the learned policy. We observe experimentally that BEAR-QL achieves good performance across a range of tasks, and across a range of dataset compositions, learning well on random, medium-quality, and expert data.

While BEAR-QL substantially stabilizes off-policy RL, we believe that this problem merits further study. One limitation of our current method is that, although the learned policies are much more performant than those acquired with naïve RL, performance sometimes still tends to degrade for long learning runs. An exciting direction for future work would be to develop an early stopping condition for RL, perhaps by generalizing the notion of validation error to reinforcement learning. Another promising future direction is to examine how well BEAR-QL can work on large-scale off-policy learning problems, of the sort that are likely to arise in domains such as robotics, autonomous driving, operations research, and commerce. If RL algorithms can learn effectively from large-scale off-policy datasets, reinforcement learning can become a truly data-driven discipline, benefiting from the same advantage in generalization that has been seen in recent years in supervised learning fields, where large datasets have enabled rapid progress in terms of accuracy and generalization Deng et al. (2009).

Acknowledgements

We thank Kristian Hartikainen for sharing implementations of RL algorithms and for help in debugging certain issues. We thank Matthew Soh for a lot of help in setting up environments. We thank Chelsea Finn, Abhishek Gupta and Kelvin Xu for informative discussions. We thank Ofir Nachum for comments on an earlier draft of this paper. We thank Google, NVIDIA, and Amazon for providing computational resources. This research was supported by Berkeley DeepDrive, NSF IIS-1651843 and IIS-1614653, the DARPA Assured Autonomy program, and ARL DCIST CRA W911NF-17-2-0181.

References

Appendix A Distribution-Constrained Backup Operator

In this section, we analyze properties of the constrained Bellman backup operator, defined as:

$$\mathcal{T}^{\Pi}Q(s,a) := R(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot|s,a)}\Big[\max_{\pi \in \Pi} \mathbb{E}_{a' \sim \pi(\cdot|s')}[Q(s',a')]\Big],$$

where $\Pi$ is the constraint set of policies (in our case, $\Pi_\varepsilon = \{\pi \mid \pi(a|s) = 0 \text{ whenever } \beta(a|s) < \varepsilon\}$).

Such an operator can be reduced to a standard Bellman backup in a modified MDP. We can construct an MDP $\overline{M}$ from the original MDP $M$ as follows:

  • The state space, discount, and initial state distribution remain unchanged from $M$.

  • We define a new action set $\overline{\mathcal{A}} = \Pi$: each "action" is the choice of a policy $\pi \in \Pi$ to execute.

  • We define the new transition distribution $\overline{P}$ as taking one step under the chosen policy $\pi$ and one step under the original dynamics $P$: $\overline{P}(s'|s, \pi) = \mathbb{E}_{a \sim \pi(\cdot|s)}[P(s'|s,a)]$.

  • Q-values in this new MDP, $\overline{Q}(s, \pi)$, would, in words, correspond to executing policy $\pi$ for one step and thereafter executing the policy which maximizes the future discounted value function in the original MDP.

Under this redefinition, the Bellman operator in $\overline{M}$ is mathematically the same operation as the distribution-constrained operator $\mathcal{T}^{\Pi}$ under $M$. Thus, standard results from MDP theory carry over, i.e., the existence of a fixed point and convergence of repeated application of $\mathcal{T}^{\Pi}$ to that fixed point.
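As an illustrative tabular sketch of this construction (an assumption-laden example, not the paper's code), the modified transition kernel simply averages the original dynamics under each candidate policy, so that each policy in Π becomes one "action" of the new MDP:

```python
import numpy as np

def modified_transitions(P, policies):
    """P: (S, A, S) original dynamics. policies: list of (S, A) action
    distributions in Pi. Returns the (S, |Pi|, S) dynamics of the modified MDP
    whose 'actions' are choices of which policy in Pi to execute for one step:
    P_bar[s, i, s'] = sum_a policies[i][s, a] * P[s, a, s']."""
    return np.stack([np.einsum('sa,sat->st', pi, P) for pi in policies], axis=1)
```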

Appendix B Error Propagation

In this section, we provide proofs for Theorem 4.1 and Theorem 4.2.

Theorem B.1.

Suppose we run approximate distribution-constrained value iteration with a set-constrained backup $\mathcal{T}^{\Pi}$. Assume that $\delta(s,a) \ge \max_k |Q_k(s,a) - \mathcal{T}^{\Pi}Q_{k-1}(s,a)|$ bounds the Bellman error. Then,

$$\lim_{k \to \infty} \mathbb{E}_{\rho_0}\big[|V^{\pi_k}(s) - V^*(s)|\big] \le \frac{\gamma}{(1-\gamma)^2}\left[C(\Pi)\, \mathbb{E}_{\mu}\Big[\max_{\pi \in \Pi} \mathbb{E}_{\pi}[\delta(s,a)]\Big] + \frac{1-\gamma}{\gamma}\,\alpha(\Pi)\right].$$

Proof.

We first begin by introducing $Q^*_{\Pi}$, the fixed point of $\mathcal{T}^{\Pi}$. By the triangle inequality, the overall value error splits into the error of the iterates with respect to $Q^*_{\Pi}$ and the error of $Q^*_{\Pi}$ with respect to $Q^*$.

We can bound the first term by direct modification of the proof of Theorem 3 of Farahmand et al. [2010] or Theorem 1 of Munos [2005], replacing $\mathcal{T}$ with $\mathcal{T}^{\Pi}$ and $Q^*$ with $Q^*_{\Pi}$, as $\mathcal{T}^{\Pi}$ is a contraction and $Q^*_{\Pi}$ is its fixed point. An alternative proof involves viewing $\mathcal{T}^{\Pi}$ as a backup under a modified MDP (see Appendix A) and directly applying Theorem 1 of Munos [2005] under this modified MDP. A similar bound also holds true for value iteration with the $\mathcal{T}^{\Pi}$ operator, which can be analyzed along the same lines as the above proof and Munos [2005].

To bound the remaining suboptimality term, we provide a simple $\ell_\infty$-norm bound in terms of $\alpha(\Pi)$, although we could in principle apply the same techniques used for the first term to obtain a tighter distribution-based bound. Because the maximum is greater than the expectation, the bound also holds in expectation under $\rho_0$.

Adding the two terms completes the proof. ∎

Theorem B.2.

Assume the data distribution $\mu$ is generated by a behavior policy $\beta$, such that $\mu(s,a) = \mu(s)\,\beta(a|s)$. Let $\mu(s)$ be the marginal state distribution under the data distribution. Let us define $\Pi_\varepsilon = \{\pi \mid \pi(a|s) = 0 \text{ whenever } \beta(a|s) < \varepsilon\}$. Then, there exists a concentrability coefficient $C(\Pi_\varepsilon)$ which is bounded as:

$$C(\Pi_\varepsilon) \le C(\beta) \cdot \left(1 + \frac{\gamma}{(1-\gamma)\, f(\varepsilon)}(1-\varepsilon)\right),$$

where $f(\varepsilon) := \min_{s \in \mathcal{S},\, \mu_{\Pi_\varepsilon}(s) > 0} [\mu(s)]$.

Proof.

For notational clarity, we refer to $\mu_{\Pi_\varepsilon}$ as $\mu_\Pi$ in this proof. The term $\mu_\Pi$ is the highest discounted marginal state distribution starting from the initial state distribution and following policies $\pi \in \Pi_\varepsilon$; formally, it is the discounted state visitation distribution maximized over all sequences of policies drawn from $\Pi_\varepsilon$.

Now, we begin the proof of the theorem. We first note, from the definition of $\Pi_\varepsilon$, that any $\pi \in \Pi_\varepsilon$ places no mass on actions with $\beta(a|s) < \varepsilon$. This suggests a bound on the total variation distance between $\beta(\cdot|s)$ and any $\pi(\cdot|s)$ with $\pi \in \Pi_\varepsilon$, for all $s$. This in turn means that the marginal state distributions of $\beta$ and $\pi$ are bounded in total variation distance, where $\mu_\Pi$ is the marginal state distribution as defined above. This can be derived from Schulman et al. [2015], Appendix B, which bounds the difference in returns of two policies by showing that their state marginals are close if the total variation distance between the policies is bounded.

Further, the definition of the set of policies $\Pi_\varepsilon$ implies a lower bound on the state marginal in terms of $f(\varepsilon)$, where $f(\varepsilon)$ is a constant that depends on $\varepsilon$ and captures the minimum visitation probability of a state when rollouts are executed from the initial state distribution while executing the behaviour policy $\beta$, under the constraint that only actions with $\beta(a|s) \ge \varepsilon$ are selected for execution in the environment. Combining this with the total variation divergence bound, we bound the ratio of visitation distributions.

Finally, recall that the concentrability coefficient is defined via the ratio of the marginal state visitation distribution under the policy iterates (when performing backups using the distribution-constrained operator) and the data distribution $\mu$. Therefore, the stated bound on $C(\Pi_\varepsilon)$ follows. ∎

Appendix C Additional Details Regarding BEAR-QL

In this appendix, we address several remaining points regarding the support matching formulation of BEAR-QL, and further discuss its connections to prior work.

C.1 Why can we choose actions from the support of the training distribution, rather than restricting action selection to the behaviour policy's distribution?

In Section 4.1, we designed a new distribution-constrained backup and analyzed its properties from an error propagation perspective. Theorems 4.1 and 4.2 tell us that, if the maximum projection error on all actions within the support of the train distribution is bounded, then the worst-case error incurred is also bounded. In this section, we provide an intuitive explanation for why action distributions that are very different from the training policy distribution, but still lie in the support of the train distribution, can be chosen without incurring large error. In practice, we use powerful function approximators for Q-learning, such as deep neural networks. Each iteration of Q-iteration/Q-learning can essentially be viewed as a supervised regression problem with a very expressive function class. In this scenario, we expect a bounded error on the entire support of the training distribution, and we therefore expect approximation error to depend less on the specific density of a datapoint under the data distribution, and more on whether or not that datapoint is within the support of the data distribution. That is, any point that is within the support would have a comparatively low error, due to the expressivity of the function approximator.

Another justification is that a different version of the Bellman error objective renormalizes the action distribution to the uniform distribution by applying an inverse behavior-policy density weighting. For example, Antos et al. [2008, 2007] use a variant of the Bellman error in which each action's contribution is reweighted by the inverse of its density under the behaviour policy.

This implies that this form of Bellman error mainly depends upon the support of the behaviour policy (i.e., the set of actions sampled from $\beta$ with high enough probability, which we formally refer to as $\Pi_\varepsilon$ in the main text). When this form of Bellman error is minimized, the overall error incurred due to error propagation is expected to be insensitive to distribution change, provided the support of the distribution does not change. Therefore, all policies in $\Pi_\varepsilon$ incur the same amount of propagated error while having different amounts of suboptimality bias, suggesting the existence of a different policy in $\Pi_\varepsilon$ that propagates the same amount of error while having a lower suboptimality bias. However, in practice, it has been observed that using the inverse density weighting under the behaviour policy does not lead to substantially better performance for vanilla RL (i.e., outside the setting with purely off-policy, static datasets), so we use the unmodified Bellman error objective.

Both of these justifications indicate that a bounded Bellman error is reasonable to expect under in-support action distributions.

C.2 Details on the connection between BEAR-QL and distribution-constrained backups

Distribution-constrained backups perform maximization over a set of policies $\Pi_\varepsilon$, defined as the set of policies that share their support with the behaviour policy. In the BEAR-QL algorithm, the actor $\pi_\phi$ is trained to maximize the expected Q-value for each state under its action distribution, while staying in-support (through the MMD constraint). The maximization step biases $\pi_\phi$ towards the in-support actions that maximize the current Q-value. Sampling multiple Dirac-delta action distributions from $\pi_\phi$ and then taking an explicit maximum over them when computing the target is a stochastic approximation to the distribution-constrained operator. What is the importance of training the actor to maximize the expected Q-value? We found empirically that this step is important: without it, and with high-dimensional action spaces, many more samples are likely to be required (exponentially more, due to the curse of dimensionality) to find the action that maximizes the target value while being in-support. In some experiments we tried with this variant, we found it led to suboptimal solutions. At evaluation time, we use the Q-function as the actor and the same process is followed: Dirac-delta action candidates are sampled from $\pi_\phi$, and the action that gives the empirical maximum of the Q-function values is executed in the environment.

C.3 How effective is the MMD constraint at constraining the supports of distributions?

In Section 5, we argued in favour of using the sampled MMD distance between distributions to search for a policy that is supported on the same set as the training distribution. Revisiting that argument, in this section we show, via numerical simulations, that the sampled MMD distance between two probability distributions is effective at constraining the support of the distribution being learned, without constraining its density function too much. While the MMD distance computed exactly between two distribution functions matches the distributions exactly, which explains its applicability in two-sample tests, with a limited number of samples we empirically find that the MMD distance computed using samples from two $d$-dimensional Gaussian distributions with diagonal covariance matrices, $P$ and $Q$, is roughly equal to the distance computed using samples from a uniform distribution over $P$'s support and $Q$. This means that when minimizing the MMD distance to the train distribution $P$, the gradient signal would push the learned distribution towards a uniform distribution supported on $P$'s support, as this solution exhibits a lower MMD value, which is the objective we are optimizing.

Figure 6 shows an empirical comparison of the MMD estimate between $P$ and itself, computed by sampling $n$ samples from $P$, and between a uniform distribution over $P$'s support and $P$, computed by sampling $n$ samples from each. We observe that the MMD distance computed using limited samples can, in fact, be higher between a distribution and itself than between a uniform distribution over the distribution's support and the distribution. In Figure 6, note that for smaller values of $n$ and an appropriately chosen support of the uniform distribution (mentioned against each figure), the estimator for the uniform case can provide lower estimates than the estimator for the self case. This observation suggests that when the number of samples is not enough to infer the distribution shape, density-agnostic distances like MMD can be used as optimization objectives to push distributions to match supports. Subfigures (c) and (d) show the increase in MMD distance as the support of the uniform distribution is expanded.
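The comparison described above can be reproduced with a few lines; this is a hedged sketch reusing the `mmd_squared` estimator sketched in Section 5, and the Gaussian/uniform parameters here are illustrative choices rather than the settings used for Figure 6.

```python
import torch

torch.manual_seed(0)
n, d, std = 5, 1, 0.2                 # few samples, 1-D Gaussian with std 0.2
p = torch.distributions.Normal(0.0, std)

trials, self_vals, unif_vals = 1000, [], []
for _ in range(trials):
    x = p.sample((n, d))                              # samples from P
    y_self = p.sample((n, d))                         # fresh samples from P
    y_unif = (torch.rand(n, d) - 0.5) * 4 * std       # uniform over ~P's support
    self_vals.append(mmd_squared(x, y_self, kernel="gaussian", sigma=1.0))
    unif_vals.append(mmd_squared(x, y_unif, kernel="gaussian", sigma=1.0))

print("P vs P:      ", torch.stack(self_vals).mean().item())
print("Unif vs P:   ", torch.stack(unif_vals).mean().item())
# With so few samples, the uniform-over-support comparison often attains an MMD
# comparable to (and sometimes lower than) P vs. itself, so minimizing sampled
# MMD constrains support rather than matching densities.
```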

Figure 6: Comparing the sampled MMD distance between a $d$-dimensional Gaussian distribution $P$ and itself ('Self'), and between a uniform distribution over the support set of $P$ and $P$ ('Uniform'). The parameters of the Gaussian distribution and of the uniform distribution being considered are mentioned against each plot. Note that for small values of $n$, the MMD with the uniform distribution is slightly lower in magnitude than the MMD between the distribution and itself (sub-figures (a), (b) and (c)). For (d), as the support of the uniform distribution is enlarged, the value of the MMD in the uniform approximation case increases, which suggests that a near-local minimizer of the MMD distance can be obtained by making sure that the distribution being trained shares the same support as the other given distribution.

In order to provide a theoretical example, we refer to Example 1 in Gretton et al. [2012] and extend it. First, note that the example argues that, for a fixed sample size of $n$ samples drawn from a distribution $P$, there exists another discrete distribution $Q$ supported on samples from the support set of $P$, such that there is at least a constant probability that a sample from $Q$ is indeed a sample from $P$ as well. So, with a smaller value of $n$, no two-sample test will be able to distinguish between $P$ and $Q$. This example is exactly the argument that our algorithm builds upon. We further extend this example by noting that if $Q$ were not completely supported on the support of $P$, then there is at least a constant probability that a sample from $Q$ lies outside the support of $P$. This gives us a lower bound on the value of the MMD estimator, indicating that the two-sample test will be able to detect this distribution due to an irreducible difference in the MMD estimate contributed by an "extremal point" in $Q$'s support.

Appendix D Additional Experimental Details

Data collection

We trained behaviour policies using the Soft Actor-Critic algorithm Haarnoja et al. [2018]. In all cases, random data was generated by running a uniformly random policy in the environment. Optimal data was generated by training SAC agents in all 4 domains until convergence to the returns mentioned in Figure 4. Mediocre data was generated by training a policy until the return value marked in each of the plots in Figure 3. Each of our datasets contained 1e6 samples. We used the same datasets for evaluating different algorithms to maintain uniformity across results.

Choice of kernels

In our experiments, we found that the choice of the kernel is an important design decision. In general, we found that a Laplacian kernel worked well in all cases, and a Gaussian kernel worked quite well in the case of the optimal dataset. For the Laplacian kernel, we chose one bandwidth for Cheetah and Hopper and another for Ant and Walker, although we found a single setting that worked well for all environments in all settings. For the Gaussian kernel, we used the same bandwidth in all settings. Kernels often tend to not provide relevant measurements of distance, especially in high-dimensional spaces, so one direction for future work is to design better kernels. We further experimented with a mixture of Laplacian kernels with different bandwidth parameters on Hopper-v2 and Walker2d-v2, where we found that it performs comparably to, and sometimes better than, a single Laplacian kernel, probably because it is able to track supports up to different thresholds due to the multiple kernels.

More details about the algorithm

In BEAR-QL (Algorithm 1), one aspect that has not been elaborated upon in the main text, due to lack of space, is the behaviour of the algorithm at evaluation time. At evaluation time, we find that using the greedy maximum of the Q-function over the support set of the behaviour policy (which can be approximated by sampling multiple Dirac-delta policies from the actor and performing a greedy maximization of the Q-values over these Dirac-delta policies) works best, better than unrolling the learned actor in the environment. This was also found useful in Fujimoto et al. [2018a]. Another detail about the algorithm is deciding which samples to use for computing the MMD objective. We train a parametric model which fits a tanh-Gaussian distribution to the dataset actions given the states, and then use this model to sample candidate actions for computing the MMD distance, meaning that MMD is computed between samples from this behaviour model and samples from the actor; we find this to work better in practice. Also, computing the distance between actions before applying the tanh transformation works better, and leads to a constraint that perhaps provides a stronger gradient signal, because tanh saturates very quickly, after which gradients almost vanish.
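A hedged sketch of such a behaviour model: a tanh-Gaussian policy fit by maximum likelihood to dataset actions, with candidate actions drawn pre-tanh so the MMD constraint operates before the saturating transform. The class name, layer sizes, and clamping values are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TanhGaussianBehavior(nn.Module):
    """Fits p(a|s) as tanh(Normal(mu(s), std(s))), trained by maximum likelihood."""
    def __init__(self, s_dim, a_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * a_dim))

    def dist(self, s):
        mu, log_std = self.net(s).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())

    def log_prob(self, s, a):
        # Invert the tanh squashing and apply the change-of-variables correction.
        pre = torch.atanh(a.clamp(-0.999, 0.999))
        return (self.dist(s).log_prob(pre)
                - torch.log(1 - a.pow(2) + 1e-6)).sum(-1)

    def sample_pretanh(self, s, n):
        # Pre-tanh samples used for the MMD constraint (computed before tanh).
        return self.dist(s).sample((n,)).transpose(0, 1)   # (B, n, a_dim)

# Training amounts to behaviour cloning on the static dataset:
#   loss = -model.log_prob(states, dataset_actions).mean()
```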

Other hyperparameters

Other hyperparameters include the following: (1) the variance of the Gaussian kernel / (standard deviation of) the Laplacian kernel: we tried values of 10, 20, and 40, and found that 20 worked well across all tasks; (2) the coefficient weight assigned to the ensemble variance term: we tried 0.6 and 1, found 0.6 to be better of the two, and used that. Each run is averaged over 5 seeds. Error bars are present in the form of variance bands in each plot. The learning rate for the Lagrange multiplier was chosen to be 1e-3, and the value of the Lagrange multiplier was clipped to a bounded range to prevent instabilities. For the baselines, we used BCQ code from the official implementation accompanying Fujimoto et al. [2018a], TD3 code from the official implementation accompanying Fujimoto et al. [2018b], and the BC baseline was the VAE-based behaviour cloning baseline also used in Fujimoto et al. [2018a], where it was shown that VAE-based behaviour cloning and vanilla BC perform similarly. We evaluated on 10 evaluation episodes (which were separate from the train distribution) after every 1000 iterations and used the average score and the variance for the plots.

Appendix E Additional Experimental Results

Figure 7: The trend of the difference between the Q-values and Monte-Carlo returns for 2 environments. Note that a high value corresponds to more overestimation. In these plots, BEAR-QL is better behaved than BCQ. In Walker2d-v2, BCQ tends to diverge in the negative direction. In the case of Ant-v2, although roughly the same, the difference between Q-values and Monte-Carlo returns is slightly lower in the case of BEAR-QL, suggesting no risk of overestimation. (This corresponds to medium-quality data.)
Figure 8: Effect of using a conservative estimate of the Q-function (computed by subtracting the ensemble sample variance). We find that not using the ensemble variance improves performance in Walker2d-v2 where there’s a natural tendency for underestimation. On Ant-v2 and Hopper-v2 tasks, the performance is roughly unchanged. This corresponds to medium-quality data.
Figure 9: The trends of Q-values as a function of the number of gradient steps taken for 3 environments. BCQ's Q-values tend to be more unstable (especially in the case of Walker2d, where they diverge in the negative direction) compared to BEAR-QL. This corresponds to medium-quality data.

In this section, we provide plots for some additional experiments. In Figure 7 we provide the difference between the learned Q-values and the Monte-Carlo returns of the policy in the environment. In Figure 8, we provide a comparison of performance when ensembles are used and when they are not. In Figure 9 we provide the trends of the Q-values learned by BEAR-QL and BCQ in three environments. In Figure 10 we compare the performance when using the MMD constraint versus the KL constraint in three environments. In order to be fair when comparing to MMD, we train a model of the behaviour policy and constrain the KL-divergence to this behaviour policy. (For MMD, we compute MMD using samples from the model of the behaviour policy.) Note that in the case of HalfCheetah with medium-quality data, the KL-divergence constraint works quite well, but it fails drastically in the case of Hopper and Walker2d, where the Q-values tend to diverge. Figure 10 summarizes the trends for the 3 environments.

Figure 10: Performance trends (measured in average return) for the Hopper-v2, HalfCheetah-v2 and Walker2d-v2 environments with the BEAR-QL algorithm but a varying kind of constraint. In general we find that using the KL constraint leads to worse performance. However, in some rare cases (for example, HalfCheetah-v2), the KL constraint learns faster. In general, we find that the KL constraint often leads to diverging Q-values. This experiment corresponds to medium-quality data.

We further study the performance of the KL-divergence constraint in the setting where it is stable. In this setting we needed to perform extensive hyperparameter tuning to find the optimal Lagrange multiplier for the KL constraint, and plain dual descent always gave us an unstable solution with the KL constraint. Even in this tuned-hyperparameter case, we find that using a KL constraint is worse than using an MMD constraint. Trends are summarized in Figure 11.

Figure 11: Performance trends (measured in average return) for the Hopper-v2 and Walker2d-v2 environments with the BEAR-QL algorithm using an extensively tuned KL constraint versus the MMD constraint. Note that the MMD constraint still outperforms the KL constraint.