Code for MOPO: Model-based Offline Policy Optimization
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a batch of previously collected data. This problem setting is compelling, because it offers the promise of utilizing large, diverse, previously collected datasets to acquire policies without any costly or dangerous active exploration, but it is also exceptionally difficult, due to the distributional shift between the offline training data and the learned policy. While there has been significant progress in model-free offline RL, the most successful prior methods constrain the policy to the support of the data, precluding generalization to new states. In this paper, we observe that an existing model-based RL algorithm on its own already produces significant gains in the offline setting, as compared to model-free approaches, despite not being designed for this setting. However, although many standard model-based RL methods already estimate the uncertainty of their model, they do not by themselves provide a mechanism to avoid the issues associated with distributional shift in the offline setting. We therefore propose to modify existing model-based RL methods to address these issues by casting offline model-based RL into a penalized MDP framework. We theoretically show that, by using this penalized MDP, we are maximizing a lower bound of the return in the true MDP. Based on our theoretical results, we propose a new model-based offline RL algorithm that applies the variance of a Lipschitz-regularized model as a penalty to the reward function. We find that this algorithm outperforms both standard model-based RL methods and existing state-of-the-art model-free offline RL approaches on existing offline RL benchmarks, as well as two challenging continuous control tasks that require generalizing from data collected for a different task.
Progress in machine learning has been driven by large, diverse datasets — e.g., benchmark datasets in computer vision, SQuAD rajpurkar2016squad in NLP, and RoboNet dasari2019robonet in robot learning. Reinforcement learning (RL) methods, in contrast, struggle to scale to many real-world applications, e.g., autonomous driving yu2018bdd100k and healthcare gottesman2019guidelines, because they rely on costly online trial-and-error. However, pre-recorded datasets in domains like these can be large and diverse. Hence, designing RL algorithms that can learn from those diverse, static datasets would both enable more practical RL training in the real world and lead to more effective generalization.
While off-policy RL algorithms lillicrap2015continuous; haarnoja2018soft; fujimoto2018addressing can in principle utilize previously collected datasets, they perform poorly without online data collection. These failures are generally caused by large extrapolation error when the Q-function is evaluated on out-of-distribution actions fujimoto2018off; kumar2019stabilizing. Without online interaction, these errors can lead to unstable learning and divergence. Offline reinforcement learning methods present an alternative direction. These methods learn from large, diverse offline datasets and have the potential to generalize broadly, since diverse data is practical to collect in one batch and subsequently reuse. To do so, a wide range of offline RL methods propose to mitigate bootstrapped error by constraining the learned policy to the behavior policy induced by the dataset fujimoto2018off; kumar2019stabilizing; wu2019behavior; jaques2019way; nachum2019algaedice; peng2019advantage; siegel2020keep. While these methods achieve reasonable performance in some settings, their learning is limited to behaviors within the data manifold. Specifically, these methods estimate error with respect to out-of-distribution actions, but only consider states that lie within the offline dataset and do not consider those that are out-of-distribution. We argue that it is important for an offline RL algorithm to be equipped with the ability to leave the data support to learn a better policy, for two reasons: (1) the provided batch dataset is usually sub-optimal in terms of both the states and actions covered by the dataset, and (2) the target task can be different from the tasks performed in the batch data for various reasons, e.g., because data is not available or hard to collect for the target task. Hence, the central question that this work is trying to answer is: can we develop an offline RL algorithm that generalizes beyond the state and action support of the offline data?
To approach this question, we first hypothesize that model-based RL methods sutton1991dyna; deisenroth2011pilco; levine2013guided; kumar2016optimal; janner2019trust; luo2018algorithmic
make a natural choice for enabling generalization, for a number of reasons. First, model-based RL algorithms effectively receive more supervision, since the model is trained on every transition, even in sparse-reward settings. Second, they are trained with supervised learning, which provides more stable and less noisy gradients than bootstrapping. Lastly, uncertainty estimation techniques, such as bootstrap ensembles, are well developed for supervised learning methods lakshminarayanan2017simple; kuleshov2018accurate; snoek2019can and are known to perform poorly for value-based RL methods wu2019behavior. All of these attributes have the potential to improve or control generalization. As a proof-of-concept experiment, we evaluate two state-of-the-art off-policy model-based and model-free algorithms, MBPO janner2019trust and SAC haarnoja2018soft, in Figure 1. Although neither method is designed for the batch setting, we find that the model-based method and its variant without ensembles show surprisingly large gains. This finding corroborates our hypothesis, suggesting that model-based methods are particularly well-suited for the batch setting, motivating their use in this paper.
Despite these promising preliminary results, we expect significant headroom for improvement. In particular, because offline model-based algorithms cannot improve the dynamics model using additional experience, we expect that such algorithms require a careful and calculated use of the model in regions outside of the data support, to achieve top performance. Quantifying the risk imposed by imperfect dynamics and appropriately trading off that risk with the return is a key ingredient towards building a strong offline model-based RL algorithm. To do so, we modify MBPO to incorporate a reward penalty based on an estimate of the model error. Crucially, this estimate is model-dependent, and does not necessarily penalize all out-of-distribution states and actions equally, but rather prescribes penalties based on the estimated magnitude of model error. Further, this estimation is done both on states and actions, allowing generalization to both, in contrast to model-free approaches that only reason about uncertainty with respect to actions.
The primary contribution of this work is an offline model-based RL algorithm that optimizes a policy in a penalized model MDP, where the reward function is penalized by an estimate of the model’s error. Under this new MDP, we theoretically show that we maximize a lower bound of the return in the true MDP, and find the optimal trade-off between the return and the risk. Based on our analysis, we develop a practical method that estimates model error using the predicted variance of a Lipschitz-regularized model, uses this uncertainty estimate as a reward penalty, and trains a policy using MBPO in this uncertainty-penalized MDP. We empirically compare this approach, model-based offline policy optimization (MOPO), to both MBPO and existing state-of-the-art model-free offline RL algorithms. Our results suggest that MOPO substantially outperforms these prior methods on the offline RL benchmark D4RL fu2020d4rl as well as on offline RL problems where the agent must generalize to out-of-distribution states in order to succeed.
Reinforcement learning algorithms are well-known for their ability to acquire behaviors through online trial-and-error in the environment barto1983neuronlike; sutton1998reinforcement. However, such online data collection can incur high sample complexity mnih2016asynchronous; schulman2015trust; schulman2017proximal, limit generalization to unseen random initializations cobbe2018quantifying; zhang2018dissection; bengio2020interference, and pose risks in safety-critical settings thomas2015safe. These requirements often make real-world applications of RL less practical. To overcome some of the sample efficiency challenges, we study the batch offline RL setting lange2012batch. While many off-policy RL algorithms precup2001off; degris2012off; jiang2015doubly; munos2016safe; lillicrap2015continuous; haarnoja2018soft; fujimoto2018addressing; gu2016q; gu2017interpolated can in principle be applied to a batch offline setting, they perform poorly in practice fujimoto2018off due to poor extrapolation to out-of-distribution actions.
Model-free Offline RL. Many model-free batch RL methods are designed with two main ingredients: (1) constraining the learned policy to be closer to the behavioral policy either explicitly fujimoto2018off; kumar2019stabilizing; wu2019behavior; jaques2019way; nachum2019algaedice or implicitly peng2019advantage; siegel2020keep, and (2) applying uncertainty quantification techniques, such as ensembles, to stabilize Q-functions agarwal2019striving; kumar2019stabilizing; wu2019behavior. In contrast, our model-based method does not rely on constraining the policy to the behavioral distribution, allowing the policy to potentially benefit from taking actions outside of it. Furthermore, we utilize uncertainty quantification to quantify the risk of leaving the behavioral distribution and trade it off with the gains of exploring diverse states.
Model-based Online RL. Our approach builds upon the wealth of prior work on model-based online RL methods that model the dynamics with Gaussian processes deisenroth2011pilco, local linear models levine2013guided; kumar2016optimal, neural network function approximators draeger1995model; gal2016improving; depeweg2016learning, and neural video prediction models ebert2018visual; kaiser2019modelbased. Our work is orthogonal to the choice of model. While prior approaches have used these models to select actions using planning tamar2016value; finn2017deep; racaniere2017imagination; oh2017value; silver2017predictron, we choose to build upon Dyna-style approaches that optimize for a policy sutton1991dyna; sutton2012dyna; yao2009multi; kaiser2019modelbased; ha2018world; holland2018effect; luo2018algorithmic, specifically MBPO janner2019trust. Uncertainty quantification, a key ingredient to our approach, is critical to good performance in model-based RL both theoretically strehl2008analysis; zanettetighter; luo2018algorithmic and empirically deisenroth2011pilco; chua2018deep; nagabandi2019deep; kurutach2018model; clavera2018model, as well as in optimal control stengel1994optimal; banaszuk2011scalable; kim2013wiener. However, unlike these works, we develop and leverage uncertainty estimates that particularly suit the offline setting.
Concurrent work by kidambi2020morel also develops an offline model-based RL algorithm, MOReL. Unlike MOReL, which constructs terminating states based on a hard threshold on uncertainty, MOPO uses a soft reward penalty to incorporate uncertainty. In principle, a potential benefit of a soft penalty is that the policy is allowed to take a few risky actions and then return to the confident area near the behavioral distribution without being terminated. Moreover, while kidambi2020morel compares to model-free approaches, we make the further observation that even a vanilla model-based RL method outperforms model-free ones in the offline setting, opening interesting questions for future investigation. Finally, we evaluate our approach on both standard benchmarks fu2020d4rl and domains that require out-of-distribution generalization, achieving positive results in both.
We consider the standard Markov decision process (MDP) $M = (\mathcal{S}, \mathcal{A}, T, r, \mu_0, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ denote the state space and action space respectively, $T(s' \mid s, a)$ the transition dynamics, $r(s, a)$ the reward function, $\mu_0$ the initial state distribution, and $\gamma \in (0, 1)$ the discount factor. The goal in RL is to optimize a policy $\pi(a \mid s)$ that maximizes the expected discounted return $\eta_M(\pi) := \mathbb{E}_{\pi, T, \mu_0}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$. The value function $V^{\pi}_{M}(s) := \mathbb{E}_{\pi, T}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s\right]$ gives the expected discounted return under $\pi$ when starting from state $s$.
In the offline RL problem, the algorithm only has access to a static dataset $\mathcal{D}_{\text{env}} = \{(s, a, r, s')\}$ collected by one or a mixture of behavior policies $\pi^B$, and cannot interact further with the environment. We refer to the distribution from which $\mathcal{D}_{\text{env}}$ was sampled as the behavioral distribution.
We also introduce the following notation for the derivation in Section 4. In the model-based approach we will have a dynamics model $\widehat{T}$ estimated from the transitions in $\mathcal{D}_{\text{env}}$. This estimated dynamics defines a model MDP $\widehat{M} = (\mathcal{S}, \mathcal{A}, \widehat{T}, r, \mu_0, \gamma)$. Let $P^{\pi}_{\widehat{T}, t}(s)$ denote the probability of being in state $s$ at time step $t$ if actions are sampled according to $\pi$ and transitions according to $\widehat{T}$. Let $\rho^{\pi}_{\widehat{T}}$ be the discounted state distribution of policy $\pi$ under dynamics $\widehat{T}$: $\rho^{\pi}_{\widehat{T}}(s) := \sum_{t=0}^{\infty} \gamma^t P^{\pi}_{\widehat{T}, t}(s)$. We also define (abusing notation) the discounted state-action distribution $\rho^{\pi}_{\widehat{T}}(s, a) := \rho^{\pi}_{\widehat{T}}(s)\, \pi(a \mid s)$. Note that $\eta_{\widehat{M}}(\pi) = \mathbb{E}_{(s, a) \sim \rho^{\pi}_{\widehat{T}}}[r(s, a)]$.
We now summarize model-based policy optimization (MBPO) janner2019trust, which we build on in this work. MBPO learns a model $\widehat{T}_{\theta}(s' \mid s, a)$ of the transition distribution, parametrized by $\theta$, via supervised learning on the behavioral data $\mathcal{D}_{\text{env}}$. MBPO also learns a model of the reward function in the same manner. During training, MBPO performs $h$-step rollouts using $\widehat{T}_{\theta}$ starting from states $s \in \mathcal{D}_{\text{env}}$, adds the generated data to a separate replay buffer $\mathcal{D}_{\text{model}}$, and finally updates the policy using data sampled from $\mathcal{D}_{\text{env}} \cup \mathcal{D}_{\text{model}}$. When applied in an online setting, MBPO iteratively collects samples from the environment and uses them to further improve both the model and the policy. We omit this step in the offline setting considered in this paper. In our experiments in Section 5.3 and Table 1, we observe that MBPO performs surprisingly well on the offline RL problem compared to model-free methods. In the next section, we derive MOPO, which builds upon MBPO to further improve performance.
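The offline MBPO loop described above — branched $h$-step model rollouts starting from batch states, accumulated into a model buffer — can be sketched as follows. The model, policy, and dimensions here are random stand-ins for illustration, not the paper's trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, H = 3, 2, 5  # H is the h-step rollout horizon

def model_step(state, action):
    """Stand-in for the learned dynamics/reward model (T_hat, r_hat).
    Here a noisy linear map; in MBPO this is a trained neural network."""
    next_state = state + 0.1 * action.sum() + 0.01 * rng.standard_normal(STATE_DIM)
    reward = -np.linalg.norm(next_state)
    return next_state, reward

def policy(state):
    """Stand-in for the current policy pi (e.g. a SAC actor)."""
    return rng.uniform(-1.0, 1.0, size=ACTION_DIM)

# D_env: states drawn from the offline batch; D_model: model-generated data.
d_env_states = [rng.standard_normal(STATE_DIM) for _ in range(10)]
d_model = []

# Branched rollouts: start from batch states, roll the model for H steps.
for start in d_env_states:
    s = start
    for _ in range(H):
        a = policy(s)
        s_next, r = model_step(s, a)
        d_model.append((s, a, r, s_next))
        s = s_next

# The policy would then be updated with an off-the-shelf RL algorithm
# (SAC in MBPO) on samples drawn from D_env ∪ D_model.
print(len(d_model))  # 10 starting states × 5 steps = 50 transitions
```

In the offline setting the loop above runs against a fixed $\mathcal{D}_{\text{env}}$; no new environment transitions are ever added.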
Unlike model-free methods, our goal is to design an offline model-based reinforcement learning algorithm that can take actions that are not strictly within the support of the behavioral distribution. Using a model gives us the potential to do so. However, models will become increasingly inaccurate further from the behavioral distribution, and vanilla model-based policy optimization algorithms may exploit these regions where the model is inaccurate. This concern is especially important in the offline setting, where mistakes in the dynamics will not be corrected with additional data collection.
For the algorithm to perform reliably, it is crucial to balance return and risk: (1) the potential gain in performance by escaping the behavioral distribution and finding a better policy, and (2) the risk of overfitting to the errors of the dynamics in regions far away from the behavioral distribution. To achieve the optimal balance, we first bound the return from below by the return of a constructed model MDP penalized by the uncertainty of the dynamics (Section 4.1). Then we maximize this conservative estimate of the return with an off-the-shelf reinforcement learning algorithm, which gives MOPO, a generic model-based offline RL algorithm (Section 4.2). We discuss important practical implementation details in Section 4.3.
Our key idea is to build a lower bound for the return of a policy $\pi$ under the true dynamics, i.e. $\eta_M(\pi)$, and then maximize the lower bound over $\pi$. A natural estimator for the true return is $\eta_{\widehat{M}}(\pi)$, the return under the estimated dynamics. The error of this estimator depends, potentially in a complex fashion, on the error of $\widehat{T}$, which may compound over time. In this subsection, we characterize how the error of $\widehat{T}$ influences the uncertainty of the total return. We begin by stating a lemma (adapted from luo2018algorithmic) that gives a precise relationship between the performance of a policy under dynamics $T$ and dynamics $\widehat{T}$. (All proofs are given in Appendix B.)
Let $M$ and $\widehat{M}$ be two MDPs with the same reward function $r$, but different dynamics $T$ and $\widehat{T}$ respectively. Let $G^{\pi}_{\widehat{M}}(s, a) := \mathbb{E}_{s' \sim \widehat{T}(s, a)}\left[V^{\pi}_{M}(s')\right] - \mathbb{E}_{s' \sim T(s, a)}\left[V^{\pi}_{M}(s')\right]$. Then,

$$\eta_{\widehat{M}}(\pi) - \eta_{M}(\pi) = \gamma \, \mathbb{E}_{(s, a) \sim \rho^{\pi}_{\widehat{T}}}\left[G^{\pi}_{\widehat{M}}(s, a)\right]. \quad (1)$$

In other words,

$$\eta_{M}(\pi) = \mathbb{E}_{(s, a) \sim \rho^{\pi}_{\widehat{T}}}\left[r(s, a) - \gamma G^{\pi}_{\widehat{M}}(s, a)\right]. \quad (2)$$
Here and throughout the paper, we view $T$ as the real dynamics and $\widehat{T}$ as the learned dynamics. We observe that the quantity $G^{\pi}_{\widehat{M}}(s, a)$ plays a key role linking the estimation error of the dynamics and the estimation error of the return. By definition, $G^{\pi}_{\widehat{M}}(s, a)$ measures the difference between $\widehat{T}(s, a)$ and $T(s, a)$ under the test function $V^{\pi}_{M}$ — indeed, if $\widehat{T} = T$, then $G^{\pi}_{\widehat{M}} \equiv 0$. By equation (1), it governs the differences between the performances of $\pi$ in the two MDPs. If we could estimate it or bound it from above, then we could use the RHS of (1) as an upper bound for the estimation error of $\eta_{\widehat{M}}(\pi)$. Moreover, equation (2) suggests that a policy that obtains high reward in the estimated MDP while also minimizing $G^{\pi}_{\widehat{M}}$ will obtain high reward in the real MDP.
However, computing $G^{\pi}_{\widehat{M}}$ remains elusive because it depends on the unknown function $V^{\pi}_{M}$. Leveraging properties of $V^{\pi}_{M}$, we will replace $G^{\pi}_{\widehat{M}}$ by an upper bound that depends solely on the error of the dynamics $\widehat{T}$. We first note that if $\mathcal{F}$ is a set of functions mapping $\mathcal{S}$ to $\mathbb{R}$ that contains $V^{\pi}_{M}$, then

$$\left|G^{\pi}_{\widehat{M}}(s, a)\right| \le \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{s' \sim \widehat{T}(s, a)}[f(s')] - \mathbb{E}_{s' \sim T(s, a)}[f(s')] \right| = d_{\mathcal{F}}\big(\widehat{T}(s, a), T(s, a)\big), \quad (3)$$
where $d_{\mathcal{F}}$ is the integral probability metric (IPM) muller1997integral defined by $d_{\mathcal{F}}(P, Q) := \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)] \right|$. IPMs are quite general and contain several other distance measures as special cases sriperumbudur2009integral. Depending on what we are willing to assume about $V^{\pi}_{M}$, there are multiple options to bound $G^{\pi}_{\widehat{M}}$ by some notion of error of $\widehat{T}$, discussed in greater detail in Appendix A:
(i) If $\mathcal{F} = \{f : \|f\|_{\infty} \le 1\}$, then $d_{\mathcal{F}}$ is the total variation distance. Thus, if we assume that the reward function is bounded such that $|r(s, a)| \le r_{\max}$, we have $\|V^{\pi}_{M}\|_{\infty} \le \frac{r_{\max}}{1 - \gamma}$, and hence

$$\left|G^{\pi}_{\widehat{M}}(s, a)\right| \le \frac{r_{\max}}{1 - \gamma}\, D_{\mathrm{TV}}\big(\widehat{T}(s, a), T(s, a)\big). \quad (4)$$
(ii) If $\mathcal{F}$ is the set of 1-Lipschitz functions w.r.t. some distance metric, then $d_{\mathcal{F}}$ is the 1-Wasserstein distance w.r.t. the same metric. Thus, if we assume that $V^{\pi}_{M}$ is $L_v$-Lipschitz with respect to a norm $\|\cdot\|$, it follows that

$$\left|G^{\pi}_{\widehat{M}}(s, a)\right| \le L_v \, W_1\big(\widehat{T}(s, a), T(s, a)\big). \quad (5)$$
Note that when $\widehat{T}$ and $T$ are both deterministic, then $W_1\big(\widehat{T}(s, a), T(s, a)\big) = \|\widehat{T}(s, a) - T(s, a)\|$ (here $\widehat{T}(s, a)$ denotes the deterministic output of the model $\widehat{T}$).
Approach (ii) has the advantage that it incorporates the geometry of the state space, but at the cost of an additional assumption which is generally impossible to verify in our setting. The assumption in (i), on the other hand, is extremely mild and typically holds in practice. Therefore we will prefer (i) unless we have some prior knowledge about the MDP. We summarize the assumptions and the inequalities in the options above as follows.
Assume a scalar $c$ and a function class $\mathcal{F}$ such that $V^{\pi}_{M} \in c\mathcal{F} = \{cf : f \in \mathcal{F}\}$ for all $\pi$.
Concretely, option (i) above corresponds to $c = \frac{r_{\max}}{1 - \gamma}$ and $\mathcal{F} = \{f : \|f\|_{\infty} \le 1\}$, and option (ii) corresponds to $c = L_v$ and $\mathcal{F} = \{f : f \text{ is 1-Lipschitz}\}$. We will analyze our framework under the assumption that we have access to an oracle uncertainty quantification module $u(s, a)$ that provides an upper bound on the error of the model, i.e., an admissible error estimator satisfying $u(s, a) \ge d_{\mathcal{F}}\big(\widehat{T}(s, a), T(s, a)\big)$ for all $(s, a)$. In our implementation, we will estimate the error of the dynamics by heuristics (see Sections 4.3 and 5.3).
Given an admissible error estimator $u$, we define the uncertainty-penalized reward $\widetilde{r}(s, a) := r(s, a) - \lambda u(s, a)$, where $\lambda := \gamma c$, and the uncertainty-penalized MDP $\widetilde{M} := (\mathcal{S}, \mathcal{A}, \widehat{T}, \widetilde{r}, \mu_0, \gamma)$. We observe that $\widetilde{M}$ is conservative in that the return under it bounds from below the true return: $\eta_{\widetilde{M}}(\pi) \le \eta_{M}(\pi)$ for every policy $\pi$.
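As a minimal sketch, the uncertainty penalty is just an elementwise subtraction from the reward; the coefficient $c$ and the error-estimator outputs below are hypothetical placeholders:

```python
import numpy as np

GAMMA = 0.99
C = 1.0               # scale c from the assumption V ∈ cF (hypothetical value)
LAMBDA = GAMMA * C    # penalty coefficient λ = γc prescribed by the theory

def penalized_reward(reward, uncertainty, lam=LAMBDA):
    """Uncertainty-penalized reward r~(s, a) = r(s, a) - lam * u(s, a)."""
    return reward - lam * uncertainty

# Transitions with high estimated model error get their reward pushed down,
# so the policy is discouraged from exploiting regions the model gets wrong.
r = np.array([1.0, 1.0, 1.0])
u = np.array([0.0, 0.5, 2.0])   # error-estimator outputs for three (s, a) pairs
print(penalized_reward(r, u))   # the highest-uncertainty transition is penalized most
```

In practice $\lambda$ is tuned as a hyperparameter (Section 4.3) rather than set to $\gamma c$ exactly.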
Theoretical Guarantees for MOPO. We now theoretically analyze the algorithm by establishing the optimality of the learned policy among a family of policies. Let $\pi^{*}$ be the optimal policy on $M$ and $\pi^{B}$ be the policy that generates the batch data. Define $\epsilon_u(\pi)$ as

$$\epsilon_u(\pi) := \mathbb{E}_{(s, a) \sim \rho^{\pi}_{\widehat{T}}}\left[u(s, a)\right].$$
For notational simplicity, we omit the dependency on $\widehat{T}$ and write it as $\epsilon_u(\pi)$. We observe that $\epsilon_u(\pi)$ characterizes how erroneous the model is along trajectories induced by $\pi$. For example, consider the extreme case when $\pi = \pi^{B}$. Because $\widehat{T}$ is learned on the data generated from $\pi^{B}$, we expect $\widehat{T}$ to be relatively accurate for those $(s, a)$, and thus $u(s, a)$ tends to be small. Thus, we expect $\epsilon_u(\pi^{B})$ to be quite small. On the other end of the spectrum, when $\pi$ often visits states out of the batch data distribution in the real MDP, namely $\rho^{\pi}_{T}$ is different from $\rho^{\pi^{B}}_{T}$, we expect that $\rho^{\pi}_{\widehat{T}}$ is even more different from the batch data and therefore the error estimates $u(s, a)$ for those $(s, a)$ tend to be large. As a consequence, $\epsilon_u(\pi)$ will be large.
For $\delta \ge \delta_{\min} := \min_{\pi} \epsilon_u(\pi)$, let $\pi^{\delta}$ be the best policy among those incurring model error at most $\delta$:

$$\pi^{\delta} := \arg\max_{\pi : \epsilon_u(\pi) \le \delta} \eta_{M}(\pi).$$
The main theorem provides a performance guarantee on the policy $\widehat{\pi} := \arg\max_{\pi} \eta_{\widetilde{M}}(\pi)$ produced by MOPO:

$$\eta_{M}(\widehat{\pi}) \ge \sup_{\pi} \left\{ \eta_{M}(\pi) - 2\lambda \epsilon_u(\pi) \right\}, \quad (11)$$

and in particular, for all $\delta \ge \delta_{\min}$,

$$\eta_{M}(\widehat{\pi}) \ge \eta_{M}(\pi^{\delta}) - 2\lambda\delta. \quad (12)$$
Interpretation: One consequence of (11) is that $\eta_{M}(\widehat{\pi}) \ge \eta_{M}(\pi^{B}) - 2\lambda \epsilon_u(\pi^{B})$. This suggests that $\widehat{\pi}$ should perform at least nearly as well as the behavior policy $\pi^{B}$, because, as argued before, $\epsilon_u(\pi^{B})$ is expected to be small.
Equation (12) tells us that the learned policy $\widehat{\pi}$ can be as good as any policy $\pi$ with $\epsilon_u(\pi) \le \delta$, or in other words, any policy that visits states with sufficiently small uncertainty as measured by $u$. A special case of note is when $\delta = \epsilon_u(\pi^{*})$: we have $\eta_{M}(\widehat{\pi}) \ge \eta_{M}(\pi^{*}) - 2\lambda \epsilon_u(\pi^{*})$, which suggests that the suboptimality gap between the learned policy $\widehat{\pi}$ and the optimal policy $\pi^{*}$ depends on the error $\epsilon_u(\pi^{*})$. The closer $\pi^{*}$ is to the batch data, the more likely the uncertainty $u(s, a)$ will be small on those points $(s, a) \sim \rho^{\pi^{*}}_{\widehat{T}}$. On the other hand, the smaller the uncertainty error of the dynamics is, the smaller $\epsilon_u(\pi^{*})$ is. In the extreme case when $\epsilon_u(\pi^{*}) = 0$ (perfect dynamics and uncertainty quantification), we recover the optimal policy $\pi^{*}$.
Second, by varying the choice of $\delta$ to maximize the RHS of Equation (12), we trade off the risk and the return. As $\delta$ increases, the return $\eta_{M}(\pi^{\delta})$ increases also, since $\pi^{\delta}$ can be selected from a larger set of policies. However, the risk factor $2\lambda\delta$ increases also. The optimal choice of $\delta$ is achieved when the risk balances the gain from exploring policies far from the behavioral distribution. The exact optimal choice of $\delta$ may depend on the particular problem. We note that $\delta$ is only used in the analysis; our algorithm automatically achieves the optimal balance because Equation (12) holds for any $\delta$.
Now we describe a practical implementation of MOPO motivated by the analysis above. Our practical method is summarized in Algorithm 2, and largely follows MBPO with a few key exceptions.
Following MBPO, we model the dynamics using a neural network that outputs a Gaussian distribution over the next state. The mean and covariance matrix are parameterized by $\mu_{\theta}$ and $\Sigma_{\phi}$ respectively: $\widehat{T}_{\theta, \phi}(s_{t+1} \mid s_t, a_t) = \mathcal{N}\big(\mu_{\theta}(s_t, a_t), \Sigma_{\phi}(s_t, a_t)\big)$. We learn an ensemble of $N$ dynamics models $\{\widehat{T}^{i}\}_{i=1}^{N}$, with each model trained independently via maximum likelihood.
Uncertainty quantification. A perfect admissible error estimator is not available in practice, and therefore we consider heuristic surrogates, which turn out to be sufficiently accurate and effective for our problem. (Designing prediction confidence intervals with strong theoretical guarantees is challenging and beyond the scope of this work, which focuses on using uncertainty quantification properly in offline RL.) In most of our experiments, we use $u(s, a) = \max_{i=1, \dots, N} \|\Sigma^{i}_{\phi}(s, a)\|_{F}$, the maximum standard deviation of the learned models in the ensemble, as the estimator for the error of the prediction model and the reward penalty. Indeed, the learned variance of a Gaussian probabilistic model can theoretically recover the true aleatoric uncertainty when the model is well-specified. Empirically, the learned variance often captures both aleatoric and epistemic uncertainty, even for learning deterministic functions (where only epistemic uncertainty exists). We use the maximum of the learned variances in the ensemble to be more conservative and robust.
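This penalty can be sketched as follows, assuming (as is common for learned Gaussian dynamics models) diagonal covariances whose predicted per-dimension standard deviations are stacked into one array; the numbers are purely illustrative:

```python
import numpy as np

def ensemble_penalty(sigmas):
    """u(s, a) = max over ensemble members i of ||Sigma_i(s, a)||_F.

    sigmas: array of shape (N, batch, state_dim) holding each member's
    predicted per-dimension standard deviations (diagonal covariance assumed).
    Returns an array of shape (batch,)."""
    frob = np.linalg.norm(sigmas, axis=-1)  # (N, batch): Frobenius norm per member
    return frob.max(axis=0)                 # (batch,): most pessimistic member

# Three ensemble members, two (s, a) inputs, 2-D state.
sigmas = np.array([
    [[0.1, 0.1], [0.5, 0.5]],
    [[0.2, 0.1], [1.0, 1.0]],
    [[0.1, 0.2], [0.4, 0.3]],
])
u = ensemble_penalty(sigmas)
# The second input lies farther from the data: the members inflate their
# variances there, so the penalty is larger for it.
print(u)
```

Taking the maximum over members (rather than the mean) makes the estimator deliberately conservative: one worried model is enough to trigger a penalty.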
Lipschitz regularization. An important ingredient of our algorithm is to learn the model with a Lipschitz regularization on the mean function $\mu_{\theta}$. Following the work of miyato2018spectral, we use spectral normalization to regularize the weights of the mean network $\mu_{\theta}$. Every weight matrix $W$ in the mean network is normalized as $W_{\mathrm{SN}} = W / \sigma(W)$, where $\sigma(W)$ denotes the largest singular value of $W$. Our motivation to apply Lipschitz normalization is twofold. First, as shown in standard settings miyato2018spectral; wei2019data; wei2019improved, Lipschitz regularization may improve the in-distribution and out-of-distribution generalization performance of the learned dynamics. Second, when the true dynamics are not sufficiently Lipschitz in some region, the Lipschitz regularization will incur larger prediction error and uncertainty estimates there. Thus, non-Lipschitz regions will be heavily penalized and the policy will be encouraged to avoid them. Recall that Lipschitz assumptions on the model and value function are made in the theory, which suggests that avoiding non-Lipschitz regions may be generally beneficial. We show in Section 5.3 that Lipschitz regularization is helpful in various settings; hence, we suspect that there may be causes of the gains beyond the motivations above.
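Spectral normalization can be sketched with a few steps of power iteration to estimate the largest singular value $\sigma(W)$; this is a minimal NumPy version of the idea, not the exact implementation used in the paper (which normalizes the weights of a neural network during training):

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    """Return W / sigma(W), where sigma(W) is the largest singular value,
    estimated by power iteration as in spectral normalization
    (miyato2018spectral). The result has spectral norm approximately 1."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

W = np.array([[3.0, 0.0], [4.0, 5.0]])
W_sn = spectral_normalize(W)
# The normalized matrix's largest singular value is ~1.
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

Dividing every layer's weights by $\sigma(W)$ bounds each layer's Lipschitz constant by 1 (for 1-Lipschitz activations), which is what constrains the mean network overall.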
We treat the penalty coefficient $\lambda$ as a user-chosen hyperparameter. Since we do not have a true admissible error estimator, the penalty coefficient $\lambda = \gamma c$ prescribed by the theory may not be an optimal choice. The penalty should be larger if our heuristic underestimates the true error, and smaller if we substantially overestimate it.
In our experiments, we aim to study the following questions: (1) How does MOPO perform on standard offline RL benchmarks in comparison to prior state-of-the-art approaches? (2) Can MOPO solve tasks that require generalization to out-of-distribution behaviors? (3) How does each component in MOPO affect performance?
Question (2) is particularly relevant for scenarios in which we have logged interactions with the environment but want to use those data to optimize a policy for a different reward function. To study (2) and challenge methods further, we construct two additional continuous control tasks that demand out-of-distribution generalization, as described in Section 5.2. For more details on the experimental set-up and hyperparameters, see Appendix C.
We compare against several baselines, including the current state-of-the-art model-free offline RL algorithms. Bootstrapping error accumulation reduction (BEAR) aims to constrain the policy's actions to lie in the support of the behavioral distribution kumar2019stabilizing. This is implemented as a constraint on the average MMD gretton2007kernel between $\pi$ and a generative model that approximates $\pi^{B}$. Behavior-regularized actor critic (BRAC) is a family of algorithms that operate by penalizing the value function by some measure of discrepancy (KL divergence or MMD) between $\pi$ and $\pi^{B}$ wu2019behavior. BRAC-v uses this penalty both when updating the critic and when updating the actor, while BRAC-p uses this penalty only when updating the actor and does not explicitly penalize the critic.
To answer question (1), we evaluate our method on a large subset of datasets in the D4RL benchmark fu2020d4rl (https://sites.google.com/view/d4rl), including three environments (halfcheetah, hopper, and walker2d) and four dataset types (random, medium, mixed, medium-expert), yielding a total of 12 problem settings. The datasets in this benchmark have been generated as follows: random: roll out a randomly initialized policy for 1M steps; medium: partially train a policy using SAC, then roll it out for 1M steps; mixed: train a policy using SAC until a certain (environment-specific) performance threshold is reached, and take the replay buffer as the batch; medium-expert: combine 1M samples of rollouts from a fully-trained policy with another 1M samples of rollouts from a partially trained policy or a random policy.
Results are given in Table 1. Our method is the strongest by a significant margin on all the mixed datasets and most of the medium-expert datasets, while also achieving the best performance on all of the random datasets. Our model-based approach performs less well on the medium datasets. We hypothesize that the lack of action diversity in the medium datasets makes it more difficult to learn a model that generalizes well. Fortunately, this setting is one in which model-free methods can perform well, suggesting that model-based and model-free approaches are able to perform well in complementary settings.
To answer question (2), we construct two environments, halfcheetah-jump and ant-angle, where the agent must solve a task that is different from the purpose of the behavioral policy. The trajectories of the batch data in these datasets come from policies trained for the original dynamics and reward functions of HalfCheetah and Ant in OpenAI Gym brockman2016openai, which incentivize the cheetah and ant to move forward as fast as possible. Note that for HalfCheetah, we additionally cap the maximum velocity. Concretely, we train SAC for 1M steps and use the entire training replay buffer as the trajectories for the batch data. Then, we relabel these trajectories with new rewards that incentivize the cheetah to jump and the ant to run towards the top right corner at a 30 degree angle. Thus, to achieve good performance for the new reward functions, the policy needs to leave the observational distribution, as visualized in Figure 2. We include the exact forms of the new reward functions in Appendix C. In these environments, learning the correct behaviors requires leaving the support of the data distribution; optimizing solely within the data manifold will lead to sub-optimal policies.
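Reward relabeling of this kind can be sketched as follows. The jump reward here (torso height) is a hypothetical stand-in — the exact reward functions are given in Appendix C — but the mechanics of swapping logged rewards for a new task's rewards are the same:

```python
import numpy as np

def jump_reward(state, z_index=0):
    """Hypothetical relabeled reward for halfcheetah-jump: reward the
    torso height of the next state. Illustrative only; the paper's exact
    reward is in its Appendix C."""
    return float(state[z_index])

def relabel(transitions, reward_fn):
    """Replace logged rewards in an offline batch with a new task's rewards,
    keeping states, actions, and next states unchanged."""
    return [(s, a, reward_fn(s_next), s_next) for (s, a, _, s_next) in transitions]

# Toy batch of (state, action, old_reward, next_state) tuples.
batch = [(np.array([0.1]), 0.0, 1.0, np.array([0.3])),
         (np.array([0.3]), 0.0, 1.0, np.array([0.2]))]
new_batch = relabel(batch, jump_reward)
print([r for (_, _, r, _) in new_batch])  # rewards now reflect height, not speed
```

The relabeled batch is then fed to the offline algorithm unchanged, which is what makes the target task differ from the behavior policy's original objective.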
In Table 2, we show that MOPO significantly outperforms the state-of-the-art model-free approaches. In particular, model-free offline RL cannot outperform the best trajectory in the batch dataset, whereas MOPO exceeds the batch maximum by a significant margin. This validates that MOPO is able to generalize to out-of-distribution behaviors, while existing model-free methods are unable to solve these challenges. Note that vanilla MBPO performs much better than SAC in the two environments, consolidating our claim that vanilla model-based methods can attain better results than model-free methods in the offline setting, especially where out-of-distribution generalization is needed. The visualization in Figure 2 suggests that the policy learned by MOPO can indeed effectively solve the tasks by reaching states unseen in the batch data.
To answer question (3), we conduct a thorough ablation study on MOPO. The main goal of the ablation study is to understand how Lipschitz regularization and the choice of reward penalty affects performance. We denote no Lip as a method without using Lipschitz regularization, no ens. as a method without model ensembles, ens. penalty as a method that uses model ensemble disagreement as the reward penalty, no penalty as a method without reward penalty, and oracle uncertainty as a method using the true model prediction error as the reward penalty. Note that we include oracle uncertainty to indicate the upper bound of our approach.
The results of our study are shown in Table 3. Lipschitz regularization generally boosts performance on both the D4RL environment and the out-of-distribution environment. One exception is that in the halfcheetah-jump results, MOPO, no penalty performs poorly, which suggests that the reward penalty is important for out-of-distribution generalization. For different reward penalty types, reward penalties based on learned variance outperform those based on ensemble disagreement in 3 out of 4 settings. Both reward penalties achieve significantly better performance than no reward penalty, indicating that it is imperative to consider model uncertainty in batch model-based RL. Methods that use oracle uncertainty obtain slightly better performance than most of our methods; note that MOPO even attains the best result on halfcheetah-jump. Such results suggest that our uncertainty quantification on states is empirically successful, since there is only a small gap. We believe future work on improving uncertainty estimation may be able to bridge this gap further.
In general, we find that performance differences are much larger for halfcheetah-jump than the D4RL halfcheetah-mixed dataset, likely because halfcheetah-jump requires greater generalization and hence places more demands on the accuracy of the model and uncertainty estimate.
| Method | halfcheetah-mixed | halfcheetah-jump | Penalty type | Lipschitz reg. |
| --- | --- | --- | --- | --- |
| MOPO, no Lip | 6393.3 | 3912.3 | learned var | No |
| MOPO, ens. penalty | 6405.2 | 3763.2 | ensemble | Yes |
| MOPO, no Lip, ens. penalty | 6502.4 | 3239.4 | ensemble | No |
| MOPO, no penalty | 6409.1 | -980.8 | no penalty | Yes |
| MBPO, no ens. | 2247.2 | -68.7 | no penalty | No |
| MOPO, oracle uncertainty | 7092.1 | 3948.8 | oracle | Yes |
| MOPO, no Lip, oracle uncertainty | 6837.3 | 3917.6 | oracle | No |
In this paper, we studied model-based offline RL algorithms. We started with the observation that, in the offline setting, existing model-based methods significantly outperform vanilla model-free methods, suggesting that model-based methods are more resilient to the overestimation and overfitting issues that plague off-policy model-free RL algorithms. This phenomenon implies that model-based RL is able to generalize to states outside of the data support, and that such generalization is conducive to offline RL. However, online and offline algorithms must handle out-of-distribution states differently. Model error on out-of-distribution states, which often drives exploration and corrective feedback in the online setting [kumar2020discor], can be detrimental when interaction is not allowed. Using theoretical principles, we develop an algorithm, model-based offline policy optimization (MOPO), which optimizes the policy on an MDP that penalizes states with high model uncertainty. MOPO trades off the risk of making mistakes against the benefit of diverse exploration beyond the behavioral distribution. In our experiments, MOPO outperforms state-of-the-art offline RL methods on both standard benchmarks [fu2020d4rl] and out-of-distribution generalization environments.
Our work opens up a number of questions and directions for future work. First, an interesting avenue for future research is to incorporate the policy regularization ideas of BEAR and BRAC into the reward penalty framework to improve the performance of MOPO on narrow data distributions (such as the “medium” datasets in D4RL). Second, it is an interesting theoretical question to understand why model-based methods appear to be much better suited to the batch setting than model-free methods. Potential factors include richer supervision from the states (rather than the reward alone), more stable and less noisy supervised gradient updates, and easier uncertainty estimation. Our work suggests that uncertainty estimation plays an important role, particularly in settings that demand generalization. However, uncertainty estimation does not explain the entire difference, nor does it explain why model-free methods cannot also enjoy the benefits of uncertainty estimation. For domains where learning a model is very difficult due to complex dynamics, developing better model-free offline RL methods may be desirable or even imperative. Hence, it is crucial for future research to investigate how to bring model-free offline RL methods up to the level of performance of model-based methods, which will require a deeper understanding of where the generalization benefits come from.
We thank Michael Janner for help with MBPO and Aviral Kumar for setting up BEAR and D4RL. This work is partially supported by SDSI and SAIL at Stanford.
Let $\mathcal{X}$ be a measurable space. The integral probability metric associated with a class $\mathcal{F}$ of (measurable) real-valued functions on $\mathcal{X}$ is defined as
$$d_{\mathcal{F}}(P, Q) := \sup_{f \in \mathcal{F}} \left|\mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]\right|,$$
where $P$ and $Q$ are probability measures on $\mathcal{X}$. We note the following special cases:
If $\mathcal{F} = \{f : \|f\|_\infty \le 1\}$, then $d_{\mathcal{F}}$ is (twice) the total variation distance:
$$d_{\mathcal{F}}(P, Q) = 2\,D_{\mathrm{TV}}(P, Q).$$
If $\mathcal{F}$ is the set of 1-Lipschitz functions w.r.t. some cost function (metric) $c$ on $\mathcal{X}$, then $d_{\mathcal{F}}$ is the 1-Wasserstein distance w.r.t. the same metric:
$$d_{\mathcal{F}}(P, Q) = W_1(P, Q) = \inf_{\gamma \in \Gamma(P, Q)} \mathbb{E}_{(x, y) \sim \gamma}\left[c(x, y)\right],$$
where $\Gamma(P, Q)$ denotes the set of all couplings of $P$ and $Q$, i.e. joint distributions on $\mathcal{X} \times \mathcal{X}$ which have marginals $P$ and $Q$.
If $\mathcal{F} = \{f : \|f\|_{\mathcal{H}} \le 1\}$ where $\mathcal{H}$ is a reproducing kernel Hilbert space with kernel $k$, then $d_{\mathcal{F}}$ is the maximum mean discrepancy:
$$d_{\mathcal{F}}(P, Q) = \mathrm{MMD}(P, Q) = \|\mu_P - \mu_Q\|_{\mathcal{H}},$$
where $\mu_P = \mathbb{E}_{x \sim P}[k(x, \cdot)]$ and $\mu_Q = \mathbb{E}_{x \sim Q}[k(x, \cdot)]$.
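As a concrete illustration of the last special case, the squared MMD can be estimated directly from samples. The following is a minimal sketch (not from the paper) using an RBF kernel and the standard biased V-statistic estimator; the function names and the choice of `gamma` are ours.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise kernel matrix: k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased (V-statistic) estimate of MMD^2 between samples x ~ P and
    y ~ Q: mean k(x,x) - 2 mean k(x,y) + mean k(y,y), which equals the
    squared RKHS distance between the empirical mean embeddings."""
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

Samples from the same distribution give an estimate near zero, while well-separated distributions give a clearly positive value.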
We provide a proof of Lemma 4.1 for completeness. The proof is essentially the same as that of [luo2018algorithmic, Lemma 4.3].
Let $W_j$ be the expected return when executing $\pi$ on $\widehat{M}$ for the first $j$ steps, then switching to $M$ for the remainder. That is,
$$W_j = \mathbb{E}_{\substack{a_t \sim \pi \\ t < j:\; s_{t+1} \sim \widehat{T} \\ t \ge j:\; s_{t+1} \sim T}}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right].$$
Note that $W_0 = \eta_M(\pi)$ and $W_\infty = \eta_{\widehat{M}}(\pi)$, so
$$\eta_{\widehat{M}}(\pi) - \eta_M(\pi) = \sum_{j=0}^{\infty} \left(W_{j+1} - W_j\right).$$
Writing both terms using the value function $V^\pi_M$,
$$W_{j+1} = R_j + \mathbb{E}_{(s_j, a_j) \sim \pi, \widehat{T}}\left[\gamma^{j+1}\,\mathbb{E}_{s' \sim \widehat{T}(s_j, a_j)}\left[V^\pi_M(s')\right]\right],$$
$$W_j = R_j + \mathbb{E}_{(s_j, a_j) \sim \pi, \widehat{T}}\left[\gamma^{j+1}\,\mathbb{E}_{s' \sim T(s_j, a_j)}\left[V^\pi_M(s')\right]\right],$$
where $R_j := \mathbb{E}_{(s_t, a_t) \sim \pi, \widehat{T}}\left[\sum_{t=0}^{j} \gamma^t r(s_t, a_t)\right]$ is the expected return of the first time steps, which are taken with respect to $\widehat{T}$. Then
$$W_{j+1} - W_j = \gamma^{j+1}\,\mathbb{E}_{(s_j, a_j) \sim \pi, \widehat{T}}\left[\mathbb{E}_{s' \sim \widehat{T}(s_j, a_j)}\left[V^\pi_M(s')\right] - \mathbb{E}_{s' \sim T(s_j, a_j)}\left[V^\pi_M(s')\right]\right],$$
and summing over $j$ yields
$$\eta_{\widehat{M}}(\pi) - \eta_M(\pi) = \gamma\,\mathbb{E}_{(s,a) \sim \rho^\pi_{\widehat{T}}}\left[\mathbb{E}_{s' \sim \widehat{T}(s,a)}\left[V^\pi_M(s')\right] - \mathbb{E}_{s' \sim T(s,a)}\left[V^\pi_M(s')\right]\right],$$
as claimed. ∎
Now we prove Theorem 4.2.
For halfcheetah-jump, the reward function that we use to train the behavioral policy is where denotes the velocity along the x-axis. After collecting the offline dataset, we relabel the reward function to where denotes the z-position of the half-cheetah and init z denotes the initial z-position.
For ant-angle, the reward function that we use to train the behavioral policy is . After collecting the offline dataset, we relabel the reward function to where , denote the velocity along the -axis respectively.
For both out-of-distribution environments, instead of sampling actions from the learned policy during the model rollout (line 10 in Algorithm 2), we sample random actions from , which achieves better performance empirically. One potential reason is that using random actions during model rollouts leads to better exploration of the OOD states.
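The random-action rollout variant described above can be sketched as follows. This is an illustrative reimplementation, not the paper's code: `model_step` is a hypothetical stand-in for the learned dynamics model mapping a batch of states and actions to next states and rewards, and the action range $[-1, 1]$ is an assumption.

```python
import numpy as np

def random_action_rollout(model_step, start_states, horizon, action_dim, rng):
    """Branched model rollout where actions are sampled uniformly from
    [-1, 1]^action_dim instead of from the learned policy, the variant
    found empirically to better explore OOD states. Returns a list of
    (s, a, r, s') transition batches, one per rollout step."""
    transitions = []
    s = start_states
    for _ in range(horizon):
        a = rng.uniform(-1.0, 1.0, size=(len(s), action_dim))
        s_next, r = model_step(s, a)
        transitions.append((s, a, r, s_next))
        s = s_next
    return transitions
```

Swapping the uniform sampler for `policy(s)` recovers the standard rollout of Algorithm 2.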
Here we list the hyperparameters used in the experiments.
For the D4RL datasets, the rollout length and penalty coefficient are given in Table 4. We search over and report the best final performance, averaged over 3 seeds. The only exceptions are halfcheetah-random and walker2d-medium-expert, where other penalty coefficients were found to work better.
For the out-of-generalization tasks, we use rollout length for halfcheetah-jump and for ant-angle, and penalty coefficient for halfcheetah-jump and for ant-angle.
When sampling from , we use a rollout batch size of
Across all domains, we train an ensemble of models and pick the best models based on their prediction error on a hold-out set of transitions from the offline dataset. Each model in the ensemble is parameterized as a 4-layer feedforward neural network with hidden units, and after the last hidden layer the model outputs the mean and variance using a two-head architecture. Spectral normalization [miyato2018spectral] is applied to all layers except the head that outputs the model variance.
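The elite-model selection step above can be sketched as follows. This is a simplified illustration, not the paper's implementation: `predict` is a hypothetical hook mapping a model and a batch of state-action inputs to predicted next-state means, and ranking is by hold-out mean-squared error.

```python
import numpy as np

def select_elite_models(models, predict, holdout_sa, holdout_snext, k):
    """Rank ensemble members by MSE on a hold-out set of transitions
    and keep the k best (train a larger ensemble, keep an elite
    subset). Returns the kept models and their original indices."""
    errors = [np.mean((predict(m, holdout_sa) - holdout_snext) ** 2)
              for m in models]
    elite = np.argsort(errors)[:k]
    return [models[i] for i in elite], elite
```

At rollout time, predictions are then drawn only from the elite subset.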
For the SAC updates, we sample a batch of transitions, of them from and the rest of them from .