MOPO: Model-based Offline Policy Optimization

05/27/2020 · by Tianhe Yu, et al.

Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a batch of previously collected data. This problem setting is compelling, because it offers the promise of utilizing large, diverse, previously collected datasets to acquire policies without any costly or dangerous active exploration, but it is also exceptionally difficult, due to the distributional shift between the offline training data and the learned policy. While there has been significant progress in model-free offline RL, the most successful prior methods constrain the policy to the support of the data, precluding generalization to new states. In this paper, we observe that an existing model-based RL algorithm on its own already produces significant gains in the offline setting, as compared to model-free approaches, despite not being designed for this setting. However, although many standard model-based RL methods already estimate the uncertainty of their model, they do not by themselves provide a mechanism to avoid the issues associated with distributional shift in the offline setting. We therefore propose to modify existing model-based RL methods to address these issues by casting offline model-based RL into a penalized MDP framework. We theoretically show that, by using this penalized MDP, we are maximizing a lower bound of the return in the true MDP. Based on our theoretical results, we propose a new model-based offline RL algorithm that applies the variance of a Lipschitz-regularized model as a penalty to the reward function. We find that this algorithm outperforms both standard model-based RL methods and existing state-of-the-art model-free offline RL approaches on existing offline RL benchmarks, as well as two challenging continuous control tasks that require generalizing from data collected for a different task.
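The core idea of the penalized MDP, a reward function discounted by a model-uncertainty estimate, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact implementation: the ensemble shape, the `lam` coefficient, and the choice of uncertainty estimator (the largest predicted standard-deviation norm across an ensemble of probabilistic dynamics models) are illustrative assumptions.

```python
import numpy as np

def uncertainty(ensemble_stds):
    """u(s, a): an uncertainty heuristic taken as the maximum, over
    ensemble members, of the norm of the predicted next-state std-dev."""
    return np.max(np.linalg.norm(ensemble_stds, axis=-1))

def penalized_reward(reward, ensemble_stds, lam=1.0):
    """MOPO-style penalized reward: r~(s, a) = r(s, a) - lam * u(s, a).
    Policy optimization then proceeds in the model using r~ instead of r,
    which (per the paper's analysis) maximizes a lower bound on the true
    return."""
    return reward - lam * uncertainty(ensemble_stds)

# Example: 5 ensemble members, each predicting a 3-dim next-state std-dev.
rng = np.random.default_rng(0)
stds = rng.uniform(0.0, 0.5, size=(5, 3))
r_tilde = penalized_reward(1.0, stds, lam=1.0)
assert r_tilde <= 1.0  # the penalty can only reduce the reward
```

Transitions generated in regions where the ensemble members disagree receive a large penalty, discouraging the learned policy from exploiting model errors far from the data.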

Related research

05/12/2020 · MOReL: Model-Based Offline Reinforcement Learning
In offline reinforcement learning (RL), the goal is to learn a successfu...

09/05/2023 · Model-based Offline Policy Optimization with Adversarial Network
Model-based offline reinforcement learning (RL), which builds a supervis...

10/08/2021 · Revisiting Design Choices in Model-Based Offline Reinforcement Learning
Offline reinforcement learning enables agents to leverage large pre-coll...

11/27/2022 · Domain Generalization for Robust Model-Based Offline Reinforcement Learning
Existing offline reinforcement learning (RL) algorithms typically assume...

03/13/2022 · DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning
Offline reinforcement learning algorithms promise to be applicable in se...

01/07/2022 · Offline Reinforcement Learning for Road Traffic Control
Traffic signal control is an important problem in urban mobility with a ...

02/09/2023 · CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning
This work aims to tackle a major challenge in offline Inverse Reinforcem...
