Offline Meta-Reinforcement Learning with Online Self-Supervision

07/08/2021
by Vitchyr H. Pong, et al.

Meta-reinforcement learning (RL) can meta-train policies that adapt to new tasks with orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond online meta-RL or standard offline RL settings. Meta-RL learns an exploration strategy that collects data for adaptation, and it also meta-trains a policy that quickly adapts to data from a new task. Since this policy was meta-trained on a fixed, offline dataset, it might behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and thus induces distributional shift. We do not want to remove this distributional shift by simply adopting a conservative exploration strategy, because learning an exploration strategy is precisely what enables an agent to collect better data for faster adaptation. Instead, we propose a hybrid offline meta-RL algorithm that uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distributional shift. Because online collection requires no reward labels, this data can be much cheaper to collect. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks and find that additional unsupervised online data collection leads to a dramatic improvement in the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
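The two-phase scheme the abstract describes can be made concrete with a short sketch. The abstract does not specify how the unlabeled online rollouts are turned into training data, so the code below assumes one plausible form of self-supervision: a reward model fit on the offline reward labels annotates the online data. Everything here (RewardModel, meta_update, the random stand-in arrays) is hypothetical scaffolding for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT, TASKS = 4, 2, 3

# Offline dataset: per-task transitions, labeled once with rewards.
# Random arrays stand in for real logged trajectories.
offline = {
    t: dict(obs=rng.normal(size=(256, OBS)),
            act=rng.normal(size=(256, ACT)),
            rew=rng.normal(size=(256, 1)))
    for t in range(TASKS)
}

class RewardModel:
    """Linear reward regressor fit on the offline reward labels.
    Assumption: some learned predictor self-labels the online data."""
    def fit(self, x, r):
        self.w, *_ = np.linalg.lstsq(x, r, rcond=None)
    def predict(self, x):
        return x @ self.w

def meta_update(policy, batch):
    # Placeholder for one meta-training step; a real method would infer a
    # task context from the batch and update an actor-critic here.
    return policy + 1e-3 * float(np.mean(batch["rew"]))

# Phase 1: offline meta-training on reward-labeled data.
xs = np.concatenate([np.concatenate([d["obs"], d["act"]], axis=1)
                     for d in offline.values()])
rs = np.concatenate([d["rew"] for d in offline.values()])
reward_model = RewardModel()
reward_model.fit(xs, rs)

policy = 0.0
for data in offline.values():
    policy = meta_update(policy, data)

# Phase 2: unsupervised online collection. Rollouts come from the learned
# exploration policy and arrive *without* reward labels; the reward model
# fills them in, so meta-training continues on the shifted distribution.
for t in range(TASKS):
    obs = rng.normal(size=(64, OBS))   # stand-in for env rollouts
    act = rng.normal(size=(64, ACT))   # actions from the meta-policy
    x = np.concatenate([obs, act], axis=1)
    rew = reward_model.predict(x)      # self-generated labels
    policy = meta_update(policy, dict(obs=obs, act=act, rew=rew))
```

The point of the second phase is that the extra data comes from the same distribution the learned exploration strategy will produce at meta-test time, so continuing meta-training on it can bridge the distributional shift without requiring any further reward labeling.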


Related research

Offline Meta Reinforcement Learning (08/06/2020)
Consider the following problem, which we term Offline Meta Reinforcement...

Offline Meta Reinforcement Learning with In-Distribution Online Adaptation (05/31/2023)
Recent offline meta-reinforcement learning (meta-RL) methods typically u...

Offline Meta-Reinforcement Learning for Industrial Insertion (10/08/2021)
Reinforcement learning (RL) can in principle make it possible for robots...

Self-Adaptive Driving in Nonstationary Environments through Conjectural Online Lookahead Adaptation (10/06/2022)
Powered by deep representation learning, reinforcement learning (RL) pro...

Improving Language Models with Advantage-based Offline Policy Gradients (05/24/2023)
Improving language model generations according to some user-defined qual...

Distributionally Adaptive Meta Reinforcement Learning (10/06/2022)
Meta-reinforcement learning algorithms provide a data-driven way to acqu...

Diffusion Policies for Out-of-Distribution Generalization in Offline Reinforcement Learning (07/10/2023)
Offline Reinforcement Learning (RL) methods leverage previous experience...
