Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism

03/22/2021
by Paria Rashidinejad et al.

Offline (or batch) reinforcement learning (RL) algorithms seek to learn an optimal policy from a fixed dataset without active data collection. Depending on the composition of the offline dataset, two main categories of methods are used: imitation learning, which is suited to expert datasets, and vanilla offline RL, which often requires datasets with uniform coverage. In practice, datasets tend to fall between these two extremes, and the exact data composition is usually unknown a priori. To bridge this gap, we present a new offline RL framework that smoothly interpolates between the two extremes of data composition, thereby unifying imitation learning and vanilla offline RL. The new framework is centered around a weak version of the concentrability coefficient that measures the deviation of the behavior policy from the expert policy alone. Within this framework, we further investigate the question of algorithm design: can one develop an algorithm that achieves a minimax optimal rate while also adapting to unknown data composition? To address this question, we consider a lower confidence bound (LCB) algorithm built on pessimism in the face of uncertainty in offline RL. We study the finite-sample properties of LCB, as well as information-theoretic limits, in multi-armed bandits, contextual bandits, and Markov decision processes (MDPs). Our analysis reveals surprising facts about optimality rates. In particular, in all three settings, LCB achieves a faster rate of 1/N for nearly-expert datasets compared to the usual rate of 1/√N in offline RL, where N is the number of samples in the batch dataset. In the case of contextual bandits with at least two contexts, we prove that LCB is adaptively optimal over the entire data composition range, achieving a smooth transition from imitation learning to offline RL. We further show that LCB is almost adaptively optimal in MDPs.
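The LCB idea in the multi-armed bandit setting can be sketched roughly as follows: from the fixed batch dataset, estimate each arm's mean reward, subtract a confidence width that shrinks with the number of samples observed for that arm, and return the arm with the largest penalized estimate. The sketch below is a minimal illustration, not the paper's exact algorithm; the Hoeffding-style penalty, the confidence parameter delta, and the handling of unseen arms are assumptions made for concreteness.

```python
import numpy as np

def lcb_policy(arms_pulled, rewards, num_arms, delta=0.1):
    """Pessimistic (LCB) arm selection from a fixed offline dataset.

    Minimal sketch: empirical mean minus a Hoeffding-style confidence
    width. The exact bonus and tie-breaking rules in the paper may differ.

    arms_pulled: array of arm indices observed in the batch dataset
    rewards:     array of rewards in [0, 1], aligned with arms_pulled
    """
    arms_pulled = np.asarray(arms_pulled)
    rewards = np.asarray(rewards, dtype=float)

    # Arms never observed in the batch get the most pessimistic value.
    lcb = np.full(num_arms, -np.inf)
    for a in range(num_arms):
        mask = arms_pulled == a
        n_a = mask.sum()
        if n_a == 0:
            continue
        mean_a = rewards[mask].mean()
        # Confidence width (assumed form) shrinks as 1/sqrt(n_a).
        width = np.sqrt(np.log(2 * num_arms / delta) / (2 * n_a))
        lcb[a] = mean_a - width
    return int(np.argmax(lcb))
```

Intuitively, the pessimistic penalty keeps the learner away from poorly covered arms, which is why LCB behaves like imitation learning on nearly-expert data while still exploiting well-covered datasets as vanilla offline RL would.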


