Pessimistic Model-based Offline RL: PAC Bounds and Posterior Sampling under Partial Coverage

07/13/2021
by Masatoshi Uehara, et al.

We study model-based offline reinforcement learning with general function approximation. We present an algorithm named Constrained Pessimistic Policy Optimization (CPPO), which leverages a general function class and uses a constraint to encode pessimism. Under the assumption that the ground truth model belongs to our function class (i.e., realizability), CPPO can learn from offline data that provides only partial coverage: it learns a policy that competes against any policy covered by the offline data, with sample complexity polynomial in the statistical complexity of the function class. We then demonstrate that this algorithmic framework can be applied to many specialized Markov decision processes where additional structural assumptions further refine the notion of partial coverage. One notable example is the low-rank MDP with representation learning, where partial coverage is defined via a relative condition number measured with respect to the unknown ground truth feature representation. Finally, we introduce and study the Bayesian setting of offline RL. The key benefit of the Bayesian approach is that, algorithmically, we do not need to explicitly construct pessimism or a reward penalty, which can be difficult beyond models with linear structure. We present a posterior sampling-based incremental policy optimization algorithm (PS-PO) that proceeds by iteratively sampling a model from the posterior distribution and performing one-step incremental policy optimization inside the sampled model. Theoretically, in expectation with respect to the prior distribution, PS-PO learns a near-optimal policy under partial coverage with polynomial sample complexity.
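To make the constraint-based pessimism concrete, the following is a minimal sketch of the max-min idea the abstract describes, not the paper's actual implementation. It assumes a finite candidate model class, a finite policy class, and hypothetical helpers `log_lik(model, data)` and `policy_value(policy, model)`. Models whose offline log-likelihood is within a statistical slack of the best fit form the constraint (version) set, and the returned policy maximizes its value under the worst model in that set.

```python
def cppo_sketch(models, policies, offline_data, log_lik, policy_value, slack):
    """Constraint-based pessimistic policy optimization, as a rough sketch.

    All arguments are hypothetical interfaces, not the paper's code:
      models       -- finite list of candidate dynamics models
      policies     -- finite list of candidate policies
      offline_data -- batch of (state, action, reward, next_state) tuples
      log_lik      -- log_lik(model, data): offline log-likelihood of `model`
      policy_value -- policy_value(policy, model): value of `policy` when the
                      environment is simulated by `model`
      slack        -- statistical slack defining the constraint set
    """
    # Step 1: fit -- score every candidate model by its offline likelihood.
    scored = [(m, log_lik(m, offline_data)) for m in models]
    best_ll = max(ll for _, ll in scored)

    # Step 2: constraint set -- keep models whose likelihood is close to the
    # maximum-likelihood fit; the true model lands here with high probability.
    version_space = [m for m, ll in scored if ll >= best_ll - slack]

    # Step 3: pessimism -- score each policy by its value under the worst
    # model in the constraint set, and return the max-min policy.
    def pessimistic_value(policy):
        return min(policy_value(policy, m) for m in version_space)

    return max(policies, key=pessimistic_value)
```

By contrast, the Bayesian algorithm PS-PO described in the abstract avoids this explicit max-min: at each iteration it samples a single model from the posterior over the model class and performs one incremental policy optimization step inside the sampled model, which is why no explicit pessimism or reward penalty needs to be constructed.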


Related research

10/09/2021 · Representation Learning for Online and Offline RL in Low-rank MDPs
This work studies the question of Representation Learning in RL: how can...

11/21/2021 · Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation
We consider the offline reinforcement learning problem, where the aim is...

10/17/2021 · Towards Instance-Optimal Offline Reinforcement Learning with Pessimism
We study the offline reinforcement learning (offline RL) problem, where ...

10/12/2020 · Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning?
It is believed that a model-based approach for reinforcement learning (R...

03/24/2021 · Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation
Policy optimization methods are popular reinforcement learning algorithm...

05/09/2021 · Non-asymptotic Performances of Robust Markov Decision Processes
In this paper, we study the non-asymptotic performance of optimal policy...

09/15/2021 · DROMO: Distributionally Robust Offline Model-based Policy Optimization
We consider the problem of offline reinforcement learning with model-bas...