Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity

08/11/2022
by   Laixi Shi, et al.

This paper concerns the central issues of model robustness and sample efficiency in offline reinforcement learning (RL), which aims to learn to perform decision making from historical data without active exploration. Due to uncertainties and variability of the environment, it is critical to learn a robust policy – with as few samples as possible – that performs well even when the deployed environment deviates from the nominal one used to collect the history dataset. We consider a distributionally robust formulation of offline RL, focusing on a tabular non-stationary finite-horizon robust Markov decision process with an uncertainty set specified by the Kullback-Leibler divergence. To combat sample scarcity, a model-based algorithm that combines distributionally robust value iteration with the principle of pessimism in the face of uncertainty is proposed, by penalizing the robust value estimates with a carefully designed data-driven penalty term. Under a mild and tailored assumption on the history dataset that measures distribution shift without requiring full coverage of the state-action space, we establish the finite-sample complexity of the proposed algorithm, and further show it is almost unimprovable in light of a nearly-matching information-theoretic lower bound up to a polynomial factor of the horizon length. To the best of our knowledge, this provides the first provably near-optimal robust offline RL algorithm that learns under model uncertainty and partial coverage.
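To make the ingredients concrete, the sketch below shows how a distributionally robust value iteration with a pessimism penalty can look in a tabular finite-horizon setting. The worst-case expected value over a KL ball is computed through its standard scalar dual, sup_{λ>0} { -λ log E_{P̂}[exp(-V/λ)] - λσ }, here via a simple grid search over λ to stay dependency-free. The penalty b ∝ H·sqrt(log(1/δ)/N(s,a)) is an illustrative stand-in and not the paper's exact carefully designed bonus; the function names and the constant c are hypothetical.

```python
import numpy as np

def kl_robust_value(p_hat, v, sigma, lambdas=np.logspace(-3, 3, 200)):
    """Worst-case expectation of v over the KL ball {P : KL(P || p_hat) <= sigma},
    using the dual form sup_{lam>0} -lam*log E_{p_hat}[exp(-v/lam)] - lam*sigma.
    A grid over the dual variable lam keeps this sketch self-contained."""
    best = -np.inf
    for lam in lambdas:
        x = -v / lam
        m = x.max()  # log-sum-exp shift for numerical stability
        log_mgf = m + np.log(np.dot(p_hat, np.exp(x - m)))
        best = max(best, -lam * log_mgf - lam * sigma)
    return best

def drvi_lcb(counts, r, H, sigma, c=1.0, delta=0.01):
    """Hypothetical sketch of distributionally robust value iteration with a
    lower-confidence-bound (pessimism) penalty.  counts[h, s, a, s'] are
    empirical transition counts from the offline dataset; rewards r lie in [0, 1].
    The penalty below is an illustrative Hoeffding-style term, not the paper's."""
    _, S, A, _ = counts.shape
    V = np.zeros((H + 1, S))
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = np.zeros((S, A))
        for s in range(S):
            for a in range(A):
                n = counts[h, s, a].sum()
                if n == 0:
                    Q[s, a] = 0.0  # unvisited pairs get the most pessimistic value
                    continue
                p_hat = counts[h, s, a] / n
                b = c * H * np.sqrt(np.log(1.0 / delta) / n)  # data-driven penalty
                q = r[h, s, a] + kl_robust_value(p_hat, V[h + 1], sigma) - b
                Q[s, a] = np.clip(q, 0.0, H - h)  # keep estimates in a valid range
            pi[h, s] = Q[s].argmax()
            V[h, s] = Q[s].max()
    return V, pi
```

With σ = 0 the KL ball collapses to the nominal model and the backup reduces to ordinary pessimistic value iteration; larger σ yields more conservative (smaller) robust values, and the penalty shrinks as the visitation count N(s,a) grows, which is what enables learning under partial coverage.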


Related research:

- Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity (02/28/2022). Offline or batch reinforcement learning seeks to learn a near-optimal po...
- Provably Efficient Offline Reinforcement Learning with Perturbed Data Sources (06/14/2023). Existing theoretical studies on offline reinforcement learning (RL) most...
- Distributionally Robust Optimization Efficiently Solves Offline Reinforcement Learning (05/22/2023). Offline reinforcement learning aims to find the optimal policy from a pr...
- Robust Reinforcement Learning using Offline Data (08/10/2022). The goal of robust reinforcement learning (RL) is to learn a policy that...
- Byzantine-Robust Online and Offline Distributed Reinforcement Learning (06/01/2022). We consider a distributed reinforcement learning setting where multiple ...
- Federated Offline Reinforcement Learning (06/11/2022). Evidence-based or data-driven dynamic treatment regimes are essential fo...
- The Efficacy of Pessimism in Asynchronous Q-Learning (03/14/2022). This paper is concerned with the asynchronous form of Q-learning, which ...
