Interpretable performance analysis towards offline reinforcement learning: A dataset perspective

by Chenyang Xi et al.

Offline reinforcement learning (RL) has increasingly become a focus of artificial intelligence research due to its many real-world applications, where collecting data may be difficult, time-consuming, or costly. In this paper, we first propose a two-fold taxonomy for existing offline RL algorithms from the perspective of their exploration and exploitation tendencies. Second, we derive an explicit expression for the upper bound of the extrapolation error and explore the correlation between the performance of different types of algorithms and the distribution of actions under states. In particular, we relax the strict assumption of a sufficiently large number of state-action tuples. On this basis, we provably explain why batch-constrained Q-learning (BCQ) performs better than other existing techniques. Third, after identifying the weakness of BCQ on datasets with low mean episode returns, we propose a modified variant based on a top-return selection mechanism, which is shown to achieve state-of-the-art performance on various datasets. Lastly, we create a benchmark platform on the Atari domain, entitled RL Easy Go (RLEG), at an estimated cost of more than 0.3 million dollars. We make it open-source for fair and comprehensive competitions between offline RL algorithms, with complete datasets and checkpoints provided.
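The "top return selection mechanism" mentioned in the abstract can be illustrated with a minimal sketch: rank the episodes in an offline dataset by their total return and keep only the best fraction before training a BCQ-style agent. The function and parameter names below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of top-return episode selection for an offline RL
# dataset. `episodes` is a list of (transitions, episode_return) pairs;
# the names `top_return_filter` and `keep_frac` are illustrative only.

def top_return_filter(episodes, keep_frac=0.1):
    """Keep the top `keep_frac` fraction of episodes ranked by total return."""
    if not 0.0 < keep_frac <= 1.0:
        raise ValueError("keep_frac must be in (0, 1]")
    # Sort episodes from highest to lowest return.
    ranked = sorted(episodes, key=lambda ep: ep[1], reverse=True)
    # Always keep at least one episode.
    n_keep = max(1, int(len(ranked) * keep_frac))
    return ranked[:n_keep]


# Example: from ten episodes with returns 0..9, keep the best two.
episodes = [([], float(r)) for r in range(10)]
best = top_return_filter(episodes, keep_frac=0.2)
print([r for _, r in best])  # → [9.0, 8.0]
```

A BCQ-style learner would then be trained only on the transitions from the retained episodes, which biases the behavior policy it imitates toward higher-return trajectories.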



Understanding the Effects of Dataset Characteristics on Offline Reinforcement Learning

In the real world, affecting the environment by a weak policy can be expensi...

Critic Regularized Regression

Offline reinforcement learning (RL), also known as batch RL, offers the ...

A Minimalist Approach to Offline Reinforcement Learning

Offline reinforcement learning (RL) defines the task of learning from a ...

The Least Restriction for Offline Reinforcement Learning

Many practical applications of reinforcement learning (RL) constrain the...

RL Unplugged: Benchmarks for Offline Reinforcement Learning

Offline methods for reinforcement learning have the potential to help br...

Measuring Data Quality for Dataset Selection in Offline Reinforcement Learning

Recently developed offline reinforcement learning algorithms have made i...

Identifying Critical States by the Action-Based Variance of Expected Return

The balance of exploration and exploitation plays a crucial role in acce...