
Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons

by Chengchun Shi, et al.

We consider reinforcement learning (RL) methods in offline domains without additional online data collection, such as mobile health applications. Most existing policy optimization algorithms in the computer science literature are developed for online settings where data are easy to collect or simulate. Their generalization to mobile health applications with a pre-collected offline dataset remains largely unknown. The aim of this paper is to develop a novel advantage learning framework that makes efficient use of pre-collected data for policy optimization. The proposed method takes an optimal Q-estimator computed by any existing state-of-the-art RL algorithm as input, and outputs a new policy whose value is guaranteed to converge at a faster rate than that of the policy derived from the initial Q-estimator. Extensive numerical experiments are conducted to back up our theoretical findings.
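The paper's advantage-learning step refines a given Q-estimator; the abstract does not specify the estimator itself. As a minimal, purely illustrative sketch of the input stage, the snippet below fits a tabular Q-function to a fixed offline dataset via fitted-Q iteration and extracts the greedy policy from it. All function names and the toy environment are hypothetical and not from the paper.

```python
import numpy as np

def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.9, n_iters=50):
    """Tabular fitted-Q iteration on a fixed offline dataset.

    transitions: list of (state, action, reward, next_state) tuples,
    collected in advance; no further interaction with the environment.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        targets = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for s, a, r, s_next in transitions:
            # Bellman-optimality target averaged over observed transitions.
            counts[s, a] += 1.0
            targets[s, a] += r + gamma * Q[s_next].max()
        # Update only the (state, action) pairs present in the data.
        Q = np.where(counts > 0, targets / np.maximum(counts, 1.0), Q)
    return Q

def greedy_policy(Q):
    """Policy derived from the Q-estimator: act greedily per state."""
    return Q.argmax(axis=1)

# Toy two-state chain: action 1 yields reward 1 and moves to state 1;
# action 0 yields reward 0 and moves to state 0.
offline_data = [(0, 0, 0.0, 0), (0, 1, 1.0, 1),
                (1, 0, 0.0, 0), (1, 1, 1.0, 1)]
Q_hat = fitted_q_iteration(offline_data, n_states=2, n_actions=2)
pi_hat = greedy_policy(Q_hat)  # greedy policy chooses action 1 in both states
```

In the paper's framework, a Q-estimate like `Q_hat` would be the input, and the advantage-learning step would produce a new policy with a provably faster rate of value convergence than `pi_hat`.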



