Provably Efficient Offline Reinforcement Learning with Trajectory-Wise Reward

06/13/2022 · by Tengyu Xu, et al.
The remarkable success of reinforcement learning (RL) heavily relies on observing the reward of every visited state-action pair. In many real-world applications, however, an agent can observe only a score that represents the quality of the whole trajectory, referred to as the trajectory-wise reward. In such a situation, it is difficult for standard RL methods to utilize the trajectory-wise reward well, and large bias and variance errors can be incurred in policy evaluation. In this work, we propose a novel offline RL algorithm, called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED), which decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy rewards. To ensure that the value functions constructed by PARTED are always pessimistic with respect to the optimal ones, we design a new penalty term to offset the uncertainty of the proxy reward. For general episodic MDPs with large state spaces, we show that PARTED with overparameterized neural network function approximation achieves an Õ(D_eff H^2/√(N)) suboptimality, where H is the length of the episode, N is the total number of samples, and D_eff is the effective dimension of the neural tangent kernel matrix. To further illustrate the result, we show that PARTED achieves an Õ(dH^3/√(N)) suboptimality for linear MDPs, where d is the feature dimension; this matches the rate under neural network function approximation when D_eff = dH. To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient for general MDPs with trajectory-wise reward.
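To make the reward-redistribution step concrete, below is a minimal sketch of least-squares reward redistribution for the linear-feature special case, followed by the kind of uncertainty bonus that pessimistic value iteration subtracts. The function names, the ridge parameter `lam`, and the bonus scale `beta` are illustrative assumptions and not the paper's exact construction.

```python
import numpy as np

def redistribute_rewards(traj_features, traj_returns, lam=1.0, beta=1.0):
    """Least-squares reward redistribution (minimal sketch, linear features).

    traj_features: list of arrays, one per trajectory, each of shape (H, d),
                   holding the per-step features phi(s_h, a_h).
    traj_returns:  array of shape (N,) with the observed trajectory-wise returns.
    Returns a per-step proxy-reward function and a pessimism-penalty function.
    """
    d = traj_features[0].shape[1]
    # Only the total return of each trajectory is observed, so each trajectory
    # contributes the *sum* of its per-step features to the regression.
    Phi = np.stack([f.sum(axis=0) for f in traj_features])        # (N, d)
    Lam = Phi.T @ Phi + lam * np.eye(d)                           # regularized Gram matrix
    theta_hat = np.linalg.solve(Lam, Phi.T @ np.asarray(traj_returns))

    def proxy_reward(phi_sa):
        # Learned per-step proxy reward r_hat(s, a) = <theta_hat, phi(s, a)>.
        return phi_sa @ theta_hat

    def penalty(phi_sa):
        # Uncertainty bonus subtracted during pessimistic value iteration,
        # proportional to the elliptical-confidence width under Lam.
        return beta * np.sqrt(phi_sa @ np.linalg.solve(Lam, phi_sa))

    return proxy_reward, penalty
```

The key point of the sketch is that only each trajectory's summed features are regressed against its observed return; the fitted model is then evaluated per step to obtain the proxy rewards that pessimistic value iteration consumes, with the penalty offsetting their uncertainty.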

