On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation

11/23/2022 ∙ by Thanh Nguyen-Tang, et al.
Sample-efficient offline reinforcement learning (RL) with linear function approximation has recently been studied extensively. Much of the prior work has yielded minimax-optimal bounds of Õ(1/√K), where K is the number of episodes in the offline data. In this work, we seek to understand instance-dependent bounds for offline RL with function approximation. We present an algorithm called Bootstrapped and Constrained Pessimistic Value Iteration (BCP-VI), which leverages data bootstrapping and constrained optimization on top of pessimism. We show that, under a partial data coverage assumption of concentrability with respect to an optimal policy, the proposed algorithm yields a fast rate of Õ(1/K) for offline RL whenever there is a positive gap in the optimal Q-value functions, even when the offline data were adaptively collected. Moreover, when the linear features of the optimal actions in the states reachable by an optimal policy span those reachable by the behavior policy, and the optimal actions are unique, offline RL achieves exactly zero sub-optimality error once K exceeds a finite, instance-dependent threshold. To the best of our knowledge, these are the first Õ(1/K) bound and zero sub-optimality bound, respectively, for offline RL with linear function approximation from adaptive data under partial coverage. We also provide instance-agnostic and instance-dependent information-theoretic lower bounds that complement our upper bounds.
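For context, the "positive gap" condition refers to the minimal sub-optimality gap Δ_min = min{ V*_h(s) − Q*_h(s, a) : V*_h(s) − Q*_h(s, a) > 0 }; it is this quantity being bounded away from zero that enables the fast Õ(1/K) rate. The sketch below illustrates only the pessimism primitive that BCP-VI builds on: least-squares value iteration with an elliptical lower-confidence penalty in a linear MDP. It is not the paper's BCP-VI (there is no bootstrapping or constrained optimization here), and the names phi, actions, and beta, as well as the dataset layout, are illustrative assumptions.

```python
# Minimal pessimistic least-squares value iteration (PEVI) for a linear MDP.
# This is only the pessimism baseline that BCP-VI builds on, NOT the paper's
# algorithm; phi, actions, beta, and the data layout are illustrative choices.
import numpy as np

def pevi(data, phi, actions, H, d, beta, lam=1.0):
    """data[h]: list of (s, a, r, s_next) transitions collected at step h.
    phi(s, a): feature map returning an np.ndarray of shape (d,).
    Returns a greedy policy pi(h, s) -> action built from pessimistic Q-values.
    """
    w = [np.zeros(d) for _ in range(H)]           # Q_h(s,a) ~= phi(s,a) @ w[h]
    lam_inv = [np.eye(d) / lam for _ in range(H)]

    def q_pess(h, s, a):
        f = phi(s, a)
        # Subtract an elliptical bonus: penalize (s, a) poorly covered by data.
        return float(f @ w[h]) - beta * np.sqrt(f @ lam_inv[h] @ f)

    def v_pess(h, s):
        if h == H:
            return 0.0
        # Clip to [0, H - h], the valid value range for an H-step episode.
        return max(min(max(q_pess(h, s, a), 0.0), H - h) for a in actions)

    for h in range(H - 1, -1, -1):                # backward induction
        Phi = np.stack([phi(s, a) for (s, a, _, _) in data[h]])        # (n, d)
        y = np.array([r + v_pess(h + 1, s2) for (_, _, r, s2) in data[h]])
        lam_inv[h] = np.linalg.inv(lam * np.eye(d) + Phi.T @ Phi)
        w[h] = lam_inv[h] @ (Phi.T @ y)           # ridge regression on targets

    return lambda h, s: max(actions, key=lambda a: q_pess(h, s, a))
```

A larger penalty coefficient beta makes the learned policy more conservative on state-action pairs the behavior policy rarely visited, which is the mechanism that lets partial coverage (concentrability with respect to an optimal policy) suffice in place of full coverage.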
