Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation

03/24/2021
by   Andrea Zanette, et al.

Policy optimization methods are popular reinforcement learning algorithms because their incremental and on-policy nature makes them more stable than their value-based counterparts. However, the same properties also make them slow to converge and sample inefficient: the on-policy requirement precludes data reuse, and the incremental updates couple a large iteration complexity into the sample complexity. These characteristics have been observed in experiments as well as in theory in the recent work of <cit.>, which provides a policy optimization method, PCPG, that can robustly find near-optimal policies for approximately linear Markov decision processes but suffers from an extremely poor sample complexity compared with value-based techniques. In this paper, we propose a new algorithm, COPOE, that overcomes the sample complexity issue of PCPG while retaining its robustness to model misspecification. Compared with PCPG, COPOE makes several important algorithmic enhancements, such as enabling data reuse, and uses more refined analysis techniques, which we expect to be more broadly applicable to designing new reinforcement learning algorithms. The result is an improvement in sample complexity from O(1/ϵ^11) for PCPG to O(1/ϵ^3) for COPOE, nearly bridging the gap with value-based techniques.
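To make the "cautious optimism with data reuse" idea concrete, here is a minimal sketch of one standard ingredient in this line of work: a ridge-regressed linear value estimate augmented with an elliptical uncertainty bonus. This is an illustrative assumption, not the paper's actual COPOE algorithm; the function name, constants, and data are invented for the example. Fitting against all past (feature, target) pairs, rather than only on-policy data, is what data reuse amounts to here, while the bonus keeps the estimate optimistic in poorly covered directions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's COPOE): an optimistic linear
# value estimate with an elliptical exploration bonus, a standard
# ingredient in linear-MDP exploration. Names/constants are assumptions.

def fit_optimistic_values(features, targets, query, beta=1.0, reg=1.0):
    """Ridge-regress targets on features, then add an uncertainty bonus
    beta * sqrt(phi^T Sigma^{-1} phi) at each query point. Reusing all
    past (feature, target) pairs is what enables off-policy data reuse."""
    d = features.shape[1]
    sigma = reg * np.eye(d) + features.T @ features       # regularized covariance
    theta = np.linalg.solve(sigma, features.T @ targets)  # ridge solution
    sigma_inv = np.linalg.inv(sigma)
    # Elliptical bonus: large where the covariance has seen little data.
    bonus = beta * np.sqrt(np.einsum("ij,jk,ik->i", query, sigma_inv, query))
    return query @ theta + bonus

rng = np.random.default_rng(0)
phi = rng.normal(size=(50, 3))                       # past state-action features
y = phi @ np.array([1.0, -0.5, 0.2]) + 0.01 * rng.normal(size=50)
q = rng.normal(size=(4, 3))                          # query features
vals = fit_optimistic_values(phi, y, q)
print(vals.shape)
```

Because the bonus is non-negative, the optimistic estimate always upper-bounds the plain ridge prediction; shrinking `beta` recovers the unbonused estimate, which is one way to read the "cautious" in cautious optimism.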


