Understanding Deep Neural Function Approximation in Reinforcement Learning via ε-Greedy Exploration

09/15/2022
by Fanghui Liu, et al.

This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL) with ε-greedy exploration in the online setting. The problem setting is motivated by the successful deep Q-network (DQN) framework, which falls in this regime. In this work, we provide an initial attempt at theoretically understanding deep RL from the perspective of function classes and neural network architectures (e.g., width and depth) beyond the "linear" regime. To be specific, we focus on value-based algorithms with ε-greedy exploration via deep (and two-layer) neural networks endowed with Besov (and Barron) function spaces, respectively, which aim at approximating an α-smooth Q-function in a d-dimensional feature space. We prove that, with T episodes, scaling the width m = 𝒪(T^(d/(2α+d))) and the depth L = 𝒪(log T) of the neural network is sufficient for deep RL to learn with sublinear regret in Besov spaces. Moreover, for a two-layer neural network endowed with the Barron space, scaling the width as Ω(√T) is sufficient. To achieve this, the key issue in our analysis is how to estimate the temporal difference error under deep neural function approximation, since ε-greedy exploration is not enough to ensure "optimism". Our analysis reformulates the temporal difference error in an L^2(dμ)-integrable space over a certain averaged measure μ and transforms it into a generalization problem in the non-i.i.d. setting. This may be of independent interest in RL theory for better understanding ε-greedy exploration in deep RL.
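As a concrete illustration (not the authors' implementation), the following minimal NumPy sketch instantiates the objects the abstract refers to: the width and depth scalings, a two-layer Q-network, ε-greedy action selection, and the one-step temporal difference error. All toy dimensions (T, d, α) and helper names (q_values, epsilon_greedy, td_error) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from the paper).
T = 10_000            # number of episodes
d = 8                 # feature dimension
alpha = 2.0           # smoothness of the target Q-function
n_actions = 4
gamma = 0.99          # discount factor
eps = 0.1             # exploration rate

# Network sizes suggested by the abstract's scalings:
m_besov = int(T ** (d / (2 * alpha + d)))  # width O(T^(d/(2a+d))), deep case
L_besov = int(np.log(T))                   # depth O(log T), deep case
m = int(np.sqrt(T))                        # width Omega(sqrt(T)), two-layer case

# Two-layer ReLU network: Q(s, .) = W2 @ relu(W1 @ s).
W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
W2 = rng.normal(scale=1.0 / np.sqrt(m), size=(n_actions, m))

def q_values(state):
    """Q(s, .) for all actions from the two-layer ReLU network."""
    return W2 @ np.maximum(W1 @ state, 0.0)

def epsilon_greedy(state):
    """With probability eps act uniformly at random, else greedily."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state)))

def td_error(state, action, reward, next_state):
    """One-step TD error: r + gamma * max_a' Q(s', a') - Q(s, a)."""
    target = reward + gamma * np.max(q_values(next_state))
    return target - q_values(state)[action]

# Usage on a dummy transition:
s, s_next = rng.normal(size=d), rng.normal(size=d)
a = epsilon_greedy(s)
print(f"widths: deep={m_besov}, two-layer={m}; depth={L_besov}")
print(f"action={a}, TD error={td_error(s, a, 1.0, s_next):.3f}")
```

The TD error computed here is exactly the quantity the paper's analysis bounds in L^2(dμ); the sketch omits training, which is where the non-i.i.d. generalization argument enters.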

Related research

11/09/2020 · On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces
The classical theory of reinforcement learning (RL) has focused on tabul...

05/29/2018 · Depth and nonlinearity induce implicit exploration for RL
The question of how to explore, i.e., take actions with uncertain outcom...

11/16/2020 · No-Regret Reinforcement Learning with Value Function Approximation: a Kernel Embedding Approach
We consider the regret minimisation problem in reinforcement learning (R...

07/14/2021 · Going Beyond Linear RL: Sample Efficient Neural Function Approximation
Deep Reinforcement Learning (RL) powered by neural net approximation of ...

05/16/2023 · Coagent Networks: Generalized and Scaled
Coagent networks for reinforcement learning (RL) [Thomas and Barto, 2011...

06/01/2018 · The Nonlinearity Coefficient - Predicting Overfitting in Deep Neural Networks
For a long time, designing neural architectures that exhibit high perfor...

06/08/2020 · Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory
Temporal-difference and Q-learning play a key role in deep reinforcement...
